Dataset schema:
  paper_id           string (lengths 9-12)
  venue              string (139 classes)
  year               string (7 classes)
  paper_title        string (lengths 0-181)
  paper_authors      string (lengths 4-925)
  paper_abstract     string (lengths 1-5k)
  paper_keywords     string (lengths 2-436)
  paper_content      string (lengths 0-100k)
  review_id          string (lengths 9-12)
  review_title       string (lengths 0-500)
  review_rating      string (61 classes)
  review_text        string (lengths 2-28.3k)
  review_confidence  string (13 classes)
  text               string (lengths 402-130k)
paper_id: Px7xIKHjmMS
venue: ICLR.cc/2021/Conference
year: 2021
paper_title: Beyond GNNs: A Sample Efficient Architecture for Graph Problems
paper_authors: ["Pranjal Awasthi", "Abhimanyu Das", "Sreenivas Gollapudi"]
paper_abstract: Despite their popularity in learning problems over graph structured data, existing Graph Neural Networks (GNNs) have inherent limitations for fundamental graph problems such as shortest paths, $k$-connectivity, minimum spanning tree and minimum cuts. In all these instances, it is known that one needs GNNs of high depth, scaling at a polynomial rate with the number of nodes $n$, to provably encode the solution space. This in turn affects their statistical efficiency, thus requiring a significant amount of training data in order to obtain networks with good performance. In this work we propose a new hybrid architecture to overcome this limitation. Our proposed architecture, which we call GNN+ networks, involves a combination of multiple parallel low depth GNNs along with simple pooling layers involving low depth fully connected networks. We provably demonstrate that for many graph problems, the solution space can be encoded by GNN+ networks using depth that scales only poly-logarithmically in the number of nodes. This significantly improves the amount of training data needed, which we establish via improved generalization bounds. Finally, we empirically demonstrate the effectiveness of our proposed architecture for a variety of graph problems.
["Graph Neural Networks", "Deep Learning Theory", "Graph Connectivity", "Minimum Spanning Trees"]
paper_content:

ABSTRACT

Despite their popularity in learning problems over graph structured data, existing Graph Neural Networks (GNNs) have inherent limitations for fundamental graph problems such as shortest paths, k-connectivity, minimum spanning tree and minimum cuts. In all these instances, it is known that one needs GNNs of high depth, scaling at a polynomial rate with the number of nodes $n$, to provably encode the solution space. This in turn affects their statistical efficiency, thus requiring a significant amount of training data in order to obtain networks with good performance. In this work we propose a new hybrid architecture to overcome this limitation. Our proposed architecture, which we call GNN+ networks, involves a combination of multiple parallel low depth GNNs along with simple pooling layers involving low depth fully connected networks. We provably demonstrate that for many graph problems, the solution space can be encoded by GNN+ networks using depth that scales only poly-logarithmically in the number of nodes. This significantly improves the amount of training data needed, which we establish via improved generalization bounds. Finally, we empirically demonstrate the effectiveness of our proposed architecture for a variety of graph problems.

1 INTRODUCTION

In recent years Graph Neural Networks (GNNs) have become the predominant paradigm for learning problems over graph structured data (Hamilton et al., 2017; Kipf & Welling, 2016; Veličković et al., 2017). Computation in GNNs is performed by each node sending and receiving messages along the edges of the graph, and aggregating messages from its neighbors to update its own embedding vector. After a few rounds of message passing, the computed node embeddings are aggregated to compute the final output (Gilmer et al., 2017). The analogy to message passing leads to a simple and elegant architecture for learning functions on graphs. On the other hand, from a theoretical and practical perspective, we also need these architectures to be sample efficient, i.e., learnable from a small number of training examples, where each training example corresponds to a graph. Recent works have shown that generalization in GNNs depends upon the depth of the architecture, i.e., the number of rounds of message passing, as well as the embedding size for each node in the graph (Garg et al., 2020). However, this requirement is in fundamental conflict with the message passing framework. In particular, using GNNs to compute several fundamental graph problems such as shortest paths, minimum spanning tree, min cut, etc., necessarily requires the product of the depth of the GNN and the embedding size to scale as $\sqrt{n}$, where $n$ is the size of the graph (Loukas, 2020). This in turn places a significant statistical burden when learning these fundamental problems on large scale graphs. The above raises the following question: Can one develop sample efficient architectures for graph problems while retaining the simplicity of the message passing framework?

Several recent works have tried to address the above question by proposing extensions to the basic GNN framework, augmenting various pooling operations in conjunction with message passing rounds to capture more global structure (Ying et al., 2018; Simonovsky & Komodakis, 2017; Fey et al., 2018).
While these works demonstrate an empirical advantage over GNNs, we currently do not know of a general neural architecture that is versatile enough to provably encode the solution space of a variety of graph problems such as shortest paths and minimum spanning trees, while being significantly superior to GNNs in terms of statistical efficiency. In this work we propose a theoretically principled architecture, called GNN+ networks, for learning graph problems. While the basic GNN framework is inspired by classical message passing style models studied in distributed computing, we borrow from two fundamental paradigms in graph algorithm design, namely sketching and parallel computation, to design GNN+ networks. As a result of combining these two powerful paradigms, we get a new neural architecture that simultaneously achieves low depth and low embedding size for many fundamental graph problems. Consequently, our proposed GNN+ architecture has a significantly smaller number of parameters, which provably leads to better statistical efficiency than GNNs. Before we present our improved architecture, we briefly describe the standard GNN framework.

Model for GNNs. In this work we study GNNs that fall within the message passing framework and, using notation from previous works, we denote such networks as GNN_mp (Loukas, 2020). A GNN_mp network operates in the AGGREGATE and COMBINE model (Gilmer et al., 2017) that captures many popular variants such as GraphSAGE, Graph Convolutional Networks (GCNs) and GIN networks (Hamilton et al., 2017; Kipf & Welling, 2016; Xu et al., 2019a). Given a graph $G = (V, E)$, let $x^{(k)}_i$ denote the feature representation of node $i$ at layer $k$. Then we have

$a^{(k-1)}_i = \text{AGGREGATE}(\{x^{(k-1)}_j : j \in N(i)\})$   (1)
$x^{(k)}_i = \text{COMBINE}(x^{(k-1)}_i, a^{(k-1)}_i)$   (2)

Here $N(i)$ is the set of neighbors of node $i$. Typically the aggregation and combination are performed via simple one or two layer fully connected networks (FNNs), also known as multi-layer perceptrons (MLPs). In the rest of the paper we use the two terms interchangeably.

GNN+ Networks. Our proposed GNN+ networks consist of one or more layers of a GNN+ block, shown in Figure 1. The GNN+ block comprises $r$ parallel GNN_mp networks followed by $s$ parallel fully connected network modules for pooling, where $r$ and $s$ are hyperparameters of the architecture. Importantly, we restrict the $r$ GNN_mp modules to share the same set of weights; hence the parallel GNN_mp modules differ only in the way the node embeddings are initialized. Furthermore, we restrict each GNN_mp to be of low depth. In particular, for degree-$d$ graphs of diameter $D$ over $n$ nodes, we restrict the GNN_mp to be of depth $O((d+D)\,\text{polylog}(n))$. Similarly, we require the $s$ fully connected networks to be of depth $O((d+D)\,\text{polylog}(n))$ and to share network weights. We connect the outputs of the GNN_mp modules to the fully connected pooling networks in a sparse manner, and restrict the input size of each fully connected network to be $O((d+D)\,\text{polylog}(n))$. Stacking $L$ layers of GNN+ blocks results in a GNN+ network that is highly parameter efficient and in total has $O((d+D)L\,\text{polylog}(n))$ parameters. For such a network, we define the depth as the total number of message passing rounds plus the number of MLP layers used across all the $L$ stacks. Since we restrict the MLPs and GNN_mp blocks inside a GNN+ network to be of low depth, we will often abuse notation and refer to a GNN+ architecture with $L$ stacks of GNN+ blocks as a depth-$L$ architecture.
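To make the block concrete, here is a minimal sketch in PyTorch under our own assumptions: a GIN-style layer stands in for the internal GNN_mp, the graph is a dense adjacency matrix, and the sparse output routing is approximated by giving each pooling copy a fixed low-dimensional slice of the concatenated outputs. The names SharedGIN and GNNPlusBlock are illustrative, not from the paper.

```python
import torch
import torch.nn as nn

class SharedGIN(nn.Module):
    # A low-depth GNN_mp: a few AGGREGATE/COMBINE rounds in GIN style.
    def __init__(self, dim, rounds):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(rounds))
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(rounds))

    def forward(self, x, adj):
        # x: (n, dim) node embeddings; adj: (n, n) dense adjacency
        for eps, mlp in zip(self.eps, self.mlps):
            x = mlp((1 + eps) * x + adj @ x)  # sum-AGGREGATE, then MLP-COMBINE
        return x

class GNNPlusBlock(nn.Module):
    # One GNN+ block: r weight-shared GNN_mp copies feeding s weight-shared
    # pooling MLPs; each pooling copy reads only a small slice of the
    # concatenated outputs (a stand-in for the paper's sparse connections).
    def __init__(self, dim, rounds, r, s, pool_in):
        super().__init__()
        assert (r * dim) % pool_in == 0, "slices must tile the concatenated output"
        self.gnn = SharedGIN(dim, rounds)      # one weight set, reused r times
        self.pool = nn.Sequential(             # one pooling MLP, reused s times
            nn.Linear(pool_in, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.s, self.pool_in = s, pool_in

    def forward(self, inits, adj):
        # inits: list of r different node-embedding initializations, each (n, dim)
        outs = torch.cat([self.gnn(x, adj) for x in inits], dim=-1)
        chunks = outs.split(self.pool_in, dim=-1)[: self.s]
        return [self.pool(c) for c in chunks]
```

The property the sketch preserves is the enforced parameter sharing: one set of GNN weights serves all $r$ copies and one pooling MLP serves all $s$ copies, so the parameter count does not grow with $r$ or $s$.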
Our proposed design lets us alternate between local computations involving multiple parallel GNN blocks and global post-processing stages, while still being sample efficient due to the enforced parameter sharing. We will show via several applications that optimal or near-optimal solutions to many popular graph problems can indeed be computed via a GNN+ architecture. Below we briefly summarize our main results.

Figure 1: The basic GNN+ block.

Overview of Results. To demonstrate the generality of our proposed GNN+ architecture, we study several fundamental graph problems and show how to construct efficient GNN+ networks to compute optimal or near optimal solutions to these problems. In particular, we focus on degree-$d$ graphs, i.e., graphs of maximum node degree $d$, with $n$ nodes and diameter $D$, and construct GNN+ networks of depth $\text{polylog}(n)$ and $O(D+d)\,\text{polylog}(n)$ total parameters.

Shortest Paths. The first problem we consider is the fundamental graph problem of computing (approximate) all pairs shortest paths in undirected graphs. Given a graph $G = (V, E)$, let $d_G(u, v)$ be the shortest path distance between nodes $u$ and $v$. We say that an output $\{\tilde{d}_G(u,v) : u, v \in V\}$ is an $\alpha$-approximate solution if for all $u \neq v$ it holds that

$d_G(u,v) \le \tilde{d}_G(u,v) \le \alpha\, d_G(u,v).$

We construct efficient GNN+ networks for all pairs shortest paths with the following guarantee.

Theorem 1 (Informal Theorem). For any constant $c > 1$, there is a depth $O(D \log d + \log n)$ GNN+ network with $O(n^{2/c} + d)\,\text{polylog}(n)$ parameters that computes $(4c-2)$-approximate all pairs shortest paths in undirected unweighted degree-$d$ graphs over $n$ nodes. On the other hand, computing a $c$-approximate shortest paths solution using GNN_mp networks requires a network of depth $\Omega(n)$.

From the above theorem we see that by setting $c = O(\log n)$ we can encode an $O(\log n)$-approximate solution using an $O(D\log d + \log n)$ depth GNN+ network with only $O(d\,\text{polylog}(n))$ parameters. This is in stark contrast with the depth requirement of $\Omega(n)$ for traditional GNN_mp networks.

Connectivity Measures. Next we consider computing various graph connectivity measures. We first study the popular measure based on graph effective resistances (Chandra et al., 1996).

Definition 1 (Effective Resistance). Let $G$ be a weighted undirected graph with adjacency matrix $A$ and associated Laplacian $L = D - A$, where $D$ is the degree matrix. Given an edge $(u, v)$, the effective resistance between $u$ and $v$ is defined as

$R_{u,v} = \chi_{u,v}^{\top} L^{\dagger} \chi_{u,v}.$

Here $\chi_{u,v}$ is an $n$-dimensional vector with $+1$ at position $u$, $-1$ at position $v$ and zeros everywhere else, and $L^{\dagger}$ refers to the matrix pseudo-inverse.

We also study the following connectivity measure, proposed by Panigrahy et al. (2012) in the context of web graphs. Given an undirected graph $G$, let $G_p$ be the random graph obtained by sampling each edge with probability $p$.

Definition 2 (Affinity). For any two vertices $u, v$ and for $p \in [0, 1]$, define $A_p(u, v)$ to be the probability that $u, v$ are connected in $G_p$. Then the affinity between $u$ and $v$ is defined as $A(u,v) = \mathbb{E}_p[A_p(u,v)]$, where the expectation is taken over $p$ drawn from the uniform distribution on $[0, 1]$.

For the above measures we show the following.

Theorem 2 (Informal Theorem). There exists a GNN+ architecture with $O(D\log(nd))$ parameters and depth $O(D\log(nd))$, on graphs of diameter $D$ with $n$ nodes and maximum degree $d$, that approximates the above connectivity measures up to constant factors. On the other hand, using GNN_mp networks to compute the above measures, even approximately, necessarily requires a network of depth $\Omega(\sqrt{n})$.
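Both measures have direct classical computations that are useful as a reference, e.g., when generating training labels. The NumPy sketch below is our own (function names are illustrative): it computes Definition 1 exactly via the Laplacian pseudo-inverse and estimates Definition 2 by Monte Carlo over $p$.

```python
import numpy as np

def effective_resistance(A, u, v):
    # R_{u,v} = chi^T L^+ chi with L = D - A (Definition 1)
    L = np.diag(A.sum(axis=1)) - A
    chi = np.zeros(len(A))
    chi[u], chi[v] = 1.0, -1.0
    return chi @ np.linalg.pinv(L) @ chi   # Moore-Penrose pseudo-inverse

def affinity(A, u, v, trials=2000, seed=0):
    # Monte Carlo estimate of A(u,v) = E_p[A_p(u,v)], p ~ Uniform[0,1] (Definition 2)
    rng = np.random.default_rng(seed)
    n = len(A)
    edges = np.transpose(np.triu(A, 1).nonzero())
    hits = 0
    for _ in range(trials):
        p = rng.uniform()
        kept = edges[rng.uniform(size=len(edges)) < p]  # keep each edge w.p. p
        parent = list(range(n))                          # union-find over kept edges
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in kept:
            parent[find(a)] = find(b)
        hits += find(u) == find(v)
    return hits / trials
```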
Clustering, Minimum Cuts and Minimum Spanning Trees. Finally, we showcase the power of a GNN+ architecture for computing other fundamental graph problems. Given an undirected graph $G$, the spectral clustering of $G$ corresponds to the cut obtained by taking the sign of the eigenvector $v$ corresponding to the second smallest eigenvalue $\lambda_2(L)$, where $L$ is the graph Laplacian. For computing the spectral clustering via GNN+ networks we show the following.

Theorem 3 (Informal Theorem). There is a GNN+ network of depth $\ell = O\big(\frac{\log n}{\lambda_2(L)\,\epsilon^2}\big)$, with $O(d)$ parameters, that computes an $\epsilon$-approximate spectral clustering on graphs of degree $d$. On the other hand, using GNN_mp networks to even approximately compute the spectral clustering requires depth $\Omega(\sqrt{n})$.

Next we consider the classical problems of computing a global minimum cut and minimum spanning trees in undirected graphs.

Theorem 4 (Informal Theorem). There exist GNN+ networks of depth $O((D + \log n)\log n)$ and $O(d)$ parameters for computing a global minimum cut (MINCUT) and a minimum spanning tree (MST) in degree-$d$ graphs of diameter $D$. Furthermore, using GNN_mp networks to compute these primitives (even approximately) necessarily requires depth $\Omega(\sqrt{n})$.

Generalization Bounds. Our final result concerns the generalization properties of a depth-$L$ GNN+ architecture. For ease of exposition, we state here the results for the case when the GNN+ architecture produces a one dimensional output; more general results are presented in Appendix D. Our generalization bounds depend on the depth $L$ and the total number of parameters $P$ in the GNN+ network. Following recent work on generalization bounds for fully connected and convolutional neural networks (Bartlett et al., 2017; Long & Sedghi, 2020) that are based on distance to initialization, we consider the class $\mathcal{F}$ of depth-$L$ GNN+ networks with $P$ parameters that are at a distance $\Delta$ from a reference parameter configuration (typically the parameters at random initialization). Let $y \in \mathbb{R}$ denote the output of the network and consider a Lipschitz loss function $\ell(y, \hat{y})$. Then, we provide the following guarantee.

Theorem 5 (Informal Theorem). Let $\ell(\hat{y}, y)$ be a Lipschitz loss function bounded in $[0, B]$. Then, given $m$ i.i.d. samples $(G_1, y_1), (G_2, y_2), \ldots, (G_m, y_m)$ generated from a distribution $\mathcal{D}$, with probability at least $2/3$, it holds for all $f \in \mathcal{F}$ that

$\hat{\mathbb{E}}_{\mathcal{D}}[\ell_f] - \mathbb{E}_{\mathcal{D}}[\ell_f] \le O\Big(B\sqrt{\tfrac{P(\Delta + L)}{m}}\Big).$

We refer the reader to Theorem 16 in Appendix D for a formal statement and the proof. Notice that the above theorem implies that our proposed GNN+ architecture for the above graph problems can indeed be trained using far fewer samples than traditional GNN_mp networks, since the GNN+ network requires much fewer parameters and smaller depth. Furthermore, since a GNN_mp network is a special case of a GNN+ architecture, our analysis also leads to an improved bound on the generalization guarantees for GNN_mp networks. In particular, the above theorem improves upon the recent work of Garg et al. (2020) that provides generalization guarantees for training GNNs that scale with the branching factor of the graph; using our improved analysis we are able to remove this dependence on the branching factor. See Appendix D for details.
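For reference, the spectral clustering primitive of Theorem 3, namely the sign pattern of the eigenvector for the second smallest Laplacian eigenvalue, can be computed classically in a few lines. The sketch below is our own, assumes SciPy and a dense adjacency matrix, and is not the GNN+ construction itself.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_cut(A):
    # Return the two sides of the cut given by the sign of the Fiedler vector.
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A
    vals, vecs = eigh(L)                    # eigenvalues in ascending order
    fiedler = vecs[:, 1]                    # eigenvector for lambda_2(L)
    side = fiedler >= 0
    return np.where(side)[0], np.where(~side)[0]
```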
2 RELATED WORK

GNNs operate primarily in the message passing framework, where nodes aggregate and combine messages from their neighbors to update their embeddings. Several variants of this basic paradigm have been proposed, each differing in how the aggregation and combination are performed. Popular variants include GraphSAGE (Hamilton et al., 2017), Graph Convolutional Networks (Kipf & Welling, 2016), GIN networks (Xu et al., 2019a), and graph pooling (Ying et al., 2018).

Various recent works have also studied the representation power of GNNs. The work of Xu et al. (2019a) demonstrates that GNNs as considered in equation 1 are as powerful as the Weisfeiler-Lehman test for graph isomorphism (Weisfeiler & Lehman, 1968). The recent work of Xu et al. (2019b) compares the message passing framework of GNNs in representing computations involving dynamic programming. GNN networks that can capture higher order variants of the WL test have also been proposed recently (Maron et al., 2019).

Several works have also explored the limitations of GNNs for computing graph primitives. The work of Loukas (2020) established a correspondence between the message passing GNN framework and the well studied CONGEST model of distributed computing (Peleg, 2000). From this correspondence it follows that in order to represent several important graph problems such as shortest paths, minimum cuts and minimum spanning tree, either the depth of the GNN or the embedding size of the nodes has to scale with the graph size at a polynomial rate. Notice that these lower bounds apply to any form of message passing framework, and as a result recent work incorporating non-symmetric node messages (Sato et al., 2019) in GNNs runs into the same barriers.

In order to address the above limitations, recent works have proposed combining the GNN architecture with pooling mechanisms for aggregating more global information (Ying et al., 2018; Defferrard et al., 2016; Simonovsky & Komodakis, 2017; Fey et al., 2018; Bianchi et al., 2019; Du et al., 2019). For example, the work of Ying et al. (2018) proposes a hierarchical approach where a GNN network is followed by a clustering step to compute higher level "nodes" to be used in the subsequent GNN operation. While these approaches show empirical promise, ours is the first work to design a principled architecture with theoretical guarantees that merges local distributed computations with global postprocessing stages.

Finally, the question of generalization for GNNs has also been studied in recent works. The most relevant to us is the recent work of Garg et al. (2020) that analyzes the Rademacher complexity of GNNs with the aggregate mechanism being a simple addition and the combine mechanism being a one layer neural network. Via the Rademacher complexity, the authors show that generalization for GNNs depends on the depth, the embedding size and the branching factor of the graph. Our improved analysis in Section D extends the result of Garg et al. (2020): not only does our generalization bound apply to the more general GNN+ networks, but for the case of GNNs considered in (Garg et al., 2020) our analysis shows that the dependence on the branching factor can be eliminated from the generalization bounds. Generalization bounds have also been proved recently for GNN based networks that use the Neural Tangent Kernel (NTK) during the aggregation and combination operations (Du et al., 2019).

3 SHORTEST PATHS

In this section we provide a proof sketch of Theorem 1, showing how to construct an efficient GNN+ architecture for the shortest paths problem. In particular we study all pairs shortest paths.

All Pairs Shortest Paths.
The input is a graph $G = (V, E)$ with $n$ nodes. The desired output is an $n^2$ dimensional vector containing (approximate) shortest path values between each pair of vertices. Given a graph $G$, let $d_G(u, v)$ be the shortest path distance between nodes $u$ and $v$. We say that an output $\{\tilde{d}_G(u,v) : u, v \in V\}$ is an $\alpha$-approximate solution if for all $u \neq v$ it holds that $d_G(u,v) \le \tilde{d}_G(u,v) \le \alpha\, d_G(u,v)$.

We first show that GNN_mp networks are highly inefficient for learning this problem.

Theorem 6. Consider a GNN_mp network $N$ of depth $L$ over $n$ nodes where each node has a representation size of $B$. If $N$ encodes $\alpha$-approximate all pairs shortest paths for graphs of diameter bounded by $D$, with $\alpha < 3/2$, then it must hold that $BL \ge \Omega(n)$. Furthermore, for any GNN_mp that encodes $\alpha(n)$-approximate all pairs shortest paths it must hold that $BL \ge n/(\alpha(n)\log n)$. The lower bound holds for undirected unweighted graphs as well.

Proof. The recent work of Loukas (2020) established that computation in GNN_mp networks is equivalent to the CONGEST model of computation popularly studied in the design of distributed algorithms (Peleg, 2000). In particular, a lower bound on the product of depth ($L$) and representation size ($B$) can be obtained by establishing the corresponding lower bound on the product of the number of rounds and the size of messages in the CONGEST model. Furthermore, the result of Holzer & Wattenhofer (2012) shows that in the CONGEST model, approximating all pairs shortest paths, even on unweighted undirected graphs, requires the product of the number of rounds and the message size to be $\Omega(n)$. This was improved in the work of Nanongkai (2014), which shows that for any $\alpha(n)$-approximation, the product of the number of rounds and the message size must be $n/(\alpha(n)\log n)$. Hence the corresponding lower bound on $BL$ follows.

Circumventing Lower Bounds via GNN+. Next we detail our proposed GNN+ architecture that can encode approximate shortest paths with significantly smaller depth and parameter requirements.

Unweighted Graphs. To illustrate the main ideas we study the case of undirected unweighted graphs; see Appendix A for the more general case of weighted graphs. The starting point of our construction is the following fundamental theorem of Bourgain (1985) regarding metric embeddings.

Theorem 7 (Bourgain, 1985). Any $n$-point metric $(X, d)$ can be embedded into the Euclidean metric of dimensionality $O(\log n)$ with distortion $O(\log n)$.

The above theorem suggests that in principle, if we only want to estimate shortest paths up to an approximation of $O(\log n)$, then we only need node embeddings of size $O(\log n)$. If there were a GNN_mp network that could produce such embeddings, then one could simply compute the Euclidean distance between each pair of points to get the approximate shortest path. Furthermore, computing the Euclidean distance given the node embeddings can be done easily via a low depth fully connected network. Unfortunately, producing the necessary low dimensional embeddings is exactly the task for which GNN_mp networks require large depth, as proved in Theorem 6 above. While there do exist semi-definite programming based algorithms (Linial et al., 1995) for computing the embeddings required for Bourgain's theorem, they are not suitable for implementation via efficient neural architectures. Instead we rely on sketching based algorithms for computing shortest path distances. In particular, for the unweighted case we adapt the sketch based approximate shortest path algorithms of Das Sarma et al. (2010) to design an efficient network architecture.
The sketch proposed in the work of Das Sarma et al. (2010) computes, for each node $u$, the distance of $u$ from a random subset $S$ of the nodes. This can be done via a simple breadth first search (BFS). Repeating this process $k$ times provides a $k$-dimensional embedding for each vertex, and for an appropriate choice of $k$ these embeddings can be used to compute approximate shortest paths. Notice that this sketch based procedure is highly amenable to implementation in a message passing framework: overall, the algorithm performs multiple parallel BFS subroutines to compute the embeddings, and it is well known that BFS on a graph of diameter $D$ can be implemented by a GNN_mp of depth $O(D)$.

Based on the above intuition, our proposed architecture is shown in Figure 2. It consists of $k$ parallel breadth first search (BFS) modules for $k = \Theta(n^{1/c}\log n)$, for a constant $c > 1$. Module $i$ computes the shortest path from each vertex in $G$ to the nearest vertex in the set $S_i$. The sets $S_1, S_2, \ldots, S_k$ are randomly chosen subsets of the vertex set $V$ of various sizes: in particular, there are $\Theta(n^{1/c})$ subsets of size 1, $\Theta(n^{1/c})$ subsets of size 2, $\Theta(n^{1/c})$ subsets of size $2^2$, and so on up to $\Theta(n^{1/c})$ subsets of size $2^{\lfloor \log n \rfloor}$. The BFS module $i$ produces $n$ distance values $v^{(i)}_1, \ldots, v^{(i)}_n$. These modules are followed by $n^2$ fully connected networks, where each module is responsible for computing the approximate shortest path distance between a pair of vertices. In particular, we have $\tilde{d}_G(s,t) = \max_i |v^{(i)}_s - v^{(i)}_t|$.

Figure 2: The network architecture for approximate all pairs shortest paths in unweighted graphs.

Notice from the discussion in Section 1 that the architecture in Figure 2 is a GNN+ network with a single GNN+ block. In the next section we will show how we can generalize to a suite of graph problems by stacking multiple GNN+ blocks. For our proposed network we have the following guarantee.

Theorem 8. For any integer $c > 1$, and for a fixed graph topology over $n$ nodes with maximum degree $d$ and diameter $D$, there exists a neural network as shown in Figure 2 of size $O(n^{2+1/c})$, with $\tilde{O}(n^{2/c})$ parameters and depth $O(D\log d + \log n)$, that encodes $(2c-1)$-approximate all pairs shortest paths in $G$.

Before proving the theorem above, we establish two supporting lemmas concerning the implementation of the BFS modules and the MLP module in the network architecture described in Figure 2.

Lemma 1. The BFS module in Figure 2 can be implemented by a GNN of depth $O(D)$ and $O(1)$ total parameters, with each node having a representation size of $O(1)$.

Lemma 2. For any $k$, the MLP module in Figure 2 can be implemented by a network of depth $O(\log k)$ and $O(k^2)$ total parameters.

Proof of Theorem 8. The correctness of the network architecture follows from the work of Das Sarma et al. (2010). Next we establish bounds on the total depth, size and number of parameters. We have $k = \Theta(n^{1/c}\log n)$ copies of the BFS module. Each BFS module is of size $O(nd\log d)$, since there are $n$ nodes and each node implements a min function of size $O(d\log d)$. Hence, in total the BFS modules have size $O(n^{1+1/c}\, d\log d\log n)$. Next we have $n^2$ MLP modules, each of size $O(k\log k)$, for a total size of $O(n^{2+1/c}\log n)$. Hence the total size of the neural network is bounded by $O(n^{2+1/c}\log n)$.

Next we bound the depth and the total number of parameters. The BFS module has $O(D)$ rounds, each requiring a depth of $O(\log d)$, for a total depth of $O(D\log d)$. The MLP module has depth bounded by $O(\log k) = O(\log n)$. Hence the total depth is $O(D\log d + \log n)$. Finally, the BFS module requires $O(1)$ parameters and the MLP module requires $O(k^2)$ parameters. Hence, the total number of parameters in our architecture is bounded by $O(k^2) = \tilde{O}(n^{2/c})$.
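To make the construction concrete, the distance sketch that the network in Figure 2 encodes can be written as an ordinary algorithm. The following is a plain-Python sketch under our own assumptions (adjacency lists, a connected graph); function names are illustrative, and in the neural implementation each BFS is replaced by a depth-$O(D)$ GNN_mp as in Lemma 1.

```python
import math, random
from collections import deque

def multi_source_bfs(adj, sources):
    # Distance from every vertex to the nearest vertex of `sources` (one BFS pass).
    dist = [math.inf] * len(adj)
    q = deque(sources)
    for s in sources:
        dist[s] = 0
    while q:
        x = q.popleft()
        for y in adj[x]:
            if dist[y] == math.inf:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def sketch_embeddings(adj, c=2):
    # k = Theta(n^{1/c} log n) BFS runs: Theta(n^{1/c}) seed sets per size 2^j.
    n = len(adj)
    reps = max(1, round(n ** (1.0 / c)))
    emb = []
    for j in range(int(math.log2(n)) + 1):
        for _ in range(reps):
            S = random.sample(range(n), min(2 ** j, n))
            emb.append(multi_source_bfs(adj, S))
    return emb

def approx_dist(emb, s, t):
    # d~(s, t) = max_i |v_s^(i) - v_t^(i)|; assumes a connected graph.
    return max(abs(e[s] - e[t]) for e in emb)
```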
4 MINIMUM CUTS

To illustrate another application, in this section we design an efficient GNN+ based architecture for computing the minimum cut in an undirected graph. We first argue in Appendix C that even computing an approximate mincut using traditional GNN_mp networks requires $\Omega(\sqrt{n})$ rounds. Our efficient GNN+ based architecture is based on the parallel algorithm for computing a mincut of Karger & Stein (1996) and is shown in Figure 3. Importantly, the architecture comprises multiple layers of GNN+ blocks, in contrast to the single GNN+ block in the case of shortest paths.

Figure 3: The network architecture for minimum cut.

The algorithm of Karger & Stein (1996) relies on the following lemma.

Lemma 3 (Karger & Stein, 1996). Let $G = (V, E)$ be an undirected unweighted graph with $m$ edges and $n$ vertices. Then with probability at least $1/n^2$, a random ordering $L$ of the edges contains a prefix $L'$ such that the graph $G' = (V, L')$ contains exactly two connected components, corresponding to the global minimum cut in the graph.

Using the above, Karger & Stein (1996) proposed a Monte-Carlo randomized algorithm to compute the global minimum cut. For a given ordering $L$, the algorithm estimates the length of the prefix $L'$ corresponding to the cut by using a binary search procedure. This provides the set of active edges, i.e., edges in $L'$. Then one can run a connected components algorithm using the edges in $L'$. If the prefix is too small, it results in more than two connected components; if it is too large it produces one connected component. If the number of connected components is two then the algorithm stops; otherwise it recurses on the appropriate side of $L'$.

We can implement the above algorithm using a GNN+ architecture of depth $O(\log m)$, where the computation in each pair of (GNN, Update Prefix) blocks corresponds to executing one call of the above binary search procedure. During each call one needs to perform a connected components subroutine. This can be done via BFS and is implemented in the GNN block as shown in Figure 3. The GNN is followed by the UpdatePrefix module, an MLP implementing the logic of selecting the appropriate side of the permutation to recurse on.

More formally, at each of the $O(\log m) = O(\log n)$ stages, each vertex in the GNN_mp maintains a list of which of its connecting edges are active. This requires $O(d)$ representation size. The goal next is to infer whether the number of connected components induced by the active edges is one, two, or more than two; this in turn decides the part of the ordering the next stage will focus on. The computation of connected components can be carried out using at most two breadth first searches, and hence via $O(D)$ rounds of a GNN_mp network. Following this intuition we arrive at the proposed architecture in Figure 3. Formally, we have the following guarantee.

Theorem 9. For a fixed graph topology over $n$ nodes with maximum degree $d$ and diameter $D$, the network in Figure 3 produces the minimum cut. Furthermore, the network is of depth $\ell = O(D\log^2 n)$, size $O(n\ell)$, and has $O(d + \log n)$ parameters.

Proof. Each vertex maintains an $O(d)$ sized representation indicating which of its edges are currently active, plus an additional constant number of values to indicate its component id during different runs of BFS. Given a list of active edges, the GNN module simply performs a procedure to compute whether the number of connected components is one, two, or more than two. This can be done with at most two BFS runs over the active edges. As we saw before in Section 3, this requires $O(D)$ depth.

At the end of the GNN module each vertex gets an integer value specifying its component id. The UpdatePrefix module then takes this information and is required to perform two computations: (a) check whether the number of connected components is one, two, or more than two; this requires checking the number of distinct elements in a list of $n$ numbers and can be done with an MLP with $O(\log n)$ parameters and depth $O(\log n)$; (b) update the set of active edges for each vertex depending on the number of connected components; this requires taking in the $O(d)$ sized representation and producing a new $O(d)$ sized representation for each vertex, which can again be achieved by an MLP using $O(d)$ parameters and depth $O(\log d)$. Once a given round of GNN and UpdatePrefix ends, the computation proceeds to the next layer. Importantly, the set of model parameters is shared across the different layers of the architecture, as each time the computation required is the same. Hence overall we get $O(D\log n)$ depth and $O(d + \log^2 n)$ parameters.
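For reference, the classical algorithm that this architecture encodes can be sketched as follows; the binary search over prefix lengths mirrors the $O(\log m)$ stages of Figure 3. This is our own plain-Python sketch assuming a connected unweighted graph given as an edge list, with illustrative names.

```python
import random

def component_labels(n, active_edges):
    # Union-find: one representative label per connected component.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in active_edges:
        parent[find(a)] = find(b)
    return [find(v) for v in range(n)]

def mincut_prefix_search(n, edges, trials):
    # Each trial: shuffle the edges, binary search for the prefix that leaves
    # exactly two components, and count the edges crossing that 2-way split.
    best = None
    for _ in range(trials):                    # Lemma 3: ~1/n^2 success per trial
        order = edges[:]
        random.shuffle(order)
        lo, hi = 0, len(order)
        while lo < hi:                         # the O(log m) stages of Figure 3
            mid = (lo + hi) // 2
            comp = component_labels(n, order[:mid])
            ncc = len(set(comp))
            if ncc > 2:
                lo = mid + 1                   # prefix too small
            elif ncc == 1:
                hi = mid                       # prefix too large
            else:                              # exactly two components: a candidate cut
                cut = sum(comp[a] != comp[b] for a, b in edges)
                best = cut if best is None else min(best, cut)
                break
    return best
```

Taking trials on the order of $n^2\log n$ drives the failure probability down to $1/\mathrm{poly}(n)$, per Lemma 3.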
5 EXPERIMENTS

We show the efficacy of GNN+ on the aforementioned graph problems: Shortest Paths, Effective Resistance, Affinity, MINCUT and MST, and compare to a state-of-the-art GNN_mp model (Xu et al., 2019a).

Dataset. We generated synthetic random graphs with between 500 and 1000 nodes. For the affinity measure, we used graphs with 250 nodes because of the need for very dense graphs to have a reasonable number of alternate paths between any two end points. In general, we generated the data sets as follows: we fix the number of nodes $n$ in the graph to take values in $[250, 1000]$. For each value of $n$ we generate graphs from the Erdos-Renyi model $G(n, p)$ with edge sampling probability $p = \lambda/n$. We vary $\lambda$ depending on the problem; specifically, we set $\lambda$ to be a constant in $[1, 100]$ to capture varying degrees of sparsity. For each $(n, p)$ we generate 30,000 training examples consisting of tuples of the form $(G, s, t, d(s,t))$, where $G$ is a random graph drawn from $G(n, p)$, $s, t$ are two vertices drawn uniformly at random, and $d(s,t)$ is one of shortest path value, effective resistance, or affinity between the two vertices. In the case of min cut and minimum spanning tree, we generate tuples $(G, v_G)$ where $v_G$ corresponds to the value of the minimum spanning tree or the global minimum cut.

Models and Configurations. For our baseline GNN_mp implementation, we used the GIN model proposed in Xu et al. (2019a). This has been empirically shown (Xu et al., 2019a; Loukas, 2020; Errica et al., 2020) to be a state-of-the-art GNN_mp model on several datasets. GIN updates the feature representation $x^{(k)}_v$ of each node $v$ at iteration $k$ as

$x^{(k)}_v = \text{MLP}\big((1 + \epsilon^{(k)})\, x^{(k-1)}_v + \textstyle\sum_{u \in N(v)} x^{(k-1)}_u\big),$

where MLP refers to a Multi-Layer Perceptron, $N(v)$ is the set of neighbors of $v$, and $\epsilon$ is a learnable parameter (a minimal rendering of this update appears after this subsection). For problems that involved weighted graphs (e.g. MST), we incorporated edge weights into the GIN update equation by replacing the sum of neighbor representations with a weighted sum. Our GNN+ implementation also used the same GIN implementation as its internal GNN_mp block. All graphs in our experiments were undirected. For both the baseline and GNN+, we used node degree as the input node features for MINCUT and MST. For Shortest Paths, Effective Resistance and Affinity, we set input node features to be Booleans indicating whether the node is a source/destination node or not.
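For concreteness, the GIN update above can be rendered in a few lines of PyTorch. This is a minimal dense-adjacency sketch of the published update rule, not the authors' experimental code; the edge-weighted variant used for MST corresponds to passing a weighted adjacency matrix.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))   # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, adj):
        # x: (n, dim); adj: (n, n), binary or edge-weighted (weighted sum for MST)
        return self.mlp((1 + self.eps) * x + adj @ x)
```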
Following Xu et al. (2019a), we performed 10-fold cross-validation for each of our experiments (corresponding to the two models and five problems), and report the average validation mean squared error (MSE) across the 10 folds. We run each 10-fold cross-validation experiment 10 times to compute confidence intervals. We apply batch normalization at each layer, use an Adam optimizer, decay the learning rate by 0.5 every 50 epochs, and train for up to 600 epochs. For both the baseline GNN_mp and the GNN+ model, we tune the following parameters: initial learning rate in {0.001, 0.003, 0.005, 0.007, 0.01, 0.03, 0.05}, number of hidden units in {8, 16, 32, 64}, batch size in {32, 64}, and dropout in {0, 0.5}. For GNN_mp we also tuned the depth (number of layers) in {2, 4, 8, 12}. For the GNN+ model, we tuned the number of parallel GNNs in each GNN+ block in {1, 2, 3}, with GNN depth in {2, 4}. We also tuned the number of GNN+ layers in {1, 2, 3}. We fixed the depth of each MLP block in GNN_mp and GNN+ to 2.

Results. To validate our theory regarding the better generalization bounds of GNN+ models compared to GNN_mp, we compare the test mean squared errors for the two models across the five datasets. For all five problems, Table 1 lists the test MSEs and corresponding standard deviations for the two models.

Problem                Label Variance   Avg. MSE (GNN_mp)     Avg. MSE (GNN+)
Shortest Path          7.482            0.975 ± 0.031         0.849 ± 0.022
Effective Resistance   7.949            0.397 ± 0.025         0.187 ± 0.008
Affinity               3.030            0.0025 ± 1.75e-04     0.0018 ± 1.89e-05
MST                    4637.4           1011.39 ± 106.94      733.90 ± 130.97
MINCUT                 11.964           0.963 ± 0.110         0.694 ± 0.07

Table 1: Performance of the GNN_mp and GNN+ architectures.

As a sanity check, we also report the variance of the labels in our datasets, which corresponds to the MSE obtained by a naive model that predicts the mean label. We observe significant gains in accuracy, anywhere from a 15% relative MSE improvement over the GNN_mp baseline (for Shortest Paths) to as much as a 108% relative MSE improvement (for Effective Resistance). Note that the naive mean predictor's MSE is at least an order of magnitude larger than all the MSE values for GNN_mp and GNN+ (except for the MST dataset, where it is around five times larger; we suspect that the weighted graphs involved in this dataset make it a harder problem).

We posit that these accuracy gains stem directly from the sample efficiency of the GNN+ models, as captured in Theorems 1, 2 and 4: the most compact GNN+ networks that can represent these problems are smaller than the corresponding most compact GNN_mp networks, and hence, by Theorem 5, such networks have smaller generalization errors. In the appendix, we also plot the test accuracy as a function of the number of epochs; these plots suggest that our models also converge faster than the baseline GNN_mp models, though we do not have any theoretical justification supporting this observation.

Experiments on Real World Data. We further demonstrate the applicability of our proposed GNN+ architecture for solving classification tasks involving real world graphs. We experiment with the following real world datasets (Yanardag & Vishwanathan, 2015) that have been used in recent works for evaluating various GNN architectures (Xu et al., 2019a): 1) IMDB-BINARY and 2) IMDB-MULTI: movie collaboration datasets with nodes as actors and the class label being the genre. 3) COLLAB: a scientific collaboration dataset with three classes. 4) PROTEINS: a bioinformatics dataset with 3 class labels.
5) PTC, 6) NCI1 and 7) MUTAG: various datasets of chemical compounds with two class labels each.

We train our proposed GNN+ architecture on these graphs using the cross-entropy loss and, as before, compare with the GIN architecture of Xu et al. (2019a). We use the same input node features as in Xu et al. (2019a) and the same experimental methodology as for the synthetic graphs above. In particular, when tuning hyperparameters we allow the GNN_mp architecture to explore depth up to 9, whereas the GNN+ architecture is tuned by restricting the depth to at most 3. The results are summarized in Table 2 below. As can be seen, in each instance GNN+ either outperforms or matches the performance of the GNN_mp architecture in terms of final test accuracy.

Dataset       Test Acc. (GNN_mp)   Test Acc. (GNN+)
IMDB-BINARY   0.742 ± 0.09         0.769 ± 0.02
IMDB-MULTI    0.523 ± 0.06         0.527 ± 0.04
COLLAB        0.802 ± 0.02         0.816 ± 0.004
PROTEINS      0.7602 ± 0.008       0.7654 ± 0.015
NCI1          0.849 ± 0.004        0.851 ± 0.003
PTC           0.686 ± 0.02         0.708 ± 0.018
MUTAG         0.876 ± 0.016        0.898 ± 0.012

Table 2: Performance of the GNN_mp and GNN+ architectures on real world classification datasets.
review_id: 1ejc1mWKI2r
review_title: Interesting paper
review_rating: 8: Top 50% of accepted papers, clear accept
review_text: The main question this paper tackles is: can one develop sample efficient architectures for graph problems while retaining the simplicity of the message passing framework? While combining pooling operations with message passing GNNs has been shown to have positive empirical results, we do not know of a general neural architecture that is versatile enough to provably encode the solution space of a variety of graph problems such as shortest paths and minimum spanning trees. This paper introduces a theoretically principled architecture -- GNN+ -- which attempts to make GNNs more efficient by using ideas from the subfields of sketching approximations and parallel computing.
############
I vote for accepting the paper due to its novelty and the pros listed below.
############
Pros
+ An interesting paper with a novel contribution combining ideas from parallel computing and sketching approximations.
+ Major algorithmic contribution of interest to practitioners
+ Theoretical contributions in the form of several theorems in the paper
Cons
- No code provided with the submission
review_confidence: 3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Beyond GNNs: A Sample Efficient Architecture for Graph Problems ### Paper Abstract Despite their popularity in learning problems over graph structured data, existing Graph Neural Networks (GNNs) have inherent limitations for fundamental graph problems such as shortest paths, $k$-connectivity, minimum spanning tree and minimum cuts. In all these instances, it is known that one needs GNNs of high depth, scaling at a polynomial rate with the number of nodes $n$, to provably encode the solution space. This in turn affects their statistical efficiency thus requiring a significant amount of training data in order to obtain networks with good performance. In this work we propose a new hybrid architecture to overcome this limitation. Our proposed architecture that we call as GNNplus networks involve a combination of multiple parallel low depth GNNs along with simple pooling layers involving low depth fully connected networks. We provably demonstrate that for many graph problems, the solution space can be encoded by GNNplus networks using depth that scales only poly-logarithmically in the number of nodes. This significantly improves the amount of training data needed that we establish via improved generalization bounds. Finally, we empirically demonstrate the effectiveness of our proposed architecture for a variety of graph problems. ### Paper Keywords ["Graph Neural Networks", "Deep Learning Theory", "Graph Connectivity", "Minimum Spanning Trees"] ### Paper Content ABSTRACTDespite their popularity in learning problems over graph structured data, exist-ingGraph Neural Networks (GNNs) have inherent limitations for fundamentalgraph problems such as shortest paths, k-connectivity, minimum spanning tree andminimum cuts. In all these instances, it is known that one needs GNNs of highdepth, scaling at a polynomial rate with the number of nodes n, to provably encodethe solution space. This in turn affects their statistical efficiency thus requiringa significant amount of training data in order to obtain networks with good per-formance. In this work we propose a new hybrid architecture to overcome thislimitation. Our proposed architecture that we call as GNN+networks involve acombination of multiple parallel low depth GNNs along with simple pooling layersinvolving low depth fully connected networks. We provably demonstrate that formany graph problems, the solution space can be encoded by GNN+networks usingdepth that scales only poly-logarithmically in the number of nodes. This signifi-cantly improves the amount of training data needed that we establish via improvedgeneralization bounds. Finally, we empirically demonstrate the effectiveness ofour proposed architecture for a variety of graph problems.1 I NTRODUCTIONIn recent years Graph Neural Networks (GNNs) have become the predominant paradigm for learningproblems over graph structured data (Hamilton et al., 2017; Kipf & Welling, 2016; Veli ˇckovi ́c et al.,2017). Computation in GNNs is performed by each node sending and receiving messages along theedges of the graph, and aggregating messages from its neighbors to update its own embedding vector.After a few rounds of message passing, the computed node embeddings are aggregated to computethe final output (Gilmer et al., 2017). The analogy to message passing leads to a simple and elegantarchitecture for learning functions on graphs. 
On the other hand, from a theoretical and practicalperspective, we also need these architectures to be sample efficient , i.e., learnable from a smallnumber of training examples, where each training example corresponds to a graph. Recent workshave shown that generalization in GNNs depends upon the depth of the architecture, i.e., the numberof rounds of message passing, as well as the embedding size for each node in the graph (Garg et al.,2020). However, this requirement is in fundamental conflict with the message passing framework.In particular, using GNNs to compute several fundamental graph problems such as shortest paths,minimum spanning tree, min cut etc., necessarily requires the product of the depth of the GNN and theembedding size to scale aspnwherenis the size of the graph (Loukas, 2020). This in turn places asignificant statistical burden when learning these fundamental problems on large scale graphs. Theabove raises the the following question: Can one develop sample efficient architectures for graphproblems while retaining the simplicity of the message passing framework?Several recent works have tried to address the above question by proposing extensions to the basicGNN framework by augmenting various pooling operations in conjunction with message passingrounds to capture more global structure (Ying et al., 2018; Simonovsky & Komodakis, 2017; Feyet al., 2018). While these works demonstrate an empirical advantage over GNNs, we currently donot know of a general neural architecture that is versatile enough to provably encode the solutionspace of a variety of graph problems such as shortest paths and minimum spanning trees, while beingsignificantly superior to GNNs in terms of statistical efficiency. In this work we propose a theoreticallyprincipled architecture, called GNN+networks for learning graph problems. While the basic GNNframework is inspired from classical message passing style models studied in distributed computing,1Under review as a conference paper at ICLR 2021we borrow from two fundamental paradigms in graph algorithm design namely, sketching and parallelcomputation, to design GNN+networks. As a result of combining these two powerful paradigms,we get a new neural architecture that simultaneously achieve low depth and low embedding size formany fundamental graph problems. As a result our proposed GNN+architecture have a significantlysmaller number of parameters that provably leads to better statistical efficiency than GNNs. Beforewe present our improved architecture, we briefly describe the standard GNN framework.Model for GNNs. In this work we will study GNNs that fall within the message passing frameworkand using notation from previous works we denote such networks as GNNmp(Loukas, 2020). AGNNmpnetwork operates in the AGGREGATE and COMBINE model (Gilmer et al., 2017) thatcaptures many popular variants such as GraphSAGE, Graph Convolutional Networks (GCNs) andGIN networks (Hamilton et al., 2017; Kipf & Welling, 2016; Xu et al., 2019a). Given a graphG= (V;E), letx(k)idenote the feature representation of node iat layerk. Then we havea(k1)i =AGGREGATE (fx(k1)j :j2N(i)g) (1)x(k)i=COMBINE (x(k1)i;a(k1)i ): (2)HereN(i)is the set of neighbors for node i. Typically the aggregation and combination is per-formed via simple one or two layer full connected networks (FNNs), also known as multi layerperceptrons (MLPs). In the rest of the paper we will use the two terms interchangeably.GNN+Networks. 
Our proposed GNN+networks consist of one or more layers of a GNN+block shown in Figure 1. The GNN+block comprises of rparallel GNNmpnetworks follows by sparallel fully connected network modules for pooling where randsare the hyperparameters of thearchitecture. More importantly we restrict the rGNNmpmodules to share the same set of weights.Hence the parallel GNNmpmodules only differ in the way the node embeddings are initialized.Furthermore, we restrict each GNNmpto be of low depth. In particular, for degree- dgraphs ofdiameterD, overnnodes, we will restrict the GNNmpto be of depth O((d+D)polylog (n)).Similarly, we require the sfully connected networks to be of depth O((d+D)polylog (n))andshare the network weights. We connect the outputs of the GNNmpmodules to the fully connectedpooling networks in a sparse manner and restrict the input size of each fully connected network tobeO((d+D)polylog (n)). Stacking up Llayers of GNN+blocks results in a GNN+networkthat is highly parameter efficient and in total has O((d+D)Lpolylog (n))parameters. For sucha network we call the depth as the total number of message passing rounds and the number ofMLP layers used across all the Lstacks. Since we restrict our MLPs and GNNmpblocks inside aGNN+network to be of low depth, we will often abuse notation and refer to a GNN+architecturewithLstacks of GNN+blocks as a depth Larchitecture. Our proposed design lets us alternatebetween local computations involving multiple parallel GNN blocks and global post-processingstages, while still being sample efficient due to the enforced parameter sharing. We will showvia several applications that optimal or near-optimal solutions to many popular graph problemscan indeed be computed via a GNN+architecture. Below we briefly summarize our main results.GNN1MLP1GNN2GNNrMLPsFigure 1: The basic GNN+block.Overview of Results. To demonstrate the general-ity of our proposed GNN+architecture, we studyseveral fundamental graph problems and show howto construct efficient GNN+networks to computeoptimal or near optimal solutions to these problems.In particular, we will focus on degree- dgraphs, i.e.,graphs of maximum node degree d, withnnodes anddiameterDand will construct GNN+networks ofdepth polylog (n)andO(D+d)polylog (n)totalparameters.Shortest Paths. The first problem we consider is the fundamental graph problem of computing(approximate) all pairs shortest paths in undirected graphs. Given a graph G= (V;E), letdG(u;v)be the shortest path between nodes uandv. We say that an output f~dG(u;v) :u;v2Vgis an2Under review as a conference paper at ICLR 2021-approximate solution if for all u6=vit holds thatdG(u;v)~dG(u;v)dG(u;v):We construct efficient GNN+networks for all pairs shortest paths with the following guarantee.Theorem 1 (Informal Theorem) .For any constant c>1, there is a depth O(Dlogd+ logn)GNN+network with O(n2c+d)polylog (n)parameters that computes (4c2)-approximate all pairsshortest paths in the undirected unweighted degree- dgraphs over nnodes. On the other hand,computing a c-approximate shortest paths using GNNmpnetworks requires a network of depth (n).From the above theorem we see that by setting c=O(logn)we can encode a c-approximate solutionusing anO(Dlogd+ logn)GNN+network with only O(dpolylog (n))parameters. This is in starkcontrast with the depth requirement of (n)for the traditional GNNmpnetworks.Connectivity Measures. Next we consider computing various graph connectivity measures. 
Wefirst study the popular measure based on graph effective resistances (Chandra et al., 1996).Definition 1 (Effective Resistance) .LetGbe a weighted undirected graph Gwith adjacency matrixAand the associated Laplacian L=DA. Given an edge u;v, the effective resistance betweenu;vis defined asRu;v=>u;vLyu;v:Hereu;vis anndimensional vector with +1at positionu,1at positionvand zeros everywhere.Lyrefers to the matrix pseudo-inverse.We also study the following connectivity measure that was proposed by Panigrahy et al. (2012) inthe context of web graphs. Given an undirected graph G, letGpbe the random graph obtained bysampling each edge with probability p.Definition 2 (Affinity) .For any two vertices u;v and forp2[0;1], defineAp(u;v)to be theprobability that u;vare connected in Gp. Then the affinity between uandvis defined asA(u;v) =Ep[Ap(u;v)]where the expectation is taken over pdrawn from the uniform distribution in [0;1].For the above measures we show the followingTheorem 2 (Informal Theorem) .There exists a GNN+architecture with O(Dlog(nd))parameters,and depthO(Dlog(nd))on graphs of diameter Dwithnnodes and maximum degree d, thatapproximate the above connectivity measures up to constant factors. On the other hand using GNNmpnetworks to compute the above measures, even approximately, necessarily requires a network ofdepth (pn).Clustering, Minimum Cuts and Minimum Spanning Trees. Finally, we showcase the power ofaGNN+architecture for computing other fundamental graph problems. Given an undirected graphG, the spectral clustering of Gcorresponds to the cut obtained by taking the sign of the eigenvectorvcorresponding to the second smallest eigenvalue 2(L), whereLis the graph Laplacian. Forcomputing the spectral clustering via GNN+networks we show the followingTheorem 3 (Informal Theorem) .There is a GNN+network of depth `=O(12(L)2logn), withO(d)parameters that computes an -approximate spectral clustering on graphs of degree d. On theother hand, using GNNmpnetworks to even approximately compute the spectral clustering requiresdepth (pn).Next we consider the classical problems of computing a global minimum cut and minimum spanningtrees in undirected graphs.Theorem 4 (Informal Theorem) .There exist GNN+networks of of depth O((D+ logn) logn), andO(d)parameters for computing a global minimum cut (MINCUT ) and minimum spanning tree (MST)in degreedgraphs of diameter D. Furthermore, using GNNmpnetworks to compute these primitives(even approximately) necessarily requires depth (pn).3Under review as a conference paper at ICLR 2021Generalization Bounds. Our final result concerns the generalization properties of a depth LGNN+architecture. For ease of exposition, we state here the results for the case when the GNN+architecture produces a one dimensional output. More general results are presented in Appendix D.Our generalization bounds depend on the depth Land the total number of parameters Pin theGNN+network. Following recent work on providing generalization bounds for fully connectedand convolutional neural networks (Bartlett et al., 2017; Long & Sedghi, 2020) that are based ondistance to initialization , we consider the class Fof depthLGNN+networks with Pparametersthat are at a distance from a reference parameter configuration (typically the parameters at randominitialization). Let y2Rdenote the output of the network and consider a Lipschitz loss function`(y;^y). Then, we provide following guarantee.Theorem 5 (Informal Theorem) .Let`(^y;y)be a Lipschitz loss function bounded in [0;B]. Then,givenmi.i.d. 
samples (G1;y1);(G2;y2);:::(Gm;ym)generated from a distribution D, with proba-bility at least 2=3, it holds that for all f2F,^ED[`f]ED[`f]OBrP(+L)m:We refer the reader to Theorem 16 in Appendix D for a formal statement and the proof. Notice thatthe above theorem implies that our proposed GNN+architecture for the above graph problems canindeed be trained using very few samples as opposed to the traditional GNNmpnetworks since theGNN+network requires much fewer parameters and depth. Furthermore, since a GNNmpnetworkis a special case of a GNN+architecture, our analysis also leads to an improved bound on thegeneralization guarantees for GNNmpnetworks as well. In particular, the above theorem improvesupon the recent work of Garg et al. (2020) that provides generalization guarantees for training GNNsthat scale with the branching factor of the graph. Using our improved analysis we are able to removethis dependence on the branching factor. See Appendix D for details.2 R ELATED WORKGNNs operate primarily in the message passing framework where nodes aggregate and combinemessages from their neighbors to update their embeddings. Several variants of this basic paradigmhave been proposed, with each differing in how the aggregation and combination is performed.Popular variants include GraphSAGE (Hamilton et al., 2017), Graph Convolutions Networks (Kipf &Welling, 2016), GIN networks (Xu et al., 2019a), and graph pooling (Ying et al., 2018).Various recent works have also studied the representation power of GNNs. The work of Xu et al.(2019a) demonstrates that the GNNs as considered in equation 1 are as powerful as the Weisfeiler-Lehman test for graph isomorphism (Weisfeiler & Lehman, 1968). The recent work of Xu et al.(2019b) compares the message passing framework of GNNs in representing computations involvingdynamic programming. GNN networks that can capture higher order variants of the WL test havealso been proposed recently (Maron et al., 2019).Several works have also explored the limitations of GNNs for computing graph primitives. Thework of Loukas (2020) established a correspondence between the message passing GNN frameworkand the well studied CONGEST model of distributed computing (Peleg, 2000). Based on the abovecorrespondence it follows that in order to represent several important graph problems such as shortestpaths, minimum cuts and minimum spanning tree, either the depth of the GNN or the embeddingsize of the nodes has to scale with the graph size at a polynomial rate. Notice that these lowerbounds apply to any form of message passing framework and as a result recent work in incorporatingnon-symmetric node messages (Sato et al., 2019) in GNNs also run into the same barriers.In order to address the above limitations recent works have proposed combining the GNN architecturewith pooling mechanisms for aggregating more global information (Ying et al., 2018; Defferrardet al., 2016; Simonovsky & Komodakis, 2017; Fey et al., 2018; Bianchi et al., 2019; Du et al.,2019). For example the work of Ying et al. (2018) proposes a hierarchical approach where a GNNnetwork is followed by a clustering step to compute higher level “nodes” to be used in the subsequentGNN operation. 
While these approaches show empirical promise, ours is the first work to design aprincipled architecture with theoretical guarantees that merges local distributed computations withglobal postprocessing stages.4Under review as a conference paper at ICLR 2021Finally, the question of generalization for GNNs has also been studied in recent works. The mostrelevant to us is the recent work of Garg et al. (2020) that analyzes the Rademacher complexity ofGNNs with the aggregate mechanism being a simple addition and the combine mechanism beinga one layer neural network. Via analyzing the Rademacher complexity the authors show that thegeneralization for GNNs depends on the depth, the embedding size and the branching factor ofthe graph. Our improved analysis in Section D extends the result of Garg et al. (2020). Not onlydoes our generalization bound apply to the more general GNN+networks, for the case of GNNsconsidered in (Garg et al., 2020) our analysis shows that the dependence on the branching factor canbe eliminated in the generalization bounds. Generalization bounds have also been proved recentlyfor GNN based networks that use the Neural Tangent Kernel (NTK) during the aggregation andcombination operations (Du et al., 2019).3 S HORTEST PATHSIn this section we provide a proof sketch of Theorem 1 showing how to construct an efficient GNN+architecture for the Shortest Paths problem. In particular we study all pairs shortest paths.All Pairs Shortest Paths. The input is a graph G= (V;E)withnnodes. The desired output is ann2dimensional vector containing (approximate) shortest path values between each pair of vertices.Given a graph G, letdG(u;v)be the shortest path between nodes uandv. We say that an outputf~dG(u;v) :u;v2Vgis an-approximate solution if for all u6=vit holds thatdG(u;v)~dG(u;v)dG(u;v):We first show that the GNNmpnetworks are highly inefficient for learning this problem.Theorem 6. Consider a GNNmpNof depthLovernnodes where each node has a representationsize ofB. IfNencodes-approximate all pairs shortest paths for graphs of diameter bounded by D,and for<3=2, then it must hold that BL(n). Furthermore, for any GNNmpthat encodes(n)-approximate all pairs shortest paths it must hold that BLn(n) logn. The lower boundholds for undirected unweighted graphs as well.Proof. The recent work of Loukas (2020) established that computation in GNNmpnetworks isequivalent to the CONGEST model of computation popularly studied in the design of distributedalgorithms (Peleg, 2000). In particular, a lower bound on the product of depth ( L) and representationsize (B) can be obtained by establishing the corresponding lower bound on the product of the numberof rounds and the size of messages in the CONGEST model of computing. Furthermore, the resultof Holzer & Wattenhofer (2012) shows that in the CONGEST model approximating all pairs shortestpaths, even on unweighted undirected graphs requires the product of the number of rounds and themessage size to be (n). This was improved in the work of Nanongkai (2014) to show that for any(n)-approximation, the product of the number of rounds and the message size to be n(n) logn.Hence the corresponding lower bound on BLfollows.Circumventing Lower Bounds via GNN+.Next we detail our proposed GNN+architecture thatcan encode approximate shortest paths with significantly smaller depth and parameter requirements.Unweighted Graphs. To illustrate the main ideas we study the case of undirected unweighted graphs.See Appendix A for the more general case of weighted graphs. 
The starting point of our construction is the following fundamental theorem of Bourgain (1985) regarding metric embeddings.

Theorem 7 (Bourgain, 1985). Any $n$-point metric $(X,d)$ can be embedded into the Euclidean metric of dimensionality $O(\log n)$ and distortion $O(\log n)$.

The above theorem suggests that in principle, if we only want to estimate shortest paths up to an approximation of $O(\log n)$, then we only need node embeddings of size $O(\log n)$. If there were a GNN_mp network that could produce such embeddings, then one could simply compute the Euclidean distance between each pair of points to get the approximate shortest path. Furthermore, computing the Euclidean distance given the node embeddings can be done easily via a low depth fully connected network. Unfortunately, producing the necessary low dimensional embeddings is exactly the task for which GNN_mp networks require large depth, as proved in Theorem 6 above. While there do exist semi-definite programming based algorithms (Linial et al., 1995) for computing the embeddings required for Bourgain's theorem, they are not suitable for implementation via efficient neural architectures. Instead we rely on sketching based algorithms for computing shortest path distances.

In particular, for the unweighted case we adapt the sketch based approximate shortest path algorithms of Das Sarma et al. (2010) for designing an efficient network architecture. The sketch proposed in the work of Das Sarma et al. (2010) computes, for each node $u$, the distance of $u$ from a random subset $S$ of the nodes. This can be done via a simple breadth first search (BFS). Repeating this process $k$ times provides a $k$-dimensional embedding for each vertex, and for an appropriate choice of $k$, these embeddings can be used to compute approximate shortest paths. Notice that this sketch based procedure is highly amenable to implementation in a message passing framework. Overall, the algorithm performs multiple parallel BFS subroutines to compute the embeddings. It is also well known that BFS on a graph of diameter $D$ can be implemented by a GNN_mp of depth $O(D)$.

Based on the above intuition, our proposed architecture is shown in Figure 2. It consists of $k$ parallel breadth first search (BFS) modules for $k = \Theta(n^{1/c}\log n)$ for a constant $c > 1$. Module $i$ computes the shortest path from each vertex in $G$ to any vertex in the set $S_i$. The sets $S_1, S_2, \ldots, S_k$ are randomly chosen subsets of the vertex set $V$ of various sizes. In particular there are $\Theta(n^{1/c})$ subsets of size 1, $\Theta(n^{1/c})$ subsets of size 2, $\Theta(n^{1/c})$ subsets of size $2^2$, and so on up to $\Theta(n^{1/c})$ subsets of size $2^{\lfloor \log n \rfloor}$. The BFS module $i$ produces $n$ distance values $v^{(i)}_1, \ldots, v^{(i)}_n$. These modules are followed by $n^2$ fully connected networks where each module is responsible for computing the approximate shortest path distance between a pair of vertices. In particular we have $\tilde{d}_G(s,t) = \max_i |v^{(i)}_s - v^{(i)}_t|$.

Figure 2: The network architecture for approximate all pairs shortest paths in unweighted graphs.

Notice from the discussion in Section 1 that the architecture in Figure 2 is a GNN+ network with a single GNN+ block. In the next section we will show how we can generalize to a suite of graph problems by stacking up multiple GNN+ blocks.
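To make the estimator concrete, here is a minimal offline sketch in plain Python of the computation the network above emulates: one BFS per random seed set yields the node embeddings, and the pairwise estimate is $\max_i |v^{(i)}_s - v^{(i)}_t|$ as above. In the GNN+ network each BFS is realized by a depth-$O(D)$ GNN block and the max/absolute-difference by the MLP modules; the function names and sampling details below are our own.

```python
import math
import random
from collections import deque

def bfs_dist_to_set(adj, seeds):
    # BFS distance from every node to the nearest node in `seeds`.
    dist = [math.inf] * len(adj)
    queue = deque(seeds)
    for s in seeds:
        dist[s] = 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def sketch_embeddings(adj, c=2):
    # Theta(n^(1/c)) seed sets of each size 1, 2, 4, ..., 2^floor(log n);
    # sketch i assigns node u the coordinate v_u^(i) = d(u, S_i).
    n = len(adj)
    reps = max(1, round(n ** (1.0 / c)))
    sketches = []
    for size_exp in range(int(math.log2(n)) + 1):
        for _ in range(reps):
            seeds = random.sample(range(n), min(2 ** size_exp, n))
            sketches.append(bfs_dist_to_set(adj, seeds))
    return sketches

def estimate_distance(sketches, s, t):
    # max_i |v_s^(i) - v_t^(i)|; since distance-to-a-set is 1-Lipschitz,
    # each term is at most d_G(s, t), so the estimate never overestimates.
    return max(abs(sk[s] - sk[t]) for sk in sketches)
```

For a connected graph given as adjacency lists, `sketch_embeddings` plays the role of the parallel BFS modules and `estimate_distance` that of the per-pair MLPs; the $(2c-1)$-approximation guarantee for this choice of seed sets is the content of Theorem 8 below.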
For our proposed network we have the following guarantee.

Theorem 8. For any integer $c > 1$, and for a fixed graph topology over $n$ nodes with maximum degree $d$ and diameter $D$, there exists a neural network as shown in Figure 2 of size $O(n^{2+1/c})$, with $\tilde{O}(n^{2/c})$ parameters and depth $O(D\log d + \log n)$, that encodes $(2c-1)$-approximate all pairs shortest paths in $G$.

Before proving the theorem above we establish two supporting lemmas concerning the implementation of the BFS modules and the MLP module in the network architecture described in Figure 2.

Lemma 1. The BFS module in Figure 2 can be implemented by a GNN of depth $O(D)$, with $O(1)$ total parameters and with each node having a representation size of $O(1)$.

Lemma 2. For any $k$, the MLP module in Figure 2 can be implemented by a network of depth $O(\log k)$ with $O(k^2)$ total parameters.

Proof of Theorem 8. The correctness of the network architecture follows from the work of Das Sarma et al. (2010). Next we establish bounds on the total depth, size and the number of parameters. We have $k = \Theta(n^{1/c}\log n)$ copies of the BFS module. Each BFS module is of size $O(nd\log d)$ since there are $n$ nodes and each node implements a min function of size $O(d\log d)$. Hence, in total the BFS modules have size $O(n^{1+1/c} d\log d\log n)$. Next we have $n^2$ MLP modules, each of size $O(k\log k)$, for a total size of $O(n^{2+1/c}\log n)$. Hence the total size of the neural network is bounded by $O(n^{2+1/c}\log n)$.

Next we bound the depth and the total number of parameters. The BFS module has $O(D)$ rounds, each requiring a depth of $O(\log d)$, for a total depth of $O(D\log d)$. The MLP module has a depth bounded by $O(\log k) = O(\log n)$. Hence the total depth is $O(D\log d + \log n)$. Finally, the BFS module requires $O(1)$ parameters and the MLP module requires $O(k^2)$ parameters. Hence, the total number of parameters in our architecture is bounded by $O(k^2) = \tilde{O}(n^{2/c})$.

4 MINIMUM CUTS

To illustrate another application, in this section we design an efficient GNN+ based architecture for computing the minimum cut in an undirected graph. We first argue in Appendix C that even computing an approximate mincut using traditional GNN_mp networks requires $\Omega(\sqrt{n})$ rounds. Our efficient GNN+ based architecture is based on the parallel algorithm for computing mincut (Karger & Stein, 1996) and is shown in Figure 3. More importantly, the architecture comprises multiple layers of GNN+ blocks, in contrast to a single GNN+ block in the case of shortest paths.

Figure 3: The network architecture for minimum cut.

The algorithm of Karger & Stein (1996) relies on the following lemma.

Lemma 3 (Karger & Stein, 1996). Let $G=(V,E)$ be an undirected unweighted graph with $m$ edges and $n$ vertices. Then with probability at least $1/n^2$, a random ordering $L$ of the edges contains a prefix $L'$ of $L$ such that the graph $G'=(V,L')$ contains exactly two connected components corresponding to the global minimum cut in the graph.

Using the above, Karger & Stein (1996) proposed a Monte-Carlo randomized algorithm to compute the global minimum cut. For a given ordering $L$, the algorithm estimates the length of the prefix $L'$ corresponding to the cut by using a binary search procedure. This provides the set of active edges, i.e., edges in $L'$. Then one can run a connected components algorithm using edges in $L'$. If the prefix is too small, it results in more than two connected components; if it is too large it produces one connected component. If the number of connected components is two then the algorithm stops. Otherwise it recurses on the appropriate side of $L'$.
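The prefix binary search is easy to state offline. Below is a minimal plain-Python sketch (names ours) in which a union-find component counter stands in for the BFS-based connected components subroutine that the GNN block implements. Since the component count is non-increasing in the prefix length and drops by at most one per added edge, the shortest prefix inducing at most two components induces exactly two; by Lemma 3 each random ordering exposes the minimum cut this way with probability at least $1/n^2$, so repeating on the order of $n^2\log n$ trials succeeds with high probability.

```python
import random

class DSU:
    # Union-find with path halving; tracks the number of components.
    def __init__(self, n):
        self.parent = list(range(n))
        self.count = n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            self.count -= 1

def components(n, edges, prefix_len):
    dsu = DSU(n)
    for u, v in edges[:prefix_len]:
        dsu.union(u, v)
    return dsu

def min_cut_one_trial(n, edges):
    # Assumes G is connected. Binary search for the shortest prefix of a
    # random ordering that induces <= 2 components (hence exactly 2).
    order = list(edges)
    random.shuffle(order)
    lo, hi = 0, len(order)
    while lo < hi:
        mid = (lo + hi) // 2
        if components(n, order, mid).count <= 2:
            hi = mid
        else:
            lo = mid + 1
    dsu = components(n, order, lo)
    side = {u for u in range(n) if dsu.find(u) == dsu.find(0)}
    return sum((u in side) != (v in side) for u, v in edges)

def min_cut(n, edges, trials):
    # Each trial succeeds with probability >= 1/n^2 by Lemma 3.
    return min(min_cut_one_trial(n, edges) for _ in range(trials))
```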
We can implement the above algorithm using the GNN+ architecture of depth $O(\log m)$, where the computation in each pair of (GNN, UpdatePrefix) blocks corresponds to executing one call of the above binary search procedure. During each call one needs to perform a connected components subroutine. This can be done via BFS and is implemented in the GNN block as shown in Figure 3. The GNN is followed by the UpdatePrefix module, an MLP implementing the logic of selecting the appropriate side of the permutation to recurse on.

More formally, at each of the $O(\log m) = O(\log n)$ stages, each vertex in the GNN_mp maintains a list of which of its connecting edges are active. This requires $O(d)$ representation size. The goal next is to infer whether the number of connected components induced by the active edges is one, two, or more than two. This in turn decides the part of the ordering the next stage will focus on. The computation of connected components can be carried out using at most two breadth first searches and hence via $O(D)$ rounds of a GNN_mp network. Following this intuition we arrive at the proposed architecture in Figure 3. Formally, we have the following guarantee.

Theorem 9. For a fixed graph topology over $n$ nodes with maximum degree $d$ and diameter $D$, the network in Figure 3 produces the minimum cut. Furthermore, the network is of depth $\ell = O(D\log^2 n)$, size $O(n\ell)$, and has $O(d + \log n)$ parameters.

Proof. Each vertex maintains an $O(d)$ sized representation indicating which of its edges are currently active, plus an additional constant number of values to indicate its component id during different runs of BFS. Given a list of active edges, the GNN module simply performs a procedure to compute whether the number of connected components is one, two, or more than two. This can be done with at most two BFS runs over the active edges. As we saw before in Section 3, this requires $O(D)$ depth.

At the end of the GNN module each vertex gets an integer value specifying its component id. The UpdatePrefix module then takes this information and is required to perform two computations: a) check if the number of connected components is one, two, or more than two. This requires checking the number of distinct elements in a list of $n$ numbers and can be done with an MLP with $O(\log n)$ parameters and depth $O(\log n)$; b) update the set of active edges for each vertex depending on the number of connected components. This requires taking in the $O(d)$ sized representation and producing a new $O(d)$ sized representation for each vertex. This can be achieved again by an MLP using $O(d)$ parameters and depth $O(\log d)$. Once a given round of GNN and UpdatePrefix ends, the computation proceeds to the next layer. Importantly, the set of model parameters is shared across the different layers of the architecture, as each time the computation required is the same. Hence overall we get $O(D\log n)$ depth and $O(d + \log^2 n)$ parameters.

5 EXPERIMENTS

We show the efficacy of GNN+ on the aforementioned graph problems: Shortest Paths, Effective Resistance, Affinity, MINCUT and MST, and compare to a state-of-the-art GNN_mp model (Xu et al., 2019a).

Dataset. We generated synthetic random graphs between 500 and 1000 nodes. For the affinity measure, we used graphs with 250 nodes because of the need for using very dense graphs to have a reasonable number of alternate paths between any two end points. In general, we generated the data sets as follows: we fix the number of nodes $n$ in the graph to take values in $[250, 1000]$. For each value of $n$ we generate graphs from the Erdos-Renyi model $G(n,p)$ with edge sampling probability $p = \lambda/n$. We vary $\lambda$ depending on the problem. Specifically, we set $\lambda$ to be a constant in $[1, 100]$ to capture varying degrees of sparsity. For each $(n,p)$ we generate 30,000 training examples consisting of tuples of the form $(G, s, t, d(s,t))$ where $G$ is a random graph drawn from $G(n,p)$, $s,t$ are two vertices drawn uniformly at random, and $d(s,t)$ is one of shortest path value, effective resistance, or affinity between the two vertices. In the case of min cut and minimum spanning tree, we generate tuples $(G, v_G)$ where $v_G$ corresponds to the value of the minimum spanning tree or the global minimum cut.
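A sketch of this data generation using networkx is below. The paper does not pin down the edge-weight distribution for MST or how disconnected draws are handled, so those choices (uniform $[0,1)$ weights, resampling until $s$ and $t$ are connected, keeping the largest component) are our assumptions, as is the name `lam` for the sparsity constant whose symbol is lost in extraction.

```python
import random
import networkx as nx

def shortest_path_example(n, lam):
    # One (G, s, t, d(s, t)) tuple from G(n, p) with p = lam / n.
    while True:
        G = nx.gnp_random_graph(n, lam / n)
        s, t = random.sample(range(n), 2)
        if nx.has_path(G, s, t):
            return G, s, t, nx.shortest_path_length(G, s, t)

def mst_example(n, lam):
    # One (G, v_G) tuple where v_G is the total weight of the MST.
    G = nx.gnp_random_graph(n, lam / n)
    if not nx.is_connected(G):
        G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    for u, v in G.edges:
        G[u][v]["weight"] = random.random()
    return G, nx.minimum_spanning_tree(G).size(weight="weight")
```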
Models and Configurations. For our baseline GNN_mp implementation, we used the GIN model proposed in Xu et al. (2019a). This has been empirically shown (Xu et al., 2019a; Loukas, 2020; Errica et al., 2020) to be a state-of-the-art GNN_mp model on several datasets. GIN updates the feature representation $x_v^{(k)}$ of each node $v$ at iteration $k$ as:
$$x_v^{(k)} = \text{MLP}\Big((1+\epsilon^{(k)})\, x_v^{(k-1)} + \sum_{u \in N(v)} x_u^{(k-1)}\Big),$$
where MLP refers to a Multi-Layer Perceptron, $N(v)$ is the set of neighbors of $v$, and $\epsilon$ is a learnable parameter. For problems that involved weighted graphs (e.g. MST), we incorporated edge weights into the GIN update equation by replacing the sum of neighbor representations by a weighted sum. Our GNN+ implementation also used the same GIN implementation as its internal GNN_mp block. All graphs in our experiments were undirected. For both baseline and GNN+, we used node degree as the input node features for MINCUT and MST. For Shortest Paths, Effective Resistance and Affinity, we set input node features to be Booleans indicating if the node is a source/destination node or not.

Following Xu et al. (2019a), we performed 10-fold cross-validation for each of our experiments (corresponding to the two models and five problems), and report the average validation mean squared error (MSE) across the 10 folds. We run each 10-fold cross-validation experiment 10 times to compute confidence intervals. We apply batch normalization at each layer, use an Adam optimizer, decay the learning rate by 0.5 every 50 epochs, and train for up to 600 epochs. For both the baseline GNN_mp and the GNN+ model, we tune the following parameters: initial learning rate $\in \{0.001, 0.003, 0.005, 0.007, 0.01, 0.03, 0.05\}$, number of hidden units $\in \{8, 16, 32, 64\}$, batch size $\in \{32, 64\}$, and dropout $\in \{0, 0.5\}$. For GNN_mp we also tuned the depth (number of layers) $\in \{2, 4, 8, 12\}$. For the GNN+ model, we tuned the number of parallel GNNs in each GNN+ block $\in \{1, 2, 3\}$ with GNN depth $\in \{2, 4\}$. We also tuned the number of GNN+ layers $\in \{1, 2, 3\}$. We fixed the depth of each MLP block in GNN_mp and GNN+ to 2.
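For reference, the GIN update above fits in a few lines of NumPy. This is only a sketch of the layer's forward pass, not the authors' TensorFlow implementation, and the parameter names are ours.

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Depth-2 perceptron with ReLU, matching the fixed MLP depth of 2.
    return np.maximum(x @ W1 + b1, 0) @ W2 + b2

def gin_layer(X, A, eps, params):
    # One GIN update: x_v <- MLP((1 + eps) * x_v + sum_{u in N(v)} x_u).
    # X: (n, h) node features; A: (n, n) adjacency matrix. Passing a
    # weighted adjacency reproduces the weighted-sum variant used for MST.
    return mlp((1.0 + eps) * X + A @ X, *params)
```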
Results. To validate our theory regarding the better generalization bounds for GNN+ models compared to GNN_mp, we compare the test mean squared errors for the two models across the five datasets. For all five problems, Table 1 lists the test MSEs and corresponding standard deviations for the two models. As a sanity check, we also plot the variance of the labels in our datasets, which corresponds to the MSE obtained by a naive model that predicts the mean label.

Table 1: Performance of the GNN_mp and GNN+ architectures.
Problem               Label Variance   Avg. MSE (GNN_mp)    Avg. MSE (GNN+)
Shortest Path         7.482            0.975 ± 0.031        0.849 ± 0.022
Effective Resistance  7.949            0.397 ± 0.025        0.187 ± 0.008
Affinity              3.030            0.0025 ± 1.75e-04    0.0018 ± 1.89e-05
MST                   4637.4           1011.39 ± 106.94     733.90 ± 130.97
MINCUT                11.964           0.963 ± 0.110        0.694 ± 0.07

We observe significant gains in accuracy of anywhere between a 15% relative MSE improvement over the GNN_mp baseline (for Shortest Paths) and as much as a 108% relative MSE improvement (for Effective Resistance). Note that the naive mean predictor's MSE is at least an order of magnitude larger than all the MSE values for GNN_mp and GNN+ (except for the MST dataset, where it is around five times larger; we suspect that the weighted graphs involved in this dataset make this a harder problem).

We posit that these accuracy gains directly stem from the sample-efficiency of the GNN+ models as captured in Theorems 1, 2 and 4: the most compact GNN+ networks that can represent these problems are smaller than the corresponding most compact GNN_mp networks. Hence, by Theorem 5, such networks will have smaller generalization errors. In the appendix, we also plot the test accuracy as a function of the number of epochs, which suggests that our models also converge faster than the baseline GNN_mp models, though we do not have any theoretical justification supporting this observation.

Experiments on Real World Data. We further demonstrate the applicability of our proposed GNN+ architecture for solving classification tasks involving real world graphs. We experiment with the following real world datasets (Yanardag & Vishwanathan, 2015) that have been used in recent works for evaluating various GNN architectures (Xu et al., 2019a): 1) IMDB-BINARY and 2) IMDB-MULTI: movie collaboration datasets with nodes as actors and the class label being the genre; 3) COLLAB: a scientific collaboration dataset with three classes; 4) PROTEINS: a bioinformatics dataset with 3 class labels; 5) PTC, 6) NCI1 and 7) MUTAG: various datasets of chemical compounds with two class labels each.

We train our proposed GNN+ architecture on these graphs using the cross-entropy loss and, as before, compare with the GIN architecture of Xu et al. (2019a). We use the same input node features as in Xu et al. (2019a) and the same experimental methodology as for the synthetic graphs above. In particular, when performing hyperparameter tuning we allow the GNN_mp architecture to explore depth up to 9, whereas the GNN+ architecture is tuned by restricting the depth to at most 3. The results are summarized in Table 2 below. As can be seen, in each instance GNN+ either outperforms or matches the performance of the GNN_mp architecture in terms of final test accuracy.

Table 2: Performance of the GNN_mp and GNN+ architectures on real world classification datasets.
Dataset       Test Acc. (GNN_mp)   Test Acc. (GNN+)
IMDB-BINARY   0.742 ± 0.09         0.769 ± 0.02
IMDB-MULTI    0.523 ± 0.06         0.527 ± 0.04
COLLAB        0.802 ± 0.02         0.816 ± 0.004
PROTEINS      0.7602 ± 0.008       0.7654 ± 0.015
NCI1          0.849 ± 0.004        0.851 ± 0.003
PTC           0.686 ± 0.02         0.708 ± 0.018
MUTAG         0.876 ± 0.016        0.898 ± 0.012

<|im_end|> <|im_start|>assistant ### Review Title Interesting paper ### Review Text The main question this paper tackles is: can one develop sample efficient architectures for graph problems while retaining the simplicity of the message passing framework? While combining message passing with GNNs has shown positive empirical results, we do not know of a general neural architecture that is versatile enough to provably encode the solution space of a variety of graph problems such as shortest paths and minimum spanning trees. This paper introduces a theoretically principled architecture -- GNN+ -- which attempts to make GNNs more efficient by using ideas from the subfields of sketching approximations and parallel computing.
############ I vote for accepting the paper due to its novelty and the pros listed below. ############ Pros + An interesting paper with a novel contribution combining ideas from parallel computing and sketching approximations. + A major algorithmic contribution of interest to practitioners. + Theoretical contributions in the form of several theorems in the paper. Cons - No code provided with the submission. ### Review Rating 8: Top 50% of accepted papers, clear accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
kUk79nBY__2
ICLR.cc/2023/BlogPosts
2023
A Hitchhiker's Guide to Momentum
["Fabian Pedregosa"]
Polyak momentum is one of the most iconic methods in optimization. Despite its simplicity, it features rich dynamics that depend both on the step-size and the momentum parameter. In this blog post we identify the different regions of the parameter space and discuss their convergence properties using the theory of Chebyshev polynomials.
["optimization", "momentum", "heavy ball"]
ZuwjsdeP8UT
Excellent blog post on convergence behavior of the gradient descent with momentum
9: Top 15% of accepted papers, strong accept
The article gives a really nice description of the convergence of gradient descent with momentum in the deterministic case (deterministic gradients, no additive noise etc.). It resembles a classical numerical analysis study of convergence (as mentioned, the reference Rutishauser 1959 is also given), where the asymptotic convergence is in terms of orthogonal polynomials, the Chebyshev polynomials. The argument of the polynomials is a certain function that includes the step size and the momentum parameter, called the link function. Based on the behavior of the link function, the convergence region can be divided into three parts, where in the center part the convergence speed depends only on the momentum parameter. The intersection point gives the Polyak method. The figures nicely illustrate the convergence regions. I think this is a really nice post and would be perfect teaching material, for example. I don't really have anything to add; I think this is a great blog post. Small questions: - How about the stochastic case, where instead of stochastic gradients you would have additive noise (for example Brownian motion)? What could you say in that case, and are there references to mention for that? - Is there some connection to Hamiltonian dynamics in gradient descent with momentum, something to mention? Are there some invariants that are preserved in this deterministic case? Small remarks: The assumption $\mu > 0$ isn't given anywhere explicitly; maybe that should be added (though you say $H$ is positive definite)? In Eq. (3): transpose missing. Eq. (22): add curly brackets. Knife's edge section, second paragraph: one dot in a wrong place.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
BJ78bJZCZ
ICLR.cc/2018/Conference
2018
Efficiently applying attention to sequential data with the Recurrent Discounted Attention unit
["Brendan Maginnis", "Pierre Richemond"]
Recurrent Neural Networks architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its computations. Unfortunately, the RWA cannot change the attention it has assigned to previous timesteps, and so struggles with carrying out consecutive tasks or tasks with changing requirements. We present the Recurrent Discounted Attention (RDA) unit that builds on the RWA by additionally allowing the discounting of the past. We empirically compare our model to RWA, LSTM and GRU units on several challenging tasks. On tasks with a single output the RWA, RDA and GRU units learn much quicker than the LSTM and with better performance. On the multiple sequence copy task our RDA unit learns the task three times as quickly as the LSTM or GRU units, while the RWA fails to learn at all. On the Wikipedia character prediction task the LSTM performs best but is followed closely by our RDA unit. Overall our RDA unit performs well and is sample efficient on a large variety of sequence tasks.
["RNNs"]
ABSTRACT
Recurrent Neural Networks architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its computations. Unfortunately, the RWA cannot change the attention it has assigned to previous timesteps, and so struggles with carrying out consecutive tasks or tasks with changing requirements. We present the Recurrent Discounted Attention (RDA) unit that builds on the RWA by additionally allowing the discounting of the past.
We empirically compare our model to RWA, LSTM and GRU units on several challenging tasks. On tasks with a single output the RWA, RDA and GRU units learn much quicker than the LSTM and with better performance. On the multiple sequence copy task our RDA unit learns the task three times as quickly as the LSTM or GRU units, while the RWA fails to learn at all. On the Wikipedia character prediction task the LSTM performs best but is followed closely by our RDA unit. Overall our RDA unit performs well and is sample efficient on a large variety of sequence tasks.

1 INTRODUCTION
Many types of information such as language, music and video can be represented as sequential data. Sequential data often contains related information separated by many timesteps; for instance a poem may start and end with the same line, a scenario which we call long term dependencies. Long term dependencies are difficult to model as we must retain information from the whole sequence, and this increases the complexity of the model.
A class of model capable of capturing long term dependencies are Recurrent Neural Networks (RNNs). A specific RNN architecture, known as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), is the benchmark against which other RNNs are compared. LSTMs have been shown to learn many difficult sequential tasks effectively. They store information from the past within a hidden state that is combined with the latest input at each timestep. This hidden state can carry information right from the beginning of the input sequence, which allows long term dependencies to be captured. However, the hidden state tends to focus on the more recent past, and while this mostly works well, in tasks requiring equal weighting between old and new information LSTMs can fail to learn.
A technique for accessing information from anywhere in the input sequence is known as attention. The attention mechanism was introduced to RNNs by Bahdanau et al. (2014) for neural machine translation. The text to translate is first encoded by a bidirectional-RNN, producing a new sequence of encoded states. Different locations within the encoded state are focused on by multiplying each of them by an attention matrix and calculating the weighted average. This attention is calculated for each translated word. Computing the attention matrix for each encoded state and translated word combination provides a great deal of flexibility in choosing where in the sequence to attend to, but the cost of computing these matrices grows as a square of the number of words to translate.
This cost limits this method to short sequences; typically only single sentences are processed at a time.
The Recurrent Weighted Average (RWA) unit, recently introduced by Ostmeyer and Cowell (2017), can apply attention to sequences of any length. It does this by only computing the attention for each input once and computing the weighted average by maintaining a running average up to the current timestep. Their experiments show that the RWA performs very well on tasks where information is needed from any point in the input sequence. Unfortunately, as it cannot change the attention it assigns to previous timesteps, it performs poorly when asked to carry out multiple tasks within the same sequence, or when asked to predict the next character in a sample of text, a task in which new information is more important than old.
We introduce the Recurrent Discounted Attention (RDA) unit, which extends the RWA by allowing it to discount the attention applied to previous timesteps. As this adjustment is applied to all previous timesteps at once, it continues to be efficient. It performs very well both at tasks requiring equal weighting over all information seen and at tasks in which new information is more important than old.
The main contributions of this paper are as follows:
1. We analyse the Recurrent Weighted Average unit and show that it cannot output certain simple sequences.
2. We propose the Recurrent Discounted Attention unit that extends the Recurrent Weighted Average by allowing it to discount the past.
3. We run extensive experiments on the RWA, RDA, LSTM and GRU units and show that the RWA, RDA and GRU units are well suited to tasks with a single output, the RDA performs best on the multiple sequence copy task, while the LSTM unit performs better on the Hutter Prize Wikipedia dataset.
Our paper is set out as follows: we present the analysis of the RWA (sections 3 and 4) and propose the RDA (section 5). The experimental results (section 6), discussion (section 7) and conclusion follow (section 8).

2 RELATED WORK
Recently many people have worked on using RNNs to predict the next character in a corpus of text. Sutskever et al. (2011) first attempted this on the Hutter Prize Wikipedia dataset using the MRNN architecture. Since then many architectures (Graves, 2013; Chung et al., 2015; Kalchbrenner et al., 2015; Rocki, 2016; Zilly et al., 2016; Ha et al., 2016; Chung et al., 2016) and regularization techniques (Ba et al., 2016; Krueger et al., 2016) have achieved impressive performance on this task, coming close to the bit-per-character limits that bespoke compression algorithms have attained.
Many of the above architectures are very complex, and so the Gated Recurrent Unit (GRU) is a much simpler design that achieves similar performance to the LSTM. Our experiments confirm previous literature (Chung et al., 2014) that reports it performing very well.
Attention mechanisms have been used in neural machine translation by Bahdanau et al. (2014). Xu et al. (2015) experimented with hard attention on images, where a single location is selected from a multinomial distribution. Xu et al.
(2015) introduced the global and local attention to refer to attention applied to the whole input and hard attention applied to a local subset of the input.
An idea related to attention is the notion of providing additional computation time for difficult inputs. Graves (2016) introduces such a mechanism and shows that it yields insight into the distribution of information in the input data itself.
Several RNN architectures have attempted to deal with long term dependencies by storing information in an external memory (Graves et al., 2014; 2016).

3 RECURRENT WEIGHTED AVERAGE
At each timestep the Recurrent Weighted Average model uses its current hidden state $h_{t-1}$ and the input $x_t$ to calculate two quantities:
1. The features $z_t$ of the current input:
$$u_t = W_u x_t + b_u, \qquad g_t = W_g[x_t; h_{t-1}] + b_g, \qquad z_t = u_t \odot \tanh g_t,$$
where $u_t$ is an unbounded vector dependent only on the input $x_t$, and $\tanh g_t$ is a bounded vector dependent on the input $x_t$ and the hidden state $h_{t-1}$. Notation: $W$ are weights, $b$ are biases, $(\cdot)$ is matrix multiplication, and $\odot$ is the element-wise product.
2. The attention $a_t$ to pay to the features $z_t$:
$$a_t = e^{W_a[x_t; h_{t-1}] + b_a}.$$
The hidden state $h_t$ is then the average of the features $z_t$, weighted by the attention $a_t$, and squashed through the hyperbolic tangent function:
$$h_t = \tanh\left(\frac{\sum_{i=1}^t z_i \odot a_i}{\sum_{i=1}^t a_i}\right).$$
This is implemented efficiently as a running average:
$$n_t = n_{t-1} + z_t \odot a_t, \qquad d_t = d_{t-1} + a_t, \qquad h_t = \tanh\left(\frac{n_t}{d_t}\right),$$
where $n_t$ is the numerator and $d_t$ is the denominator of the average.

4 PROPERTIES OF THE RECURRENT WEIGHTED AVERAGE
The RWA shows superior experimental results compared to the LSTM on the following tasks:
1. Classifying whether a sequence is longer than 500 elements.
2. Memorising short random sequences of symbols and recalling them at any point in the subsequent 1000 timesteps.
3. Adding two numbers spaced randomly apart on a sequence of length 1000.
4. Classifying MNIST images pixel by pixel.
All of these tasks require combining the full sequence of inputs into a single output. It makes perfect sense that an average over all timesteps would perform well in these tasks.
On the other hand, we can imagine tasks where an average over all timesteps would not work effectively:
1. Copying many input sequences from input to output. It will need to forget sequences once they have been output.
2. Predicting the next character in a body of text. Typically, the next character depends much more on recent characters than on those from the beginning of the text.
3. Outputting the parity of an input sequence, $h_t = \frac{1}{t^c}$ for $0 < c < 1$.
All of these follow from the property that $d_t$ is monotonically increasing in $t$, which can be seen from $a_t > 0$ and $d_t = d_{t-1} + a_t$. As $d_t$ becomes larger, the magnitude of $a_t$ must increase to change the value of $h_t$. This means that it becomes harder and harder to change the value of $h_t$, to the point where it almost becomes fixed. In the specific case of outputting the sequence $h_t = \frac{1}{t^c}$ we can show that $a_t$ must grow geometrically with time.
Lemma 1. Let the task be to output the sequence $h_t = \frac{1}{t^c}$ for $0 < c < 1$. Let $h_t$ be defined by the equations of the Recurrent Weighted Average, let $z_t$ be bounded, and let $f_h$ be a continuous, monotonically increasing surjection from $\mathbb{R} \to (-1,1)$. Then $a_t$ grows geometrically with increasing $t$.
Proof. Provided in Appendix A.
Corollary 2. If $a_t$ is also bounded then it cannot grow geometrically for all time, and so the RWA cannot output the sequence $h_t = \frac{1}{t^c}$.
Corollary 2 suggests that the Recurrent Weighted Average may not actually be Turing Complete.
Overall, these properties suggest that the RWA is a good choice for tasks with a single result, but not for sequences with multiple results or tasks that require forgetting.
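To make the saturation argument tangible, here is a small NumPy sketch of the RWA recursion from Section 3 (parameter names and shapes are our own, not the authors' implementation). Because every attention value is positive, the denominator $d_t$ only ever grows, so the influence of each new input on $h_t$ decays — exactly the inability to forget discussed above.

```python
import numpy as np

def rwa_step(x, h, n, d, p):
    # One RWA step, maintaining running numerator n and denominator d.
    xh = np.concatenate([x, h])
    u = p["Wu"] @ x + p["bu"]
    g = p["Wg"] @ xh + p["bg"]
    z = u * np.tanh(g)                  # features z_t
    a = np.exp(p["Wa"] @ xh + p["ba"])  # attention a_t > 0
    n = n + z * a
    d = d + a                           # d_t is monotonically increasing:
    return np.tanh(n / d), n, d         # old attention is never revoked

# Toy run: after many steps d_t is large, so a single new input can
# barely move h_t no matter how relevant it is.
rng = np.random.default_rng(0)
dim = 4
p = {name: rng.normal(scale=0.3, size=shape) for name, shape in
     [("Wu", (dim, dim)), ("bu", dim), ("Wg", (dim, 2 * dim)),
      ("bg", dim), ("Wa", (dim, 2 * dim)), ("ba", dim)]}
h, n, d = np.zeros(dim), np.zeros(dim), np.zeros(dim)
for _ in range(1000):
    h, n, d = rwa_step(rng.normal(size=dim), h, n, d, p)
```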
5 THE RECURRENT DISCOUNTED ATTENTION UNIT
The RDA uses its current hidden state $h_{t-1}$ and the input $x_t$ to calculate three quantities:
1. The features $z_t$ of the current input are calculated identically to the RWA:
$$u_t = W_u x_t + b_u, \qquad g_t = W_g[x_t; h_{t-1}] + b_g, \qquad z_t = u_t \odot \tanh g_t.$$
2. The attention $a_t$ to pay to the features $z_t$:
$$a_t = f_a(W_a[x_t; h_{t-1}] + b_a).$$
Here we generalize attention to allow any function $f_a$ which is non-negative and monotonically increasing. If we choose $f_a = \exp$, then we recover the RWA attention function.
3. The discount factor $\delta_t$ to apply to the previous values in the average:
$$\delta_t = \sigma(W_\delta[x_t; h_{t-1}] + b_\delta),$$
where $\sigma$ is the sigmoid/logistic function defined as $\sigma(x) = \frac{1}{1+e^{-x}}$.
We use these values to calculate a discounted moving average. This discounting mechanism is crucial in remediating the RWA's inability to forget the past:
$$n_t = n_{t-1} \odot \delta_t + z_t \odot a_t, \qquad d_t = d_{t-1} \odot \delta_t + a_t, \qquad h_t = f_h\left(\frac{n_t}{d_t}\right).$$
Here we generalize the RWA further by allowing $f_h$ to be any function, and we also introduce a final transformation to the hidden state $h_t$ to produce the output:
$$o_t = f_o(h_t).$$

5.1 CHOICES FOR THE ATTENTION FUNCTION $f_a$, HIDDEN STATE FUNCTION $f_h$ AND OUTPUT FUNCTION $f_o$
The attention function $f_a(x)$ is a non-negative monotonically increasing function of $x$. There are several possible choices:
- $f_a(x) = e^x$: This is used in the RWA.
- $f_a(x) = \max(0, x)$: Using a ReLU allows the complete ignoring of some timesteps, with linearly increasing attention for others.
- $f_a(x) = \ln(1 + e^x)$: The softplus function is a smooth approximation to the ReLU.
- $f_a(x) = \sigma(x)$: Using the sigmoid limits the maximum attention an input can be given.
The domain of the hidden activation function $f_h$ is the average $\frac{n_t}{d_t}$. This average is bounded by the minimum and maximum values of $z_t$. Possible choices of $f_h$ include:
- $f_h(\frac{n_t}{d_t}) = \tanh(\frac{n_t}{d_t})$: This is used in the RWA. We observed that the range of $\frac{n_t}{d_t}$ mostly remained in the linear domain of tanh centred around 0, suggesting that using this was unnecessary.
- $f_h(\frac{n_t}{d_t}) = \frac{n_t}{d_t}$: The identity is our choice for $f_h$ in the RDA.
Possible choices for the output function $f_o$ are:
- $f_o(h_t) = h_t$: The RWA uses the identity as its hidden state has already been transformed by tanh.
- $f_o(h_t) = \tanh(h_t)$: The output can be squashed between $[-1, 1]$ using tanh.
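The discounted recursion above is equally short in NumPy. In the sketch below (parameter names ours; we write the discount gate as `delta` since the extracted equations drop the original symbol), $f_h$ is fixed to the identity as chosen above, so the defaults give the RDA-sigmoid-id variant, while passing `f_a=np.exp` and `f_o=np.tanh` gives RDA-exp-tanh.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rda_step(x, h, n, d, p, f_a=sigmoid, f_o=lambda h: h):
    # One RDA step: the discount gate rescales the running numerator and
    # denominator before new attention is added, so the unit can forget.
    xh = np.concatenate([x, h])
    u = p["Wu"] @ x + p["bu"]
    g = p["Wg"] @ xh + p["bg"]
    z = u * np.tanh(g)                       # features z_t
    a = f_a(p["Wa"] @ xh + p["ba"])          # attention a_t
    delta = sigmoid(p["Wd"] @ xh + p["bd"])  # discount of the past
    n = n * delta + z * a
    d = d * delta + a
    h_new = n / d                            # h_t = f_h(n_t / d_t), f_h = id
    return f_o(h_new), h_new, n, d           # output o_t = f_o(h_t)
```

Unlike in the RWA, $d_t$ can now shrink whenever the gate closes, so attention mass assigned long ago can be traded away for new inputs.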
6 EXPERIMENTS
We ran experiments to investigate the following questions:
1. Which form of the RDA works best? (Section 6.2)
2. The RWA unit works remarkably well for sequences with a single task. Does the RDA unit retain this strength? (Section 6.3)
3. We expect the RWA unit to struggle with consecutive independent tasks. Does this happen in practice, and does the RDA solve this problem? (Section 6.4)
4. How does the RDA unit scale up to very long sequences? We test character prediction on the Hutter Prize Wikipedia dataset. (Section 6.5)
5. How does the RDA unit compare to RWA, LSTM and GRU units? Are some units more suited to certain types of tasks than others? (Section 7)
We provide plots of the training process in Appendix B.

6.1 IMPLEMENTATION DETAILS
For all tasks except the Wikipedia character prediction task, we use 250 recurrent units. Weights are initialized using Xavier initialization (Glorot and Bengio, 2010) and biases are initialized to 0, except for forget gates and discount gates, which are initialized to 1 (Gers, Schmidhuber, and Cummins, 2000). We use mini-batches of 100 examples and backpropagate over the full sequence length. We train the models using Adam (Kingma and Ba, 2014) with a learning rate of 0.001. Gradients are clipped between -1 and 1.
For the Wikipedia task, we use a character embedding of 64 dimensions, followed by a single layer of 1800 recurrent units, and a final softmax layer for the character prediction. We apply truncated backpropagation every 250 timesteps, and use the last hidden state of the sequence as the initial hidden state of the next sequence to approximate full backpropagation.
All of our experiments are implemented in TensorFlow (Abadi et al., 2016).

6.2 EMPIRICAL EVALUATION OF RDA ACTIVATION FUNCTIONS
We ran our experiments with different combinations of $f_a$ and $f_o$ and found the following:
- Using a ReLU for the attention function $f_a$ almost always fails to train. Using a softplus for $f_a$ is much more stable than a ReLU. However, it doesn't perform as well as using sigmoid or exponential attention.
- Exponential attention performs well in all tasks, and works best with the tanh output function $f_o(h_t) = \tanh(h_t)$. We refer to this as RDA-exp-tanh.
- Sigmoid attention performs well in all tasks, and works best with the identity output function $f_o(h_t) = h_t$. We refer to this as RDA-sigmoid-id.
- It is difficult to choose between RDA-exp-tanh and RDA-sigmoid-id. RDA-exp-tanh often trains faster, but it sometimes diverges with NaN errors during training. RDA-sigmoid-id trains slower but is more stable, and tends to have better loss.
We include results for both of them.

Table 1: Addition: steps until loss < 0.001.
Model            Steps
GRU              2036
LSTM             >10000
RDA-exp-tanh     1781
RDA-sigmoid-id   2016
RWA              1735

Table 2: Classify: steps until accuracy = 1.0.
Model            Steps
GRU              71
LSTM             776
RDA-exp-tanh     164
RDA-sigmoid-id   414
RWA              133

Table 3: MNIST test set accuracy.
Model            Accuracy
GRU              0.985
LSTM             0.114
RDA-exp-tanh     0.985
RDA-sigmoid-id   0.987
RWA              0.979

Table 4: MNIST permuted test set accuracy.
Model            Accuracy
GRU              0.944
LSTM             0.915
RDA-exp-tanh     0.905
RDA-sigmoid-id   0.913
RWA              0.899

6.3 SINGLE TASK SEQUENCES
Here we investigate whether sequences with a single task can be performed as well with the RDA as with the RWA. Each of the four tasks detailed below requires the RNN to save some or all of the input sequence before outputting a single result many steps later.
1. Addition - The input consists of two sequences. The first is a sequence of numbers each uniformly sampled from $[0, 1]$, and the second consists of all zeros except for two ones which indicate the two numbers of the first sequence to be added together. (Table 1)
2. Classify length - A sequence of length between 1 and 1000 is input. The goal is to classify whether the sequence is longer than 500. All RNN architectures could learn their initial hidden state for this task, which improved performance for all of them. (Table 2)
3. MNIST - The task is supervised classification of MNIST digits. We flatten the 28x28 pixel arrays into a single 784 element sequence and use RNNs to predict the digit class labels. This task challenges networks' ability to learn long-range dependencies, as crucial pixels are present at the beginning, middle and end of the sequence.
We implement two variants of this task:
(a) Sequential - the pixels are fed in from the top left to the bottom right of the image. (Table 3)
(b) Permuted - the pixels of the image are randomly permuted before the image is fed in. The same permutation is applied to all images. (Table 4)
4. Copy - The input sequence starts with randomly sampled symbols. The rest of the input is blanks, except for a single recall symbol. The goal is to memorize the starting symbols and output them when prompted by the recall symbol. All other output symbols must be blank. (Table 5)

6.4 MULTIPLE SEQUENCE COPY TASK
Here we investigate whether the different RNN units can cope with doing the same task repeatedly. The task consists of multiple copying tasks, all within the same sequence. Instead of having the recall symbol randomly placed over the whole sequence, it always appears a couple of steps after the sequence being memorized. This gives room for 50 consecutive copying tasks in a length 1000 input sequence. (Table 6)

Table 5: Copy: steps until accuracy > 0.999.
Model            Steps
GRU              5329
LSTM             >20000
RDA-exp-tanh     11831
RDA-sigmoid-id   9840
RWA              5660

Table 6: Multicopy: steps until accuracy > 0.99.
Model            Steps
GRU              3984
LSTM             4048
RDA-exp-tanh     1114
RDA-sigmoid-id   1316
RWA              >10000

6.5 WIKIPEDIA CHARACTER PREDICTION TASK
The standard test for RNN models is character-level language modelling. We evaluate our models on the Hutter Prize Wikipedia dataset enwik8, which contains 100M characters of 205 different symbols including XML markup and special characters. We split the data into the first 90M characters for the training set, the next 5M for validation, and the final 5M for the test set. (Table 7)

Table 7: Bits per character on the Hutter Prize Wikipedia test set.
Model                                              BPC
Stacked LSTM (Graves, 2013)                        1.67
MRNN (Sutskever et al., 2011)                      1.60
GF-LSTM (Chung et al., 2015)                       1.58
Grid-LSTM (Kalchbrenner et al., 2015)              1.47
MI-LSTM (Wu et al., 2016)                          1.44
Recurrent Memory Array Structures (Rocki, 2016a)   1.40
HyperNetworks (Ha et al., 2016)                    1.35
LayerNorm HyperNetworks (Ha et al., 2016)          1.34
Recurrent Highway Networks (Zilly et al., 2016)    1.32
LayerNorm LSTM                                     1.39
HM-LSTM                                            1.34
LayerNorm HM-LSTM                                  1.32
GRU (our implementation)                           1.535
LSTM (our implementation)                          1.492
RDA-exp-tanh                                       N/A
RDA-sigmoid-id                                     1.529
RWA                                                5.067
PAQ8hp12 (Mahoney, 2005)                           1.32
decomp8 (Mahoney, 2009)                            1.28

7 DISCUSSION
We start our discussion by describing the performance of each individual unit.
Our analysis of the RWA unit showed that it should only work well on the single task sequences, and we confirm this experimentally. It learns the single sequence tasks quickly but is unable to learn the multiple sequence copy task and the Wikipedia character prediction task.
Our experiments show that the RDA unit is a consistent performer on all types of tasks. As expected, it learns single task sequences slower than the RWA, but it actually achieves better generalization on the MNIST test sets. We speculate that the cause of this improvement is that the ability to forget effectively allows it to compress the information it has previously processed, or perhaps discounting the past should be considered as changing the attention on the past, and the RDA is able to vary its attention on previous inputs based on later inputs.
On the multiple sequence copy task the RDA unit was far superior to all other units, learning three times as fast as the LSTM and GRU units. On the Wikipedia character prediction task the RDA unit performed respectably, achieving a better compression rate than the GRU but worse than the LSTM.
The LSTM unit learns the single task sequences slower than all the other units and often fails to learn at all. This is surprising as it is often used on these tasks as a baseline against which other architectures are compared. On the multiple sequence copy task it learns slowly compared to the RDA units but solves the task. The Wikipedia character prediction task is where it performs best, learning much faster and achieving better compression than the other units.
The GRU unit works very well on single task sequences, often learning the fastest and achieving excellent generalization on the MNIST test sets. On the multiple sequence copy task it has equal performance to the LSTM. On the Wikipedia character prediction task it performs worse than the LSTM and RDA units but still achieves a good performance.
We now look at how our results show that different neural network architectures are suited for different tasks.
For our single output tasks the RWA, RDA and GRU units work best. Thus for similar real world applications, such as encoding a molecule into a latent representation, classification of genomic sequences, answering questions or language translation, these units should be considered before LSTM units. However, our results are yet to be verified in these domains.
For sequences that contain an unknown number of independent tasks the RDA unit should be used.
For the Wikipedia character prediction task the LSTM performs best. Therefore we can't recommend RWA, RDA or GRU units on this or similar tasks.

8 CONCLUSION
We analysed the Recurrent Weighted Average (RWA) unit and identified its weakness as the inability to forget the past. By adding this ability to forget the past we arrived at the Recurrent Discounted Attention (RDA) unit. We implemented several varieties of the RDA and compared them to the RWA, LSTM and GRU units on several different tasks. We showed that in almost all cases the RDA should be used in preference to the RWA and is a flexible RNN unit that can perform well on all types of tasks.
We also determined which types of tasks were more suited to each different RNN unit. For tasks involving a single output the RWA, RDA and GRU units performed best, for the multiple sequence copy task the RDA performed best, while on the Wikipedia character prediction task the LSTM unit performed best. We recommend taking these results into account when choosing a unit for real world applications.
BkEYMCPlG
Improvement upon a rarely used technique still not showing great results yet.
4: Ok but not good enough - rejection
The authors present RDA, the Recurrent Discounted Attention unit, which improves upon RWA, the earlier introduced Recurrent Weighted Average unit, by adding a discount factor. While the RWA was an interesting idea with bad results (far worse than the standard GRU or LSTM with standard attention except for hand-picked tasks), the RDA brings it more on par with the standard methods. On the positive side, the paper is clearly written and adding discount to RWA, while a small change, is original. On the negative side, in almost all tasks the RDA is on par with or worse than the standard GRU - except for MultiCopy, where it trains faster, but not to better results, and it looks like the difference is between few and very-few training steps anyway. The most interesting result is language modeling on Hutter Prize Wikipedia, where RDA very significantly improves upon RWA - but again, only matches a standard GRU or LSTM. So the results are not strongly convincing, and the paper lacks any mention of newer work on attention. This year strong improvements over state-of-the-art have been achieved using attention for translation ("Attention is All You Need") and image classification (e.g., Non-local Neural Networks, but also others in the ImageNet competition). To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely-studied tasks.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Efficiently applying attention to sequential data with the Recurrent Discounted Attention unit ### Paper Abstract Recurrent Neural Networks architectures excel at processing sequences by modelling dependencies over different timescales. The recently introduced Recurrent Weighted Average (RWA) unit captures long term dependencies far better than an LSTM on several challenging tasks. The RWA achieves this by applying attention to each input and computing a weighted average over the full history of its computations. Unfortunately, the RWA cannot change the attention it has assigned to previous timesteps, and so struggles with carrying out consecutive tasks or tasks with changing requirements. We present the Recurrent Discounted Attention (RDA) unit that builds on the RWA by additionally allowing the discounting of the past. We empirically compare our model to RWA, LSTM and GRU units on several challenging tasks. On tasks with a single output the RWA, RDA and GRU units learn much quicker than the LSTM and with better performance. On the multiple sequence copy task our RDA unit learns the task three times as quickly as the LSTM or GRU units while the RWA fails to learn at all. On the Wikipedia character prediction task the LSTM performs best but it followed closely by our RDA unit. Overall our RDA unit performs well and is sample efficient on a large variety of sequence tasks. ### Paper Keywords ["RNNs"] ### Paper Content ABSTRACTRecurrent Neural Networks architectures excel at processing sequences by mod-elling dependencies over different timescales. The recently introduced RecurrentWeighted Average (RWA) unit captures long term dependencies far better thanan LSTM on several challenging tasks. The RWA achieves this by applying at-tention to each input and computing a weighted average over the full history ofits computations. Unfortunately, the RWA cannot change the attention it has as-signed to previous timesteps, and so struggles with carrying out consecutive tasksor tasks with changing requirements. We present the Recurrent Discounted Atten-tion(RDA) unit that builds on the RWA by additionally allowing the discountingof the past.We empirically compare our model to RWA, LSTM and GRU units on severalchallenging tasks. On tasks with a single output the RWA, RDA and GRU unitslearn much quicker than the LSTM and with better performance. On the mul-tiple sequence copy task our RDA unit learns the task three times as quickly asthe LSTM or GRU units while the RWA fails to learn at all. On the Wikipediacharacter prediction task the LSTM performs best but it followed closely by ourRDA unit. Overall our RDA unit performs well and is sample efficient on a largevariety of sequence tasks.1 I NTRODUCTIONMany types of information such as language, music and video can be represented as sequential data.Sequential data often contains related information separated by many timesteps, for instance a poemmay start and end with the same line, a scenario which we call long term dependencies. Long termdependencies are difficult to model as we must retain information from the whole sequence and thisincreases the complexity of the model.A class of model capable of capturing long term dependencies are Recurrent Neural Networks(RNNs). 
A specific RNN architecture, known as Long Short-Term Memory (LSTM) (Hochreiterand Schmidhuber, 1997), is the benchmark against which other RNNs are compared. LSTMs havebeen shown to learn many difficult sequential tasks effectively. They store information from the pastwithin a hidden state that is combined with the latest input at each timestep. This hidden state cancarry information right from the beginning of the input sequence, which allows long term dependen-cies to be captured. However, the hidden state tends to focus on the more recent past and while thismostly works well, in tasks requiring equal weighting between old and new information LSTMs canfail to learn.A technique for accessing information from anywhere in the input sequence is known as attention.The attention mechanism was introduced to RNNs by Bahdanau et al. (2014) for neural machinetranslation. The text to translate is first encoded by a bidirectional-RNN producing a new sequenceof encoded state. Different locations within the encoded state are focused on by multiplying each ofthem by an attention matrix and calculating the weighted average. This attention is calculated foreach translated word. Computing the attention matrix for each encoded state and translated wordcombination provides a great deal of flexibility in choosing where in the sequence to attend to, but1Under review as a conference paper at ICLR 2018the cost of computing these matrices grows as a square of the number of words to translate. Thiscost limits this method to short sequences, typically only single sentences are processed at a time.TheRecurrent Weighted Average (RWA) unit, recently introduced by Ostmeyer and Cowell (2017),can apply attention to sequences of any length. It does this by only computing the attention for eachinput once and computing the weighted average by maintaining a running average up to the currenttimestep. Their experiments show that the RWA performs very well on tasks where information isneeded from any point in the input sequence. Unfortunately, as it cannot change the attention itassigns to previous timesteps, it performs poorly when asked to carry out multiple tasks within thesame sequence, or when asked to predict the next character in a sample of text, a task in which newinformation is more important than old.We introduce the Recurrent Discounted Attention (RDA) unit, which extends the RWA by allowing itto discount the attention applied to previous timesteps. As this adjustment is applied to all previoustimesteps at once, it continues to be efficient. It performs very well both at tasks requiring equalweighting over all information seen and at tasks in which new information is more important thanold.The main contributions of this paper are as follows:1. We analyse the Recurrent Weighted Average unit and show that it cannot output certainsimple sequences.2. We propose the Recurrent Discounted Attention unit that extends the Recurrent WeightedAverage by allowing it to discount the past.3. We run extensive experiments on the RWA, RDA, LSTM and GRU units and show that theRWA, RDA and GRU units are well suited to tasks with a single output, the RDA performsbest on the multiple sequence copy task while the LSTM unit performs better on the HutterPrize Wikipedia dataset.Our paper is setout as follows: we present the analysis of the RWA (sections 3 and 4) and proposethe RDA (section 5). 
The experimental results (section 6), discussion (section 7) and conclusionfollow (section 8).2 R ELATED WORKRecently many people have worked on using RNNs to predict the next character in a corpus oftext. Sutskever et al. (2011) first attempted this on the Hutter Prize Wikipedia datasets using theMRNN archtecture. Since then many architectures (Graves, 2013; Chung et al., 2015; Kalchbrenneret al., 2015; Rocki, 2016; Zilly et al., 2016; Ha et al., 2016; Chung et al., 2016) and regularizationtechniques (Ba et al., 2016; Krueger et al., 2016) have achieved impressive performance on this task,coming close to the bit-per-character limits bespoke compression algorithms have attained.Many of the above architectures are very complex, and so the Gated Recurrent Unit (GRU) is a muchsimpler design that achieves similar performance to the LSTM. Our experiments confirm previousliterature (Chung et al., 2014) that reports it performing very well.Attention mechanisms have been used in neural machine translation by Bahdanau et al. (2014). Xuet al. (2015) experimented with hard-attention on image where a single location is selected froma multinomial distribution. Xu et al. (2015) introduced the global and local attention to refer toattention applied to the whole input and hard attention applied to a local subset of the input.An idea related to attention is the notion of providing additional computation time for difficult inputs.Graves (2016) introduce shows that this yields insight into the distribution of information in the inputdata itself.Several RNN architectures have attempted to deal with long term dependencies by storing informa-tion in an external memory (Graves et al., 2014; 2016).2Under review as a conference paper at ICLR 20183 R ECURRENT WEIGHTED AVERAGEAt each timestep the Recurrent Weighted Average model uses its current hidden state ht1and theinputxtto calculate two quantities:1. The features ztof the current input:ut=Wuxt+bugt=Wg[xt;ht1] +bgzt=uttanhgtwhereutis an unbounded vector dependent only on the input xt, and tanhgtis a boundedvector dependent on the input xtand the hidden state ht1.Notation:Ware weights, bare biases, ()is matrix multiplication, and is the element-wise product.2. The attention atto pay to the features zt:at=eWa[xt;ht1]+baThe hidden state htis then the average of of the features zt,weighted by the attention at, andsquashed through the hyperbolic tangent function:ht= tanh Pti=1ziaiPti=1ai!This is implemented efficiently as a running average:nt=nt1+ztatdt=dt1+atht= tanhntdtwherentis the numerator and dtis the denominator of the average.4 P ROPERTIES OF THE RECURRENT WEIGHTED AVERAGEThe RWA shows superior experimental results compared to the LSTM on the following tasks:1. Classifying whether a sequence is longer than 500 elements.2. Memorising short random sequences of symbols and recalling them at any point in thesubsequent 1000 timesteps.3. Adding two numbers spaced randomly apart on a sequence of length 1000.4. Classifying MNIST images pixel by pixel.All of these tasks require combining the full sequence of inputs into a single output. It makes perfectsense that an average over all timesteps would perform well in these tasks.On the other hand, we can imagine tasks where an average over all timesteps would not work effec-tively:1. Copying many input sequences from input to output. It will need to forget sequences oncethey have been output.2. Predicting the next character in a body of text. 
4 PROPERTIES OF THE RECURRENT WEIGHTED AVERAGE

The RWA shows superior experimental results compared to the LSTM on the following tasks:
1. Classifying whether a sequence is longer than 500 elements.
2. Memorising short random sequences of symbols and recalling them at any point in the subsequent 1000 timesteps.
3. Adding two numbers spaced randomly apart on a sequence of length 1000.
4. Classifying MNIST images pixel by pixel.

All of these tasks require combining the full sequence of inputs into a single output, so it makes sense that an average over all timesteps would perform well. On the other hand, we can imagine tasks where an average over all timesteps would not work effectively:
1. Copying many input sequences from input to output. The unit needs to forget sequences once they have been output.
2. Predicting the next character in a body of text. Typically, the next character depends much more on recent characters than on those from the beginning of the text.
3. Outputting the parity of an input sequence, i.e. a target of the form h_t = (-1)^t c for 0 < c < 1.

All of these follow from the property that d_t is monotonically increasing in t, which can be seen from a_t > 0 and d_t = d_{t-1} + a_t. As d_t becomes larger, the magnitude of a_t must increase to change the value of h_t. This means that it becomes harder and harder to change the value of h_t, to the point where it almost becomes fixed. In the specific case of outputting the sequence h_t = (-1)^t c, we can show that a_t must grow geometrically with time.

Lemma 1. Let the task be to output the sequence h_t = (-1)^t c for 0 < c < 1. Let h_t be defined by the equations of the Recurrent Weighted Average, let z_t be bounded, and let f_h be a continuous, monotonically increasing surjection from R to (-1, 1). Then a_t grows geometrically with increasing t.

Proof. Provided in Appendix A.

Corollary 2. If a_t is also bounded then it cannot grow geometrically for all time, and so the RWA cannot output the sequence h_t = (-1)^t c.

Corollary 2 suggests that the Recurrent Weighted Average may not actually be Turing complete. Overall, these properties suggest that the RWA is a good choice for tasks with a single result, but not for sequences with multiple results or tasks that require forgetting.

5 THE RECURRENT DISCOUNTED ATTENTION UNIT

The RDA uses its current hidden state h_{t-1} and the input x_t to calculate three quantities:

1. The features z_t of the current input, calculated identically to the RWA:
   u_t = W_u x_t + b_u
   g_t = W_g [x_t ; h_{t-1}] + b_g
   z_t = u_t ⊙ tanh(g_t)

2. The attention a_t to pay to the features z_t:
   a_t = f_a(W_a [x_t ; h_{t-1}] + b_a)
Here we generalize attention to allow any function f_a which is non-negative and monotonically increasing. If we choose f_a = exp, then we recover the RWA attention function.

3. The discount factor γ_t to apply to the previous values in the average:
   γ_t = σ(W_γ [x_t ; h_{t-1}] + b_γ)
where σ is the sigmoid/logistic function defined as σ(x) = 1 / (1 + e^{-x}).

We use these values to calculate a discounted moving average. This discounting mechanism is crucial in remediating the RWA's inability to forget the past:
   n_t = n_{t-1} ⊙ γ_t + z_t ⊙ a_t
   d_t = d_{t-1} ⊙ γ_t + a_t
   h_t = f_h(n_t / d_t)

Here we generalize the RWA further by allowing f_h to be any function, and we also introduce a final transformation of the hidden state h_t to produce the output:
   o_t = f_o(h_t)
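The change relative to the RWA step is easiest to see in code: the numerator and denominator are multiplied by the discount before the new term is accumulated. This is again an unbatched NumPy sketch with our own naming; the identity default for f_h anticipates the choice discussed in section 5.1.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rda_step(x_t, h_prev, n_prev, d_prev, params,
             f_a=np.exp, f_h=lambda x: x):
    """One step of the Recurrent Discounted Attention (unbatched sketch)."""
    W_u, b_u, W_g, b_g, W_a, b_a, W_c, b_c = params
    xh = np.concatenate([x_t, h_prev])
    z_t = (W_u @ x_t + b_u) * np.tanh(W_g @ xh + b_g)  # features, as in the RWA
    a_t = f_a(W_a @ xh + b_a)                          # attention; exp recovers the RWA
    c_t = sigmoid(W_c @ xh + b_c)                      # discount gamma_t in (0, 1)
    n_t = n_prev * c_t + z_t * a_t                     # discounted numerator
    d_t = d_prev * c_t + a_t                           # discounted denominator
    h_t = f_h(n_t / d_t)
    return h_t, n_t, d_t
```

Because c_t < 1 shrinks both n and d before each update, the running average can be pulled towards new inputs, which is exactly the forgetting ability the RWA lacks.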
5.1 CHOICES FOR THE ATTENTION FUNCTION f_a, HIDDEN STATE FUNCTION f_h AND OUTPUT FUNCTION f_o

The attention function f_a(x) is a non-negative, monotonically increasing function of x. There are several possible choices:
- f_a(x) = e^x. This is used in the RWA.
- f_a(x) = max(0, x). Using a ReLU allows some timesteps to be ignored completely, with linearly increasing attention for others.
- f_a(x) = ln(1 + e^x). The softplus function is a smooth approximation to the ReLU.
- f_a(x) = σ(x). Using the sigmoid limits the maximum attention an input can be given.

The domain of the hidden activation function f_h is the average n_t / d_t. This average is bounded by the minimum and maximum values of z_t. Possible choices of f_h include:
- f_h(n_t / d_t) = tanh(n_t / d_t). This is used in the RWA. We observed that the range of n_t / d_t mostly remained in the linear domain of tanh, centred around 0, suggesting that this squashing is unnecessary.
- f_h(n_t / d_t) = n_t / d_t. The identity is our choice for f_h in the RDA.

Possible choices for the output function f_o are:
- f_o(h_t) = h_t. The RWA uses the identity, as its hidden state has already been transformed by tanh.
- f_o(h_t) = tanh(h_t). The output can be squashed into [-1, 1] using tanh.

6 EXPERIMENTS

We ran experiments to investigate the following questions:
1. Which form of the RDA works best? (Section 6.2)
2. The RWA unit works remarkably well for sequences with a single task. Does the RDA unit retain this strength? (Section 6.3)
3. We expect the RWA unit to struggle with consecutive independent tasks. Does this happen in practice, and does the RDA solve this problem? (Section 6.4)
4. How does the RDA unit scale up to very long sequences? We test character prediction on the Hutter Prize Wikipedia dataset. (Section 6.5)
5. How does the RDA unit compare to the RWA, LSTM and GRU units? Are some units more suited to certain types of tasks than others? (Section 7)

We provide plots of the training process in Appendix B.

6.1 IMPLEMENTATION DETAILS

For all tasks except the Wikipedia character prediction task, we use 250 recurrent units. Weights are initialized using Xavier initialization (Glorot and Bengio, 2010) and biases are initialized to 0, except for forget gates and discount gates, which are initialized to 1 (Gers, Schmidhuber, and Cummins, 2000). We use mini-batches of 100 examples and backpropagate over the full sequence length. We train the models using Adam (Kingma and Ba, 2014) with a learning rate of 0.001. Gradients are clipped between -1 and 1.

For the Wikipedia task, we use a character embedding of 64 dimensions, followed by a single layer of 1800 recurrent units, and a final softmax layer for the character prediction. We apply truncated backpropagation every 250 timesteps, and use the last hidden state of the sequence as the initial hidden state of the next sequence to approximate full backpropagation.

All of our experiments are implemented in TensorFlow (Abadi et al., 2016).

6.2 EMPIRICAL EVALUATION OF RDA ACTIVATION FUNCTIONS

We ran our experiments with different combinations of f_a and f_o and found the following:
- Using a ReLU for the attention function f_a almost always fails to train. Using a softplus for f_a is much more stable than a ReLU, but it does not perform as well as sigmoid or exponential attention.
- Exponential attention performs well in all tasks, and works best with the tanh output function f_o(h_t) = tanh(h_t). We refer to this as RDA-exp-tanh.
- Sigmoid attention performs well in all tasks, and works best with the identity output function f_o(h_t) = h_t. We refer to this as RDA-sigmoid-id.
- It is difficult to choose between RDA-exp-tanh and RDA-sigmoid-id. RDA-exp-tanh often trains faster, but it sometimes diverges with NaN errors during training. RDA-sigmoid-id trains slower but is more stable, and tends to have better loss.

We include results for both of them.

Table 1: Addition, steps until loss < 0.001.
  GRU             2036
  LSTM            >10000
  RDA-exp-tanh    1781
  RDA-sigmoid-id  2016
  RWA             1735

Table 2: Classify length, steps until accuracy = 1.0.
  GRU             71
  LSTM            776
  RDA-exp-tanh    164
  RDA-sigmoid-id  414
  RWA             133

Table 3: MNIST test set accuracy.
  GRU             0.985
  LSTM            0.114
  RDA-exp-tanh    0.985
  RDA-sigmoid-id  0.987
  RWA             0.979

Table 4: MNIST permuted test set accuracy.
  GRU             0.944
  LSTM            0.915
  RDA-exp-tanh    0.905
  RDA-sigmoid-id  0.913
  RWA             0.899
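For concreteness, the four candidate attention functions compared above are one-liners. This NumPy sketch is ours; np.logaddexp is used for a numerically stable softplus.

```python
import numpy as np

# Candidate attention functions f_a: non-negative and monotonically increasing.
f_a_exp      = np.exp                              # used by the RWA
f_a_relu     = lambda x: np.maximum(0.0, x)        # can ignore timesteps entirely
f_a_softplus = lambda x: np.logaddexp(0.0, x)      # log(1 + e^x), smooth ReLU
f_a_sigmoid  = lambda x: 1.0 / (1.0 + np.exp(-x))  # caps the attention per input
```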
6.3 SINGLE TASK SEQUENCES

Here we investigate whether sequences with a single task can be performed as well with the RDA as with the RWA. Each of the four tasks detailed below requires the RNN to save some or all of the input sequence before outputting a single result many steps later.

1. Addition. The input consists of two sequences. The first is a sequence of numbers, each uniformly sampled from [0, 1], and the second consists of all zeros except for two ones, which indicate the two numbers of the first sequence to be added together. (Table 1)
2. Classify length. A sequence of length between 1 and 1000 is input. The goal is to classify whether the sequence is longer than 500. All RNN architectures could learn their initial hidden state for this task, which improved performance for all of them. (Table 2)
3. MNIST. The task is supervised classification of MNIST digits. We flatten the 28x28 pixel arrays into a single 784-element sequence and use RNNs to predict the digit class labels. This task challenges networks' ability to learn long-range dependencies, as crucial pixels are present at the beginning, middle and end of the sequence. We implement two variants of this task:
   (a) Sequential: the pixels are fed in from the top left to the bottom right of the image. (Table 3)
   (b) Permuted: the pixels of the image are randomly permuted before the image is fed in. The same permutation is applied to all images. (Table 4)
4. Copy. The input sequence starts with randomly sampled symbols. The rest of the input is blank except for a single recall symbol. The goal is to memorize the starting symbols and output them when prompted by the recall symbol. All other output symbols must be blank. (Table 5)

Table 5: Copy, steps until accuracy > 0.999.
  GRU             5329
  LSTM            >20000
  RDA-exp-tanh    11831
  RDA-sigmoid-id  9840
  RWA             5660

6.4 MULTIPLE SEQUENCE COPY TASK

Here we investigate whether the different RNN units can cope with doing the same task repeatedly. The task consists of multiple copying tasks, all within the same sequence. Instead of having the recall symbol randomly placed over the whole sequence, it always appears a couple of steps after the sequence being memorized. This gives room for 50 consecutive copying tasks in a length-1000 input sequence; a sketch of such a data generator is given below. (Table 6)

Table 6: Multicopy, steps until accuracy > 0.99.
  GRU             3984
  LSTM            4048
  RDA-exp-tanh    1114
  RDA-sigmoid-id  1316
  RWA             >10000
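As a concrete illustration, a data generator for the multiple sequence copy task might look as follows. This is our own sketch: the symbol encoding, the gap of two steps and all names are assumptions, not the paper's exact implementation.

```python
import numpy as np

def multicopy_example(seq_len=1000, n_symbols=8, copy_len=5, gap=2, seed=None):
    """One input/target pair for the multiple sequence copy task.

    Symbols to memorise are followed `gap` steps later by a recall marker;
    the target repeats the symbols after the marker and is blank elsewhere.
    """
    rng = np.random.default_rng(seed)
    BLANK, RECALL = 0, 1
    x = np.full(seq_len, BLANK, dtype=np.int64)
    y = np.full(seq_len, BLANK, dtype=np.int64)
    t = 0
    block = 2 * copy_len + gap                 # one complete copy sub-task
    while t + block <= seq_len:
        seq = rng.integers(2, 2 + n_symbols, size=copy_len)
        x[t:t + copy_len] = seq                # symbols to memorise
        x[t + copy_len + gap - 1] = RECALL     # recall marker a couple of steps later
        y[t + copy_len + gap:t + block] = seq  # expected output after the marker
        t += block
    return x, y
```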
6.5 WIKIPEDIA CHARACTER PREDICTION TASK

The standard test for RNN models is character-level language modelling. We evaluate our models on the Hutter Prize Wikipedia dataset enwik8, which contains 100M characters of 205 different symbols, including XML markup and special characters. We split the data into the first 90M characters for the training set, the next 5M for validation, and the final 5M for the test set. (Table 7)

Table 7: Bits per character on the Hutter Prize Wikipedia test set.
  Stacked LSTM (Graves, 2013)                       1.67
  MRNN (Sutskever et al., 2011)                     1.60
  GF-LSTM (Chung et al., 2015)                      1.58
  Grid-LSTM (Kalchbrenner et al., 2015)             1.47
  MI-LSTM (Wu et al., 2016)                         1.44
  Recurrent Memory Array Structures (Rocki, 2016a)  1.40
  HyperNetworks (Ha et al., 2016)                   1.35
  LayerNorm HyperNetworks (Ha et al., 2016)         1.34
  Recurrent Highway Networks (Zilly et al., 2016)   1.32
  LayerNorm LSTM                                    1.39
  HM-LSTM                                           1.34
  LayerNorm HM-LSTM                                 1.32
  GRU (our implementation)                          1.535
  LSTM (our implementation)                         1.492
  RDA-exp-tanh                                      N/A
  RDA-sigmoid-id                                    1.529
  RWA                                               5.067
  PAQ8hp12 (Mahoney, 2005)                          1.32
  decomp8 (Mahoney, 2009)                           1.28

7 DISCUSSION

We start our discussion by describing the performance of each individual unit.

Our analysis of the RWA unit showed that it should only work well on the single task sequences, and we confirm this experimentally. It learns the single sequence tasks quickly but is unable to learn the multiple sequence copy task and the Wikipedia character prediction task.

Our experiments show that the RDA unit is a consistent performer on all types of tasks. As expected, it learns single task sequences more slowly than the RWA, but it actually achieves better generalization on the MNIST test sets. We speculate that this improvement arises because the ability to forget effectively allows it to compress the information it has previously processed; alternatively, discounting the past can be seen as changing the attention on the past, so the RDA is able to vary its attention on previous inputs based on later inputs. On the multiple sequence copy task the RDA unit was far superior to all other units, learning three times as fast as the LSTM and GRU units. On the Wikipedia character prediction task the RDA unit performed respectably, achieving a better compression rate than the GRU but worse than the LSTM.

The LSTM unit learns the single task sequences more slowly than all the other units and often fails to learn at all. This is surprising, as it is often used on these tasks as a baseline against which other architectures are compared. On the multiple sequence copy task it learns slowly compared to the RDA units but solves the task. The Wikipedia character prediction task is where it performs best, learning much faster and achieving better compression than the other units.

The GRU unit works very well on single task sequences, often learning the fastest and achieving excellent generalization on the MNIST test sets. On the multiple sequence copy task it has equal performance to the LSTM. On the Wikipedia character prediction task it performs worse than the LSTM and RDA units but still achieves a good performance.

We now look at how our results show that different neural network architectures are suited to different tasks. For our single-output tasks the RWA, RDA and GRU units work best. Thus for similar real-world applications, such as encoding a molecule into a latent representation, classification of genomic sequences, answering questions or language translation, these units should be considered before LSTM units. However, our results are yet to be verified in these domains. For sequences that contain an unknown number of independent tasks, the RDA unit should be used. For the Wikipedia character prediction task the LSTM performs best; therefore we cannot recommend RWA, RDA or GRU units on this or similar tasks.

8 CONCLUSION

We analysed the Recurrent Weighted Average (RWA) unit and identified its weakness as the inability to forget the past. By adding this ability to forget the past we arrived at the Recurrent Discounted Attention (RDA) unit. We implemented several varieties of the RDA and compared them to the RWA, LSTM and GRU units on several different tasks. We showed that in almost all cases the RDA should be used in preference to the RWA, and that it is a flexible RNN unit that can perform well on all types of tasks. We also determined which types of tasks were more suited to each different RNN unit.
For tasks involving a single output the RWA, RDA and GRU units performed best, for the multiple sequence copy task the RDA performed best, while on the Wikipedia character prediction task the LSTM unit performed best. We recommend taking these results into account when choosing a unit for real-world applications.

### Review Title
Improvement upon a rarely used technique still not showing great results yet.
### Review Text
The authors present RDA, the Recurrent Discounted Attention unit, which improves upon RWA, the earlier introduced Recurrent Weighted Average unit, by adding a discount factor. While the RWA was an interesting idea with bad results (far worse than the standard GRU or LSTM with standard attention, except for hand-picked tasks), the RDA brings it more on par with the standard methods. On the positive side, the paper is clearly written, and adding a discount to the RWA, while a small change, is original. On the negative side, in almost all tasks the RDA is on par with or worse than the standard GRU, except for MultiCopy, where it trains faster, but not to better results, and the difference looks like one between few and very few training steps anyway. The most interesting result is language modeling on Hutter Prize Wikipedia, where RDA very significantly improves upon RWA, but again only matches a standard GRU or LSTM. So the results are not strongly convincing, and the paper lacks any mention of newer work on attention. This year strong improvements over the state of the art have been achieved using attention for translation ("Attention is All You Need") and image classification (e.g., Non-local Neural Networks, and others in the ImageNet competition). To make the evaluation convincing enough for acceptance, RDA should be combined with those models and evaluated more competitively on multiple widely studied tasks.
### Review Rating
4: Ok but not good enough - rejection
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
BJgRsyBtPB
ICLR.cc/2020/Conference
2020
A Greedy Approach to Max-Sliced Wasserstein GANs
["Andr\u00e1s Horv\u00e1th"]
Generative Adversarial Networks have made data generation possible in various use cases, but in case of complex, high-dimensional distributions they can be difficult to train because of convergence problems and the appearance of mode collapse. Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease convergence problems of these architectures. This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance. In this paper we demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.
["GEnerative Adversarial Networks", "GANs", "Wasserstein distances", "Sliced Wasserstein Distance", "Max-sliced Wasserstein distance"]
ABSTRACT

Generative Adversarial Networks have made data generation possible in various use cases, but in case of complex, high-dimensional distributions they can be difficult to train because of convergence problems and the appearance of mode collapse. Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease convergence problems of these architectures. This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance. In this paper we demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.

1 INTRODUCTION

Generative Adversarial Networks (GANs) were first introduced in Goodfellow et al. (2014), where instead of applying a mathematically well-established loss function, another differentiable neural network, a discriminator, was applied to approximate the distance between two distributions. These methods are popularly applied in data generation and have significantly improved the modelling capabilities of neural networks. It was demonstrated in various use cases that these approaches can approximate complex high-dimensional distributions in practice (Karras et al., 2017; Yu et al., 2017; Brock et al., 2018).

Apart from the theoretical advantage of GANs of applying a discriminator network instead of a fixed distance metric (e.g. l1 or l2 loss), modelling high-dimensional distributions with GANs often proves to be problematic in practice. The two most common problems are mode collapse, where the generator gets stuck in a state where only a small portion of the whole distribution is modeled, and convergence problems, where either the generator or the discriminator solves its task almost perfectly, providing low or no gradients for training the other network.

Convergence problems were eased by introducing the Wasserstein distance (Gulrajani et al., 2017; Arjovsky et al., 2017), which instead of a point-wise distance calculation (e.g. cross-entropy or l1 distance) calculates a minimal transportation distance (earth mover's distance) between the two distributions.

The approximation and calculation of the Wasserstein distance is complex and difficult in high dimensions, since for a large sample size the calculation and minimization of the transport becomes exponentially complex, and distances can have very different magnitudes in the different dimensions.

In Deshpande et al. (2018) it was demonstrated that high-dimensional distributions can be approximated by using a high number of one-dimensional projections. For a selected projection, the minimal transport between the one-dimensional samples can be calculated by sorting both the real and the fake samples and assigning them to each other according to their sorted indices. As an additional advantage, it was also demonstrated in Deshpande et al. (2018) that instead of the regular mini-max game of adversarial training, the distribution of the real samples could be approximated directly by the generator alone, omitting the discriminator and turning training into a simpler and more stable minimization problem.
The theory of this novel method is well described, and it was demonstrated that it works in practice, but unfortunately for complex, high-dimensional distributions a large number of projections is needed.

In Deshpande et al. (2019) it was demonstrated how the high number of random projections could be substituted by a single, continuously optimized plane. The parameters of this projection are optimized in an adversarial manner, selecting the "worst" projection, which maximizes the separation between the real and fake samples using a surrogate function. This modification brought regular adversarial training back and created a mini-max game again, where the generator creates samples which resemble the original distribution according to the selected plane, and the discriminator tries to find a projection which separates the real and fake samples from each other.

The essence of Sliced Wasserstein distances is that they provide a way to calculate the minimal transportation between the projected samples in one dimension with ease, which approximates the Wasserstein distance in the original high dimension. In theory this approach is sound (Nadjahi et al., 2019) and it works well in practice. It was proven in Kolouri et al. (2019) that the sliced Wasserstein distance satisfies the properties of non-negativity, identity of indiscernibles, symmetry, and the triangle inequality, this way forming a true metric. Although it approximates high-dimensional distributions well, we would like to demonstrate in this paper that the assignment of real and fake samples by sorting them in one dimension also has its flaws, and that a greedy assignment approach can perform better on commonly applied datasets. We would also argue regarding the application of the Wasserstein distance itself: we will demonstrate that in many cases various assignments can result in the same minimal transportation during training, and that calculating the Wasserstein distance with sorting can alter the distribution of perfectly modeled samples even when only a single sample differs from the approximated distribution.

2 WASSERSTEIN DISTANCE FOR COMPARING DISTRIBUTIONS

Generative adversarial networks can be described by a generator (G), whose task is to generate a fake distribution (P_F) which resembles the distribution of the real samples (P_R), and a discriminator (D), whose task is to distinguish between P_F and P_R samples. Formally, the following min-max objective is iteratively optimized:

   min_G max_D V(P_F, P_R)    (1)

where V is a distance measure or divergence between the two distributions. In Arjovsky & Bottou (2017) the Wasserstein-p distance was proposed to improve the stability of GANs, which can be defined as:

   W_p(P_F, P_R) = inf_{γ ∈ Q(P_F, P_R)} ( E_{(x,y)∼γ} ‖x − y‖^p )^{1/p}    (2)

where p is a parameter of the distance and Q(P_F, P_R) denotes the set of all possible joint distributions over P_F and P_R. The number of possible joint distributions increases factorially with the number of samples, and the calculation of the minimal transport can be difficult.

The instability and high complexity of Wasserstein GANs were further improved by the introduction of the sliced Wasserstein distance in Deshpande et al. (2018), which can be defined as:

   W_p(P_F, P_R) = ( ∫_{w ∈ Ω} W_p^p(P_F^w, P_R^w) dw )^{1/p}    (3)

where P_F^w and P_R^w are one-dimensional projections of the high-dimensional real and fake samples onto w, and Ω denotes a sufficiently high number of projections on the unit sphere. In this setting the Wasserstein distance can be calculated by sorting both the real and fake samples and assigning them to each other by their indices, instead of checking all the possible joint distributions in Q(P_F^w, P_R^w).
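For equal-size batches and p = 1, the sorting-based estimator of equation (3) takes only a few lines. This is a hedged NumPy sketch with our own names, averaging over random unit projections instead of integrating over Ω.

```python
import numpy as np

def w1_sorted(pf, pr):
    """W_1 between two equal-size one-dimensional samples via sorting."""
    return np.abs(np.sort(pf) - np.sort(pr)).mean()

def sliced_w1(fake, real, n_proj=100, seed=None):
    """Monte Carlo estimate of the sliced W_1 distance (equation (3), p = 1)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        w = rng.normal(size=fake.shape[1])
        w /= np.linalg.norm(w)                 # random direction on the unit sphere
        total += w1_sorted(fake @ w, real @ w)
    return total / n_proj
```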
Max-sliced Wasserstein GANs can be introduced by selecting the worst projection w_max from Ω, and since the projection can be implemented by a fully connected layer, one can re-introduce the mini-max game used in regular GAN training:

   min_G max_{w ∈ Ω} W_p(P_F^w, P_R^w)    (4)

In this setup the distance in the single, one-dimensional projected space can be calculated by sorting the samples in a similar manner as in the case of the sliced Wasserstein distance.

2.1 A CRITICISM OF THE WASSERSTEIN DISTANCE

In case of a high sample number and complex high-dimensional distributions, the optimal transport can often be achieved by various different assignments between the two sets of samples. For example, if a sample containing n elements from P_F is given by the series F^w_{1..n} after the projection, a sample from P_R is given by R^w_{1..n}, and we know that for every i and j (i, j ∈ 1..n):

   F^w_i < R^w_j    (5)

which means that the projected samples are well separated from each other (all the projected fake samples are smaller than any of the projected real samples), then for p = 1 all possible assignments of the (i, j) pairs result in the same total distance, which is the minimal transportation of the samples, although the pairwise differences can be very different.

One can also easily see that the minimal transport calculated with sorting might not assign identical samples to each other, which is depicted in Fig. 1. As can be seen from the figure, in case of two Gaussian distributions with the same variance but different means, using sorting to calculate the minimal transport (along an arbitrary dimension) results in a shift of one of the distributions (in this case the green one). This transformation will also affect those samples which are at the intersection of the two distributions. This means that there might be identical pairs in P_F and P_R that nevertheless generate an error, which is not correct: one could assume that if the generator produces a sample identical to one of the samples of P_R, it should not generate an error and no gradients should be invoked by this sample pair. With our assignment, these identical samples are paired with each other and produce no error.

Figure 1: This figure depicts a flaw of the Wasserstein distance. A blue (desired output) distribution and a green (generated fake) distribution are shown; the assignments/desired transports are illustrated by red arrows. The Wasserstein distance calculates the minimal transportation between all the samples. In many cases this results in shifting all the samples to cover the whole distribution correctly, as depicted in the left subfigure. Unfortunately, this means that those samples which are at the intersection of the two distributions, meaning that for these generated samples an identical pair could be found among the real samples, will also be altered and shifted towards another part of the real distribution. Instead of this we propose calculating the assignment with a greedy method.
This ensures that identical (or similar) sample pairs are selected first, and only afterwards is the transportation between the disjoint regions calculated, as depicted in the right subfigure.

Instead of sorting the samples, we propose a method which assigns similar samples to each other. First we would like to remove the intersection of the two distributions and calculate the minimal transport only for the remaining disjoint regions. With this calculation the value of the minimal transport will be the same (in case of a sufficiently large sample number), but more similar samples will be assigned to each other.

The previous example reveals a caveat of the Wasserstein distance: it optimizes the global transport between two distributions, but ignores identical sample pairs and intersecting regions of their density functions. Identical samples will be found extremely rarely in two distributions in practice, but closer sample pairs should also be assigned to each other. In training one typically uses mini-batches and will not see the whole distribution; certain parts or elements might not be present in each mini-batch if their probability in the original distribution is lower than one over the mini-batch size. This can cause further problems in the calculation of the minimal transport using sorting, since the appearance of one wrongly generated sample at the first or last position of the projected samples can result in a completely different assignment. This problem is depicted in Fig. 2. We have to emphasize that although the assignment and distance calculation happen in one dimension, these distances are results of the projection of high-dimensional embeddings, where there can exist dimensions which can decrease the distance between two samples without significantly changing the projected positions of all the other samples.

Figure 2: In this figure we demonstrate different approaches to minimal transportation in one dimension using mini-batches. Real samples are denoted by blue stars and fake samples are plotted as red stars. The positions of the samples were shifted vertically for better display, but only their horizontal position matters. In both subfigures the same constellation can be seen: four similar sample pairs and one pair which is very different. The left subfigure depicts sample assignment by sorting, while the right subfigure depicts assignment by a greedy approach. The summed distances are the same in both cases for p = 1. We would like to argue that the lower assignment can be better for network training.

2.2 A GREEDY APPROACH FOR SAMPLE ASSIGNMENT

We introduce greedy training of max-sliced Wasserstein GANs, where instead of sorting the projected samples we iteratively select the most similar pairs among them for the loss calculation. Our training algorithm is given as pseudocode in Algorithm 1. The training process is very similar to the approach introduced in Deshpande et al. (2019). The main and only alteration is between lines ten and sixteen, where instead of the sorting operation of the original approach we first generate a matrix which contains the distances between all possible sample pairs in one dimension.
First we select the smallest element in this matrix and remove its row and column, and we iterate this process until the matrix contains only one element, which will be the distance between the last two, least similar sample pairs.

We have to note that our approach requires O(n^3) steps, compared to the O(n log n) operations of sorting: we have to perform n minimum selections in a distance matrix of size n × n. We also have to note that this increase in complexity concerns training only and has no effect on the inference complexity of the generator; furthermore, n is the sample or mini-batch size during training, which is usually a relatively low number. In our experience, using batches of 16 to 512 samples, this step did not cause a significant increase in computation time during training.

Algorithm 1 Training the Greedy Max-Sliced Wasserstein Generator
Given: generator parameters θ_g, discriminator (projection) parameters (θ_d, ω), sample size n, learning rate η
 1: while θ_g has not converged do
 2:   for i = 1 → n do
 3:     Sample real data {D_i}_{i=1}^n ∼ P_R and generated samples {F_i^g}_{i=1}^n ∼ P_F
 4:     Compute the surrogate loss s(ω^T D_i, ω^T F_i^g)
 5:     L ← s(ω^T D_i, ω^T F_i^g)
 6:     (ω, θ_d) ← (ω, θ_d) − η ∇_{(ω, θ_d)} L
 7:   end for
 8:   Compute the max-sliced Wasserstein distance max W_p(ω^T D_i, ω^T F_i^g)
 9:   Sample real data {D_i}_{i=1}^n ∼ P_R and generated samples {F_i^g}_{i=1}^n ∼ P_F
10:   L ← 0
11:   Create the distance matrix: M_kl = |ω^T D_k − ω^T F_l^g|
12:   for k = 1 → n do
13:     Find the minimum value of M: min(M) = m_{s,t}
14:     L ← L + m_{s,t}
15:     Remove row s and column t from M
16:   end for
17:   θ_g ← θ_g − η ∇_{θ_g} L
18: end while
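Lines 10–16 of Algorithm 1 can be sketched directly in NumPy. The helper below is our own illustrative code; the trailing demo reproduces the constellation of Fig. 2, where sorting and the greedy pairing give the same total cost for p = 1 but pair the samples differently.

```python
import numpy as np

def w1_greedy(pf, pr):
    """Greedy pairing of two equal-size 1-D samples (lines 10-16 of Algorithm 1)."""
    M = np.abs(pf[:, None] - pr[None, :])   # pairwise distance matrix
    total = 0.0
    for _ in range(len(pf)):
        s, t = np.unravel_index(np.argmin(M), M.shape)
        total += M[s, t]
        M[s, :] = np.inf                    # remove row s
        M[:, t] = np.inf                    # remove column t
    return total

# Four near-identical pairs plus one outlier, as in Fig. 2:
pf = np.array([0.0, 1.0, 2.0, 3.0, 9.0])
pr = np.array([0.1, 1.1, 2.1, 3.1, 4.1])
sorted_cost = np.abs(np.sort(pf) - np.sort(pr)).sum()   # 5.3
greedy_cost = w1_greedy(pf, pr)                          # also 5.3
```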
We were using = 1in our experiments.It is also worth to note, that sorting the samples before creating the distance matrix ( M) can helpfinding the minimal element, since in this case all series starting from the minimal element in aselected row or column will form a monotonically increasing series.3 C OMPARISON OF THEDIFFERENT APPROACHES AND RESULTS3.1 O NE-DIMENSIONAL GAUSSIAN MIXTURE MODELFor the first tests we have generated a one-dimensional toy problem using Gaussian Mixture Modelwith five modes, each of them with variance of 0.15 and expected values of 0, 2, 4, 6 and 8 ac-cordingly. We have trained a four layered fully connected network containing 128, 256, 512 and 1neurons in the corresponding layers for 5000 iterations using Adam optimizer, with learning rate of0.001, with batches of 64. No discriminator was applied in this case, the position of the generatedsamples were optimized directly by the generator and distance calculation was done directly on theoutput of the generator. In one setup the loss was calculated by sorting the samples, in the othersetup we used the greedy approach introduced in Algorithm 1. For further details regarding the hy-perparameters of all simulations and for the sake of reproducibility our code can be found on github,containing all details of the described simulations: the link was removed from the text but for thesake of anonymity, but a link pointing to the codes was used during submissionAfter training we have used the generator to generate 16000 samples and we have compared itto 16000 samples generated by the Gaussian Mixtures Model. We have calculated the Kullback-Leibler divergence and the Pearson correlation coefficient between these two distributions, repeated5Under review as a conference paper at ICLR 2020all experiments ten times and averaged the results. The results can be seen in Table 1. As it can beseen training the network with greedy assignment resulted lower Kullback-Leibler divergence andhigher correlation between the generated and the original distributions, which signals the the greedyapproach could approximate the original distribution of the samples better. For the calculation of theKullback-Leibler divergence histogram calculation was done using 100 bins uniformly distributedbetween -0.5 and 8.5. An image plotting the histograms using these parameters can be seen on Fig.3.Table 1: KL divergence, and Pearson correlation between the original and generated distributionusing sorting and a greedy approach.Method KL DIV Pearson CorrSorted 0.91 0.41Greedy 0.68 0.74Figure 3: Histograms of the one-dimensional distributions. The blue curve plots the original dis-tribution of the Gaussian Mixture Model, the green curve depicts the generated distribution usingsorting for sample assignment, meanwhile the red curve display the result of the greedy approach.3.2 T WO-DIMENSIONAL GAUSSIAN MIXTURE MODELWe have repeated the simulations described in section 3.1 using two-dimensional Gaussian Mixturemodels. Nine modes were generated forming a small 33grid, each with variance of 0.1 and thedistance between neighbouring modes was 3.5 in the grid. In this case a five-layered network with128, 256, 512, 1024 and 1 neurons was applied and training was executed for 500.000 iterations.All other parameters of the simulation were kept the same. The Kullback-Leibler divergence andPearson correlation coefficients of these simulations comparing the greedy and sorting approachescan be seen in Table 2, meanwhile random samples are depicted on Fig. ??. 
3 COMPARISON OF THE DIFFERENT APPROACHES AND RESULTS

3.1 ONE-DIMENSIONAL GAUSSIAN MIXTURE MODEL

For the first tests we generated a one-dimensional toy problem using a Gaussian Mixture Model with five modes, each with a variance of 0.15 and expected values of 0, 2, 4, 6 and 8, respectively. We trained a four-layered fully connected network containing 128, 256, 512 and 1 neurons in the corresponding layers for 5000 iterations using the Adam optimizer with a learning rate of 0.001 and batches of 64. No discriminator was applied in this case; the positions of the generated samples were optimized directly by the generator, and distance calculation was done directly on the output of the generator. In one setup the loss was calculated by sorting the samples; in the other setup we used the greedy approach introduced in Algorithm 1. For further details regarding the hyperparameters of all simulations, and for the sake of reproducibility, our code can be found on GitHub, containing all details of the described simulations (the link was removed from the text for the sake of anonymity; a link pointing to the code was used during submission).

After training we used the generator to generate 16,000 samples and compared them to 16,000 samples generated by the Gaussian Mixture Model. We calculated the Kullback-Leibler divergence and the Pearson correlation coefficient between these two distributions, repeated all experiments ten times, and averaged the results. The results can be seen in Table 1. As can be seen, training the network with greedy assignment resulted in lower Kullback-Leibler divergence and higher correlation between the generated and the original distributions, which signals that the greedy approach could approximate the original distribution of the samples better. For the calculation of the Kullback-Leibler divergence, histograms were computed using 100 bins uniformly distributed between -0.5 and 8.5. A plot of the histograms using these parameters can be seen in Fig. 3.

Table 1: KL divergence and Pearson correlation between the original and generated distributions using sorting and the greedy approach.
  Method   KL Div   Pearson Corr
  Sorted   0.91     0.41
  Greedy   0.68     0.74

Figure 3: Histograms of the one-dimensional distributions. The blue curve plots the original distribution of the Gaussian Mixture Model, the green curve depicts the generated distribution using sorting for sample assignment, and the red curve displays the result of the greedy approach.

3.2 TWO-DIMENSIONAL GAUSSIAN MIXTURE MODEL

We repeated the simulations described in Section 3.1 using two-dimensional Gaussian Mixture models. Nine modes were generated, forming a small 3×3 grid, each with a variance of 0.1, and the distance between neighbouring modes in the grid was 3.5. In this case a five-layered network with 128, 256, 512, 1024 and 1 neurons was applied, and training was executed for 500,000 iterations. All other parameters of the simulation were kept the same. The Kullback-Leibler divergences and Pearson correlation coefficients of these simulations comparing the greedy and sorting approaches can be seen in Table 2, while random samples are depicted in Fig. 4 for qualitative comparison. In this experiment a two-dimensional histogram was used for the Kullback-Leibler divergence calculation, where a uniform grid was formed between -2 and 8, forming 100 bins.

Table 2: KL divergence and Pearson correlation between the original and generated distributions using sorting and the greedy approach for the two-dimensional Gaussian Mixture Model.
  Method   KL Div   Pearson Corr
  Sorted   1.21     0.55
  Greedy   0.33     0.80

Figure 4: This figure depicts the real (blue) and the generated (red) samples for the two-dimensional Gaussian Mixture Models. The upper row was generated using sorting, while the lower row was produced using greedy assignment. The subfigures from left to right display results at two, three, four and five hundred thousand iterations.

3.3 EXPERIMENTS ON THE MNIST DATASET

We also evaluated our approach on the MNIST dataset (LeCun, 1998), where we used the DCGAN architecture (Radford et al., 2015) for image generation. Images were resized to 64×64 to match the input dimension of the architecture, but single-channel images were used. We used batches of 128 and 16 samples and the Adam optimizer for training both the generator and the discriminator (which was a single projection). Since the comparison of high-dimensional (in this case 4096-dimensional) distributions is complex, we binarized the images (thresholded at half of the maximum intensity) and calculated the KL divergence between the distributions of white and black values at each pixel for the original distribution and 60,000 generated fake samples. The results can be seen in Table 3, while a qualitative comparison of randomly selected samples can be seen in Fig. 5.

Table 3: KL divergence between the original and generated distributions using sorting, greedy and the hybrid approach with different batch sizes (16 and 128) on the MNIST dataset.
  Method      KL Div
  Sorted16    0.0927
  Greedy16    0.0233
  Hybrid16    0.0147
  Sorted128   0.0489
  Greedy128   0.0172
  Hybrid128   0.009

Figure 5: This figure displays randomly selected samples generated with the same network architecture and training parameters on the MNIST dataset using batches of 16. The samples on the left were generated using sorting assignment with the Max-Sliced Wasserstein distance, the samples in the middle were generated using greedy sample assignment, and the samples on the right were generated using the hybrid approach. All samples were generated after 20 epochs of training.

3.4 EXPERIMENTS ON THE CELEBA DATASET

We also conducted experiments on the resized CelebA dataset (Liu et al., 2018); in our case images were downscaled to 64×64. We used the DCGAN architecture and compared the three different approaches for sample assignment in the distance calculation. We did not use any special normalization or initialization methods during training. We used batches of 16 for training.

We selected 10,000 samples randomly from the dataset, generated 10,000 random projections, and used the sliced Wasserstein distance with sorting to compare the distributions along these lines. The results can be seen in Table 4, while a qualitative comparison of randomly selected samples can be seen in Fig. 6.

Table 4: The sliced Wasserstein distance of the generated distributions on the CelebA dataset using sorting, the greedy method and the hybrid approach for sample assignment.
  Method   Sliced Wass.
  Sorted   3.472 × 10^-3
  Greedy   1.294 × 10^-3
  Hybrid   0.783 × 10^-3

Figure 6: This figure depicts random samples generated with the same network architecture and training parameters on the CelebA dataset using batches of 16. All samples were generated using the Max-Sliced Wasserstein distance; the samples on the left were generated using sorting for sample assignment, in the middle the greedy approach was used, and the samples on the right were generated using the hybrid approach. All samples were generated after 20 epochs of training.
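The histogram-based KL evaluation used in sections 3.1–3.3 can be sketched as below. The paper does not state the direction of the divergence (a point also raised in the review), so this sketch computes KL(real || generated) as an assumption; the bin range matches the one-dimensional experiment.

```python
import numpy as np

def histogram_kl(real, fake, bins=100, lo=-0.5, hi=8.5, eps=1e-12):
    """KL divergence between binned sample distributions (1-D sketch)."""
    p, _ = np.histogram(real, bins=bins, range=(lo, hi))
    q, _ = np.histogram(fake, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```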
4 CONCLUSION

In this paper we have introduced greedy sample assignment for Max-Sliced Wasserstein GANs. We have shown that for one-dimensional samples, in many cases multiple assignments can result in optimal transportation, and that sorting typically changes all the samples, even though those parts of the distribution which are at a "good" position should not generate an error.

We proposed greedy assignment as a possible solution, where samples are assigned to their most similar counterparts. We have also introduced a hybrid approach combining the two methods, in which the applied assignment can be selected automatically, based on the difference of the two measures.

We have demonstrated on simple toy datasets that greedy assignment performs better than sorting the samples, and we have evaluated both the greedy and the hybrid methods on commonly investigated datasets (MNIST and CelebA). On all datasets the greedy assignment resulted in lower Kullback-Leibler divergence and higher correlation than the traditional approach.

We have used the Max-Sliced Wasserstein distance as the base of our comparison, since it is the most recent variant and also results in the best performance, but all the approaches can be exploited in case of regular Sliced Wasserstein distances as well. Furthermore, our approach changes the distance calculation only, so it can be applied together with various other improved techniques and architectures used in GAN training.
rygE9UQgqS
Official Blind Review #2
1: Reject
This paper proposes two alternative approaches to max-sliced Wasserstein GANs. They are based on the authors' claim that there is a "flaw" in the Wasserstein-1 distance between probability distributions on the one-dimensional space. Briefly, the authors' argument says that the "flaw" is that the optimal transport may not be unique, and that some optimal transports are better for network learning than others. One proposal, described in Section 2.2, is to find a plausible transport plan in a greedy manner. The other proposal, described in Section 2.3, is a hybrid of the greedy approach in Section 2.2 and the original sliced Wasserstein distance.

The working hypothesis in this paper, that the above "flaw" is indeed problematic in learning in max-sliced Wasserstein GANs, has not been confirmed in any sense in this paper. Algorithm 1 is meant to explain one of the proposals, the greedy approach, but I found that several undefined symbols are used there, so that it seems hard to understand it. Numerical experiments in Section 3 are not well described. Because of these, I would not be able to recommend acceptance of this paper.

Algorithm 1 seems to heavily rely on Algorithm 1 in Deshpande et al., 2019. This paper does not provide any explanation about why one should sample n data for each i running from 1 to n (line 3), what the "surrogate loss" is (line 4), what \omega is (line 4), why one should care about the surrogate loss between the ith data point and the ith generated sample (line 4), or what D^i_k means (line 11).

I do not understand how the Pearson correlation coefficient between the generator (or fake) distribution and the real distribution in Section 3 is computed. As far as I understand, fake samples and real samples are sampled independently, so that the correlation coefficient should ideally vanish in any case. Also, the KL divergence is not symmetric, so whether KL(P_F, P_R) or KL(P_R, P_F) was evaluated has to be explicitly stated. Furthermore, recalling that the Wasserstein distance was originally introduced to the GAN literature in order to alleviate the problems associated with the KL-based divergence (Jensen-Shannon), I do not understand either why the authors chose to use the KL divergence in their performance comparison.

The "flaw" argued in this paper does not apply to the Wasserstein distance in general, but specifically to the Wasserstein-1 distance between one-dimensional distributions. This fact should be stated clearly.

Page 2, equation (2): The subscript \mathbb{P} should read p.
Page 2, two lines below equation (3): w should be italicized.
Page 4, line 4: if the(y->ir) probability
Page 4, line 24: has no effect (t)on the inference complexity
Page 5, Algorithm 1, line 17: g of \theta g should be a subscript of \theta.
Page 6, line 3: which signals th(e->at) the
Page 6, line 12: was executed for (500.000->500,000) iterations.
Page 6, line 15: Figure number is missing.
Page 7, lines 9-10: (10.000->10,000) random projection(s) and used
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title A Greedy Approach to Max-Sliced Wasserstein GANs ### Paper Abstract Generative Adversarial Networks have made data generation possible in various use cases, but in case of complex, high-dimensional distributions it can be difficult to train them, because of convergence problems and the appearance of mode collapse. Sliced Wasserstein GANs and especially the application of the Max-Sliced Wasserstein distance made it possible to approximate Wasserstein distance during training in an efficient and stable way and helped ease convergence problems of these architectures. This method transforms sample assignment and distance calculation into sorting the one-dimensional projection of the samples, which results a sufficient approximation of the high-dimensional Wasserstein distance. In this paper we will demonstrate that the approximation of the Wasserstein distance by sorting the samples is not always the optimal approach and the greedy assignment of the real and fake samples can result faster convergence and better approximation of the original distribution. ### Paper Keywords ["GEnerative Adversarial Networks", "GANs", "Wasserstein distances", "Sliced Wasserstein Distance", "Max-sliced Wasserstein distance"] ### Paper Content ABSTRACTGenerative Adversarial Networks have made data generation possible in varioususe cases, but in case of complex, high-dimensional distributions it can be difficultto train them, because of convergence problems and the appearance of mode col-lapse. Sliced Wasserstein GANs and especially the application of the Max-SlicedWasserstein distance made it possible to approximate Wasserstein distance duringtraining in an efficient and stable way and helped ease convergence problems ofthese architectures.This method transforms sample assignment and distance calculation into sortingthe one-dimensional projection of the samples, which results a sufficient approxi-mation of the high-dimensional Wasserstein distance.In this paper we will demonstrate that the approximation of the Wasserstein dis-tance by sorting the samples is not always the optimal approach and the greedyassignment of the real and fake samples can result faster convergence and betterapproximation of the original distribution.1 I NTRODUCTIONGenerative Adversarial Networks (GANs) were first introduced in Goodfellow et al. (2014), whereinstead of the application of a mathematically well-established loss function an other differentiableneural network, a discriminator was applied to approximate the distance between two distributions.These methods are popularly applied in data generation and has significantly improved the modellingcapabilities of neural networks. It was demonstrated in various use cases that these approachescan approximate complex high-dimensional distributions in practice Karras et al. (2017), Yu et al.(2017), Brock et al. (2018).Apart from the theoretical advantage of GANs and applying a discriminator network instead of adistance metric (e.g. `1or`2loss), modelling high-dimensional distributions with GANs oftenproves to be problematic in practice. 
The two most common problems are mode collapse, wherethe generator gets stuck in a state where only a small portion of the whole distribution is modeledand convergence problems, where either the generator or the discriminator solves his task almostperfectly, providing low or no gradients for training for the other network.Convergence problems were improved, by introducing the Wasserstein distance Gulrajani et al.(2017) Arjovsky et al. (2017), which instead of a point-wise distance calculation (e.g. cross-entropyor`1distance) calculates a minimal transportation distance (earth mover’s distance) between thetwo distributions.The approximation and calculation of the Wasserstein distance is complex and difficult in high-dimensions, since in case of a large sample size calculation and minimization of the transport be-comes exponentially complex, also distance can have various magnitudes in the different dimen-sions.In Deshpande et al. (2018) it was demonstrated that high-dimensional distributions can be approxi-mated by using a high number of one dimensional projections. For a selected projection the minimaltransport between the one dimensional samples can be calculated by sorting both the real and thefake samples and assigning them according to their sorted indices correspondingly. As an additionaladvantage, it was also demonstrated in Deshpande et al. (2018) that instead of the regular mini-maxgame of adversarial training, the distribution of the real samples could be approximated directly by1Under review as a conference paper at ICLR 2020the generator only, omitting the discriminator and turning training into a simple and more stable min-imization problem. The theory of this novel method is well described and it was demonstrated thatit works in practice, but unfortunately for complex, high-dimensional distributions a large amountof projections are needed.In Deshpande et al. (2019) it was demonstrated how the high number of random projections couldbe substituted by a single continuously optimized plane. The parameters of this projection areoptimized in an adversarial manner selecting the ”worst” projection, which maximizes separationbetween the real and fake samples using a surrogate function. This modification brought the regularadversarial training back and created a mini-max game again, where the generator creates sampleswhich resemble well to the original distribution according to the selected plane and the discriminatortries to find a projection, which separates the real and fake samples from each other.The essence of Sliced Wasserstein distances is how they provide a method to calculate minimaltransportation between the projected samples in one-dimension with ease, which approximates theWasserstein distance in the original high-dimension. In theory this approach is sound Nadjahi et al.(2019) and works well in practise. It was proven in Kolouri et al. (2019) that the sliced Wassersteindistance satisfies the properties of non-negativity, identity of indiscernibles, symmetry, and triangleinequality, this way forming a true metric. However it approximates high-dimensional distributionswell, we would like to demonstrate in this paper that the assignment of real and fake samples bysorting them in one dimension also has its flaws and a greedy assignment approach can performbetter on commonly applied datasets. We would also argue regarding the application of the Wasser-stein distance. 
We will demonstrate that in many cases various assignments can result the sameminimal transportation during training and calculation of the Wasserstein distance with sorting canalter the distribution of perfectly modeled samples even when only a single sample differs from theapproximated distribution.2 W ASSERSTEIN DISTANCE FOR COMPARING DISTRIBUTIONSGenerative adversarial networks can be described by a generator ( G), whose task is to generatea fake distribution ( PF), which resembles a distribution of real samples ( PR) and a discriminator,whose task is to distinguish between PFandPRsamples. Formally the following min-max objectiveis iteratively optimized:minGmaxDV(PF;PR) (1)WhereVis a distance measure or divergence between the two distributions. In Arjovsky & Bottou(2017) the Wasserstein-p distance was proposed to improve the staibility of GANs, which can bedefined as:WP(PF;PR) = inf2Q(PF;PR)(E(x;y)vkxykp)1p (2)wherepis a parameter of the distance, andQ(PF;PR)defines all the possible joint distributionsoverPFandPR. The number of possible joint distributions increases factorially with the number ofsamples and the calculation of the minimal transport can be difficult.The instability and high complexity of wasserstein GANs were further improved by the introductionof the sliced Wassersstein distance in Deshpande et al. (2018), which can be defined as:WP(PF;PR) =Zw2Wpp(PwF;PwR)dwp(3)where PwF;PwRare one-dimensional projections of the high dimensional real and fake samples tow and denotes a sufficiently high number of projections on the unit sphere. In this setting theWasserstein distance can be calculated by sorting both the real and fake samples and assigning themby their indices, instead of checking all the possible joint distributions inQ(PwF;PwR).Max-sliced Wasserstein GANs can be introduced by selecting the worst projected distribution fromnoted bywmax and since the projection can be implemented by a fully connected layer, one canre-introduce the mini-max game used in regular GAN training:minGmaxD(w2)WP(PF;PR) (4)2Under review as a conference paper at ICLR 2020In this setup the distance in the single, one-dimensional projected space can be calculated by sortingthe samples in the similar manner as in case of the sliced Wasserstein distance.2.1 A C RITICISM OF THEWASSERSTEIN DISTANCEIn case of high sample number and complex high-dimensional distributions the optimal transportcan often be calculated using various assignments between two sets of samples. For example if asample containing nelements from PFcan be defined by the series of Fw1:::nafter the projection,and a sample from PRis defined by Rw1:::nand we know that for every iandj(i;j21:::n):Fwi<Rwj (5)Which means that the projected samples are well separated from each other (all the projected fakesamples are smaller than any of the projected real samples), for p= 1all possible assignments of thei;jpairs will result the same distance, which is the minimal transportation of the samples, althoughthe pairwise differences can be very different.One can also easily see, that the minimal transport calculated with sorting might not assign identicalsamples to each other, which is depicted on Fig. 1. As it can be seen from the figure, for example incase of two Gaussian distributions with the same variance, but different mean values using sortingto calculate the minimal transport (along arbitrary dimensions), will results the shift of one of thedistributions (in this case green). 
This transformation will also affect those samples which are at theintersection of the two distributions. This means that there might be identical pairs in ( PFandPR)generating an error, which is not correct. One could assume that if a generator produces a sampleidentical to one of the samples of PRit should not generate an error and no gradients should beinvoked by this sample pair. In this case the assignment will not pair these identical samples forcomparison.Figure 1: This figure depicts a flaw of the Wasserstein distance. In these figures a blue (desired out-put distribution and green, generated fake distribution can be seen), the assignments/desired trans-ports are illustrated by red arrows. Wasserstein distance calculates minimal transportation betweenall the samples. In many cases this results shifting all the samples to cover the whole distributioncorrectly, this is depicted on the left subfigure. Unfortunately this means that those samples whichare at the intersection of the two distributions, meaning that for these generated samples an identicalpair could be found in the real samples, will also be altered and shifted towards an other part ofthe real distribution. Instead of this we propose the calculation of the assignment using a greedymethod. This will ensure that that identical (or similar) sample pairs will be selected first, and afterthis the transportation between the disjunct regions will be calculated, which is depicted on the rightsubfigure.Instead of sorting the samples we propose a method which assigns similar samples to each other.First we would like to remove the intersection of the two distributions and calculate the minimaltransport for the remaining disjunct regions. By this calculation the value of the minimal transportwill be the same (in case of a sufficiently large sample number), but more similar samples will beassigned to each other.The previous example reveals a caveat of the Wasserstein distance, that it optimizes the global trans-port between two distributions, but ignores identical sample pairs or intersecting regions of density3Under review as a conference paper at ICLR 2020functions in them. Unfortunately identical samples will be extremely rarely found in two distribu-tions, but closer sample pairs should also be assigned to each other. In training one typically usesmini-batches and will not see the whole distribution, certain parts or elements might not be presentin each mini-batch if they probability in the original distribution is lower than one over the mini-batch size. This can cause further problems in the calculation of the minimal transport using sorting,since the appearance of one wrongly generated sample at the first or last position of the projectedsamples can result a completely different assignment. This problem is depicted on Fig. 2. We haveto emphasize that although the assignment and distance calculation happen in one dimension, thesedistances are results of the projection of high dimensional embeddings, where there can exist di-mensions which can decrease the distance between the two samples without significantly changingthe projected position of all the other samples.Figure 2: In this figure we would like to demonstrate different approaches for minimal transportationin one-dimension using mini-batches. Real samples are noted by blue and fake samples are plottedby red stars. The position of the samples were shifted vertically for better display, but only theirhorizontal position matters. 
Figure 2: In this figure we demonstrate different approaches to minimal transportation in one dimension using mini-batches. Real samples are shown in blue and fake samples are plotted as red stars. The positions of the samples were shifted vertically for better display; only their horizontal position matters. In both subfigures the same constellation can be seen: four similar sample pairs and one sample which is very different. The left subfigure depicts sample assignment by sorting, while the right subfigure depicts assignment by a greedy approach. The summed distances are the same in both cases for $p = 1$. We argue that the lower assignment can be better for network training.

2.2 A GREEDY APPROACH FOR SAMPLE ASSIGNMENT

We introduce greedy training of max-sliced Wasserstein GANs, where instead of sorting the samples of the projections we iteratively select the most similar pairs among them for loss calculation. Our training procedure is given as pseudocode in Algorithm 1. The training process is very similar to the approach introduced in Deshpande et al. (2019). The main and only alteration is between lines ten and sixteen: instead of the sorting operation of the original approach, we first build a matrix holding the distances of all possible sample pairs in one dimension. We select the smallest element of this matrix, remove its row and column, and iterate this process until the matrix contains only one element, which will be the distance between the last two, least similar samples.

Algorithm 1 Training the Greedy Max-Sliced Wasserstein Generator
Given: generator parameters $\theta_g$, discriminator parameters $\theta_d$, projection $\omega_d$, sample size $n$, learning rate $\eta$
1:  while $\theta_g$ has not converged do
2:    for $i = 1 \to n$ do
3:      sample data $\{D_i\}_{i=1}^n \sim P_R$ and generated samples $\{F_i^g\}_{i=1}^n \sim P_F$
4:      compute the surrogate loss $s(\omega D_i, \omega F_i^g)$
5:      $L \leftarrow s(\omega D_i, \omega F_i^g)$
6:      $(\hat{\omega}, \hat{\theta}_d) \leftarrow (\hat{\omega}, \hat{\theta}_d) + \eta \nabla_{\hat{\omega}, \hat{\theta}_d} L$
7:    end for
8:    compute the max-sliced Wasserstein distance $\max_\omega W_p(\omega D_i, \omega F_i^g)$
9:    sample data $\{D_i\}_{i=1}^n \sim P_R$ and generated samples $\{F_i^g\}_{i=1}^n \sim P_F$
10:   $L \leftarrow 0$
11:   create the distance matrix $M_{kl} = |\omega D_k - \omega F_l^g|$
12:   for $k = 1 \to n$ do
13:     find the minimal value of $M$: $\min(M) = m_{s,t}$
14:     $L \leftarrow L + m_{s,t}$
15:     remove row $s$ and column $t$ from $M$
16:   end for
17:   $\theta_g \leftarrow \theta_g - \eta \nabla_{\theta_g} L$
18: end while

We note that our approach requires $O(n^3)$ steps compared to the $O(n \log n)$ operations of sorting: we have to perform $n$ minimum selections in a distance matrix of size $n \times n$. We also note that this increase in complexity applies to training only and has no effect on the inference complexity of the generator; moreover, $n$ is the sample (mini-batch) size during training, which is usually a relatively low number. In our experience, using batches of 16 to 512 samples, this step did not cause a significant increase in computation time during training.
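A minimal NumPy sketch of the assignment step (lines 10-16 of Algorithm 1) follows; the function name is ours, and in actual GAN training the same selection would be performed on projected PyTorch tensors so that the accumulated loss stays differentiable:

```python
import numpy as np

def greedy_assignment_cost(fake_proj, real_proj):
    """Iteratively match the closest remaining (fake, real) pair and
    remove its row and column from the distance matrix (O(n^3))."""
    m = np.abs(fake_proj[:, None] - real_proj[None, :])  # n x n distances
    rows, cols = list(range(len(fake_proj))), list(range(len(real_proj)))
    total = 0.0
    while rows:
        sub = m[np.ix_(rows, cols)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        total += sub[i, j]
        del rows[i], cols[j]
    return total
```

As noted at the end of Section 2.3 below, pre-sorting both inputs makes the minimum search cheaper because the rows and columns of the matrix become monotone.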
In this case a parameter ( ) can be set, determining a limit and if the difference betweenthe two distances is larger than this value the sorted distance will be used. This way Wassersteindistance with the hybrid assignment ( WHP) can be defined as:WHP=WSP;if: (WGPWSP)>WSPWGP otherwise(6)In case of= 0 the greedy assignment will only be used in those cases, where it also resultsthe minimal transportation. If is set to one, the greedy assignment will be used if the distancecalculated this way is less than twice the minimal distance. We were using = 1in our experiments.It is also worth to note, that sorting the samples before creating the distance matrix ( M) can helpfinding the minimal element, since in this case all series starting from the minimal element in aselected row or column will form a monotonically increasing series.3 C OMPARISON OF THEDIFFERENT APPROACHES AND RESULTS3.1 O NE-DIMENSIONAL GAUSSIAN MIXTURE MODELFor the first tests we have generated a one-dimensional toy problem using Gaussian Mixture Modelwith five modes, each of them with variance of 0.15 and expected values of 0, 2, 4, 6 and 8 ac-cordingly. We have trained a four layered fully connected network containing 128, 256, 512 and 1neurons in the corresponding layers for 5000 iterations using Adam optimizer, with learning rate of0.001, with batches of 64. No discriminator was applied in this case, the position of the generatedsamples were optimized directly by the generator and distance calculation was done directly on theoutput of the generator. In one setup the loss was calculated by sorting the samples, in the othersetup we used the greedy approach introduced in Algorithm 1. For further details regarding the hy-perparameters of all simulations and for the sake of reproducibility our code can be found on github,containing all details of the described simulations: the link was removed from the text but for thesake of anonymity, but a link pointing to the codes was used during submissionAfter training we have used the generator to generate 16000 samples and we have compared itto 16000 samples generated by the Gaussian Mixtures Model. We have calculated the Kullback-Leibler divergence and the Pearson correlation coefficient between these two distributions, repeated5Under review as a conference paper at ICLR 2020all experiments ten times and averaged the results. The results can be seen in Table 1. As it can beseen training the network with greedy assignment resulted lower Kullback-Leibler divergence andhigher correlation between the generated and the original distributions, which signals the the greedyapproach could approximate the original distribution of the samples better. For the calculation of theKullback-Leibler divergence histogram calculation was done using 100 bins uniformly distributedbetween -0.5 and 8.5. An image plotting the histograms using these parameters can be seen on Fig.3.Table 1: KL divergence, and Pearson correlation between the original and generated distributionusing sorting and a greedy approach.Method KL DIV Pearson CorrSorted 0.91 0.41Greedy 0.68 0.74Figure 3: Histograms of the one-dimensional distributions. The blue curve plots the original dis-tribution of the Gaussian Mixture Model, the green curve depicts the generated distribution usingsorting for sample assignment, meanwhile the red curve display the result of the greedy approach.3.2 T WO-DIMENSIONAL GAUSSIAN MIXTURE MODELWe have repeated the simulations described in section 3.1 using two-dimensional Gaussian Mixturemodels. 
3 COMPARISON OF THE DIFFERENT APPROACHES AND RESULTS

3.1 ONE-DIMENSIONAL GAUSSIAN MIXTURE MODEL

For the first tests we generated a one-dimensional toy problem using a Gaussian mixture model with five modes, each with a variance of 0.15 and expected values of 0, 2, 4, 6 and 8, respectively. We trained a four-layered fully connected network containing 128, 256, 512 and 1 neurons in the corresponding layers for 5000 iterations, using the Adam optimizer with a learning rate of 0.001 and batches of 64. No discriminator was applied in this case: the positions of the generated samples were optimized directly by the generator, and the distance calculation was done directly on the output of the generator. In one setup the loss was calculated by sorting the samples; in the other setup we used the greedy approach introduced in Algorithm 1. For further details regarding the hyperparameters of all simulations and for the sake of reproducibility, our code is available on GitHub (the link was removed from the text for the sake of anonymity, but a link pointing to the code was provided during submission).

After training we used the generator to generate 16000 samples and compared them to 16000 samples generated by the Gaussian mixture model. We calculated the Kullback-Leibler divergence and the Pearson correlation coefficient between these two distributions, repeated all experiments ten times, and averaged the results. The results can be seen in Table 1. Training the network with greedy assignment resulted in a lower Kullback-Leibler divergence and a higher correlation between the generated and the original distributions, which signals that the greedy approach could approximate the original distribution of the samples better. For the calculation of the Kullback-Leibler divergence, histograms were computed using 100 bins uniformly distributed between -0.5 and 8.5. A plot of the histograms using these parameters can be seen in Fig. 3.

Table 1: KL divergence and Pearson correlation between the original and generated distributions using sorting and the greedy approach.

Method   KL Div   Pearson Corr
Sorted   0.91     0.41
Greedy   0.68     0.74

Figure 3: Histograms of the one-dimensional distributions. The blue curve plots the original distribution of the Gaussian mixture model, the green curve depicts the generated distribution using sorting for sample assignment, and the red curve displays the result of the greedy approach.

3.2 TWO-DIMENSIONAL GAUSSIAN MIXTURE MODEL

We repeated the simulations described in Section 3.1 using two-dimensional Gaussian mixture models. Nine modes were generated, forming a small 3x3 grid, each with a variance of 0.1, and the distance between neighbouring modes in the grid was 3.5. In this case a five-layered network with 128, 256, 512, 1024 and 1 neurons was applied, and training was executed for 500,000 iterations. All other parameters of the simulation were kept the same. The Kullback-Leibler divergences and Pearson correlation coefficients of these simulations, comparing the greedy and sorting approaches, can be seen in Table 2, and random samples are depicted in Fig. 4 for qualitative comparison. In this experiment a two-dimensional histogram was used for the Kullback-Leibler divergence calculation, where a uniform grid with 100 bins was formed between -2 and 8.

Table 2: KL divergence and Pearson correlation between the original and generated distributions using sorting and the greedy approach for the two-dimensional Gaussian mixture model.

Method   KL Div   Pearson Corr
Sorted   1.21     0.55
Greedy   0.33     0.80

Figure 4: This figure depicts the real (blue) and the generated (red) samples for two-dimensional Gaussian mixture models. The upper rows were generated using sorting, while the lower samples were produced using greedy assignment. The subfigures from left to right display results at two, three, four and five hundred thousand iterations.
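Both mixture experiments above report a histogram-based KL divergence; a sketch of the one-dimensional variant follows (the two-dimensional case is analogous with np.histogram2d). The direction KL(real || fake) and the epsilon smoothing are our assumptions, as the paper states neither:

```python
import numpy as np

def histogram_kl(real, fake, lo=-0.5, hi=8.5, bins=100, eps=1e-8):
    """KL divergence between binned densities, using the 100 uniform
    bins on [-0.5, 8.5] from Section 3.1."""
    p = np.histogram(real, bins=bins, range=(lo, hi))[0].astype(float) + eps
    q = np.histogram(fake, bins=bins, range=(lo, hi))[0].astype(float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```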
3.3 EXPERIMENTS ON THE MNIST DATASET

We also evaluated our approach on the MNIST dataset (LeCun (1998)), using the DCGAN architecture (Radford et al. (2015)) for image generation. Images were resized to 64x64 to match the input dimension of the architecture, but single-channel images were used. We used batches of 128 and of 16 samples and the Adam optimizer for training both the generator and the discriminator (which was a single projection). Since the comparison of high-dimensional (in this case 4096-dimensional) distributions is complex, we binarized the images (thresholded at half of the maximum intensity) and calculated the KL divergence between the distributions of white and black values at each pixel for the original distribution and for 60000 generated fake samples. The results can be seen in Table 3, and a qualitative comparison of randomly selected samples can be seen in Fig. 5.

Table 3: KL divergence between the original and generated distributions using the sorting, greedy and hybrid approaches with different batch sizes (16 and 128) on the MNIST dataset.

Method      KL Div
Sorted16    0.0927
Greedy16    0.0233
Hybrid16    0.0147
Sorted128   0.0489
Greedy128   0.0172
Hybrid128   0.009

Figure 5: Randomly selected samples generated with the same network architecture and training parameters on the MNIST dataset using batches of 16. The samples on the left were generated using sorting assignment with the max-sliced Wasserstein distance, the samples in the middle using greedy sample assignment, and the samples on the right using the hybrid approach. All samples were generated after 20 epochs of training.

3.4 EXPERIMENTS ON THE CELEBA DATASET

We also conducted experiments on the resized CelebA dataset (Liu et al. (2018)); in our case images were downscaled to 64x64. We used the DCGAN architecture and compared the three different approaches to sample assignment for distance calculation. We did not use any special normalization or initialization methods during training. We used batches of 16 for training.

We selected 10,000 samples randomly from the dataset, generated 10,000 random projections, and used the sliced Wasserstein distance with sorting to compare the distributions along these lines. The results can be seen in Table 4, and a qualitative comparison of randomly selected samples can be seen in Fig. 6.

Table 4: The sliced Wasserstein distance of the generated distributions on the CelebA dataset using sorting, the greedy method and the hybrid approach for sample assignment.

Method   Sliced Wasserstein
Sorted   3.472 x 10^-3
Greedy   1.294 x 10^-3
Hybrid   0.783 x 10^-3

Figure 6: Random samples generated with the same network architecture and training parameters on the CelebA dataset using batches of 16. All samples were generated using the max-sliced Wasserstein distance; the samples on the left were generated using sorting for sample assignment, in the middle the greedy approach was used, and the samples on the right were generated using the hybrid approach. All samples were generated after 20 epochs of training.

4 CONCLUSION

In this paper we introduced greedy sample assignment for max-sliced Wasserstein GANs. We showed that with one-dimensional samples, in many cases multiple assignments can yield the optimal transportation, and that in most cases sorting changes all the samples, even though those parts of the distribution which are at a "good" position should not generate error.

We proposed greedy assignment as a possible solution, where samples are assigned to their most similar counterparts. We also introduced a combination of the two methods, resulting in a hybrid approach which automatically selects, based on the difference of the two measures, which assignment will be used.

We demonstrated on simple toy datasets that greedy assignment performs better than sorting the samples, and we evaluated both the greedy and the hybrid methods on commonly investigated datasets (MNIST and CelebA). On all datasets the greedy assignment resulted in a lower Kullback-Leibler divergence and a higher correlation than the traditional approach.

We used the max-sliced Wasserstein distance as the basis of our comparison, since it is the most recent variant and also yields the best performance, but all the proposed approaches can be exploited with regular sliced Wasserstein distances as well. Our approach changes the distance calculation only, so it can be applied together with various other improved techniques and architectures used in GAN training.<|im_end|> <|im_start|>assistant
### Review Title
Official Blind Review #2
### Review Text
This paper proposes two alternative approaches to max-sliced Wasserstein GANs. They are based on the authors' claim that there is a "flaw" in the Wasserstein-1 distance between probability distributions on a one-dimensional space. Briefly, the authors' argument is that the "flaw" is that the optimal transport may not be unique, and that some optimal transports are better for network learning than others. One proposal, described in Section 2.2, is to find a plausible transport plan in a greedy manner.
The other proposal, described in Section 2.3, is a hybrid of the greedy approach in Section 2.2 and the original sliced Wasserstein distance. The working hypothesis of this paper, that the above "flaw" is indeed problematic for learning in max-sliced Wasserstein GANs, has not been confirmed in any sense in this paper. Algorithm 1 is meant to explain one of the proposals, the greedy approach, but I found that several undefined symbols are used there, so that it seems hard to understand. Numerical experiments in Section 3 are not well described. Because of these issues, I would not be able to recommend acceptance of this paper.

Algorithm 1 seems to rely heavily on Algorithm 1 in Deshpande et al., 2019. This paper does not provide any explanation of why one should sample n data points for each i running from 1 to n (line 3), what the "surrogate loss" is (line 4), what \omega is (line 4), why one should care about the surrogate loss between the ith data point and the ith generated sample (line 4), or what D^i_k means (line 11).

I do not understand how the Pearson correlation coefficient between the generator (fake) distribution and the real distribution in Section 3 is computed. To my understanding, fake samples and real samples are sampled independently, so that the correlation coefficient should ideally vanish in any case. Also, the KL divergence is not symmetric, so whether KL(P_F, P_R) or KL(P_R, P_F) was evaluated has to be explicitly stated. Furthermore, recalling that the Wasserstein distance was originally introduced to the GAN literature in order to alleviate the problems associated with the KL-based divergence (Jensen-Shannon), I do not understand why the authors chose to use the KL divergence in their performance comparison.

The "flaw" argued in this paper does not apply to the Wasserstein distance in general, but specifically to the Wasserstein-1 distance between one-dimensional distributions. This fact should be stated clearly.

Page 2, equation (2): The subscript \mathbb{P} should read p.
Page 2, two lines below equation (3): w should be italicized.
Page 4, line 4: if the(y->ir) probability
Page 4, line 24: has no effect (t)on the inference complexity
Page 5, Algorithm 1, line 17: g of \theta g should be a subscript of \theta.
Page 6, line 3: which signals th(e->at) the
Page 6, line 12: was executed for (500.000->500,000) iterations.
Page 6, line 15: Figure number is missing.
Page 7, lines 9-10: (10.000->10,000) random projection(s) and used
_V3kUd_BhGD
MICCAI.org/2022/Challenge/FLARE
2022
Self-pretrained V-net based on PCRL for Abdominal Organ Segmentation
["Jiapeng Zhang"]
Abdominal organ segmentation has many important clinical applications. However, the manual annotation process is time-consuming and labor-intensive. In the "Fast and Low-resource semi-supervised Abdominal oRgan sEgmentation in CT" challenge, the organizers provide massive unlabeled CT images. To effectively utilize the unlabeled cases, we propose a self-pretrained V-net. Inspired by preservational contrastive representation learning (PCRL), the proposed method consists of two steps: 1) using a large amount of unlabeled data to obtain a pre-trained model, and 2) using a small amount of labeled data to perform fully supervised fine-tuning on top of it. The feature extraction part used in both stages uses the same backbone network. The difference is that the pre-training stage introduces an additional image reconstruction branch and a corresponding momentum branch to construct image reconstruction and contrastive learning, while the fully supervised model downstream uses a fully convolutional network for segmentation prediction. In the pre-training stage, by incorporating diverse image reconstruction tasks into the contrastive learning, the representation ability of the backbone network for the specific image data in the upstream feature extraction process is enhanced. Besides, half precision (Float16) is used in the prediction stage, which reduces the GPU load by about 36% without losing prediction accuracy; the maximum GPU memory used is 1719 MB. In quantitative evaluation on the FLARE2022 validation cases, this method achieves an average Dice similarity coefficient (DSC) of 0.4811 and an average normalized surface distance (NSD) of 0.4513.
["Self-supervised learning", "Self-transfer learning", "Organ Segmentation"]
Self-pretrained V-net based on PCRL for Abdominal Organ Segmentation

Jiapeng Zhang
University of Shanghai for Science and Technology, Shanghai, China

Abstract. Abdominal organ segmentation has many important clinical applications. However, the manual annotation process is time-consuming and labor-intensive. In the "Fast and Low-resource semi-supervised Abdominal oRgan sEgmentation in CT" challenge, the organizers provide massive unlabeled CT images. To effectively utilize the unlabeled cases, we propose a self-pretrained V-net. Inspired by preservational contrastive representation learning (PCRL), the proposed method consists of two steps: 1) using a large amount of unlabeled data to obtain a pre-trained model, and 2) using a small amount of labeled data to perform fully supervised fine-tuning on top of it. The feature extraction part used in both stages uses the same backbone network. The difference is that the pre-training stage introduces an additional image reconstruction branch and a corresponding momentum branch to construct image reconstruction and contrastive learning, while the fully supervised model downstream uses a fully convolutional network for segmentation prediction. In the pre-training stage, by incorporating diverse image reconstruction tasks into the contrastive learning, the representation ability of the backbone network for the specific image data in the upstream feature extraction process is enhanced. Besides, half precision (Float16) is used in the prediction stage, which reduces the GPU load by about 36% without losing prediction accuracy; the maximum GPU memory used is 1719 MB. In quantitative evaluation on the FLARE2022 validation cases, this method achieves an average Dice similarity coefficient (DSC) of 0.4811 and an average normalized surface distance (NSD) of 0.4513.

Keywords: Self-supervised learning · Self-transfer learning · Organ Segmentation

1 Introduction

Abdominal organ segmentation plays an important role in clinical practice, and state-of-the-art methods have achieved inter-observer performance on several benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance generalizes to more diverse datasets. Some SOTA methods have good general applicability; however, when the training data are limited and the task is complex, it is difficult to train such models fully. Moreover, many SOTA methods use model ensembles to boost performance, but these solutions usually have a large model size and cost extensive computational resources, which makes them impractical to deploy in clinical practice.

Compared with labeled data, unlabeled data are usually easier to obtain because the manual labeling process is omitted. To make full use of massive unlabeled cases, self-supervised learning has been widely adopted [2]. Based on the massive unlabeled data provided by the Fast and Low-resource semi-supervised Abdominal oRgan sEgmentation in CT challenge, we designed our method based on V-Net [7] and PCRL [12].

Specifically, the backbone uses an encoder-decoder style architecture with skip connections [8]. The vast majority of successful algorithms for image segmentation in the medical domain, such as V-Net [7] and Dense U-net [10], are based on this U-shaped structure. For the unlabeled data, we use preservational contrastive representation learning to obtain pre-trained weights through self-supervised learning.
We then perform fully supervised fine-tuning on the limited annotated data. Note that this pre-trained model was trained from the unlabeled cases provided by the challenge; no additional pre-trained models were used in the process. Compared with methods that only use contrastive learning, PCRL can generate stronger representations of the image information in the upstream feature extraction network by reconstructing diverse contexts. Besides, to balance GPU memory use against the preservation of information between multiple organs and the background, we adopt a transverse-plane scaling and inferior-superior sliding-window strategy to train the model. Meanwhile, due to the limitation of GPU resources, we use a smaller input size to reduce resource consumption.

The main contributions of this work are summarized as follows:

1) We propose a PCRL-based self-pretrained multi-organ segmentation framework to make full use of the massive unlabeled cases.

2) To reduce resource consumption and speed up the inference process, we compress the input size and utilize a smaller width for the network.

2 Method

As shown in Fig. 1, the whole segmentation framework is composed of a self-supervised pretraining stage and a fully supervised fine-tuning stage. A detailed description of the method follows.

2.1 Preprocessing

The proposed method includes the following preprocessing steps (a sketch of the intensity steps is given after this list):

- Cropping strategy: crop the training images to the non-zero region.
- Resampling method for anisotropic data: first, the images are reoriented to a unified direction. To obtain a larger receptive field during training, we tend to use a relatively complete patch, so that the model can capture better relative relationships between the various organs. Constrained by hardware conditions, the original image is downsampled to 160x160 for slices in the transverse section, and the spacing along the inferior-superior axis is unified to 2.5, using third-order spline interpolation both in-plane and out-of-plane.
- Intensity normalization method: first, the images are clipped to the range [-320, 320]; then z-score normalization is applied based on the mean and standard deviation of the intensity values [11].
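A minimal sketch of these steps under stated assumptions (z-y-x array layout, SciPy for the spline resampling, and clipping/normalization applied before resampling; the authors' exact preprocessing code is not public):

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, spacing, target_spacing_z=2.5, target_xy=160):
    """Clip to [-320, 320], z-score normalize, and resample a z-y-x CT
    volume with third-order spline interpolation (order=3)."""
    v = np.clip(volume.astype(np.float32), -320, 320)
    v = (v - v.mean()) / (v.std() + 1e-8)
    factors = (spacing[0] / target_spacing_z,      # unify axial spacing
               target_xy / v.shape[1],             # downsample in-plane
               target_xy / v.shape[2])
    return zoom(v, factors, order=3)
```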
2.2 Proposed Method

Fig. 1. Self-supervised pretraining and fully supervised fine-tuning framework. (In the original figure: 2000 unlabeled cases feed the pretraining branch, with random augmentations such as random crop, random flip, random rotation, inpainting, outpainting and cross-mixup, trained with a contrastive loss and MSE reconstruction losses; 42 labeled cases for training and 7 for testing feed the fine-tuning branch, which shares the pre-trained weights and is trained with a GDice + CE loss after a sigmoid output.)

The unlabeled data are used to construct a self-supervised learning process that yields a pre-trained model for augmenting the fully supervised training. The encoder and the decoder in both the pretraining stage and the fine-tuning stage are connected via a U-shaped architecture.

For the pretraining stage, PCRL contains three different encoders and one shared decoder. The three encoders are the ordinary encoder, the momentum encoder, and the cross-mixup encoder, where the momentum encoder is obtained as an exponential moving average of the ordinary encoder, and the cross-mixup encoder is a hybrid encoder mixing both former encoders. Following Zhou et al. [12], for a batch of input images, different data augmentation methods, such as random crop, random flip and random rotation, are first applied to generate three batches of images corresponding to the three encoders; these serve as the ground-truth targets of the MSE loss after the decoder. Then low-level processing operations, including inpainting and outpainting, are performed randomly to generate the inputs of the ordinary encoder and the momentum encoder, and the input of the cross-mixup encoder is the mixup of these two inputs. The feature maps output by the ordinary encoder and by the last layer of the momentum encoder are deposited, after global average pooling, into the queue K used to construct the contrastive learning [2].

For the fine-tuning stage, the weights from the pretraining phase are reused. The difference is that a sigmoid layer is applied after the decoder to perform the downstream segmentation task.

The details of each layer and the hyper-parameters of the backbone, such as strides and kernel sizes, are shown in Fig. 2.

Fig. 2. V-Net backbone, where the input size and the number of network layers are modified for this task. (In the original figure: Conv3D-IN-ELU blocks with 5x5x5 kernels and stride 1; 2x2x2 strided convolutions and transposed convolutions for down- and upsampling; skip connections via concatenation; encoder widths of 8, 16, 32, 64 and 128 channels from an input of 160x160x80, down to 128x10x10x5 at the bottleneck; 13-channel output at 160x160x80.)

Loss function: during the self-pretraining stage, the contrastive loss and the MSE loss are used; during the fine-tuning stage, we use the sum of the generalized Dice loss and the cross-entropy loss, because it has been proven robust in medical image segmentation tasks [5].

To reduce resource consumption, a smaller input size is used. Besides, existing network frameworks (such as PyTorch) usually use full precision (Float32) for prediction. However, for dense prediction tasks such as 3D image segmentation, using full-precision model parameters greatly increases the hardware burden of the inference process. In this work, half precision (Float16) is used in the prediction stage, which reduces the GPU load by about 36% without losing prediction accuracy.
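Two of the details above translate naturally into short PyTorch sketches: the momentum encoder's exponential-moving-average update and the half-precision inference. The momentum value 0.999, the float32 sigmoid, and the function names are our assumptions, not values reported in the paper:

```python
import torch

@torch.no_grad()
def momentum_update(encoder, momentum_encoder, m=0.999):
    """EMA update of the momentum encoder from the ordinary encoder,
    in the spirit of MoCo [2]."""
    for p, p_m in zip(encoder.parameters(), momentum_encoder.parameters()):
        p_m.mul_(m).add_(p, alpha=1.0 - m)

@torch.no_grad()
def predict_half(model, volume):
    """Float16 inference: cast weights and input once; compute the
    sigmoid in float32 for numerical safety (our choice)."""
    model = model.half().eval()
    logits = model(volume.half())
    return torch.sigmoid(logits.float()) > 0.5
```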
3 Experiments

3.1 Dataset and evaluation measures

The FLARE2022 dataset is curated from more than 20 medical groups under license permission, including MSD [9], KiTS [3,4], AbdomenCT-1K [6], and TCIA [1]. The training set includes 50 labeled CT scans with pancreas disease and 2000 unlabeled CT scans with liver, kidney, spleen, or pancreas diseases. The validation set includes 50 CT scans with liver, kidney, spleen, or pancreas diseases. The testing set includes 200 CT scans, where 100 cases have liver, kidney, spleen, or pancreas diseases and the other 100 cases have uterine corpus endometrial, urothelial bladder, stomach, sarcoma, or ovarian diseases. All the CT scans only have image information; the center information is not available.

The evaluation measures consist of two accuracy measures, the Dice similarity coefficient (DSC) and the normalized surface Dice (NSD), and three running-efficiency measures: running time, area under the GPU memory-time curve, and area under the CPU utilization-time curve. All measures are used to compute the ranking. Moreover, the GPU memory consumption has a 2 GB tolerance.

3.2 Implementation details

Environment settings. The development environments and requirements are presented in Table 1.

Table 1. Development environments and requirements.

Windows/Ubuntu version     Ubuntu 16.04.5 LTS
CPU                        Intel(R) Xeon(R) CPU E5-2640 V3 @ 2.60GHz
RAM                        8 x 4 GB; 2.4 MT/s
GPU (number and type)      4 x Nvidia GeForce RTX 2080 (8 GB)
CUDA version               11.1
Programming language       Python 3.9
Deep learning framework    PyTorch (torch 1.8.1, torchvision 0.9.0)
Specific dependencies      V-Net / PCRL

Training protocols. The training protocols of the baseline method are shown in Table 2. During self-supervised pretraining, random crop, random flip, random rotation, inpainting, outpainting and Gaussian blur are used for the construction of the contrastive learning. During the fully supervised fine-tuning, a region with a length of 80 along the axis is randomly cropped to obtain a 3D input patch 80 slices deep; note that each patch contains at least one foreground class (a sketch of this sampling step is given after Table 2).

Table 2. Training protocols.

Network initialization         "he" normal initialization
Batch size                     4
Patch size                     80 x 160 x 160
Total epochs                   2000
Optimizer                      Adam
Initial learning rate (lr)     0.0001
Learning rate decay schedule   MultiStepLR: milestones = [100, 200, 500], gamma = 0.5
Training time                  11.4 days (self-pretraining) + 22.5 hours (fine-tuning)
Loss                           Contrastive loss + MSE loss (self-pretraining); GDice loss + CE loss (fine-tuning)
Number of model parameters     43.60 M
Number of flops                218.7 G
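A sketch of the foreground-constrained patch sampling described above; the rejection-sampling loop and the axis convention are our guesses at how the "at least one foreground class" constraint is enforced:

```python
import numpy as np

def sample_axial_patch(image, label, depth=80, max_tries=50):
    """Randomly crop `depth` slices along the inferior-superior axis
    (assumed to be axis 0), re-drawing until the patch contains at
    least one foreground voxel."""
    z_hi = image.shape[0] - depth
    patch_img, patch_lbl = image[:depth], label[:depth]
    for _ in range(max_tries):
        z0 = np.random.randint(0, z_hi + 1)
        patch_img, patch_lbl = image[z0:z0 + depth], label[z0:z0 + depth]
        if patch_lbl.max() > 0:
            break
    return patch_img, patch_lbl
```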
4 Results and discussion

4.1 Quantitative results on the validation set

Table 3. Quantitative results on the validation set in terms of DSC. The first row represents the method without self-pretraining; the second row represents the method with self-pretraining. (Liv., RK, Spl., Pan., Aor., IVC, RAG, LAG, Gal., Eso., Sto., Duo., and LK denote liver, right kidney, spleen, pancreas, aorta, inferior vena cava, right adrenal gland, left adrenal gland, gallbladder, esophagus, stomach, duodenum, and left kidney, respectively.)

Organ    Liv.   RK     Spl.   Pan.   Aor.   IVC    RAG    LAG    Gal.   Eso.   Sto.   Duo.   LK     Mean
DSC(%)   83.10  67.97  69.26  49.44  76.14  64.45  2.00   4.00   40.03  39.02  55.45  40.46  68.65  50.77
DSC(%)   73.89  59.06  61.14  34.64  74.31  59.32  40.50  32.06  26.56  41.63  41.58  32.09  57.75  48.67

Table 3 shows the quantitative results on the provided validation set, including the mean DSC and the individual DSC for each organ. Although most per-organ scores were higher without the self-pretrained model, the left and right adrenal glands were barely predicted at all when the self-pretrained model was not used.

Table 4 shows the ablation study on the 50 provided validation cases. Overall, the proposed method performs well on large organs such as the liver and spleen, while it performs poorly on small organs such as the esophagus and the left and right adrenal glands. In addition, it can be seen that the performance of the segmentation model on the right and left adrenal glands improves significantly after self-pretraining with unlabeled data.

Table 4. Ablation study on the provided validation cases. The first and third rows represent the method without self-pretraining; the second and fourth rows represent the method with self-pretraining. (Organ abbreviations as in Table 3.)

Metric   Liv.   RK     Spl.   Pan.   Aor.   IVC    RAG    LAG    Gal.   Eso.   Sto.   Duo.   LK     Mean
DSC(%)   84.28  61.01  70.49  50.87  77.94  67.23  0.00   0.00   27.41  46.85  48.95  39.47  71.79  49.71
DSC(%)   77.20  55.59  64.83  37.81  72.39  61.86  37.45  32.11  18.50  39.74  41.33  25.85  60.82  48.11
NSD(%)   70.53  48.56  59.41  56.08  77.41  61.38  0.00   0.00   21.23  56.16  46.94  58.73  66.18  47.89
NSD(%)   60.74  42.31  48.48  43.42  63.33  52.52  48.76  40.30  12.74  51.03  36.48  38.32  48.30  45.13

Due to memory limitations, our method uses a smaller raw input size for the network as well as a smaller channel size, which exacerbates the risk of the model losing contextual information when dealing with small targets. Pretraining on unlabeled data improves the feature representation of the upstream feature extraction part of the model under the specific data distribution, which can effectively mitigate the risk of losing small organs.

4.2 Qualitative results on the validation set

Fig. 3 presents some examples from our split validation set. The method using the model pretrained on unlabeled data performs better on the prediction of small organs such as the left and right adrenal glands, compared to the method that does not utilize unlabeled data. Also, due to the sliding windows used in our method and the uniform-spacing preprocessing strategy, there may be a certain degree of missing prediction when the input scan interval is too large or when the scan spacing differs too much from the standard spacing, which is a major reason for the decrease of the evaluation metrics.

Fig. 3. Qualitative results on some examples (rows: ground truth, self-pretrained, non-pretrained). The first two columns are good cases and the last two columns are worse cases.

4.3 Results on the final testing set

Our final results on the test set are shown in Table 5. The final mean DSC value is 46.24% and the mean NSD is 40.09%. The results show that the model also responds well to small targets that are difficult to segment, such as the adrenal glands. The results on the test set are consistent with those on the validation set.

Table 5. Overview of the DSC and NSD metrics on the test set. (Organ abbreviations as in Table 3.)

Organ    Liv.   RK     Spl.   Pan.   Aor.   IVC    RAG    LAG    Gal.   Eso.   Sto.   Duo.   LK     Mean
DSC(%)   61.68  56.55  53.04  35.05  72.17  60.38  43.97  35.42  27.30  34.85  35.77  26.42  58.54  46.24
NSD(%)   41.18  36.67  33.50  35.37  60.77  48.89  56.96  45.05  16.90  44.19  26.31  34.08  41.27  40.09
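For reference, a minimal sketch of the per-organ DSC used in Tables 3-5 (the NSD additionally requires surface-distance computations, which we omit here):

```python
import numpy as np

def dice(pred, gt, organ_id):
    """Dice similarity coefficient for one organ label in two
    integer label volumes."""
    p, g = pred == organ_id, gt == organ_id
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom > 0 else np.nan
```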
4.4 Segmentation efficiency results on the validation set

To balance performance and resource consumption, we perform a scaling operation on the slices in the transverse section while taking a random sliding window along the inferior-superior axis to obtain input patches of uniform size. Also, the images are stretched to a fixed axial spacing of 2.5 before processing. This means that the prediction efficiency of the model is greatly reduced for long-range CT scans where the extent of the abdominal cavity cannot be determined (e.g., some cases in the validation set), while it is efficient for CT data where the extent of the abdominal cavity is more certain (e.g., the 50 cases in the training set).

4.5 Limitations and future work

As mentioned before, although the sliding-window strategy can effectively reduce the resource burden compared to processing the whole volume, it may also lead to longer runtimes and unnecessary resource waste on CT data with larger scan ranges, and it can also produce incorrect segmentation results in non-target (non-abdominal) intervals. In addition, the predictive power of the model for small organs remains limited. In the future, we will focus on addressing these two aspects and exploring more possibilities for unlabeled data.

5 Conclusion

In this work, we proposed a method based on PCRL and V-Net to segment abdominal organs quickly and at low resource cost. The self-supervised pre-trained model obtained from a large amount of unlabeled data effectively improves the prediction ability of the segmentation model for small organs such as the adrenal glands. The method performs well on data with well-defined target intervals; however, it performs poorly and is relatively time-consuming for CT data with large scan areas.

Acknowledgements. The authors of this paper declare that the segmentation method they implemented for participation in the FLARE2022 challenge did not use any pre-trained models or additional datasets other than those provided by the organizers. The proposed solution is fully automatic, without any manual intervention.

References

1. Clark, K., Vendt, B., Smith, K., Freymann, J., Kirby, J., Koppel, P., Moore, S., Phillips, S., Maffitt, D., Pringle, M., et al.: The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. Journal of Digital Imaging 26(6), 1045-1057 (2013)
2. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9726-9735 (2020)
3. Heller, N., Isensee, F., Maier-Hein, K.H., Hou, X., Xie, C., Li, F., Nan, Y., Mu, G., Lin, Z., Han, M., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge. Medical Image Analysis 67, 101821 (2021)
4. Heller, N., McSweeney, S., Peterson, M.T., Peterson, S., Rickman, J., Stai, B., Tejpaul, R., Oestreich, M., Blake, P., Rosenberg, J., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. American Society of Clinical Oncology 38(6), 626-626 (2020)
5. Ma, J., Chen, J., Ng, M., Huang, R., Li, Y., Li, C., Yang, X., Martel, A.L.: Loss odyssey in medical image segmentation. Medical Image Analysis 71, 102035 (2021)
6. Ma, J., Zhang, Y., Gu, S., Zhu, C., Ge, C., Zhang, Y., An, X., Wang, C., Wang, Q., Liu, X., Cao, S., Zhang, Q., Liu, S., Wang, Y., Li, Y., He, J., Yang, X.: AbdomenCT-1K: Is abdominal organ segmentation a solved problem? IEEE Transactions on Pattern Analysis and Machine Intelligence (2021)
7. Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 565-571 (2016)
8. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241 (2015)
9. Simpson, A.L., Antonelli, M., Bakas, S., Bilello, M., Farahani, K., Van Ginneken, B., Kopp-Schneider, A., Landman, B.A., Litjens, G., Menze, B., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)
10. Wang, Z., Zou, N., Shen, D., Ji, S.: Non-local U-Nets for biomedical image segmentation. In: AAAI, pp. 6315-6322 (2020)
11. Zhang, F., Wang, Y., Yang, H.: Efficient context-aware network for abdominal multi-organ segmentation. arXiv preprint arXiv:2109.10601 (2021)
12. Zhou, H., Lu, C., Yang, S., Han, X., Yu, Y.: Preservational learning improves self-supervised medical image models by reconstructing diverse contexts. In: IEEE/CVF International Conference on Computer Vision (ICCV), pp. 3479-3489 (2021)
HXyQgRvlm9f
More weaknesses in the paper itself
4: Ok but not good enough - rejection
The authors designed preservational contrastive representation learning (PCRL) with different augmentations on unlabeled data to obtain self-pretrained network weights. In the Experiments section, PCRL only improves segmentation performance on the right adrenal gland (RAG) and left adrenal gland (LAG). There are some problems that must be solved before the paper can be considered for publication:
* It is noted that your manuscript needs careful editing by someone with expertise in technical English editing, paying particular attention to English grammar, spelling, and sentence structure, so that the goals and results of the study are clear to the reader.
* Compressing the input size and utilizing a smaller width is not substantial enough to stand as a contribution.
* The contrastive loss mentioned in your method is not clearly defined.
* In Section 2.2, the details of the PCRL framework are not elaborated. For example, whether different data augmentations are used for different encoders, and whether the outputs of different encoders are used directly without any regulation.
* The overall results of your method are worse, and the damage to performance is not analyzed at the method level.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Self-pretrained V-net based on PCRL for Abdominal Organ Segmentation ### Paper Abstract Abdomen organ segmentation has many important clinical applications. However, the manual annotating process is time-consuming and labor-intensive. In the "Fast and Low-resource semi-supervised Abdominal oRgan sEgmentation in CT" challenge, the organizer provide massive unlabeled CT images. To effectively utilize unlabeled cases, we propose a self-pretrained V-net. Inspired by the preservational contrastive representation learning (PCRL), the proposed method consists of two steps: 1) using a large amount of unlabeled data to obtain a pre-trained model, 2) using a small amount of labeled data to perform fully supervised fine-tuning on the basis of the former. The feature extraction part used in both stages uses the same backbone network. The difference is that the pre-training stage introduces the additional image reconstruction branch and the corresponding momentum branch to construct image reconstruction and contrastive learning, and the fully-supervised model downstream uses a fully convolutional network for segmentation prediction. In the pre-training stage, by incorporating diverse image reconstruction tasks into the contrastive learning, the representation ability of the backbone network for specific image data during the upstream feature extraction process is enhanced. Besides, the half-precision (Float16) is used in the prediction stage, which reduces the GPU load by about 36% without losing the prediction accuracy and the maximum used GPU memory is 1719 MB. Quantitative evaluation on the FLARE2022 validation cases, this method achieves the average dice similarity coefficient (DSC) of 0.4811 and average normalized surface distance (NSD) of 0.4513. ### Paper Keywords ["Self-supervised learning", "Self-transfer learning", "Organ Segmentation"] ### Paper Content Self-pretrained V-net based on PCRL forAbdominal Organ SegmentationJiapeng ZhangIDUniversity of Shanghai for Science and Technology, Shanghai, ChinaAbstract. Abdomen organ segmentation has many important clinicalapplications. However, the manual annotating process is time-consumingand labor-intensive. In the "Fast and Low-resource semi-supervised Ab-dominal oRgan sEgmentation in CT" challenge, the organizer providemassive unlabeled CT images. To effectively utilize unlabeled cases, weproposeaself-pretrainedV-net.Inspiredbythepreservationalcontrastiverepresentation learning (PCRL), the proposed method consists of twosteps: 1) using a large amount of unlabeled data to obtain a pre-trainedmodel, 2) using a small amount of labeled data to perform fully super-vised fine-tuning on the basis of the former. The feature extraction partused in both stages uses the same backbone network. The difference isthat the pre-training stage introduces the additional image reconstruc-tionbranchandthecorrespondingmomentumbranchtoconstructimagereconstruction and contrastive learning, and the fully-supervised modeldownstream uses a fully convolutional network for segmentation predic-tion.Inthepre-trainingstage,byincorporatingdiverseimagereconstruc-tion tasks into the contrastive learning, the representation ability of thebackbonenetworkforspecificimagedataduringtheupstreamfeatureex-traction process is enhanced. 
Besides, the half-precision (Float16) is usedin the prediction stage, which reduces the GPU load by about 36 %with-out losing the prediction accuracy and the maximum used GPU mem-ory is 1719 MB. Quantitative evaluation on the FLARE2022 validationcases, this method achieves the average dice similarity coefficient (DSC)of 0.4811 and average normalized surface distance (NSD) of 0.4513.Keywords: Self-supervisedlearning ·Self-transferlearning ·OrganSeg-mentation.1 IntroductionAbdominal organ segmentation plays an important role in clinical practice, thestate-of-the-art methods have achieved inter-observer performance in severalbenchmark datasets. However, most of the existing abdominal datasets onlycontain single-center, single-phase, single-vendor, or single-disease cases, and itis unclear whether the excellent performance can be generalized on more diversedatasets. Some SOTA methods have good general applicability. However, whenthe training data is limited and the task is complex, it is difficult for the model to2 Jiapeng Zhangbe fully trained. Moreover, many SOTA methods use model ensembles to boostperformance, but these solutions usually have a large model size and cost exten-sive computational resources, which are impractical to be deployed in clinicalpractice.Compared with labeled data, unlabeled data is usually easier to obtain be-cause the manual labeling process is omitted. To make full use of the massive un-labeled cases, self-supervised learning has been widely adopted[2]. Based on themassive unlabeled data provided by the Fast and Low-resource semi-supervisedAbdominal oRgan sEgmentation in CT challenge, we attempted to design ourmethod based on V-Net[7], and PCRL[12].Specifically, the backbone uses the encoder-decoder style architecture withskip connection [8]. The vast majority of successful algorithms for image segmen-tation in the medical domain such as V-net [7] and Dense U-net [10] are basedon this U-shape structure. For unlabeled data, we use the method of retain-ing contrastive representation learning to obtain a pre-training weight throughself-supervised learning. Then, perform full supervision finetuning through lim-ited annotated data. Note that this pre-trained model was trained from theunlabeled cases provided by the challenge, and no additional pre-trained mod-els were used in the process. Compared with methods that only use contrastivelearning, PCRL can generate stronger representations of image information inthe upstream feature extraction network by reconstructing different contexts.Besides, to take into account the use of GPU memory and the preservation ofinformation between multiple organs and backgrounds, we adopt a horizontalplane scaling and vertical sliding window strategy to train the model. Mean-while, due to the limitation of GPU resources, we use a smaller input size toreduce resource consumption.The main contributions of this work are summarized as follows:1)WeproposeaPCRL-basedself-pretrainedmulti-organsegmentationframe-work to make full use of the massive unlabeled cases.2) To reduce resoure consumption and speed up the inference process, wecompress the input size and utilize a smaller width for the network.2 MethodAs mentioned in Fig 1, this whole segmentation framework is composed of aselfsupervised pretrain stage and a full-supervised finetuning stage. 
The detaildescription of the method is as follows.2.1 PreprocessingThe proposed method includes the following preprocessing steps:–Cropping strategy: Crop the training dataset to the non-zero region.–Resampling method for anisotropic data: First, the images are reorientedto a unified direction. To obtain a larger receptive field during the train-ing process, we tend to use a relatively complete patch for training. In thisSelf-pretrained V-net based on PCRL for Abdominal Organ Segmentation 3way the model can capture better relative relationship between the variousorgans. Constrained by hardware conditions, the original image is downsam-pled to 160 ×160 for clises in the transverse section, and the spacing ofinferior-superior axis is unified to 2.5. Both in-plane and out-of-plane withthird-order spline interpolation.–Intensity normalization method: First, the images is clipped to the range[-320, 320]. Then a z-score normalization is applied based on the mean andstandard deviation of the intensity values[11].2.2 Proposed MethodUnlabeled Cases(2000 )Labeled Cases(42 for train7 for test )Pre-trained WeightsContrastive LossMSE Loss MSE Loss MSE Loss SigmoidGDice + CE LossPredictionResult Random AugmentationSuch as :Random cropRandom flipRandom rotationInpaintingOutpainting Cross Mixup Share Weight Fig. 1.Self-supervised pretrain and full-supervised fine-tuning frameworkThe unlabeled data are used to construct a self-supervised learning process toobtain a pre-trained model for augmenting the fully supervised training process.The encoder and the decoder in both pretarin stage and finetuning stage areconncected via a U shape architecture.For the pretrain stage, the PCRL contains three different encoders and oneshared decoder. The three different encoders are ordinary encoder, momentumencoder, and cross-mixup encoder, where the momentum encoder is obtainedfrom the exponential moving average to the ordinary encode, and the cross-mixup encoder is the hybrid encoder mixed by both former encoders. FollowingZhou et al.[12], for a batch of input image, different data augmentataion meth-ods, such as random crop, random flip and random rotation are first applied to4 Jiapeng Zhang8 × 160 × 160 × 8016 × 80 × 80 × 4032 × 40 × 40 × 8064 × 20 × 20 × 10CCC M13 × 160 × 160 × 80Conv 3D(k = [2, 2, 2] s = [2, 2, 2])ConvTrans 3D(k = [2, 2, 2] s = [2, 2, 2])Conv 3D-IN-ELU(k = [5, 5, 5] s = [1, 1, 1])CCatMMapInputOutputSkip-Connection128 × 10 × 10 × 5C sFig. 2.V-Net backbone, where the input size and the number of network layers aremodified accordingly to this task.generate three batches of images corresponding to three encoders which are setas the ground truth targets of the MSE loss after decoder. Then low-level pro-cessing operations, including inpainting, outpainting are performed randomly togenerate the original encoder and the momentum encoder inputs. And the inputof the cross-mixup encoder is the mixup of these two inputs. 
The feature mapsoutput from the original encoder and the last layer of the momentum encoder aredeposited into the sequence K after global average pooling encoding to constructthe constractive learning[2].Forthefintuningstage,theweightsfromthepre-trainingphaseareused.Andthe difference is that a sigmoid layer is utilized after the decoder to perform thedownstream task of segmentation.The detail of each layer, hyper-parameters, such as stride, weight size, etc.of the backbone are shown in Fig 2Loss function: During self-pretrain stage, the contrastive loss and MSE lossare used; During the fine-tuning stage, we use the summation between general-ized Dice loss and cross entropy loss because it has been proved to be robust [5]in medical image segmentation tasks.To reduce resource consumption, a smaller input size to reduce resource con-sumption. Besides, existing network frameworks (such as PyTorch) usually usefull precision (Float64) for prediction. However, for intensive prediction taskssuch as 3D image segmentation, the use of full-precision model parameters willgreatly increase the hardware burden in the deduction process. In this work, theSelf-pretrained V-net based on PCRL for Abdominal Organ Segmentation 5half-precision (Float32) is used in the prediction stage, which reduces the GPUload by about 36 %without losing the prediction accuracy.3 Experiments3.1 Dataset and evaluation measuresThe FLARE2022 dataset is curated from more than 20 medical groups underthe license permission, including MSD [9], KiTS [3,4], AbdomenCT-1K [6], andTCIA [1]. The training set includes 50 labelled CT scans with pancreas diseaseand 2000 unlabelled CT scans with liver, kidney, spleen, or pancreas diseases.The validation set includes 50 CT scans with liver, kidney, spleen, or pancreasdiseases. The testing set includes 200 CT scans where 100 cases has liver, kidney,spleen, or pancreas diseases and the other 100 cases has uterine corpus endome-trial,urothelialbladder,stomach,sarcomas,orovariandiseases.AlltheCTscansonly have image information and the center information is not available.The evaluation measures consist of two accuracy measures: Dice SimilarityCoefficient (DSC) and Normalized Surface Dice (NSD), and three running effi-ciency measures: running time, area under GPU memory-time curve, and areaunder CPU utilization-time curve. All measures will be used to compute theranking. Moreover, the GPU memory consumption has a 2 GB tolerance.3.2 Implementation detailsEnvironment settings The development environments and requirements arepresented in Table 1.Table 1. Development environments and requirements.Windows/Ubuntu version Ubuntu 16.04.5 LTSCPU Intel(R) Xeon(R) CPU E5-2640 V3 @2.60GHzRAM 8 ×4GB; 2.4MT /sGPU (number and type) 4 Nvidia Geforce RTX 2080 (8G)CUDA version 11.1Programming language Python 3.9Deep learning framework Pytorch (Torch 1.8.1, torchvision 0.9.0)Specific dependencies V-Net1/PCRL2Training protocols The training protocols of the baseline method is shown inTable 2. During self-supervised pretraining, random crop, random flip, randomrotation, inpainting, outpainting and gaussian blur are used for constraction ofcontrastivelearning.Duringthefull-supervisedfine-tuning,anareawithalengthof 80 on the axis is randomly cropped to obtain a 3D input patch of height 80pixels, note that each patch contains at least one foreground class.6 Jiapeng ZhangTable 2. 
Training protocols.Network initialization “he" normal initializationBatch size 4Patch size 80 ×160×160Total epochs 2000Optimizer AdamInitial learning rate (lr) 0.0001Learning rate decay schedule MultiStepLR: milestones=[100, 200, 500], gamma=0.5Training time 11.4 day (self-pretrain) + 22.5 hours (fine-tuning)LossContrast Loss + MSE Loss (self-pretrain);GDice Loss + CE Loss (fine-tuning)Number of model parameters 43.60M3Number of flops 218.7G44 Results and discussion4.1 Quantitative results on validation setTable 3. Quantitative results on validation set in terms of DSC. The 1st row representsthe method without self-pretrained, and the 2nd row represents the method with self-pretrained. (where the Liv., RK, Spl., Pan., Aor, IVC, RAG, LAG, Gal., Eso., Sto.,Duo, and LK are Liver, Right Kidney, Spleen, Pancreas, Aorta, inferior vena cava, rightadrenal gland, left adrenal gland, gallbladder, esophagus, stomach, duodenum, and leftkidney, respectively.)Organ Liv. RK Spl. Pan. Aor. IVC RAG LAG Gal. Eso. Sto. Duo. LK MeanDSC(%) 83.10 67.97 69.26 49.44 76.14 64.45 2.00 4.00 40.03 39.02 55.45 40.46 68.65 50.77DSC(%) 73.89 59.06 61.14 34.64 74.31 59.32 40.50 32.06 26.56 41.63 41.58 32.09 57.75 48.67Table 3 illustrate the quantitative results on the provided validation set. In-cluding the mean DSC and individual DSC for liver (Liv.), right kidney (RK),spleen(Spl.),pancreas(Pan.),aorta(Aor.),inferiorvenacava(IVC),rightadrenalgland (RAG), left adrenal gland (LAG), gallbladder (Gal.), esophagus (Eso.),stomach (Stm.), duodenum (Duo.) and left kidney (LK). Although all othermetrics were higher than the method using the self-pretrained model. left andright adrenals were barely predictable when self-pretrained model was not used.Table 4 illustrate the ablation study on provided 50 validation cases. Overall,the proposed method performs well on large organs such as liver and spleen,while it performs poorly on small organs such as esophageal islets and left andright adrenal glands. In addition, it can be seen that the performance of thesegmentation model is significantly improved on the right and left adrenal glandsafter self-pretraining with unlabeled data.Self-pretrained V-net based on PCRL for Abdominal Organ Segmentation 7Table 4. Ablation study on provided validation cases. The 1st row and the 3rd rowrepresent the methods without self-pretrained, and the 2nd and 4th rows represent themethods with self-pretrained. (where the Liv., RK, Spl., Pan., Aor, IVC, RAG, LAG,Gal., Eso., Sto., Duo, and LK are Liver, Right Kidney, Spleen, Pancreas, Aorta, inferiorvena cava, right adrenal gland, left adrenal gland, gallbladder, esophagus, stomach,duodenum, and left kidney, respectively.)Liv. RK Spl. Pan. Aor. IVC RAG LAG Gal. Eso. Sto. Duo. LK MeanDSC(%) 84.28 61.01 70.49 50.87 77.94 67.23 0.00 0.00 27.41 46.85 48.95 39.47 71.79 49.71DSC(%) 77.20 55.59 64.83 37.81 72.39 61.86 37.45 32.11 18.50 39.74 41.33 25.85 60.82 48.11NSD(%) 70.53 48.56 59.41 56.08 77.41 61.38 0.00 0.00 21.23 56.16 46.94 58.73 66.18 47.89NSD(%) 60.74 42.31 48.48 43.42 63.33 52.52 48.76 40.30 12.74 51.03 36.48 38.32 48.30 45.13Due to memory limitations, our method uses a smaller raw input size of thenetwork as well as a smaller channel size, which exacerbates the risk of the modellosing contextual information when dealing with small targets. 
4.2 Qualitative results on validation set

Fig. 3 presents some examples from our split of the validation set. It can be seen that the method using the model pretrained on unlabeled data performs better for the prediction of small organs, such as the left and right adrenal glands, compared to the method that does not utilize the unlabeled data. Also, due to the use of sliding windows in our method and the preprocessing strategy of uniform spacing, there may be a certain degree of missing prediction when the input scan interval is too large or when the scan spacing differs too much from the standard spacing, which is a major reason for the decrease in the evaluation metrics.

Table 5. Overview of DSC and NSD metrics on the test set (organ abbreviations as in Table 3).
Organ:   Liv.  RK    Spl.  Pan.  Aor.  IVC   RAG   LAG   Gal.  Eso.  Sto.  Duo.  LK    Mean
DSC(%): 61.68 56.55 53.04 35.05 72.17 60.38 43.97 35.42 27.30 34.85 35.77 26.42 58.54 46.24
NSD(%): 41.18 36.67 33.50 35.37 60.77 48.89 56.96 45.05 16.90 44.19 26.31 34.08 41.27 40.09

4.3 Results on final testing set

Our final results on the test set are shown in Table 5. The final mean DSC value is 46.24%, and the mean NSD is 40.09%. The results show that the model also responds well to small targets that are difficult to segment, such as the adrenal glands. The results on the test set are consistent with those on the validation set.

Fig. 3. Qualitative results on some examples (columns: GT, Self-Pretrained, Non-Pretrained). The first two columns are good cases and the last two columns are worse cases.

4.4 Segmentation efficiency results on validation set

To balance performance and resource consumption, we perform a scaling operation on the slices in the transverse plane, while taking a random sliding window along the inferior-superior axis to obtain input patches of uniform size. Also, the images are stretched to a fixed axial spacing of 2.5 before processing. This means that the prediction efficiency of the model is greatly reduced for long-range CT scans where the extent of the abdominal cavity cannot be determined (e.g., some cases in the validation set), while it is efficient for CT data where the extent of the abdominal cavity is more certain (e.g., the 50 cases in the training set).
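The half-precision, sliding-window inference strategy described in Sections 2 and 4.4 can be sketched as follows. The 80 x 160 x 160 patch size matches Table 2; the stride, the number of classes (13 organs plus background), and the border handling are our assumptions for illustration.

```python
import torch

def _starts(dim, p, s):
    """Window start positions covering [0, dim) with patch size p and stride s."""
    pos = list(range(0, max(dim - p, 0) + 1, s))
    if pos[-1] != max(dim - p, 0):
        pos.append(max(dim - p, 0))        # keep the last window flush with the border
    return pos

@torch.no_grad()
def sliding_window_predict(model, volume, patch=(80, 160, 160),
                           stride=(40, 80, 80), n_classes=14):
    """Float16 sliding-window inference on a (1, 1, D, H, W) CT volume."""
    model = model.half().eval()
    volume = volume.half()
    _, _, D, H, W = volume.shape
    probs = torch.zeros((1, n_classes, D, H, W), device=volume.device)
    counts = torch.zeros((1, 1, D, H, W), device=volume.device)
    for z in _starts(D, patch[0], stride[0]):
        for y in _starts(H, patch[1], stride[1]):
            for x in _starts(W, patch[2], stride[2]):
                crop = volume[..., z:z+patch[0], y:y+patch[1], x:x+patch[2]]
                out = torch.softmax(model(crop).float(), dim=1)
                probs[..., z:z+patch[0], y:y+patch[1], x:x+patch[2]] += out
                counts[..., z:z+patch[0], y:y+patch[1], x:x+patch[2]] += 1
    return (probs / counts.clamp(min=1)).argmax(dim=1)     # (1, D, H, W) label map
```

Accumulating the softmax outputs and dividing by the per-voxel window count averages the overlapping predictions, which smooths the seams between adjacent windows.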
4.5 Limitation and future work

As mentioned before, although the sliding-window strategy can effectively reduce the resource burden compared to processing the whole volume, it may also lead to more time consumption and unnecessary resource wastage on CT data with larger scan ranges, and it can also produce incorrect segmentation results in non-target (non-abdominal) intervals. In addition, the predictive power of the model for small organs remains limited. In the future, we will focus on addressing these two aspects and exploring more possibilities for unlabeled data.

5 Conclusion

In this work, we proposed a method based on PCRL and V-Net to segment abdominal organs quickly and at low resource cost. The self-supervised pre-trained model obtained from a large amount of unlabeled data effectively improves the prediction ability of the segmentation model for small organs such as the adrenal glands. It performs well on healthy data with well-defined target intervals; however, it performs poorly and is relatively time-consuming for CT data with large scan areas.

Acknowledgements: The authors of this paper declare that the segmentation method they implemented for participation in the FLARE2022 challenge has not used any pre-trained models nor additional datasets other than those provided by the organizers. The proposed solution is fully automatic without any manual intervention.

References
1. Clark, K., Vendt, B., Smith, K., Freymann, J., Kirby, J., Koppel, P., Moore, S., Phillips, S., Maffitt, D., Pringle, M., et al.: The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository. Journal of Digital Imaging 26(6), 1045–1057 (2013)
2. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.B.: Momentum contrast for unsupervised visual representation learning. In: CVPR 2020, pp. 9726–9735. IEEE (2020). https://doi.org/10.1109/CVPR42600.2020.00975
3. Heller, N., Isensee, F., Maier-Hein, K.H., Hou, X., Xie, C., Li, F., Nan, Y., Mu, G., Lin, Z., Han, M., et al.: The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 challenge. Medical Image Analysis 67, 101821 (2021)
4. Heller, N., McSweeney, S., Peterson, M.T., Peterson, S., Rickman, J., Stai, B., Tejpaul, R., Oestreich, M., Blake, P., Rosenberg, J., et al.: An international challenge to use artificial intelligence to define the state-of-the-art in kidney and kidney tumor segmentation in CT imaging. American Society of Clinical Oncology 38(6), 626–626 (2020)
5. Ma, J., Chen, J., Ng, M., Huang, R., Li, Y., Li, C., Yang, X., Martel, A.L.: Loss odyssey in medical image segmentation. Medical Image Analysis 71, 102035 (2021)
6. Ma, J., Zhang, Y., Gu, S., Zhu, C., Ge, C., Zhang, Y., An, X., Wang, C., Wang, Q., Liu, X., Cao, S., Zhang, Q., Liu, S., Wang, Y., Li, Y., He, J., Yang, X.: AbdomenCT-1K: Is abdominal organ segmentation a solved problem? IEEE Transactions on Pattern Analysis and Machine Intelligence (2021). https://doi.org/10.1109/TPAMI.2021.3100536
7. Milletari, F., Navab, N., Ahmadi, S.A.: V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV), pp. 565–571 (2016). https://doi.org/10.1109/3DV.2016.79
8. Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241 (2015)
9. Simpson, A.L., Antonelli, M., Bakas, S., Bilello, M., Farahani, K., Van Ginneken, B., Kopp-Schneider, A., Landman, B.A., Litjens, G., Menze, B., et al.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063 (2019)
10. Wang, Z., Zou, N., Shen, D., Ji, S.: Non-Local U-Nets for biomedical image segmentation. In: AAAI, pp. 6315–6322 (2020)
11. Zhang, F., Wang, Y., Yang, H.: Efficient context-aware network for abdominal multi-organ segmentation. arXiv preprint arXiv:2109.10601 (2021)
12. Zhou, H., Lu, C., Yang, S., Han, X., Yu, Y.: Preservational learning improves self-supervised medical image models by reconstructing diverse contexts. In: ICCV 2021, pp. 3479–3489. IEEE (2021). https://doi.org/10.1109/ICCV48922.2021.00348<|im_end|> <|im_start|>assistant ### Review Title More weaknesses in the paper itself ### Review Text The authors designed a preservational contrastive representation learning (PCRL) scheme using different augmentations on unlabeled data to obtain self-pretrained network weights. In the Experiments section, PCRL only improved segmentation performance on the right adrenal gland (RAG) and left adrenal gland (LAG). There are some problems which must be solved before the paper can be considered for publication:
* The manuscript needs careful editing by someone with expertise in technical English, paying particular attention to grammar, spelling, and sentence structure, so that the goals and results of the study are clear to the reader.
* Compressing the input size and utilizing a smaller width is not a contribution in itself.
* The contrastive loss mentioned in the method is not clearly defined.
* In Section 2.2, the detailed design of the PCRL framework is not elaborated, e.g., whether different data augmentations are used for different encoders, and whether the outputs of different encoders are used directly without any regularization.
* The overall results of the method are worse, and the damage to performance is not analyzed at the method level. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
BC4UYzbLRZ
MIDL.io/2023/Short_Paper_Track
2023
3D Supervised Contrastive-Learning Network for Classification of Ovarian Neoplasms
["Tarun Kanti Roy", "Suely Oliveira", "Jesus Gonzalez Bosquet", "Xiaodong Wu"]
Ovarian cancer ranks the $5^{th}$ in cancer deaths among women, accounting for more deaths than any other cancer of the female reproductive system. We propose a 3D contrastive learning based predictive model to discriminate benign from malignant masses in abdominal CT scans of ovarian cancer patients. We used a fully supervised contrastive learning (SCL) approach, which allowed us to effectively leverage the label information of our small dataset of 331 patients. All patients' data was collected at the University of Iowa. Three different architectures (VGG, ResNet and DenseNet) were implemented for feature extraction by contrastive learning. We showed that SCL consistently outperformed the traditional cross-entropy based networks with VGG and two ResNet variants. With five-fold cross validation, our best contrastive learning model achieves an accuracy of 92.8\%, mean AUC of 92.4\%, mean recall of 94.45\% and mean specificity of 90.37\%. This work shows that contrastive learning is a promising deep learning method to improve early detection of women at risk of harboring ovarian cancer.
["Supervised contrastive learning", "ovarian cancer", "classification", "deep learning", "feature encoder", "densenet", "resnet", "cross validation", "tumors", "CT scan"]
Medical Imaging with Deep Learning – Under Review 2023 Short Paper – MIDL 2023 submission

3D Supervised Contrastive-Learning Network for Classification of Ovarian Neoplasms
Tarun Roy tarunkanti-roy@uiowa.edu
Jesus Gonzalez Bosquet jsus-gonzalezbosquet@uiowa.edu
Suely Oliveira suely-oliveira@uiowa.edu
Xiaodong Wu xiaodong-wu@uiowa.edu
University of Iowa, Iowa City, IA 52242, USA
Editors: Under Review for MIDL 2023

Abstract
Ovarian cancer is the deadliest of all female reproductive system cancers and ranks 5th in cancer deaths among women. We propose a 3D contrastive learning based predictive model to discriminate benign from malignant masses in abdominal CT scans of ovarian cancer patients. We used a fully supervised contrastive learning (SCL) approach, which allowed us to effectively leverage the label information of our small dataset of 331 patients. All patients' data was collected at the University of Iowa. Three different architectures (VGG, ResNet and DenseNet) were implemented for feature extraction by contrastive learning. We showed that SCL consistently outperformed the traditional cross-entropy based networks with VGG and two ResNet variants. With five-fold cross validation, our best contrastive learning model achieves an accuracy of 92.8%, mean AUC of 92.4%, mean recall of 94.45% and mean specificity of 90.37%. This work shows that contrastive learning is a promising deep learning method to improve early detection of women at risk of harboring ovarian cancer.
Keywords: Supervised contrastive learning, ovarian cancer, classification, deep learning, feature encoder, efficientnet, resnet, cross validation

1. Introduction
The American Cancer Society indicates that the probability of a woman getting ovarian cancer is about 1/78, and the chance of dying from it is about 1/108. Diagnostic models for cancer patients may improve decision making to personalize the management of cancer patients. In this study, we propose a deep learning-based predictive model for ovarian cancer patients to discriminate benign from malignant masses in abdominal CT scans. Our developed model uses 3D CT scan data obtained at the University of Iowa. A major challenge in the analysis of ovarian CT scans is that a large number of ovarian cysts exist in both malignant and benign patient data, and manually tracing all of them is cumbersome. Previous work also shows that CNN-based models outperform experienced field radiologists in terms of accuracy of prognosis (Saida et al., 2022). Most of the prior work on ovarian cancer data only uses 2D convolutional networks. In this study we trained 3D CNN models and obtained better performance compared to 2D models. We also implemented a new state-of-the-art contrastive-learning technique in 3D.

2. Methodology
In our proposed approach, we trained a 3D convolutional feature encoder using a supervised contrastive loss. The trained encoder was then combined with a multi-layer perceptron (MLP) network to train the classifier. All the weights of the encoder were frozen during classifier training. The feature encoders we used had different convolutional architectures. The dataset contains CT scans of lower abdomens from 331 patients. Of these samples, 196 scans contained malignant tumors and the remaining 135 samples had benign tumors. Because of the small sample size, we trained models using five-fold stratified cross validation with a split of 264 for training and 67 for testing. For each volume image, a region of interest (ROI) with a dimension of 128 x 128 x 64 was set around the patient's lower abdomen where the ovaries were located, and the images were cropped to the ROI volume.
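As an illustration, the five-fold stratified split described above can be reproduced with scikit-learn as follows. The class counts (196 malignant, 135 benign) come from the text; the random seed is an arbitrary assumption.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([1] * 196 + [0] * 135)           # 1 = malignant, 0 = benign, 331 patients
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(np.zeros_like(y), y)):
    # each fold: ~264 training / ~67 testing patients, class ratio preserved
    print(f"fold {fold}: {len(train_idx)} train, {len(test_idx)} test")
```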
2.1. Representation Learning Framework
Our proposed predictive model consists of the following components, as in (Tian et al., 2019; Khosla et al., 2020).
• Data augmentation module: 3D medical images are not suitable for arbitrary random augmentations. We experimented with only three augmentations: translation, rotation and flipping (Solovyev et al., 2022). From each input sample $n$, two randomly augmented images $\tilde{n} = \mathrm{Augment}(n)$ were generated to train the encoder network, with the objective of minimizing the contrastive loss for samples of the same class and maximizing it for the other classes.
• Encoder network: We used different 3D convolutional architectures as encoder networks that output a vector representation of the input CT volume, $x = \mathrm{Enc}(\tilde{n}) \in \mathbb{R}^{D_E}$. In our experiments we empirically chose the representation vector size $D_E = 2048$.
• Projection network: Maps the representation vector $x$ to a projection vector $z = \mathrm{Proj}(x) \in \mathbb{R}^{D_p}$. We used an MLP network as the projection head, with output vector size $D_p = 512$. The normalized output vectors are used to measure sample distances in the projection space. Even though we had different encoder networks, we used the same projection head in each case.
• Supervised contrastive loss: The supervised contrastive loss used in this work can leverage the label information more effectively than the cross-entropy loss. The idea is to pull together, in embedding space, the points belonging to the same class, while simultaneously pushing apart the clusters of samples from different classes (Khosla et al., 2020); a minimal implementation sketch is given after the discussion below:

$$L^{sup} = \sum_{i=1}^{2N} L_i^{sup}, \qquad L_i^{sup} = \frac{-1}{2N_{\tilde{y}_i} - 1} \sum_{j=1}^{2N} \mathbb{1}_{i \neq j} \cdot \mathbb{1}_{\tilde{y}_i = \tilde{y}_j} \cdot \log \frac{\exp(z_i \cdot z_j / \tau)}{\sum_{k=1}^{2N} \mathbb{1}_{i \neq k} \exp(z_i \cdot z_k / \tau)}$$

For a minibatch of samples $X_{1..b}$, $N_{\tilde{y}_i}$ is the total number of images in the minibatch that have the same label $y$ as the anchor image $i$; augmented images are indicated by $\tilde{y}$. This loss has important properties well suited for supervised learning: (a) it generalizes to an arbitrary number of positives, and (b) its contrastive power increases with more negatives.

Figure 1: Performance overview of the five-fold cross validation: (a) networks trained with cross-entropy loss; (b) networks trained with contrastive loss.

3. Result and Discussion
All the models shown in Table 1 are cross-validated in a leave-one-fold-out fashion, which demonstrates the robustness of the models to new data. Fig. 1 depicts the performance boxplots of the 5-fold cross validation in terms of accuracy, AUC, recall and specificity scores. Supervised contrastive learning models outperformed the baseline models trained with binary cross-entropy loss.

Table 1: Performance comparison of models on CT volumes of size 64 x 128 x 128.
Panel A: Baseline 3D models
Model        Acc.(%)  AUC(%)  Recall(%)  Spec.(%)
VGG19        84.3     84.1    85.2       82.96
ResNet18     80.1     77.9    88.33      67.5
ResNet50     81.6     80.1    88.99      71.18
DenseNet121  82.15    80.58   80.42      80.73
Panel B: SCL 3D models
VGG19        89.48    88.58   93.45      83.7
ResNet18     89.17    88.2    93.42      82.96
ResNet50     92.8     92.4    94.45      90.37
DenseNet121  91.16    90.61   94.89      86.35

This work leverages the state-of-the-art contrastive learning method to develop an automated diagnosis model for the classification of ovarian tumors. We studied fully supervised contrastive learning for tackling this problem and investigated its predictive power with respect to four common CNN baselines. We expect that with a larger training dataset (even without annotations), higher accuracy will be achievable using semi-supervised contrastive learning as well.
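As a concrete reading of the supervised contrastive loss of Section 2.1 (referenced there), below is a minimal PyTorch sketch. The tensor interface (z as the stacked L2-normalized projections of the 2N augmented samples, y as their labels) and the temperature default are illustrative assumptions; this is not the authors' exact implementation.

```python
import torch

def supcon_loss(z, y, tau=0.07):
    """Supervised contrastive loss (Khosla et al., 2020).

    z: (M, D) L2-normalized projection vectors, M = 2N augmented samples
    y: (M,) integer class labels
    """
    M = z.shape[0]
    sim = z @ z.t() / tau                                  # pairwise similarities
    self_mask = torch.eye(M, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))        # drop the k == i terms

    # log of the softmax ratio in the loss, per anchor i and candidate j
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    n_pos = pos_mask.sum(dim=1).clamp(min=1)               # 2N_y - 1 positives per anchor
    loss_i = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / n_pos
    return loss_i.sum()
```

The masking is what distinguishes this loss from the unsupervised InfoNCE objective: every same-class sample in the batch acts as a positive, so the number of positives per anchor is arbitrary, as property (a) above states.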
References
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. CoRR, abs/2004.11362, 2020. URL https://arxiv.org/abs/2004.11362.
Tsukasa Saida, Kensaku Mori, Sodai Hoshiai, Masafumi Sakai, Aiko Urushibara, Toshitaka Ishiguro, Manabu Minami, Toyomi Satoh, and Takahito Nakajima. Diagnosing ovarian cancer on MRI: A preliminary study comparing deep learning and radiologist assessments. Cancers, 14(4), 2022. ISSN 2072-6694. doi: 10.3390/cancers14040987. URL https://www.mdpi.com/2072-6694/14/4/987.
Roman Solovyev, Alexandr A. Kalinin, and Tatiana Gabruseva. 3D convolutional neural networks for stalled brain capillary detection. Computers in Biology and Medicine, 141:105089, 2022. doi: 10.1016/j.compbiomed.2021.105089.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. CoRR, abs/1906.05849, 2019. URL http://arxiv.org/abs/1906.05849.
RaOamhoAs0D
Contrastive learning of benign/malignant ovarian masses
6: Marginally above acceptance threshold
The paper proposes an approach for ovarian mass classification based on contrastive learning, which outperforms a traditional training setup on a private dataset of 331 patients.
Pros:
* Training done in 3D
* Several baselines compared
Cons:
* The abstract talks about early detection, but it is unclear how advanced the cancer is in these patients
* The introduction mentions better results than 2D training, but the 2D results are not presented
* Though average performance increases, the training seems unstable (high standard deviations in Fig 1); is there an explanation for this?
3: The reviewer is fairly confident that the evaluation is correct
BkXMikqxx
ICLR.cc/2017/conference
2017
Cortical-Inspired Open-Bigram Representation for Handwritten Word Recognition
["Théodore Bluche", "Christopher Kermorvant", "Claude Touzet", "Hervé Glotin"]
Recent research in the cognitive process of reading hypothesized that we do not read words by sequentially recognizing letters, but rather by identifying open-bigrams, i.e. couples of letters that are not necessarily next to each other. In this paper, we evaluate a handwritten word recognition method based on an original open-bigram representation. We trained Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) to predict open-bigrams rather than characters, and we show that such models are able to learn the long-range, complicated and intertwined dependencies in the input signal that are necessary for the prediction. For decoding, we decomposed each word of a large vocabulary into the set of constituent bigrams, and apply a simple cosine similarity measure between this representation and the bagged RNN prediction to retrieve the vocabulary word. We compare this method to standard word recognition techniques based on sequential character recognition. Experiments are carried out on two public databases of handwritten words (Rimes and IAM), and the results with our bigram decoder are comparable to more conventional decoding methods based on sequences of letters.
["representation", "letters", "handwritten word recognition", "cognitive process", "words", "couple", "next", "original", "long"]
ABSTRACT
Recent research in the cognitive process of reading hypothesized that we do not read words by sequentially recognizing letters, but rather by identifying open-bigrams, i.e. couples of letters that are not necessarily next to each other. In this paper, we evaluate a handwritten word recognition method based on an original open-bigram representation. We trained Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) to predict open-bigrams rather than characters, and we show that such models are able to learn the long-range, complicated and intertwined dependencies in the input signal that are necessary for the prediction. For decoding, we decomposed each word of a large vocabulary into the set of constituent bigrams, and apply a simple cosine similarity measure between this representation and the bagged RNN prediction to retrieve the vocabulary word. We compare this method to standard word recognition techniques based on sequential character recognition. Experiments are carried out on two public databases of handwritten words (Rimes and IAM), and the results with our bigram decoder are comparable to more conventional decoding methods based on sequences of letters.

1 INTRODUCTION
Taking inspiration from biology is sometimes very efficient. For example, deep neural networks (NNs) – which are outperforming all other methods (including support vector machines, SVMs) in image recognition – are based on a series of several (usually 5 to 15) neuron layers, each layer involving sparsity in the activation pattern (a biological trait of the cortical map). The analogy continues with the modeling of the cortex as a hierarchy of cortical maps. Thanks to the analysis of reaction times in cognitive psychology experiments, the minimal number of cortical maps involved in a cognitive process is estimated at about ten, the same order of magnitude as the number of layers in deep neural networks for computer vision tasks. In the case of handwritten word recognition, Dehaene et al. have proposed a biologically plausible model of the cortical organization of reading (Dehaene et al., 2005) that assumes seven successive steps of increasing complexity, from the retinal ganglion cells to a cortical map of the orthographic word forms (Figure 1). One of the most recent successes of experimental psychology was the demonstration that human visual word recognition uses an explicit representation of letter position order based on letter pairs: the open-bigram coding (Whitney et al., 2012; Gomez et al., 2008; Grainger & Van Heuven, 2003; Glotin et al., 2010; Dufau, 2008).

As demonstrated in (Touzet et al., 2014), open-bigrams (OB) allow an over-coding of the orthographic form of words that facilitates recognition. OB coding favors same-length words (i.e., neighbors of similar lengths). In the context of learning to read, the existence of the OB layer just before the orthographic word representation has been used to explain the lack of efficiency of the whole-language method (today banned from reading teaching) compared to the phonics method, which explicitly supervises the organization of the OB map (with syllables), where the global method does not (Figure 1).

Figure 1: The cognitive process of reading, a seven-step procedure that includes an open-bigram representation layer. Additional information helps the organization of levels 4 and 5 when using a phonics method, but not a whole-language method (today banned from reading teaching for lack of efficiency); adapted from (Dehaene et al., 2005) and (Touzet, 2015).
Since cognitive psychology has demonstrated the existence of the OB layer, the hypothesis has been put forward (Touzet et al., 2014) that the orthographic representation of words may have evolved in order to take into account the topology of the OB space, instead of the topology of the single-letter space. Our goal here is to test this hypothesis, comparing OB coding vs. sequential character recognition for word recognition. A state-of-the-art decoder based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) is used on two public databases of handwritten words (Rimes and IAM).

The remainder of this paper is organized as follows. In Section 2, we present related methods for handwritten word recognition. Then, we describe the open-bigram representation of words and the proposed decoder in Section 3. The experimental setup, including the data and the bigram prediction model, is explained in Section 4. Finally, we present our results in Section 5, before concluding in Section 6.

2 RELATED WORK
In this section, we give a brief overview of existing techniques for handwritten word recognition. Historically, the methods may be divided into three broad categories. The first approach is whole-word recognition, where the image of the full word is directly classified into word classes, without relying on the character level (e.g. in (Parisse, 1996; Madhvanath & Govindaraju, 1996)). In the second method, the word image is segmented into parts of characters (strokes or graphemes). The segments are grouped and scored, and character sequences are obtained with a graph search (e.g. in (Bengio et al., 1995)) or with hidden Markov models (HMMs, e.g. in (Knerr et al., 1998)). The last method, most popular nowadays, is a segmentation-free approach. The goal is to predict a character sequence from the image without segmenting it first. The techniques include scanning a sliding window to extract features used in an HMM (e.g. in (Kaltenmeier et al., 1993)), or feeding the image to a neural network able to output sequences of character predictions (e.g. SDNNs (LeCun et al., 1998) or MDLSTM-RNNs (Graves & Schmidhuber, 2009)).

More recently, different approaches have been proposed to recognize words using character bigrams, and these are therefore closer to the method we propose in this paper. Jaderberg et al. (2014) propose to predict both the characters and n-grams of characters with two distinct convolutional neural networks (CNNs) to recognize text in natural images; their approach includes a conditional random field as decoder. Similarly, Poznanski & Wolf (2016) train a CNN with a cross-entropy loss to detect common unigrams, bigrams or trigrams of characters in a handwritten word image; the output of the network is matched against the lexicon using canonical correlation analysis. Almazán et al. (2014) use Fisher vectors from images and pyramidal character histograms to learn a feature space shared by the word images and labels, for word spotting, also using canonical correlation analysis.
3 PROPOSED METHOD
3.1 AN OPEN-BIGRAM REPRESENTATION OF WORDS
The letter bigrams of a word $w$ are the set of pairs of consecutive letters. The open-bigrams of order $d$ are the set of pairs of letters separated by $d$ other letters in the word, which we call $B_d(w)$:

$$B_d(w) = \{ w_i w_{i+d} : i \in \{1, \ldots, |w| - d\} \} \qquad (1)$$

The usual bigrams are open-bigrams of order 1. By extension, we call $B_0(w)$ the set of letters in the word $w$. For example, for the word "word", we have $B_1(word) = \{or, rd, wo\}$, $B_2(word) = \{od, wr\}$, and $B_3(word) = \{wd\}$. The general open-bigram representation of a word is the union

$$B_{d_1, \ldots, d_n}(w) = B_{d_1}(w) \cup \ldots \cup B_{d_n}(w) \qquad (2)$$

For example, $B_{1,2,3}(word) = \{od, or, rd, wd, wo, wr\}$. We extend $B$ into $B'$ by including special bigrams for the letters at the beginning and end of a word,

$$B'(w) = B(w) \cup \{\star w_1,\; w_{|w|}\star\} \qquad (3)$$

where $\star$ denotes the word boundary. So, for example,

$$B'_{1,2,3}(word) = \{\star w,\; d\star,\; od,\; or,\; rd,\; wd,\; wo,\; wr\} \qquad (4)$$

In this paper, we call $\mathcal{B}$ the set of all bigrams, and $W$ the set of all words. We represent a word of the vocabulary $w \in W$ as a normalized binary vector $v_w \in \mathbb{R}^{|\mathcal{B}|}$,

$$v_w = \frac{[\mathbb{1}(b \in B(w))]_{b \in \mathcal{B}}}{\sqrt{|B(w)|}} \qquad (5)$$

i.e. the vector with 0 everywhere and $1/\sqrt{|B(w)|}$ at the indices corresponding to bigrams of the word. Stacking the vector representations of all the words in the vocabulary yields the vocabulary matrix $V \in \mathbb{R}^{|W| \times |\mathcal{B}|}$. Note that in this representation, the bigrams form an unordered set: we do not know (i) where the bigrams are, (ii) what the order of a given bigram is, or (iii) how many times it occurs. The goal is to build a word recognition decoder in the bigram space.

3.2 AN OPEN-BIGRAM DECODER
While the trivial representation of a word is an ordered sequence of letters, in the bigram space the order is locally embedded in the representation itself. Most state-of-the-art word recognition systems recognize sequences of letters, and organize the vocabulary for a constrained search as directed graphs, such as prefix trees or finite-state transducers. On the other hand, we can interpret the bigram representation as encoding directed edges in a graph, although we will not explicitly build such a graph for decoding.

On Figure 2, we show the graph for a representation of the word as a sequence of letters; grey edges show the potential risk of a misrecognition in the letter sequences. On Figure 2(b), we display the conceptual representation of bigrams as edges. We observe that a global order of letters can emerge from the local representation. Moreover, the constituent information of a word in the bigram space is redundant, potentially making this representation more robust to mispredictions of the optical model.

Figure 2: Word representation as an explicit sequence of letters (a), and as a set of bigrams (b). Grey edges show the potential impact of misrecognitions.

The optical model is the system which provides the predictions of bigrams from the image (or, in the classical approach, sequences of character predictions). That is, it provides a confidence measure that each bigram $b$ is present in image $x$: $0 \leq p_b(x) \leq 1$. This is transformed into a vector in the bigram space:

$$q_x = \frac{[p_b(x)]_{b \in \mathcal{B}}}{\sqrt{\sum_b p_b^2(x)}} \qquad (6)$$

For decoding, we chose the very simple cosine similarity between the query $q_x$ and a vocabulary word $v_w$. Since we normalized both vectors, this is simply the dot product

$$d(q_x, v_w) = v_w^T q_x \qquad (7)$$

so the similarity with all words of the vocabulary can be computed with a matrix-vector product:

$$D_V(x) = V^T q_x \qquad (8)$$

The recognized word is the one with maximum similarity to the query:

$$w^* = \arg\max D_V(x) = \arg\max_w \frac{\sum_{b \in B(w)} p_b(x)}{\sqrt{|B(w)|}\,\sqrt{\sum_b p_b^2(x)}} \qquad (9)$$
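To make the decoder concrete, here is a minimal Python sketch of Equations (1)-(9). The '^' and '$' markers stand in for the special beginning/end bigrams of Eq. (3), and the dictionary interface for the RNN confidences $p_b(x)$ is our assumption for illustration; this is not the authors' implementation.

```python
import numpy as np

def open_bigrams(word, orders=(1, 2, 3), boundaries=True, singles=True):
    """B'(w): open-bigrams of the given orders, plus B_0 letters and boundary bigrams."""
    bigrams = set(word) if singles else set()             # B_0(w): single letters
    for d in orders:                                      # Eqs. (1)-(2)
        bigrams.update(word[i] + word[i + d] for i in range(len(word) - d))
    if boundaries:                                        # Eq. (3)
        bigrams.update({'^' + word[0], word[-1] + '$'})
    return bigrams

def decode(p, vocabulary, bigram_index):
    """Return the vocabulary word maximizing the cosine similarity of Eq. (9).

    p: dict mapping bigram -> confidence p_b(x), aggregated from the RNN outputs
    bigram_index: dict mapping bigram -> index in the bigram space
    """
    q = np.zeros(len(bigram_index))
    for b, conf in p.items():
        if b in bigram_index:
            q[bigram_index[b]] = conf
    q /= np.linalg.norm(q) + 1e-12                        # Eq. (6)

    def score(w):                                         # v_w . q, Eqs. (5) and (7)
        bs = open_bigrams(w)
        return sum(q[bigram_index[b]] for b in bs if b in bigram_index) / np.sqrt(len(bs))

    return max(vocabulary, key=score)
```

In practice, the scores for the whole vocabulary can be obtained at once as the matrix-vector product of Eq. (8), with the matrix $V$ built once from the word list.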
We carried out a few preliminary experiments to justify the open-bigram decoder. First, we considered the famous sentence with mixed-up letters:

"aoccdrnig to a rscheearch at cmabrigde uinervtisy it deos not mttaer in waht oredr the ltteers in a wrod are the olny iprmoatnt tihng is taht the frist and lsat ltteers be at the rghit pclae the rset can be a toatl mses and you can sitll raed it wouthit porbelm tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef but the wrod as a wlohe".

Although the origin and validity of this statement, when the letters are put in the right order, has been discussed (see http://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/), it is true that most of us can read it without trouble. For each word of more than one letter in this sentence, we computed the open-bigram representation (d = 0..3), and replaced it with the word having the highest cosine similarity in the English vocabulary described in the next section. The result was:

"according to a researcher at abridged university it does not matter in what ordered the letters in a word are the only important thing is that the first and last letters be at the right place the rest can be a total messes and you can still read it outwith problem this is because the human mind does not read every letter by itself but the word as a whole".

Note that the word "cambridge" was not in the vocabulary. Although the task in this paper is not to recognize mixed-up words, this shows the ability of our decoder to perform a reading task that we naturally do.

On Figure 3, we show the English vocabulary in the bigram space (d = 1..3), reduced to two dimensions with t-SNE (Van der Maaten & Hinton, 2008). We observe that words which are close in the bigram space also have a close orthographic form.

Figure 3: Visualization of the bigram representation of the English vocabulary, for d = 1..3 (Touzet et al., 2014) (left), vs. after t-SNE (Van der Maaten & Hinton, 2008) (right). Our complete bigramic map of English: https://youtu.be/OR2vjj8MNeM?t=197.

4 EXPERIMENTAL SETUP
4.1 DATA PREPARATION
We carried out the experiments on two public handwritten word databases: Rimes (Augustin et al., 2006) (French) and IAM (Marti & Bunke, 2002) (English). We simplified the problem by limiting ourselves to words of at least two lowercase characters (a to z). This selection removed approximately 30% of the words. The numbers of words and bigrams of different orders in the different sets are reported in Table 4, in Appendix A.1.

We applied deslanting (Buse et al., 1997) and contrast enhancement, and padded the images with 10px of white pixels to account for the empty context on the left and right of words. From the preprocessed images, we extracted sequences of feature vectors with a sliding window of width 3px. The features are the geometrical and statistical features described in (Bianne et al., 2011), which give state-of-the-art results in handwritten text line recognition (Bluche et al., 2014).

We downloaded word frequency lists for French and English (http://invokeit.wordpress.com/frequency-word-lists/). These lists were built from film subtitles (http://opensubtitles.org) written by many contributors, and they contain many misspellings. We removed the misspelled words using GNU Aspell (Atkinson). We selected 50,000 words for each language. They are the most frequent words (of length >= 2) made only of lowercase characters between a and z, and we made sure to also include all the words of the database. For example, the 50,000 most frequent French words fulfilling these conditions miss about 200 words of the Rimes database, so we selected the most frequent 49,800 and added the missing 200. Note that most of the words that were removed from the dataset are not shorter words, but words with special or uppercase characters. The distribution of lengths of filtered-out words is shown in Figure 5 in Appendix A.1.
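For illustration, this vocabulary construction can be sketched as follows (our own sketch; it assumes the frequency list is already sorted by decreasing frequency and already spell-checked).

```python
import re

def build_vocabulary(freq_ordered_words, dataset_words, k=50000):
    """Select the k most frequent valid words, forcing in all dataset words."""
    valid = re.compile(r'[a-z]{2,}')             # at least two lowercase a-z letters
    vocab = {w for w in dataset_words if valid.fullmatch(w)}
    for w in freq_ordered_words:
        if len(vocab) >= k:
            break
        if valid.fullmatch(w):
            vocab.add(w)
    return vocab
```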
4.2 RECOGNITION OF OPEN-BIGRAMS WITH RECURRENT NEURAL NETWORKS (RNNS)
To predict bigrams, we chose Bidirectional Long Short-Term Memory RNNs (BLSTM-RNNs) for their ability to consider the whole sequence of input vectors when making predictions. We trained one RNN for each order-d bigram, with the Connectionist Temporal Classification criterion (CTC, (Graves et al., 2006)). The CTC framework defines a sequence labeling problem, with an output sequence of labels of smaller length than the input sequence of observations.

We built the target sequences for training as sequences of bigrams, ordered according to the first letter of the bigram. For example, for d = 2, the target for "example" is ea-xm-ap-ml-pe. The CTC training criterion optimizes the Negative Log-Likelihood (NLL) of the correct label sequence. We set the learning rate to 0.001, and stopped the training when the NLL on the validation set did not decrease for 20 epochs. We kept the network yielding the best NLL on the validation set.

We trained one RNN for each order d = 0 to 3, either including the special bigrams for word extremities or not. We will refer to each of these RNNs as rnn_d for order d (rnn_d' when extremities are included). The architecture of the networks is described in Appendix A.3. These RNNs are trained to predict sequences of fixed-order bigrams. Here, we are interested in a word representation as a bag of bigrams, which does not carry any information about the sequence in which the bigrams appear, the number of times each bigram appears, or the order of each individual bigram. That is, we are interested in a decoder which considers an unordered set of bigram predictions across bigram orders. We forget the temporal aspect of the bigram predictions by taking the maximum value of a given bigram prediction by the RNN (where rnn_d(x, t) is the output of the RNN for order d, input image x, at timestep t),

$$p_{d,b}(x) = \max_t \mathrm{rnn}_d(x, t) \qquad (10)$$

and we forget the bigram order by taking the maximum output across different values of d:

$$p_b(x) = \max_d \max_t \mathrm{rnn}_d(x, t) \qquad (11)$$

It would have been more satisfying for this experiment to train an optical model to predict a set of bigrams for all orders. However, this work is focused on the decoder. Moreover, even the simpler task of predicting a sequence of bigrams of fixed order is challenging (the sequence error rates of these networks are detailed in Appendix B.2).

Figure 4: Hypothetical context needed in the input image to make two consecutive (yellow and blue) bigram predictions, for d = 0 (left, to predict c, then t) to 3 (right, to predict ai, then cc). As d increases, the contexts become more complex to model: they involve long-range dependencies and are highly intertwined.

On Figure 4, we show the hypothetical context needed to make two consecutive predictions, for bigram orders d = 0..3. RNNs are popular for handwriting recognition, and can consider a context size of variable length – but still local – to predict characters (d = 0). For d = 1, the required contexts are still local (and would span two consecutive characters), but overlap, because each character is involved in two bigrams. For d > 1, the context is even split into two areas (covering the involved characters) that might be far apart depending on d. Contexts for different predictions are entangled: the whole area between the two characters forming a bigram is not relevant for this bigram (and might be of varying size), but will be important to predict other bigrams. This means that the RNN has to remember a character observation for some time, until it sees the second character of the bigram, while ignoring the area in between for this bigram prediction – yet remembering it, since it will be useful to predict other bigrams. The number of classes for bigrams is also 26 times larger than the number of characters, making the classification problem harder and the number of examples per class in training smaller.
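For concreteness, the CTC training of one such fixed-order bigram RNN could look like the sketch below. The learning rate follows the text; the feature dimension, hidden size and batching interface are illustrative assumptions (the actual architecture is in Appendix A.3 of the paper).

```python
import torch
import torch.nn as nn

class BigramBLSTM(nn.Module):
    """BLSTM mapping a feature-vector sequence to per-frame bigram logits."""
    def __init__(self, n_features=56, n_bigrams=26 * 26, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_bigrams + 1)    # +1 for the CTC blank

    def forward(self, x):                                  # x: (B, T, n_features)
        h, _ = self.lstm(x)
        return self.out(h).log_softmax(dim=-1)             # (B, T, n_bigrams + 1)

model = BigramBLSTM()
ctc = nn.CTCLoss(blank=26 * 26)                            # blank = last class index
opt = torch.optim.Adam(model.parameters(), lr=0.001)       # lr from the text

def train_step(x, x_lens, targets, t_lens):
    """targets: bigram label sequences ordered by first letter, e.g. ea-xm-ap-ml-pe."""
    log_probs = model(x).transpose(0, 1)                   # nn.CTCLoss expects (T, B, C)
    loss = ctc(log_probs, targets, x_lens, t_lens)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At test time, the per-bigram confidences $p_b(x)$ of Eqs. (10)-(11) are obtained by max-pooling the per-frame outputs over time, then over orders $d$.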
5 RESULTS
In this paper, we focused on a subset of the Rimes and IAM word databases, which makes the comparison with published results difficult. Instead, we compared the bigram decoder approach to decoding with standard models, consisting of a beam search with the Viterbi algorithm in the lexicon. However, these standard models yield state-of-the-art results on the reference task for the two considered databases (Bluche et al., 2014).

5.1 BASELINE SYSTEMS BASED ON HMMS AND VITERBI DECODING
We built several models, used the same vocabulary as for the bigram decoder, and no language model (all words have the same prior probability). These baseline systems are based on HMMs, with emission models made either of Gaussian mixtures (GMM/HMM), Multi-Layer Perceptrons (MLP/HMM) or Recurrent Neural Networks (rnn_0/HMM). They are almost identical to those presented in a previous work (Bluche et al., 2014), where a comparison is made with state-of-the-art systems for handwritten text line recognition. More details about these models and their training procedure are presented in Appendix A.2.

Table 1: Word Error Rates (%) with baseline systems and Viterbi decoding of character sequences.
Dataset                              GMM/HMM  MLP/HMM  rnn_0/HMM
Rimes, Viterbi (char. seq.), Valid.  37.38    14.82    10.79
Rimes, Viterbi (char. seq.), Test    36.24    14.45    10.03
IAM, Viterbi (char. seq.), Valid.    27.64    11.73    10.21
IAM, Viterbi (char. seq.), Test      37.96    19.97    17.49

In Table 1, we report the percentages of word errors on the validation and test sets of Rimes and IAM. The best word error rates are around 10% (17.5% on the test set of IAM), and constitute the baseline performance to which the bigram approach is to be compared.

5.2 MEASURING THE QUALITY OF BIGRAM PREDICTIONS
Since we keep a confidence value for all bigrams in the prediction vector, rather than using a binary vector (cf. Eq. 6), we modified the formulation of precision and recall. A bigram $b \in B(w)$ is correctly retrieved with confidence $p_b(x)$, and missed with confidence $1 - p_b(x)$. Similarly, a bigram not in the representation $B(w)$ of word $w$ is falsely recognized with confidence $p_b(x)$, and correctly ignored with confidence $1 - p_b(x)$. This gives the following expressions for precision and recall,

$$\mathrm{precision} = \frac{\sum_{(x,w)} \sum_{b \in B(w)} p_b(x)}{\sum_x \sum_{b' \in \mathcal{B}} p_{b'}(x)}, \qquad \mathrm{recall} = \frac{\sum_{(x,w)} \sum_{b \in B(w)} p_b(x)}{\sum_{w \in W} |B(w)|} \qquad (12)$$

which reduce to the usual definitions when $p_b(x) \in \{0, 1\}$.
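Equation (12) reads directly as code (our sketch; the prediction dictionaries follow the interface assumed in the earlier sketches):

```python
def soft_precision_recall(preds, refs, all_bigrams):
    """Soft precision/recall of Eq. (12).

    preds: list of dicts, preds[i][b] = p_b(x_i)
    refs:  list of sets,  refs[i] = B(w_i) for the ground-truth word
    """
    retrieved = sum(p.get(b, 0.0) for p, bw in zip(preds, refs) for b in bw)
    total_pred = sum(p.get(b, 0.0) for p in preds for b in all_bigrams)
    total_ref = sum(len(bw) for bw in refs)
    precision, recall = retrieved / total_pred, retrieved / total_ref
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```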
The F-measure is calculated from precision and recall with the usual formula.

Table 2: Precision, Recall and F-measure of OB detection by RNNs with different orders d.
d            0     1     1'    2     2'    3     3'    1,2,3  1',2',3'
Rimes:
Precision    95.0  89.9  91.2  79.8  82.8  74.8  82.6  84.5   84.0
Recall       93.4  87.6  89.3  84.8  85.8  83.4  80.9  86.7   88.5
F-measure    0.94  0.89  0.90  0.82  0.84  0.79  0.82  0.89   0.86
IAM:
Precision    93.5  87.3  89.3  77.7  81.6  62.3  76.2  80.5   81.0
Recall       92.5  86.2  88.5  82.3  84.0  77.5  78.6  84.3   86.4
F-measure    0.93  0.87  0.89  0.80  0.83  0.69  0.77  0.82   0.84

The results for all RNNs, and for the combination of orders, are reported in Table 2. We observe that the precision and recall results are correlated with the performance in terms of edit distance or sequence error rates. Namely, they decrease as the bigram order increases, which is not surprising, given that higher-order bigrams are more difficult to recognize with these sequence models. We also see that including the special bigrams for word beginnings and endings generally improves the results. This is not surprising either: the RNNs are good at recognizing them.

Despite this performance decrease, the precision remains above 70%, which limits the amount of noise that will be included in the bigram representation for recognition. Combining the recognition across orders, we obtain a precision of around 84% on Rimes and 80% on IAM. The recall tends to be higher than the precision, staying around or above 80% in all configurations. Across orders, the recall is above 88% on Rimes and 86% on IAM. The high recall will limit the amount of missing information in the bigram representation.

Overall, the F-measure for bigram recognition is above 80%, which is a good starting point, given that (i) the vocabulary used in decoding will add constraints and may help recovering from some mistakes in the bigram recognition, and (ii) the redundancy and order encoded in the bigrams may limit the impact of misrecognitions.

5.3 WORD RECOGNITION USING BIGRAM PREDICTIONS
In Table 3, we report the results of bigram decoding. For each word image in the validation and test sets, we computed the bigram predictions with the RNNs described above. We combined the different orders as explained previously, and either added the special bigrams for word boundaries and/or the single-character predictions or not. We computed the cosine similarity to the bigram decomposition of all words in the vocabularies in the same representation space (i.e. same orders, and same choices for the inclusion of special bigrams and single characters) by computing the product of the vocabulary matrix V with the recognition vector. We counted the number of times the correct word was not the most similar one.

Table 3: Decoding results (% of word errors).
                                        Rimes           IAM
Decoding              Model             Valid   Test    Valid   Test
Viterbi (char. seq.)  Best in Table 1   10.79   10.03   10.21   17.49
Cosine (bigrams)      rnn_{1,2,3}       25.58   24.37   13.45   20.82
                      rnn_{1',2',3'}    12.43   12.27   11.80   19.25
                      rnn_{0,1,2,3}     11.03   10.41   11.98   19.61
                      rnn_{0,1',2',3'}   9.81    9.43   11.09   18.39

We see that adding the special bigrams for word boundaries improves the results, especially when single characters are not included in the representation. A possible explanation, besides the fact that they tend to be recognized more easily, could be that they provide very useful information to disambiguate words having a similar bigram representation (e.g. "them" and "theme").
Adding single characters also improves the performance of the decoder, especially when the boundary bigrams are not included in the representation. The gain obtained with the single characters is about the same – sometimes a little better – as the gain with boundaries. It might be due to the much better recognition of the RNN for single characters (precision and recall over 90%), as well as the added redundancy and complementary information provided. The results of decoding with different combinations of orders are presented in the appendices in Table 7. They confirm these observations. The best performance is achieved with both single characters and word boundaries, although the gain compared to adding only one of them is slight. The error rates are competitive with, or better than, the best error rates obtained by classical character sequence modeling and Viterbi decoding.

6 CONCLUSION
State-of-the-art systems, as well as most of the systems for handwritten word recognition found in the literature, either try to model words as a whole, or as a sequence of characters. The latter, which currently gives the best results, is widely adopted by the community, and benefits from a lot of attention. In this paper, we have proposed a simple alternative model, inspired by recent findings in cognitive neuroscience research on reading.

We focused on the representation of words in the open-bigram space and built a handwritten word recognition system operating in that space. We were interested in observing how a simple decoding scheme, based on a mere cosine similarity measure in the bigram space, compared to traditional methods. The main apparent difficulty arises from the fact that the global ordering of characters and the distance between bigram constituents are lost in this representation.

The qualitative results presented in the first section showed that the envisioned approach was viable. With the letter reordering example, we have seen that the correct orthographic form of words can be retrieved with a limited and local knowledge of character order. Moreover, we validated that words that are close in orthographic form are also close in the bigram space. Thus, we demonstrated that the open-bigram representation shows interesting and competitive metric properties for word recognition. Current work consists in learning the most discriminant open-bigrams at different orders, possibly higher than three, according to the length of the word and its similarity to others.

ACKNOWLEDGMENTS
This work was conducted in the COGNILEGO project 2012-15, supported by the French Research Agency under contract ANR 2010-CORD-013, http://cognilego.univ-tln.fr.
SkG_8jUVe
Review
5: Marginally below acceptance threshold
This paper explores the use of Open Bigrams as a target representation of words, for application to handwriting image recognition.

Pros:
- The use of OBs is novel and interesting.
- Clearly written and explained.

Cons:
- No comparison to previous state of the art, only with author-generated results.
- More ablation studies needed -- i.e. fill in Table 3 with rnn0,1 rnn0,1,2 rnn0,1' etc. It is not clear where the performance is coming from, as it seems that it is single character modelling (0) and word endings (') that are actually beneficial.
- While the use of Open Bigrams is novel, there are works which use bag of bigrams and ngrams as models which are not really compared to or explored. E.g. https://arxiv.org/abs/1406.2227 [1] and https://arxiv.org/abs/1412.5903 [2]. Both use bag of ngrams models and achieve state of the art results, so it would be interesting to see whether open bigrams in the same experimental setup as [1] would yield better results.
- Why not use a graph-based decoder like in Fig 2 b?

Overall an interesting paper but the lack of comparisons and benchmarks makes it difficult to assess the reality of the contributions.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Cortical-Inspired Open-Bigram Representation for Handwritten Word Recognition ### Paper Abstract Recent research in the cognitive process of reading hypothesized that we do not read words by sequentially recognizing letters, but rather by identifying open-bigrams, i.e. couples of letters that are not necessarily next to each other. In this paper, we evaluate a handwritten word recognition method based on an original open-bigram representation. We trained Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) to predict open-bigrams rather than characters, and we show that such models are able to learn the long-range, complicated and intertwined dependencies in the input signal that are necessary for the prediction. For decoding, we decomposed each word of a large vocabulary into the set of constituent bigrams, and apply a simple cosine similarity measure between this representation and the bagged RNN prediction to retrieve the vocabulary word. We compare this method to standard word recognition techniques based on sequential character recognition. Experiments are carried out on two public databases of handwritten words (Rimes and IAM), and the results with our bigram decoder are comparable to more conventional decoding methods based on sequences of letters. ### Paper Keywords ["representation", "letters", "handwritten word recognition", "cognitive process", "words", "couple", "next", "original", "long"] ### Paper Content ABSTRACT

Recent research in the cognitive process of reading hypothesized that we do not read words by sequentially recognizing letters, but rather by identifying open-bigrams, i.e. couples of letters that are not necessarily next to each other. In this paper, we evaluate a handwritten word recognition method based on an original open-bigram representation. We trained Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) to predict open-bigrams rather than characters, and we show that such models are able to learn the long-range, complicated and intertwined dependencies in the input signal that are necessary for the prediction. For decoding, we decomposed each word of a large vocabulary into the set of constituent bigrams, and apply a simple cosine similarity measure between this representation and the bagged RNN prediction to retrieve the vocabulary word. We compare this method to standard word recognition techniques based on sequential character recognition. Experiments are carried out on two public databases of handwritten words (Rimes and IAM), and the results with our bigram decoder are comparable to more conventional decoding methods based on sequences of letters.

1 INTRODUCTION

Taking inspiration from Biology is sometimes very efficient. For example, deep neural networks (NN) – which are outperforming all other methods (including support vector machines, SVM) in image recognition – are based on a series of several (usually 5 to 15) neuron layers, each layer involving sparsity in the activation pattern (a biological trait of the cortical map). The analogy continues with the modeling of the cortex as a hierarchy of cortical maps. Thanks to the analysis of reaction time in cognitive psychology experiments, the minimal number of cortical maps involved in a cognitive process is estimated at about ten, the same order of magnitude as the number of layers in deep neural networks for computer vision tasks.
In the case of handwritten word recognition, Dehaene et al. have proposed a biologically plausible model of the cortical organization of reading (Dehaene et al., 2005) that assumes seven successive steps of increasing complexity, from the retinal ganglion cells to a cortical map of the orthographic word forms (Figure 1). One of the most recent successes of experimental psychology was the demonstration that human visual word recognition uses an explicit representation of letter position order based on letter pairs: the open-bigram coding (Whitney et al., 2012; Gomez et al., 2008; Grainger & Van Heuven, 2003; Glotin et al., 2010; Dufau, 2008).

As demonstrated in (Touzet et al., 2014), open-bigrams (OB) allow an over-coding of the orthographic form of words that facilitates recognition. OB coding favors same-length words (i.e., neighbors of similar lengths). In the context of learning to read, the existence of the OB layer just before the orthographic word representation has been used to explain the lack of efficiency of the whole language method (today banned from reading teaching) compared to the phonics method, which explicitly supervises the organization of the OB map (with syllables), where the global method does not (Figure 1).

Figure 1: The cognitive process of reading, a seven-step procedure that includes an open-bigram representation layer. Additional information helps the organization of levels 4 and 5 when using a phonics method, but not a whole language method (today banned from reading teaching for lack of efficiency); adapted from (Dehaene et al., 2005) and (Touzet, 2015).

Since cognitive psychology has demonstrated the existence of the OB layer, the hypothesis has been put forward (Touzet et al., 2014) that the orthographic representation of words may have evolved in order to take into account the topology of the OB space, instead of the topology of the single letter space. Our goal here is to test this hypothesis, comparing OB vs sequential character recognition for word recognition. A state-of-the-art decoder based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) is used on two public databases of handwritten words (Rimes and IAM).

The remainder of this paper is divided as follows. In Section 2, we present related methods for handwritten word recognition. Then, we describe the open-bigram representation of words and the proposed decoder in Section 3. The experimental setup, including the data and the bigram prediction model, is explained in Section 4. Finally, we present our results in Section 5, before concluding in Section 6.

2 RELATED WORK

In this section, we give a brief overview of existing techniques for handwritten word recognition. Historically, the methods may be divided into three broad categories. The first approach is whole word recognition, where the image of the full word is directly classified into word classes, without relying on the character level (e.g. in (Parisse, 1996; Madhvanath & Govindaraju, 1996)). In the second method, the word image is segmented into parts of characters (strokes or graphemes). The segments are grouped and scored, and character sequences are obtained with a graph search (e.g. in (Bengio et al., 1995)) or with hidden Markov models (HMMs, e.g. in (Knerr et al., 1998)). The last method, most popular nowadays, is a segmentation-free approach. The goal is to predict a character sequence from the image without segmenting it first.
The techniques include scanning a sliding window to extract features used in an HMM (e.g. in (Kaltenmeier et al., 1993)), or feeding the image to a neural network able to output sequences of character predictions (e.g. SDNNs (LeCun et al., 1998) or MDLSTM-RNN (Graves & Schmidhuber, 2009)).

More recently, different approaches have been proposed to recognize words using character bigrams, and are therefore closer to the method we propose in this paper. Jaderberg et al. (2014) propose to predict both the characters and ngrams of characters with two distinct convolutional neural networks (CNNs) to recognize text in natural images. Their approach includes a conditional random field as decoder. Similarly, Poznanski & Wolf (2016) train a CNN with a cross-entropy loss to detect common unigrams, bigrams or trigrams of characters in a handwritten word image. The output of the network is matched against the lexicon using canonical correlation analysis. Almazán et al. (2014) use Fisher vectors from images and pyramidal character histograms to learn a feature space shared by the word images and labels, for word spotting, also using canonical correlation analysis.

3 PROPOSED METHOD

3.1 AN OPEN-BIGRAM REPRESENTATION OF WORDS

The letter bigrams of a word w are the set of pairs of consecutive letters. The open-bigrams of order d are the set of pairs of letters separated by d other letters in the word, which we call B_d(w):

B_d(w) = { w_i w_{i+d} : i ∈ {1, ..., |w| − d} }.   (1)

The usual bigrams are open-bigrams of order 1. By extension, we call B_0(w) the set of letters in the word w. For example, for the word word, we have:

B_1(word) = {or, rd, wo},  B_2(word) = {od, wr},  B_3(word) = {wd}.

The general open-bigram representation of a word is the union

B_{d1,...,dn}(w) = B_{d1}(w) ∪ ... ∪ B_{dn}(w).   (2)

For example, B_{1,2,3}(word) = {od, or, rd, wd, wo, wr}.

We extend B into B' by including special bigrams for the letters at the beginning and end of a word:

B'(w) = B(w) ∪ {⋆w_1, w_{|w|}⋆}.   (3)

So, for example,

B'_{1,2,3}(word) = {⋆w, d⋆, od, or, rd, wd, wo, wr}.   (4)

In this paper, we will call B the set of all bigrams, and W the set of all words. We will represent a word of the vocabulary w ∈ W as a normalized binary vector v_w ∈ ℝ^{|B|}:

v_w = [1(b ∈ B(w))]_{b∈B} / √|B(w)|,   (5)

i.e. the vector with 0 everywhere and 1/√|B(w)| at the indices corresponding to bigrams of the word. Stacking the vector representations of all the words in the vocabulary yields the vocabulary matrix V ∈ ℝ^{|W|×|B|}.

Note that in this representation, the bigrams form an unordered set. We do not know: (i) where the bigrams are, (ii) what the order of a given bigram is, (iii) how many times it occurs. The goal is to build a word recognition decoder in the bigram space.

3.2 AN OPEN-BIGRAM DECODER

While the trivial representation of a word is an ordered sequence of letters, the order in the bigram space is locally embedded in the bigram representation. Most state-of-the-art word recognition systems recognize sequences of letters, and organize the vocabulary for a constrained search as directed graphs, such as prefix trees or Finite-State Transducers. On the other hand, we can interpret the bigram representation as encoding directed edges in a graph, although we will not explicitly build such a graph for decoding.

In Figure 2, we show the graph for a representation of the word as a sequence of letters. Gray edges show the potential risk of a misrecognition in the letter sequences. In Figure 2(b), we display the conceptual representation of bigrams as edges. We observe that a global order of letters can emerge from the local representation; a small sketch of the bigram decomposition is given below.
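To make the decomposition of Eqns. 1-4 concrete, here is a small Python sketch; it is our own illustration, and the '^'/'$' markers stand in for the paper's boundary symbols.

```python
def B(word, d):
    """Open-bigrams of order d (Eq. 1); B(word, 0) is the set of letters."""
    if d == 0:
        return set(word)
    return {word[i] + word[i + d] for i in range(len(word) - d)}

def B_union(word, orders, boundaries=False):
    """Union over orders (Eq. 2), optionally extended with boundary bigrams (Eq. 3)."""
    obs = set().union(*(B(word, d) for d in orders))
    if boundaries:
        obs |= {"^" + word[0], word[-1] + "$"}
    return obs

print(sorted(B_union("word", (1, 2, 3))))
# ['od', 'or', 'rd', 'wd', 'wo', 'wr'], i.e. B_{1,2,3}(word) above
```

Stacking the indicator vectors of these sets, each normalized by √|B(w)|, yields the rows of the vocabulary matrix V of Eq. 5.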
Moreover, the constituent information of a word in the bigram space is redundant, potentially making this representation more robust to mispredictions of the optical model.

Figure 2: Word representation as an explicit sequence of letters (a), and as a set of bigrams (b). Grey edges show the potential impact of misrecognitions.

The optical model is the system which provides the predictions of bigrams from the image (or, in the classical approach, sequences of character predictions). That is, it provides a confidence measure that each bigram b is present in image x: 0 ≤ p_b(x) ≤ 1. This is transformed into a vector in the bigram space:

q_x = [p_b(x)]_{b∈B} / √(Σ_b p_b(x)²).   (6)

For decoding, we chose the very simple cosine similarity between the query (q_x) and a vocabulary word (v_w). Since we normalized both vectors, this is simply the dot product:

d(q_x, v_w) = v_w^T q_x,   (7)

so the similarity with all words of the vocabulary can be computed with a matrix-vector product:

D_V(x) = V^T q_x.   (8)

The recognized word is the one with maximum similarity to the query:

w* = argmax D_V(x) = argmax_w ( Σ_{b∈B(w)} p_b(x) ) / ( √|B(w)| √(Σ_b p_b(x)²) ).   (9)

We carried out a few preliminary experiments to justify the open-bigram decoder. First, we considered the famous sentence with mixed-up letters:

"aoccdrnig to a rscheearch at cmabrigde uinervtisy it deos not mttaer in waht oredr the ltteers in a wrod are the olny iprmoatnt tihng is taht the frist and lsat ltteers be at the rghit pclae the rset can be a toatl mses and you can sitll raed it wouthit porbelm tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef but the wrod as a wlohe".

Although the origin and validity of this statement when letters are put in the right order has been discussed¹, it is true that most of us can read it without trouble. For each word of more than one letter in this sentence, we computed the open-bigram representation (d = 0...3), and replaced it with the word having the highest cosine similarity in the English vocabulary described in the next section. The result was:

"according to a researcher at abridged university it does not matter in what ordered the letters in a word are the only important thing is that the first and last letters be at the right place the rest can be a total messes and you can still read it outwith problem this is because the human mind does not read every letter by itself but the word as a whole".

Note that the word "cambridge" was not in the vocabulary. Although the task in this paper is not to recognize mixed-up words, it shows the ability of our decoder to perform a reading task that we naturally do.

¹http://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/

Figure 3: Visualization of the bigram representation of the English vocabulary, for d = 1...3 (Touzet et al., 2014) (left), vs after t-SNE (Van der Maaten & Hinton, 2008) (right). Our complete bigramic map of English: https://youtu.be/OR2vjj8MNeM?t=197.

In Figure 3, we show the English vocabulary in bigram space (d = 1...3), reduced to two dimensions with t-SNE (Van der Maaten & Hinton, 2008). We observe that words which are close in the bigram space also have a close orthographic form.

4 EXPERIMENTAL SETUP

4.1 DATA PREPARATION

We carried out the experiments on two public handwritten word databases: Rimes (Augustin et al., 2006) (French) and IAM (Marti & Bunke, 2002) (English). We simplified the problem by limiting ourselves to words of at least two lowercase characters (a to z).
This selection removed approximately 30% of the words. The number of words and bigrams of different orders in the different sets are reported in Table 4, in Appendix A.1.

We applied deslanting (Buse et al., 1997), contrast enhancement, and padded the images with 10px of white pixels to account for empty context on the left and right of words. From the preprocessed images, we extracted sequences of feature vectors with a sliding window of width 3px. The features are geometrical and statistical features described in (Bianne et al., 2011), which give state-of-the-art results in handwritten text line recognition (Bluche et al., 2014).

We downloaded word frequency lists for French and English². These lists were built from film subtitles³ written by many contributors, and they contain many misspellings. We removed the misspelled words using GNU Aspell (Atkinson).

²http://invokeit.wordpress.com/frequency-word-lists/
³http://opensubtitles.org

We selected 50,000 words for each language. They are the most frequent words (length ≥ 2) made only of lowercase characters between a and z, making sure to also include all the words of the database. For example, the 50,000 most frequent French words fulfilling these conditions miss about 200 words of the Rimes database, so we selected the most frequent 49,800 and added the missing 200. Note that most of the words that were removed from the dataset are not shorter words, but words with special or uppercase characters. The distribution of lengths of filtered-out words is shown in Figure 5 in Appendix A.1.

4.2 RECOGNITION OF OPEN-BIGRAMS WITH RECURRENT NEURAL NETWORKS (RNNS)

To predict bigrams, we chose Bidirectional Long Short-Term Memory RNNs (BLSTM-RNNs) for their ability to consider the whole sequence of input vectors to make predictions. We trained one RNN for each order-d bigram, with the Connectionist Temporal Classification (CTC (Graves et al., 2006)) criterion. The CTC framework defines a sequence labeling problem, with an output sequence of labels of smaller length than the input sequence of observations.

We built the target sequences for training as sequences of bigrams, ordered according to the first letter of the bigram. For example, for d = 2, the target for example is ea-xm-ap-ml-pe. The CTC training criterion optimizes the Negative Log-Likelihood (NLL) of the correct label sequence. We set the learning rate to 0.001, and stopped the training when the NLL on the validation set did not decrease for 20 epochs. We kept the network yielding the best NLL on the validation set.

Figure 4: Hypothetical context needed in the input image to make two consecutive (yellow and blue) bigram predictions, for d = 0 (left, to predict c, then t) to 3 (right, to predict ai, then cc). As d increases, the contexts become more complex to model: they involve long-range dependencies and are highly intertwined.

We trained one RNN for each order d = 0 to 3, including the special bigrams for word extremities or not. We will refer to each of these RNNs with rnn_d for order d (rnn_{d'} when extremities are included). The architecture of the networks is described in Appendix A.3. These RNNs are trained to predict sequences of fixed-order bigrams. Here, we are interested in a word representation as a bag of bigrams, which does not carry any information about the sequence in which the bigrams appear, the number of times each bigram appears, or the order of each individual bigram.
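As an aside, the order-d CTC targets described above are straightforward to assemble; here is a minimal sketch (our own illustration, with placeholder '^'/'$' boundary labels, not the authors' label set):

```python
def bigram_targets(word, d, boundaries=False):
    """Order-d bigram label sequence for CTC training, ordered by first letter."""
    if d == 0:
        labels = list(word)                        # B_0: plain character targets
    else:
        labels = [word[i] + word[i + d] for i in range(len(word) - d)]
    if boundaries:                                 # special word-extremity bigrams
        labels = ["^" + word[0]] + labels + [word[-1] + "$"]
    return labels

assert bigram_targets("example", 2) == ["ea", "xm", "ap", "ml", "pe"]
```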
That is, we are interested in a decoder which considers an unordered set of bigram predictions across bigram orders. We forget the temporal aspect of bigram predictions by taking the maximum value of a given bigram prediction by the RNN (where rnn_d(x, t) is the output of the RNN for order d, input image x, at timestep t):

p_{d,b}(x) = max_t rnn_d(x, t),   (10)

and we forget the bigram order by taking the maximum output across the different values of d:

p_b(x) = max_d max_t rnn_d(x, t).   (11)

It would have been more satisfying for this experiment to train an optical model to predict a set of bigrams for all orders. However, this work is focused on the decoder. Moreover, even the simpler task of predicting a sequence of bigrams of fixed order is challenging (the sequence error rates of these networks are detailed in Appendix B.2). In Figure 4, we show the hypothetical context needed to make two consecutive predictions, for bigram order d = 0...3. RNNs are popular for handwriting recognition, and can consider a context size of variable length – but still local – to predict characters (d = 0).

For d = 1, the required context is still local (and would span two consecutive characters), but overlaps, because each character is involved in two bigrams. For d > 1, the context is even split into two areas (covering the involved characters) that might be far apart depending on d. Contexts for different predictions are entangled: the whole area between two characters forming a bigram is not relevant for this bigram (and might be of varying size), but will be important to predict other bigrams. It means that the RNN will have to remember a character observation for some time, until it sees the second character of the bigram, while ignoring the area in between for this bigram prediction, but remembering it since it will be useful in order to predict other bigrams. The number of classes for bigrams is also 26 times larger than the number of characters, making the classification problem harder, and the number of examples per class in training smaller.

5 RESULTS

In this paper, we focused on a subset of the Rimes and IAM word databases, which makes the comparison with published results difficult. Instead, we compared the bigram decoder approach to decoding with standard models, consisting of a beam search with the Viterbi algorithm in the lexicon. However, these standard models yield state-of-the-art results on the reference task for the two considered databases (Bluche et al., 2014).

5.1 BASELINE SYSTEMS BASED ON HMMS AND VITERBI DECODING

We built several models and used the same vocabulary as for the bigram decoder, and no language model (all words have the same prior probability). These baseline systems are based on HMMs, with emission models made either of Gaussian mixtures (GMM/HMM), Multi-Layer Perceptrons (MLP/HMM) or Recurrent Neural Networks (rnn_0/HMM). They are almost identical to those presented in a previous work (Bluche et al., 2014), where a comparison is made with state-of-the-art systems for handwritten text line recognition. More details about these models and their training procedure are presented in Appendix A.2.

Table 1: Word Error Rates (%) with baseline systems and Viterbi decoding of character sequences.

Dataset                                GMM/HMM   MLP/HMM   rnn_0/HMM
Rimes  Viterbi (Char. seq.)  Valid.    37.38     14.82     10.79
                             Test      36.24     14.45     10.03
IAM    Viterbi (Char. seq.)  Valid.    27.64     11.73     10.21
                             Test      37.96     19.97     17.49

In Table 1, we report the percentages of word errors on the validation and test sets of Rimes and IAM.
The best word error rates are around 10% (17.5% on the test set of IAM), and constitute the baseline performance to which the bigram approach is to be compared.

5.2 MEASURING THE QUALITY OF BIGRAM PREDICTIONS

Since we keep a confidence value for all bigrams in the prediction vector, rather than using a binary vector (cf. Eq. 6), we modified the formulation of precision and recall. A bigram b ∈ B(w) is correctly retrieved with confidence p_b(x), and missed with confidence (1 − p_b(x)). Similarly, a bigram not in the representation B(w) of word w is falsely recognized with confidence p_b(x), and correctly ignored with confidence (1 − p_b(x)). This gives us the following expressions for precision and recall:

precision = ( Σ_{(x,w)} Σ_{b∈B(w)} p_b(x) ) / ( Σ_x Σ_{b'∈B} p_{b'}(x) ),   recall = ( Σ_{(x,w)} Σ_{b∈B(w)} p_b(x) ) / ( Σ_{w∈W} |B(w)| ),   (12)

which are the usual ones when p_b(x) ∈ {0, 1}. The F-measure is calculated from precision and recall with the usual formula.

Table 2: Precision, Recall and F-measure of OB detection by RNNs with different orders d.

        d            0     1     1'    2     2'    3     3'   1,2,3  1',2',3'
Rimes   Precision   95.0  89.9  91.2  79.8  82.8  74.8  82.6  84.5   84.0
        Recall      93.4  87.6  89.3  84.8  85.8  83.4  80.9  86.7   88.5
        F-measure   0.94  0.89  0.90  0.82  0.84  0.79  0.82  0.89   0.86
IAM     Precision   93.5  87.3  89.3  77.7  81.6  62.3  76.2  80.5   81.0
        Recall      92.5  86.2  88.5  82.3  84.0  77.5  78.6  84.3   86.4
        F-measure   0.93  0.87  0.89  0.80  0.83  0.69  0.77  0.82   0.84

The results for all RNNs, and for the combination of orders, are reported in Table 2. We observe that the precision and recall results are correlated with the performance in terms of edit distance or sequence error rates. Namely, they decrease as the bigram order increases, which is not surprising, given that higher order bigrams are more difficult to recognize with these sequence models. We also see that including the special bigrams for word beginnings and endings generally improves the results. This is not surprising either: the RNNs are good at recognizing them.

Despite this performance decrease, the precision remains above 70%, which limits the amount of noise that will be included in the bigram representation for recognition. Combining the recognition across orders, we obtain a precision of around 84% on Rimes and 80% on IAM. The recall tends to be higher than the precision, staying around or above 80% in all configurations. Across orders, the recall is above 88% on Rimes and 86% on IAM. The high recall will limit the amount of missing information in the bigram representation.

Overall, the F-measure for bigram recognition is above 80%, which is a good starting point, given that (i) the vocabulary used in decoding will add constraints and may help recovering from some mistakes in the bigram recognition, and (ii) the redundancy and order encoded in the bigrams may limit the impact of misrecognitions. A small sketch of this soft precision/recall computation follows.
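The soft precision and recall of Eq. 12 reduce to a few array operations; the sketch below is our own illustration, with assumed array shapes, and also includes the usual F-measure:

```python
import numpy as np

def soft_precision_recall(confidences, targets):
    """Eq. 12: `confidences` is an (N, |B|) array of bigram scores p_b(x) for N
    images; `targets` is the matching binary array marking the bigrams B(w) of
    each ground-truth word."""
    retrieved = (confidences * targets).sum()      # confidence mass on correct bigrams
    precision = retrieved / confidences.sum()      # vs. all predicted mass
    recall = retrieved / targets.sum()             # vs. the sum of |B(w)| over reference words
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```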
5.3 WORD RECOGNITION USING BIGRAM PREDICTIONS

In Table 3, we report the results of bigram decoding. For each word image in the validation and test sets, we computed the bigram predictions with the RNNs described above. We combined the different orders as explained previously, and either added the special bigrams for word boundaries and/or the single character predictions or not. We computed the cosine similarity to the bigram decomposition of all words in the vocabularies in the same representation space (i.e. same orders, and same choices for the inclusion of special bigrams and single characters) by computing the product of the vocabulary matrix V by the recognition vector. We counted the number of times the correct word was not the most similar one.

Table 3: Decoding results (% of word errors).

                                            Rimes            IAM
Decoding              Model                Valid   Test    Valid   Test
Viterbi (Char. seq.)  Best in Table 1      10.79   10.03   10.21   17.49
Cosine (bigrams)      rnn_{1,2,3}          25.58   24.37   13.45   20.82
                      rnn_{1',2',3'}       12.43   12.27   11.80   19.25
                      rnn_{0,1,2,3}        11.03   10.41   11.98   19.61
                      rnn_{0,1',2',3'}      9.81    9.43   11.09   18.39

We see that adding the special bigrams for word boundaries improves the results, especially when single characters are not included in the representation. A possible explanation, besides the fact that they tend to be recognized more easily, could be that they provide very useful information to disambiguate words having a similar bigram representation (e.g. them and theme). Adding single characters also improves the performance of the decoder, especially when the boundary bigrams are not included in the representation. The gain obtained with the single characters is about the same – sometimes a little better – as the gain with boundaries. It might be due to the much better recognition of the RNN for single characters (precision and recall over 90%), as well as the added redundancy and complementary information provided. The results of decoding with different combinations of orders are presented in the appendices in Table 7. They confirm those observations. The best performance is achieved with both single characters and word boundaries, although the gain compared to adding only one of them is slight. The error rates are competitive with or better than the best error rates obtained by classical character sequence modeling and Viterbi decoding.

6 CONCLUSION

State-of-the-art systems, as well as most of the systems for handwritten word recognition found in the literature, either try to model words as a whole, or as a sequence of characters. The latter, which currently gives the best results, is widely adopted by the community, and benefits from a lot of attention. In this paper, we have proposed a simple alternative model, inspired by recent findings in cognitive neuroscience research on reading.

We focused on the representation of words in the open-bigram space and built a handwritten word recognition system operating in that space. We were interested in observing how a simple decoding scheme, based on a mere cosine similarity measure in the bigram space, compared to traditional methods. The main apparent difficulty arises from the fact that the global ordering of characters and the distance between bigram constituents are lost in this representation.

The qualitative results presented in the first section showed that the envisioned approach was viable. With the letter reordering example, we have seen that the correct orthographic form of words can be retrieved with a limited and local knowledge of character order. Moreover, we validated that words that are close in orthographic form are also close in the bigram space. Thus, we demonstrated that the open-bigram representation shows interesting and competitive metric properties for word recognition.
Current work consists in learning the most discriminant open-bigrams at different orders, possibly higher than three, according to the length of the word and its similarity to others.

ACKNOWLEDGMENTS

This work was conducted in the COGNILEGO project 2012-15, supported by the French Research Agency under contract ANR 2010-CORD-013, http://cognilego.univ-tln.fr.<|im_end|> <|im_start|>assistant ### Review Title Review ### Review Text This paper explores the use of Open Bigrams as a target representation of words, for application to handwriting image recognition.

Pros:
- The use of OBs is novel and interesting.
- Clearly written and explained.

Cons:
- No comparison to previous state of the art, only with author-generated results.
- More ablation studies needed -- i.e. fill in Table 3 with rnn0,1 rnn0,1,2 rnn0,1' etc. It is not clear where the performance is coming from, as it seems that it is single character modelling (0) and word endings (') that are actually beneficial.
- While the use of Open Bigrams is novel, there are works which use bag of bigrams and ngrams as models which are not really compared to or explored. E.g. https://arxiv.org/abs/1406.2227 [1] and https://arxiv.org/abs/1412.5903 [2]. Both use bag of ngrams models and achieve state of the art results, so it would be interesting to see whether open bigrams in the same experimental setup as [1] would yield better results.
- Why not use a graph-based decoder like in Fig 2 b?

Overall an interesting paper but the lack of comparisons and benchmarks makes it difficult to assess the reality of the contributions. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|>
rkHywl-A-
ICLR.cc/2018/Conference
2018
Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
["Justin Fu", "Katie Luo", "Sergey Levine"]
Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation that is competitive with direct imitation learning algorithms. Additionally, we show that AIRL is able to recover portable reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training.
["inverse reinforcement learning", "deep reinforcement learning"]
ABSTRACT

Reinforcement learning provides a powerful and general framework for decision making and control, but its application in practice is often hindered by the need for extensive feature and reward engineering. Deep reinforcement learning methods can remove the need for explicit engineering of policy or value features, but still require a manually specified reward function. Inverse reinforcement learning holds the promise of automatic reward acquisition, but has proven exceptionally difficult to apply to large, high-dimensional problems with unknown dynamics. In this work, we propose AIRL, a practical and scalable inverse reinforcement learning algorithm based on an adversarial reward learning formulation. We demonstrate that AIRL is able to recover reward functions that are robust to changes in dynamics, enabling us to learn policies even under significant variation in the environment seen during training. Our experiments show that AIRL greatly outperforms prior methods in these transfer settings.

1 INTRODUCTION

While reinforcement learning (RL) provides a powerful framework for automating decision making and control, significant engineering of elements such as features and reward functions has typically been required for good practical performance. In recent years, deep reinforcement learning has alleviated the need for feature engineering for policies and value functions, and has shown promising results on a range of complex tasks, from vision-based robotic control (Levine et al., 2016) to video games such as Atari (Mnih et al., 2015) and Minecraft (Oh et al., 2016). However, reward engineering remains a significant barrier to applying reinforcement learning in practice. In some domains, the reward may be difficult to specify (for example, encouraging "socially acceptable" behavior), and in others, a naïvely specified reward function can produce unintended behavior (Amodei et al., 2016). Moreover, deep RL algorithms are often sensitive to factors such as reward sparsity and magnitude, making well-performing reward functions particularly difficult to engineer.

Inverse reinforcement learning (IRL) (Russell, 1998; Ng & Russell, 2000) refers to the problem of inferring an expert's reward function from demonstrations, which is a potential method for solving the problem of reward engineering. However, inverse reinforcement learning methods have generally been less efficient than direct methods for learning from demonstration such as imitation learning (Ho & Ermon, 2016), and methods using powerful function approximators such as neural networks have required tricks such as domain-specific regularization and operate inefficiently over whole trajectories (Finn et al., 2016b). There are many scenarios where IRL may be preferred over direct imitation learning, such as re-optimizing a reward in novel environments (Finn et al., 2017) or inferring an agent's intentions, but IRL methods have not been shown to scale to the same complexity of tasks as direct imitation learning. However, adversarial IRL methods (Finn et al., 2016b;a) hold promise for tackling difficult tasks due to the ability to adapt training samples to improve learning efficiency.

Part of the challenge is that IRL is an ill-defined problem, since there are many optimal policies that can explain a set of demonstrations, and many rewards that can explain an optimal policy (Ng et al., 1999).
The maximum entropy (MaxEnt) IRL framework introduced by Ziebart et al. (2008) handles the former ambiguity, but the latter ambiguity means that IRL algorithms have difficulty distinguishing the true reward function from those shaped by the environment dynamics. While shaped rewards can increase learning speed in the original training environment, when the reward is deployed at test time on environments with varying dynamics, it may no longer produce optimal behavior, as we discuss in Sec. 5. To address this issue, we discuss how to modify IRL algorithms to learn rewards that are invariant to changing dynamics, which we refer to as disentangled rewards.

In this paper, we propose adversarial inverse reinforcement learning (AIRL), an inverse reinforcement learning algorithm based on adversarial learning. Our algorithm provides for simultaneous learning of the reward function and value function, which enables us to both make use of the efficient adversarial formulation and recover a generalizable and portable reward function, in contrast to prior works that either do not recover a reward function (Ho & Ermon, 2016), or operate at the level of entire trajectories, making it difficult to apply to more complex problem settings (Finn et al., 2016b;a). Our experimental evaluation demonstrates that AIRL outperforms prior IRL methods (Finn et al., 2016b) on continuous, high-dimensional tasks with unknown dynamics by a wide margin. When compared to GAIL (Ho & Ermon, 2016), which does not attempt to directly recover rewards, our method achieves comparable results on tasks that do not require transfer. However, on tasks where there is considerable variability in the environment from the demonstration setting, GAIL and other IRL methods fail to generalize. In these settings, our approach, which can effectively disentangle the goals of the expert from the dynamics of the environment, achieves superior results.

2 RELATED WORK

Inverse reinforcement learning (IRL) is a form of imitation learning and learning from demonstration (Argall et al., 2009). Imitation learning methods seek to learn policies from expert demonstrations, and IRL methods accomplish this by first inferring the expert's reward function. Previous IRL approaches have included maximum margin approaches (Abbeel & Ng, 2004; Ratliff et al., 2006), and probabilistic approaches such as Ziebart et al. (2008); Boularias et al. (2011). In this work, we work under the maximum causal IRL framework of Ziebart (2010). Some advantages of this framework are that it removes ambiguity between demonstrations and the expert policy, and allows us to cast the reward learning problem as a maximum likelihood problem, connecting IRL to generative model training.

Our proposed method most closely resembles the algorithms proposed by Uchibe (2017); Ho & Ermon (2016); Finn et al. (2016a). Generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016) differs from our work in that it is not an IRL algorithm that seeks to recover reward functions. The critic or discriminator of GAIL is unsuitable as a reward since, at optimality, it outputs 0.5 uniformly across all states and actions. Instead, GAIL aims only to recover the expert's policy, which is a less portable representation for transfer. Uchibe (2017) does not interleave policy optimization with reward learning within an adversarial framework. Improving a policy within an adversarial framework corresponds to training an amortized sampler for an energy-based model, and prior work has shown this is crucial for performance (Finn et al., 2016b).
Wulfmeier et al. (2015) also consider learning cost functions with neural networks, but only evaluate on simple domains where analytically solving the problem with value iteration is tractable. Previous methods which aim to learn nonlinear cost functions have used boosting (Ratliff et al., 2007) and Gaussian processes (Levine et al., 2011), but still suffer from the feature engineering problem.

Our IRL algorithm builds on the adversarial IRL framework proposed by Finn et al. (2016a), with the discriminator corresponding to an odds ratio between the policy and the exponentiated reward distribution. The discussion in Finn et al. (2016a) is theoretical, and to our knowledge no prior work has reported a practical implementation of this method. Our experiments show that direct implementation of the proposed algorithm is ineffective, due to high variance from operating over entire trajectories. While it is straightforward to extend the algorithm to single state-action pairs, as we discuss in Section 4, a simple unrestricted form of the discriminator is susceptible to the reward ambiguity described in (Ng et al., 1999), making learning portable reward functions difficult. As illustrated in our experiments, this greatly limits the generalization capability of the method: the learned reward functions are not robust to environment changes, and it is difficult to use the algorithm for the purpose of inferring the intentions of agents. We discuss how to overcome this issue in Section 5.

Amin et al. (2017) consider learning reward functions which generalize to new tasks given multiple training tasks. Our work instead focuses on how to achieve generalization within the standard IRL formulation.

3 BACKGROUND

Our inverse reinforcement learning method builds on the maximum causal entropy IRL framework (Ziebart, 2010), which considers an entropy-regularized Markov decision process (MDP), defined by the tuple (S, A, T, r, γ, ρ0). S and A are the state and action spaces, respectively, and γ ∈ (0, 1) is the discount factor. The dynamics or transition distribution T(s'|a, s), the initial state distribution ρ0(s), and the reward function r(s, a) are unknown in the standard reinforcement learning setup and can only be queried through interaction with the MDP.

The goal of (forward) reinforcement learning is to find the optimal policy π* that maximizes the expected entropy-regularized discounted reward, under π, T, and ρ0:

π* = argmax_π E_{τ∼π} [ Σ_{t=0}^T γ^t ( r(s_t, a_t) + H(π(·|s_t)) ) ],

where τ = (s_0, a_0, ..., s_T, a_T) denotes a sequence of states and actions induced by the policy and dynamics. It can be shown that the trajectory distribution induced by the optimal policy π*(a|s) takes the form π*(a|s) ∝ exp{Q*_soft(s_t, a_t)} (Ziebart, 2010; Haarnoja et al., 2017), where Q*_soft(s_t, a_t) = r(s_t, a_t) + E_{(s_{t+1}, ...)∼π*} [ Σ_{t'=t+1}^T γ^{t'−t} ( r(s_{t'}, a_{t'}) + H(π*(·|s_{t'})) ) ] denotes the soft Q-function.

Inverse reinforcement learning instead seeks to infer the reward function r(s, a) given a set of demonstrations D = {τ_1, ..., τ_N}. In IRL, we assume the demonstrations are drawn from an optimal policy π*(a|s). We can interpret the IRL problem as solving the maximum likelihood problem:

max_θ E_{τ∼D} [ log p_θ(τ) ],   (1)

where p_θ(τ) ∝ p(s_0) Π_{t=0}^T p(s_{t+1}|s_t, a_t) e^{γ^t r_θ(s_t, a_t)} parametrizes the reward function r_θ(s, a) but fixes the dynamics and initial state distribution to those of the MDP. Note that under deterministic dynamics, this simplifies to an energy-based model where, for feasible trajectories, p_θ(τ) ∝ e^{Σ_{t=0}^T γ^t r_θ(s_t, a_t)} (Ziebart et al., 2008).

Finn et al. (2016a) propose to cast optimization of Eqn. 1 as a GAN (Goodfellow et al., 2014) optimization problem.
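Since the tabular experiments in Sec. 7.1 optimize exactly this entropy-regularized objective with soft value iteration, a minimal sketch of that routine may help fix ideas; this is our own illustration for a state-only reward, not the authors' code.

```python
import numpy as np

def soft_value_iteration(T, r, gamma=0.9, iters=200):
    """Soft value iteration for the entropy-regularized MDP above.

    T: (S, A, S) transition tensor, r: (S,) state reward. Returns the MaxEnt
    optimal policy pi(a|s), proportional to exp(Q_soft(s, a))."""
    S, A, _ = T.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = r[:, None] + gamma * T.reshape(S * A, S).dot(V).reshape(S, A)
        V = np.logaddexp.reduce(Q, axis=1)         # soft max over actions
    return np.exp(Q - V[:, None])                  # pi(a|s) = exp(Q(s,a) - V(s))
```

For instance, `T` could be a randomly drawn row-stochastic tensor, as in the random-MDP experiment described later in the paper.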
Finn et al. (2016a) operate in a trajectory-centric formulation, where the discriminator takes on a particular form (f_θ(τ) is a learned function; π(τ) is precomputed and its value "filled in"):

D_θ(τ) = exp{f_θ(τ)} / ( exp{f_θ(τ)} + π(τ) ),   (2)

and the policy is trained to maximize R(τ) = log(1 − D_θ(τ)) − log D_θ(τ). Updating the discriminator can be viewed as updating the reward function, and updating the policy can be viewed as improving the sampling distribution used to estimate the partition function. If trained to optimality, it can be shown that an optimal reward function can be extracted from the optimal discriminator as f*(τ) = R*(τ) + const, and π recovers the optimal policy. We refer to this formulation as generative adversarial network guided cost learning (GAN-GCL) to discriminate it from guided cost learning (GCL) (Finn et al., 2016a). This formulation shares similarities with GAIL (Ho & Ermon, 2016), but GAIL does not place special structure on the discriminator, so the reward cannot be recovered.

4 ADVERSARIAL INVERSE REINFORCEMENT LEARNING (AIRL)

In practice, using full trajectories as proposed by GAN-GCL can result in high-variance estimates as compared to using single state-action pairs, and our experimental results show that this results in very poor learning. We could instead propose a straightforward conversion of Eqn. 2 into the single state and action case, where:

D_θ(s, a) = exp{f_θ(s, a)} / ( exp{f_θ(s, a)} + π(a|s) ).

As in the trajectory-centric case, we can show that, at optimality, f*(s, a) = log π*(a|s) = A*(s, a), the advantage function of the optimal policy. We justify this, as well as a proof that this algorithm solves the IRL problem, in Appendix A.

This change results in an efficient algorithm for imitation learning. However, it is less desirable for the purpose of reward learning. While the advantage is a valid optimal reward function, it is a heavily entangled reward, as it supervises each action based on the action of the optimal policy for the training MDP. Based on the analysis in the following Sec. 5, we cannot guarantee that this reward will be robust to changes in environment dynamics. In our experiments we demonstrate several cases where this reward simply encourages mimicking the expert policy, and fails to produce desirable behavior even when changes to the environment are made.

5 THE REWARD AMBIGUITY PROBLEM

We now discuss why IRL methods can fail to learn robust reward functions. First, we review the concept of reward shaping. Ng et al. (1999) describe a class of reward transformations that preserve the optimal policy. Their main theoretical result is that under the following reward transformation,

r̂(s, a, s') = r(s, a, s') + γΦ(s') − Φ(s),   (3)

the optimal policy remains unchanged, for any function Φ: S → ℝ. Moreover, without prior knowledge of the dynamics, this is the only class of reward transformations that exhibits policy invariance. Because IRL methods only infer rewards from demonstrations given by an optimal agent, they cannot in general disambiguate between reward functions within this class of transformations, unless the class of learnable reward functions is restricted.

We argue that shaped reward functions may not be robust to changes in dynamics. We formalize this notion by studying policy invariance in two MDPs M, M' which share the same reward and differ only in the dynamics, denoted as T and T', respectively.

Suppose an IRL algorithm recovers a shaped, policy-invariant reward r̂(s, a, s') under MDP M where Φ ≠ 0. Then, there exists an MDP pair M, M' where changing the transition model from T to T' breaks policy invariance on MDP M'.
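To spell out Eq. 3 in code before the worked example that follows, here is a small sketch of the shaping transformation; `phi` is an arbitrary potential function, and all names are our own.

```python
def shape_reward(r, phi, gamma):
    """Potential-based shaping of Eq. 3: r_hat(s, a, s') = r(s, a, s') + gamma*phi(s') - phi(s)."""
    return lambda s, a, s_next: r(s, a, s_next) + gamma * phi(s_next) - phi(s)

def baked_in_shaping(r, phi, gamma, T):
    """Under deterministic dynamics T(s, a) -> s', shaping can be folded into a
    state-action reward; once the dynamics change, this reward no longer lies in
    the equivalence class of Eq. 3 for the new MDP."""
    return lambda s, a: r(s, a, T(s, a)) + gamma * phi(T(s, a)) - phi(s)
```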
As a simple example, consider deterministic dynamics T(s, a) → s' and state-action rewards r̂(s, a) = r(s, a) + γΦ(T(s, a)) − Φ(s). It is easy to see that changing the dynamics T to T' such that T'(s, a) ≠ T(s, a) means that r̂(s, a) no longer lies in the equivalence class of Eqn. 3 for M'.

5.1 DISENTANGLING REWARDS FROM DYNAMICS

First, let the notation Q*_{r,T}(s, a) denote the optimal Q-function with respect to a reward function r and dynamics T, and π*_{r,T}(a|s) denote the same for policies. We first define our notion of a "disentangled" reward.

Definition 5.1 (Disentangled Rewards). A reward function r'(s, a, s') is (perfectly) disentangled with respect to a ground-truth reward r(s, a, s') and a set of dynamics T such that under all dynamics T ∈ T, the optimal policy is the same: π*_{r',T}(a|s) = π*_{r,T}(a|s).

We could also expand this definition to include a notion of suboptimality. However, we leave this direction to future work.

Under maximum causal entropy RL, the following condition is equivalent to two optimal policies being equal, since Q-functions and policies are equivalent representations (up to arbitrary functions of state f(s)):

Q*_{r',T}(s, a) = Q*_{r,T}(s, a) − f(s).

To remove unwanted reward shaping with arbitrary reward function classes, the learned reward function can only depend on the current state s. We require that the dynamics satisfy a decomposability condition where functions over current states f(s) and next states g(s') can be isolated from their sum f(s) + g(s'). This can be satisfied for example by adding self-transitions at each state to an ergodic MDP, or by any of the environments used in our experiments. The exact definition of the condition, as well as proofs of the following statements, are included in Appendix B.

Theorem 5.1. Let r(s) be a ground-truth reward, and T be a dynamics model satisfying the decomposability condition. Suppose IRL recovers a state-only reward r'(s) such that it produces an optimal policy in T:

Q*_{r',T}(s, a) = Q*_{r,T}(s, a) − f(s).

Then, r'(s) is disentangled with respect to all dynamics.

Theorem 5.2. If a reward function r'(s, a, s') is disentangled for all dynamics functions, then it must be state-only. I.e., if for all dynamics T,

Q*_{r,T}(s, a) = Q*_{r',T}(s, a) + f(s)  ∀ s, a,

then r' is only a function of state.

In the traditional IRL setup, where we learn the reward in a single MDP, our analysis motivates learning reward functions that are solely functions of state. If the ground truth reward is also only a function of state, this allows us to recover the true reward up to a constant.

6 LEARNING DISENTANGLED REWARDS WITH AIRL

In the method presented in Section 4, we cannot learn a state-only reward function, r(s), meaning that we cannot guarantee that learned rewards will not be shaped. In order to decouple the reward function from the advantage, we propose to modify the discriminator of Sec. 4
with the form:

D_{θ,φ}(s, a, s') = exp{f_{θ,φ}(s, a, s')} / ( exp{f_{θ,φ}(s, a, s')} + π(a|s) ),

where f_{θ,φ} is restricted to a reward approximator g_θ and a shaping term h_φ as

f_{θ,φ}(s, a, s') = g_θ(s, a) + γ h_φ(s') − h_φ(s).   (4)

The additional shaping term helps mitigate the effects of unwanted shaping on our reward approximator g_θ (and as we will show, in some cases it can account for all shaping effects). The entire training procedure is detailed in Algorithm 1. Our algorithm resembles GAIL (Ho & Ermon, 2016) and GAN-GCL (Finn et al., 2016a), where we alternate between training a discriminator to classify expert data from policy samples, and updating the policy to confuse the discriminator.

Algorithm 1 Adversarial inverse reinforcement learning
1: Obtain expert trajectories τ_i^E.
2: Initialize policy π and discriminator D_{θ,φ}.
3: for step t in {1, ..., N} do
4:   Collect trajectories τ_i = (s_0, a_0, ..., s_T, a_T) by executing π.
5:   Train D_{θ,φ} via binary logistic regression to classify expert data τ_i^E from samples τ_i.
6:   Update reward r_{θ,φ}(s, a, s') ← log D_{θ,φ}(s, a, s') − log(1 − D_{θ,φ}(s, a, s')).
7:   Update π with respect to r_{θ,φ} using any policy optimization method.
8: end for

The advantage of this approach is that we can now parametrize g_θ(s) as solely a function of the state, allowing us to extract rewards that are disentangled from the dynamics of the environment in which they were trained. In fact, under this restricted case, we can show the following under deterministic environments with a state-only ground-truth reward (proof in Appendix C):

g*(s) = r(s) + const,   h*(s) = V*(s) + const,

where r is the true reward function. Since f must recover the advantage as shown in Sec. 4, h recovers the optimal value function V*, which serves as the reward shaping term.

To be consistent with Sec. 4, an alternative way to interpret the form of Eqn. 4 is to view f_{θ,φ} as the advantage under deterministic dynamics:

f(s, a, s') = [ r(s) + γV(s') ] − V(s) = Q(s, a) − V(s) = A*(s, a).

In stochastic environments, we can instead view f(s, a, s') as a single-sample estimate of A*(s, a). A small sketch of this discriminator parameterization is given below.
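As an illustration of Eqn. 4 and line 5 of Algorithm 1, here is a minimal PyTorch sketch of the discriminator with a state-only reward head; the layer sizes are arbitrary and the snippet is our own reading of the formulation, not the authors' released code.

```python
import torch.nn as nn

def mlp(in_dim):
    return nn.Sequential(nn.Linear(in_dim, 32), nn.Tanh(), nn.Linear(32, 1))

class AIRLDiscriminator(nn.Module):
    """f(s, a, s') = g(s) + gamma*h(s') - h(s); D = sigmoid(f - log pi(a|s))."""
    def __init__(self, state_dim, gamma=0.99):
        super().__init__()
        self.g = mlp(state_dim)                    # state-only reward approximator
        self.h = mlp(state_dim)                    # shaping term
        self.gamma = gamma

    def forward(self, s, s_next, log_pi):
        f = self.g(s) + self.gamma * self.h(s_next) - self.h(s)
        return f - log_pi                          # logit of D, i.e. log D - log(1 - D)

# Line 5 of Algorithm 1 is then plain logistic regression on these logits:
# loss = nn.BCEWithLogitsLoss()(disc(s, s_next, log_pi), labels)  # 1 = expert, 0 = policy
```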
7 EXPERIMENTS

In our experiments, we aim to answer two questions:

1. Can AIRL learn disentangled rewards that are robust to changes in environment dynamics?
2. Is AIRL efficient and scalable to high-dimensional continuous control tasks?

To answer 1, we evaluate AIRL in transfer learning scenarios, where a reward is learned in a training environment, and optimized in a test environment with significantly different dynamics. We show that rewards learned with our algorithm under the constraint presented in Section 5 still produce optimal or near-optimal behavior, while naïve methods that do not consider reward shaping fail. We also show that in small MDPs, we can recover the exact ground-truth reward function.

To answer 2, we compare AIRL as an imitation learning algorithm against GAIL (Ho & Ermon, 2016) and the GAN-based GCL algorithm proposed by Finn et al. (2016a), which we refer to as GAN-GCL, on standard benchmark tasks that do not evaluate transfer. Note that Finn et al. (2016a) do not implement or evaluate GAN-GCL and, to our knowledge, we present the first empirical evaluation of this algorithm. We find that AIRL performs on par with GAIL in a traditional imitation learning setup while vastly outperforming it in transfer learning setups, and outperforms GAN-GCL in both settings. It is worth noting that, except for (Finn et al., 2016b), our method is the only IRL algorithm that we are aware of that scales to high-dimensional tasks with unknown dynamics, and although GAIL (Ho & Ermon, 2016) resembles an IRL algorithm in structure, it does not recover disentangled reward functions, making it unable to re-optimize the learned reward under changes in the environment, as we illustrate below.

For our continuous control tasks, we use trust region policy optimization (Schulman et al., 2015) as our policy optimization algorithm across all evaluated methods, and in the tabular MDP task, we use soft value iteration. We obtain expert demonstrations by training an expert policy on the ground-truth reward, but hide the ground-truth reward from the IRL algorithm. In this way, we simulate a scenario where we wish to use RL to solve a task but wish to refrain from manual reward engineering and instead seek to learn a reward function from demonstrations. Our code and additional supplementary material, including videos, are available at https://sites.google.com/view/adversarial-irl, and hyper-parameter and architecture choices are detailed in Appendix D.

7.1 RECOVERING TRUE REWARDS IN TABULAR MDPS

We first consider MaxEnt IRL in a toy task with randomly generated MDPs. The MDPs have 16 states, 4 actions, randomly drawn transition matrices, and a reward function that always gives a reward of 1.0 when taking an action from state 0. The initial state is always state 1.

The optimal reward, the learned reward with a state-only reward function, and the learned reward using a state-action reward function are shown in Fig. 1. We subtract a constant offset from all reward functions so that they share the same mean for visualization; this does not influence the optimal policy. AIRL with a state-only reward function is able to recover the ground-truth reward, but AIRL with a state-action reward instead recovers a shaped advantage function.

We also show that in the transfer learning setup, under a new transition matrix T', the optimal policy under the state-only reward achieves optimal performance (it is identical to the ground-truth reward), whereas the state-action reward only improves marginally over a uniform random policy. The learning curve for this experiment is shown in Fig. 2.

Figure 1: Ground truth (a) and learned rewards (b, c) on the random MDP task. Dark blue corresponds to a reward of 1, and white corresponds to 0. Note that AIRL with a state-only reward recovers the ground truth, whereas the state-action reward is shaped.

Figure 2: Learning curve for the transfer learning experiment on tabular MDPs. Value iteration steps are plotted on the x-axis, against returns for the policy on the y-axis.

7.2 DISENTANGLING REWARDS IN CONTINUOUS CONTROL TASKS

To evaluate whether our method can learn disentangled rewards in higher dimensional environments, we perform transfer learning experiments on continuous control tasks. In each task, a reward is learned via IRL on the training environment, and the reward is used to reoptimize a new policy on a test environment. We train two IRL algorithms, AIRL and GAN-GCL, with state-only and state-action rewards. We also include results for directly transferring the policy learned with GAIL, and an oracle result that involves optimizing the ground-truth reward function with TRPO. Numerical results for these environment transfer experiments are given in Table 1.

The first task involves a 2D point mass navigating to a goal position in a small maze when the position of the walls is changed between train and test time. At test time, the agent cannot simply mimic the actions learned during training, and instead must successfully infer that the goal in the maze is to reach the target. The task is shown in Fig. 3. Only AIRL trained with state-only rewards is able to consistently navigate to the goal when the maze is modified. Direct policy transfer and state-action IRL methods learn rewards which encourage the agent to take the same path taken in the training environment, which is blocked in the test environment. We plot the learned reward in Fig. 4.
In our second task, we modify the agent itself. We train a quadrupedal "ant" agent to run forwards, and at test time we disable and shrink two of the front legs of the ant such that it must significantly change its gait. We find that AIRL is able to learn reward functions that encourage the ant to move forwards, acquiring a modified gait that involves orienting itself to face the forward direction and crawling with its two hind legs. Alternative methods, including transferring a policy learned by GAIL (which achieves near-optimal performance with the unmodified agent), fail to move forward at all. We show the qualitative difference in behavior in Fig. 5.

We have demonstrated that AIRL can learn disentangled rewards that can accommodate significant domain shift, even in high-dimensional environments where it is difficult to exactly extract the true reward. GAN-GCL can presumably learn disentangled rewards, but we find that the trajectory-centric formulation does not perform well even in learning rewards in the original task, let alone transferring to a new domain. GAIL learns successfully in the training domain, but does not acquire a representation that is suitable for transfer to test domains.

Figure 3: Illustration of the shifting maze task, where the agent (blue) must reach the goal (green). During training the agent must go around the wall on the left side, but during test time it must go around on the right.

Figure 4: Reward learned on the point mass shifting maze task. The goal is located at the green star and the agent starts at the white circle. Note that there is little reward shaping, which enables the reward to transfer well.

Figure 5: Top row: An ant running forwards (right in the picture) in the training environment. Bottom row: Behavior acquired by optimizing a state-only reward learned with AIRL on the disabled ant environment. Note that the ant must orient itself before crawling forward, which is a qualitatively different behavior from the optimal policy in the original environment, which runs sideways.

Table 1: Results on transfer learning tasks. Mean scores (higher is better) are reported over 5 runs. We also include results for TRPO optimizing the ground-truth reward, and the performance of a policy learned via GAIL on the training environment.

                        State-Only?   Point Mass-Maze   Ant-Disabled
GAN-GCL                 No                 -40.2            -44.8
GAN-GCL                 Yes                -41.8            -43.4
AIRL (ours)             No                 -31.2            -41.4
AIRL (ours)             Yes                 -8.82           130.3
GAIL, policy transfer   N/A                -29.9            -58.8
TRPO, ground truth      N/A                 -8.45           315.5

7.3 BENCHMARK TASKS FOR IMITATION LEARNING

Finally, we evaluate AIRL as an imitation learning algorithm against GAN-GCL and the state-of-the-art GAIL on several benchmark tasks. Each algorithm is presented with 50 expert demonstrations, collected from a policy trained with TRPO on the ground-truth reward function. For AIRL, we use an unrestricted state-action reward function, as we are not concerned with reward transfer. Numerical results are presented in Table 2. These experiments do not test transfer, and in a sense can be regarded as "testing on the training set," but they match the settings reported in prior work (Ho & Ermon, 2016).

We find that the performance difference between AIRL and GAIL is negligible, even though AIRL is a true IRL algorithm that recovers reward functions, while GAIL does not. Both methods achieve close to the best possible result on each task, and there is little room for improvement.
7.3 Benchmark Tasks for Imitation Learning

Finally, we evaluate AIRL as an imitation learning algorithm against GAN-GCL and the state-of-the-art GAIL on several benchmark tasks. Each algorithm is presented with 50 expert demonstrations, collected from a policy trained with TRPO on the ground truth reward function. For AIRL, we use an unrestricted state-action reward function as we are not concerned with reward transfer. Numerical results are presented in Table 2. These experiments do not test transfer, and in a sense can be regarded as "testing on the training set," but they match the settings reported in prior work (Ho & Ermon, 2016).

We find that the performance difference between AIRL and GAIL is negligible, even though AIRL is a true IRL algorithm that recovers reward functions, while GAIL does not. Both methods achieve close to the best possible result on each task, and there is little room for improvement. This result goes against the belief that IRL algorithms are indirect, and less efficient than direct imitation learning algorithms (Ho & Ermon, 2016). The GAN-GCL method is ineffective on all but the simplest Pendulum task when trained with the same number of samples as AIRL and GAIL. We find that a discriminator trained over trajectories easily overfits and provides a poor learning signal for the policy.

Our results illustrate that AIRL achieves the same performance as GAIL on benchmark imitation tasks that do not require any generalization. On tasks that require transfer and generalization, illustrated in the previous section, AIRL outperforms GAIL by a wide margin, since our method is able to recover disentangled rewards that transfer effectively in the presence of domain shift.

Table 2: Results on imitation learning benchmark tasks. Mean scores (higher is better) are reported across 5 runs.

                             Pendulum   Ant      Swimmer   Half-Cheetah
    GAN-GCL                  -261.5     460.6    -10.6     -670.7
    GAIL                     -226.0     1358.7   140.2     1642.8
    AIRL (ours)              -204.7     1238.6   139.1     1839.8
    AIRL State Only (ours)   -221.5     1089.3   136.4     891.9
    Expert (TRPO)            -179.6     1537.9   141.1     1811.2
    Random                   -654.5     -108.1   -11.5     -988.4

8 Conclusion

We presented AIRL, a practical and scalable IRL algorithm that can learn disentangled rewards and greatly outperforms both prior imitation learning and IRL algorithms. We show that rewards learned with AIRL transfer effectively under variation in the underlying domain, in contrast to unmodified IRL methods, which tend to recover brittle rewards that do not generalize well, and GAIL, which does not recover reward functions at all. In small MDPs where the optimal policy and reward are unambiguous, we also show that we can exactly recover the ground-truth rewards up to a constant.

Acknowledgements

This research was supported by the National Science Foundation through IIS-1651843, IIS-1614653, and IIS-1637443. We would like to thank Roberto Calandra for helpful feedback on the paper.
ryZzenclz
Using the deterministic-MDP formulation of MaxEnt IRL is a concern
6: Marginally above acceptance threshold
SUMMARY: This paper considers the Inverse Reinforcement Learning (IRL) problem, and particularly suggests a method that obtains a reward function that is robust to changes in the dynamics of the MDP. It starts from formulating the problem within the MaxEnt IRL framework of Ziebart et al. (2008). The challenge of MaxEnt IRL is the computation of a partition function. Guided Cost Learning (GCL) of Finn et al. (2016b) is an approximation of MaxEnt IRL that uses an adaptive importance sampler to estimate the partition function. This can be shown to be a form of GAN, obtained by using a specific discriminator [Finn et al. (2016a)]. If the discriminator directly works with trajectories tau, the result would be GAN-GCL. But this leads to high-variance estimates, so the paper suggests using a single state-action formulation, in which the discriminator f_theta(s,a) is a function of (s,a) instead of the trajectory. The optimal solution of this discriminator is to have f(s,a) = A(s,a), the advantage function. The paper, however, argues that the advantage function is "entangled" with the dynamics, and this is undesirable. So it modifies the discriminator to learn a function that is a combination of two terms, one depending only on the state-action pair and the other depending only on the state, and has the form of a shaped reward transformation.

EVALUATION: This is an interesting paper with good empirical results. As I am not very familiar with the work of Finn et al. (2016a) and Finn et al. (2016b), I have not verified the details of the derivations of this new paper very closely. That being said, I have some comments and questions:

* The MaxEnt IRL formulation of this work, which assumes that p_theta(tau) is proportional to exp(r_theta(tau)), comes from [Ziebart et al., 2008] and assumes deterministic dynamics. Ziebart's PhD dissertation [Ziebart, 2010] or the following paper show that the formulation is different for stochastic dynamics: Ziebart, Bagnell, Dey, "The Principle of Maximum Causal Entropy for Estimating Interacting Processes," IEEE Trans. on IT, 2013. Is it still a reasonable thing to develop based on this earlier, and inaccurate, formulation?

* I am not convinced by the argument of Appendix C that shows that AIRL recovers the reward up to constants. It is suggested that since the only terms on the two sides of the equation at the top of p. 13 that depend on s' are h* and V, they should be equal. This would be true if s' could be chosen arbitrarily. But s' is uniquely determined by s for deterministic dynamics. In that case, this conclusion is not obvious anymore. Consider the state space to be the integers 0, 1, 2, 3, ... . Suppose the dynamics is that whenever we are at state s (which is an integer), at the next time step the state decreases toward 0, that is s' = phi(s,a) = s - 1, unless s = 0, in which case we stay at s' = s = 0. This is independent of actions. Also define r(s) = 1/s for s >= 1 and r(0) = 0. Suppose the discount factor is gamma = 1 (note that in Appendix B.1, the undiscounted case is studied, so I assume gamma = 1 is acceptable). With these choices, the value function is V(s) = 1/s + 1/(s-1) + ... + 1/1 = H_s, i.e., the s-th harmonic number, and the advantage function is zero. So we can choose g*(s) = 0 and h*(s) = h*(s') = 1. This is in contrast to the conclusion that h*(s') = V(s') + c, which would be H_s' + c, and g*(s) = r(s) = 1/s. (In fact, nothing is special about this choice of reward and dynamics.) Am I missing something obvious here? (A small numerical check of this counterexample appears in the sketch after these comments.) Also please discuss how ergodicity leads to the conclusion that the spaces of s' and s are identical. What does "space of s" mean? Do you mean the support of s? Please make the argument more rigorous.

* Please make the argument of Section 5.1 more rigorous.
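A minimal numerical check of the counterexample above (our own sketch, not part of the original review): it truncates the state space at a finite maximum state, treats state 0 as absorbing with zero reward, and takes gamma = 1 as in the example; the truncation point is our assumption.

    import numpy as np

    S = 50                                     # truncate the state space at S
    r = np.zeros(S + 1)
    r[1:] = 1.0 / np.arange(1, S + 1)          # r(s) = 1/s for s >= 1, r(0) = 0

    # Deterministic, action-independent dynamics: s' = max(s - 1, 0).
    nxt = np.maximum(np.arange(S + 1) - 1, 0)

    # gamma = 1; state 0 is absorbing with zero reward, so V(0) = 0 and
    # V(s) = r(s) + V(s - 1), i.e. the s-th harmonic number H_s.
    V = np.zeros(S + 1)
    for s in range(1, S + 1):
        V[s] = r[s] + V[nxt[s]]

    H = np.cumsum(np.concatenate([[0.0], 1.0 / np.arange(1, S + 1)]))
    assert np.allclose(V, H)                   # V(s) = H_s as claimed

    # Q(s, a) = r(s) + V(s') is the same for every action, so A(s, a) = 0.
    Q = r + V[nxt]
    assert np.allclose(Q - V, 0.0)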
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
HkGGfhC5Y7
ICLR.cc/2019/Conference
2019
Towards a better understanding of Vector Quantized Autoencoders
["Aurko Roy", "Ashish Vaswani", "Niki Parmar", "Arvind Neelakantan"]
Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models; however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete autoencoder with EM and combining it with sequence level knowledge distillation allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.
["machine translation", "vector quantized autoencoders", "non-autoregressive", "NMT"]
Under review as a conference paper at ICLR 2019

Towards a better understanding of Vector Quantized Autoencoders

Anonymous authors
Paper under double-blind review

Abstract

Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models; however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete autoencoder with EM and combining it with sequence level knowledge distillation allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.

1 Introduction

Unsupervised learning of meaningful representations is a fundamental problem in machine learning since obtaining labeled data can often be very expensive. Continuous representations have largely been the workhorse of unsupervised deep learning models of images (Goodfellow et al., 2014; van den Oord et al., 2016; Kingma et al., 2016; Salimans et al., 2017; Parmar et al., 2018), audio (Van Den Oord et al., 2016; Reed et al., 2017), and video (Kalchbrenner et al., 2016). However, it is often the case that datasets are more naturally modeled as a sequence of discrete symbols rather than continuous ones. For example, language and speech are inherently discrete in nature, and images are often concisely described by language; see, e.g., Vinyals et al. (2015). Improved discrete latent variable models could also prove useful for learning novel data compression algorithms (Theis et al., 2017), while having far more interpretable representations of the data.

We build on the Vector Quantized Variational Autoencoder (VQ-VAE) (van den Oord et al., 2017), a recently proposed training technique for learning discrete latent variables. The method uses a learned code-book combined with nearest neighbor search to train the discrete latent variable model. The nearest neighbor search is performed between the encoder output and the embedding of the latent code using the $\ell_2$ distance metric. VQ-VAE adopts the standard latent variable model generative process, first sampling latent codes from a prior, $P(z)$, which are then consumed by the decoder to generate data from $P(x \mid z)$. In van den Oord et al. (2017), the authors use both uniform and autoregressive priors for $P(z)$. The resulting discrete autoencoder obtains impressive results on unconditional image, speech, and video generation. In particular, on image generation, VQ-VAE was shown to perform almost on par with continuous VAEs on datasets such as CIFAR-10 (van den Oord et al., 2017). An extension of this method to conditional supervised generation out-performs continuous autoencoders on the WMT English-German translation task (Kaiser et al., 2018).

The work of Kaiser et al. (2018) introduced the Latent Transformer, which set a new state-of-the-art in non-autoregressive Neural Machine Translation.
However, additional training heuristics, namely exponential moving averages (EMA) of cluster assignment counts and product quantization (Norouzi & Fleet, 2013), were essential to achieve competitive results with VQ-VAE. In this work, we show that tuning the code-book size can significantly outperform the results presented in Kaiser et al. (2018). We also exploit VQ-VAE's connection with the expectation maximization (EM) algorithm (Dempster et al., 1977), yielding additional improvements. With both improvements, we achieve a BLEU score of 22.4 on English to German translation, outperforming Kaiser et al. (2018) by 2.6 BLEU. Knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016) provides significant gains with our best models and EM, achieving 26.7 BLEU, which almost matches the autoregressive transformer model with no beam search at 27.0 BLEU, while being 3.3x faster.

Our contributions can be summarized as follows:

1. We show that VQ-VAE from van den Oord et al. (2017) can outperform previous state-of-the-art without product quantization.
2. Inspired by the EM algorithm, we introduce a new training algorithm for discrete variational autoencoders that outperforms the previous best result with discrete latent autoencoders for neural machine translation.
3. Using EM training, and combining it with sequence level knowledge distillation (Hinton et al., 2015; Kim & Rush, 2016), allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference.
4. On the larger English-French dataset, we show that denoising discrete autoencoders give us a significant improvement (1.0 BLEU) on top of our non-autoregressive baseline (see Section D).

2 VQ-VAE and the Hard EM Algorithm

The connection between K-means and hard EM, or the Viterbi EM algorithm, is well known (Bottou & Bengio, 1995), where the former can be seen as a special case of a hard-EM style algorithm with a mixture-of-Gaussians model with identity covariance and uniform prior over cluster probabilities. In the following sections we briefly explain the VQ-VAE discrete autoencoder for completeness, and its connection to classical EM.

2.1 VQ-VAE discretization algorithm

VQ-VAE models the joint distribution $P_\Theta(x, z)$ where $\Theta$ are the model parameters, $x$ is the data point and $z$ is the sequence of discrete latent variables or codes. Each position in the encoded sequence has its own set of latent codes. Given a data point, the discrete latent code in each position is selected independently using the encoder output. For simplicity, we describe the procedure for selecting the discrete latent code ($z_i$) in one position given the data point ($x_i$). The encoder output $z_e(x_i) \in \mathbb{R}^D$ is passed through a discretization bottleneck using a nearest-neighbor lookup on embedding vectors $e \in \mathbb{R}^{K \times D}$. Here $K$ is the number of latent codes (in a particular position of the discrete latent sequence) in the model. More specifically, the discrete latent variable assignment is given by

$$z_i = \arg\min_{j \in [K]} \lVert z_e(x_i) - e_j \rVert_2 \qquad (1)$$

The selected latent variable's embedding is passed as input to the decoder,

$$z_q(x_i) = e_{z_i}$$

The model is trained to minimize

$$L = l_r + \lVert z_e(x_i) - \mathrm{sg}(z_q(x_i)) \rVert_2, \qquad (2)$$

where $l_r$ is the reconstruction loss of the decoder given $z_q(x)$ (e.g., the cross entropy loss), and $\mathrm{sg}(\cdot)$ is the stop gradient operator, defined as $\mathrm{sg}(x) = x$ in the forward pass and $\mathrm{sg}(x) = 0$ in the backward pass.

To train the embedding vectors $e \in \mathbb{R}^{K \times D}$, van den Oord et al. (2017) proposed using a gradient-based loss function

$$\lVert \mathrm{sg}(z_e(x_i)) - z_q(x_i) \rVert_2, \qquad (3)$$

and also suggested an alternate technique for training the embeddings: maintaining an exponential moving average (EMA) of all the encoder hidden states that get assigned to each one. It was observed in Kaiser et al. (2018) that the EMA update for training the code-book embedding results in more stable training than using gradient-based methods. We analyze this in more detail in Section 5.1.1.

Specifically, an exponential moving average is maintained over the following two quantities: 1) the embeddings $e_j$ for every $j \in [1, \ldots, K]$ and, 2) the count $c_j$ measuring the number of encoder hidden states that have $e_j$ as their nearest neighbor. The counts are updated in a mini-batch of targets as

$$c_j \leftarrow \lambda c_j + (1 - \lambda) \sum_i \mathbb{1}[z_q(x_i) = e_j], \qquad (4)$$

with the embedding $e_j$ being subsequently updated as

$$e_j \leftarrow \lambda e_j + (1 - \lambda) \sum_i \mathbb{1}[z_q(x_i) = e_j] \frac{z_e(x_i)}{c_j}, \qquad (5)$$

where $\mathbb{1}[\cdot]$ is the indicator function and $\lambda$ is a decay parameter which we set to 0.999 in our experiments. This amounts to doing stochastic gradient descent in the space of both code-book embeddings and cluster assignments. These techniques have also been successfully used in minibatch K-means (Sculley, 2010) and online EM (Liang & Klein, 2009; Sato & Ishii, 2000).
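The bottleneck of Equations 1 and 2 and the EMA updates of Equations 4 and 5 can be sketched as follows (our illustration, not the authors' code). We assume a flat batch of encoder outputs; the straight-through gradient trick for Equation 2 and the epsilon clamp against empty clusters are our assumptions.

    import torch

    def vq_bottleneck(z_e, codebook):
        """Nearest-neighbor discretization (Eqn. 1) with a straight-through
        gradient for the decoder input. z_e: (B, D), codebook: (K, D)."""
        d = torch.cdist(z_e, codebook)          # (B, K) pairwise l2 distances
        z = d.argmin(dim=1)                     # discrete codes z_i
        z_q = codebook[z]                       # selected embeddings e_{z_i}
        # Copy gradients from z_q to z_e, mirroring the sg(.) trick of Eqn. 2.
        z_q_st = z_e + (z_q - z_e).detach()
        return z, z_q_st

    def ema_update(codebook, counts, z_e, z, decay=0.999, eps=1e-5):
        """EMA code-book update following Eqns. 4 and 5, outside autograd."""
        with torch.no_grad():
            one_hot = torch.nn.functional.one_hot(z, codebook.shape[0]).float()
            counts.mul_(decay).add_((1 - decay) * one_hot.sum(0))
            assigned = one_hot.t() @ z_e        # (K, D) sums of assigned z_e
            codebook.mul_(decay).add_(
                (1 - decay) * assigned / counts.clamp(min=eps).unsqueeze(1))

Note that this follows the paper's Equations 4 and 5 literally, dividing by the freshly updated counts; the clamp is only a numerical guard for codes that receive no assignments.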
The generative process for our latent variable NMT model, $P_\Theta(y, z \mid x)$, begins by autoregressively sampling a sequence of discrete latent codes from a model conditioned on the input $x$,

$$P_\Theta(z \mid x) = \prod_{i=1}^{|z|} P_\Theta\left(z_i \mid z_1, \ldots, z_{i-1}, x\right), \qquad (6)$$

which we refer to as the Latent Predictor model (Kaiser et al., 2018). The decoder then consumes this sequence of discrete latent variables to generate the target $y$ all at once, where

$$P_\Theta(y \mid z, x) = \prod_{j=1}^{|y|} P_\Theta(y_j \mid z, x). \qquad (7)$$

The autoregressive learned prior is fit on the discrete latent variables produced by the autoencoder. Our goal is to learn a sequence of latents that is much shorter than the targets, $|z| \ll |y|$, thereby speeding up decoding significantly with no loss in accuracy. The architecture of the encoder, the decoder, and the latent predictor model are described in further detail in Section 5.

2.2 Hard EM and the K-means algorithm

In this section we briefly recall the hard Expectation Maximization (EM) algorithm (Dempster et al., 1977). Given a set of data points $(x_1, \ldots, x_N)$, the hard EM algorithm approximately solves the following optimization problem:

$$\Theta^* = \arg\max_\Theta \max_{z_1, \ldots, z_N} P_\Theta(x_1, \ldots, x_N, z_1, \ldots, z_N). \qquad (8)$$

Hard EM performs coordinate descent over the following two coordinates: the model parameters $\Theta$, and the hidden variables $z_1, \ldots, z_N$. In other words, hard EM consists of repeating the following two steps until convergence:

1. E step: $(z_1, \ldots, z_N) \leftarrow \arg\max_{z_1, \ldots, z_N} P_\Theta(x_1, \ldots, x_N, z_1, \ldots, z_N)$,
2. M step: $\Theta \leftarrow \arg\max_\Theta P_\Theta(x_1, \ldots, x_N, z_1, \ldots, z_N)$

A special case of the hard EM algorithm is K-means clustering (MacQueen et al., 1967; Bottou & Bengio, 1995), where the likelihood is modelled by a Gaussian with identity covariance matrix. Here, the means of the $K$ Gaussians are the parameters to be estimated,

$$\Theta = \langle \mu_1, \ldots, \mu_K \rangle, \quad \mu_k \in \mathbb{R}^D.$$

With a uniform prior over the hidden variables ($P_\Theta(z_i) = \frac{1}{K}$), the marginal is given by $P_\Theta(x_i \mid z_i) = \mathcal{N}(\mu_{z_i}, I)(x_i)$. In this case, equation (8) is equivalent to:

$$\mu_1^*, \ldots, \mu_K^* = \arg\min_{\mu_1, \ldots, \mu_K} \min_{z_1, \ldots, z_N} \sum_{i=1}^N \lVert \mu_{z_i} - x_i \rVert_2^2 \qquad (9)$$

Note that optimizing equation (9) is NP-hard; however, one can find a local optimum by applying coordinate descent until convergence:

1. E step: Cluster assignment is given by
$$z_i \leftarrow \arg\min_{j \in [K]} \lVert \mu_j - x_i \rVert_2^2, \qquad (10)$$
2. M step: The means of the clusters are updated as
$$c_j \leftarrow \sum_{i=1}^N \mathbb{1}[z_i = j]; \quad \mu_j \leftarrow \frac{1}{c_j} \sum_{i=1}^N \mathbb{1}[z_i = j]\, x_i. \qquad (11)$$

We can now easily see the connections between the training updates of VQ-VAE and K-means clustering. The encoder output $z_e(x) \in \mathbb{R}^D$ corresponds to the data point while the discrete latent variables correspond to clusters. Given this, Equation 1 is equivalent to the E-step (Equation 10), and the EMA updates in Equation 4 and Equation 5 converge to the M-step (Equation 11) in the limit. The M-step in K-means overwrites the old values, while the EMA updates interpolate between the old values and the M step update.
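To make the correspondence concrete, here is a small sketch (ours) of K-means written as the hard-EM coordinate descent of Equations 10 and 11. Initializing the means from random data points and guarding against empty clusters are our assumptions.

    import numpy as np

    def kmeans_hard_em(x, K, n_iters=50, seed=0):
        """K-means as hard EM: alternate cluster assignment (E step, Eqn. 10)
        and mean re-estimation (M step, Eqn. 11). x: (N, D) data matrix."""
        rng = np.random.default_rng(seed)
        mu = x[rng.choice(len(x), size=K, replace=False)]  # init means from data
        for _ in range(n_iters):
            # E step: assign each point to its nearest mean.
            d = ((x[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (N, K)
            z = d.argmin(axis=1)
            # M step: overwrite each mean with the average of its points.
            for j in range(K):
                if (z == j).any():
                    mu[j] = x[z == j].mean(axis=0)
        return mu, z

Unlike the EMA variant above, this M step overwrites the old means outright, which is exactly the contrast drawn in the last paragraph of Section 2.2.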
3 VQ-VAE training with EM

In this section, we investigate a new training strategy for VQ-VAE using the EM algorithm.

3.1 Expectation Maximization

First, we briefly describe the EM algorithm. While the hard EM procedure selects one cluster or latent variable assignment for a data point, here the data point is assigned to a mixture of clusters. Now, the optimization objective is given by

$$\Theta^* = \arg\max_\Theta P_\Theta(x_1, \ldots, x_N) = \arg\max_\Theta \sum_{z_1, \ldots, z_N} P_\Theta(x_1, \ldots, x_N, z_1, \ldots, z_N)$$

A coordinate descent algorithm is again used to approximately solve the above optimization problem. The E and M steps are given by:

1. E step:
$$\gamma(z_i) \leftarrow P_\Theta(z_i \mid x_i), \qquad (12)$$
2. M step:
$$\Theta \leftarrow \arg\max_\Theta \mathbb{E}_{z_i \sim \gamma}\left[\log P_\Theta(x_i, z_i)\right] \qquad (13)$$

3.2 Vector Quantized Autoencoders trained with EM

Now, we describe training vector quantized autoencoders using the EM algorithm. As discussed in the previous section, the encoder output $z_e(x) \in \mathbb{R}^D$ corresponds to the data point, while the discrete latent variables correspond to clusters. The E step, instead of a hard assignment, now produces a probability distribution over the set of discrete latent variables (Equation 12). Following VQ-VAE, we continue to assume a uniform prior over clusters, since we observe that training the cluster priors seemed to cause the cluster assignments to collapse to only a few clusters. The probability distribution is modeled as a Gaussian with identity covariance matrix,

$$P(z_i \mid z_e(x_i)) \propto e^{-\lVert e_{z_i} - z_e(x_i) \rVert_2^2}$$

As an alternative to computing the full expectation term in the M step (Equation 13), we perform Monte-Carlo Expectation Maximization (Wei & Tanner, 1990) by drawing $m$ samples $z_i^1, \ldots, z_i^m \sim \mathrm{Multinomial}\left(-\lVert e_1 - z_e(x_i) \rVert_2^2, \ldots, -\lVert e_K - z_e(x_i) \rVert_2^2\right)$, where $\mathrm{Multinomial}(l_1, \ldots, l_K)$ refers to the $K$-way multinomial distribution with logits $l_1, \ldots, l_K$. This results in a less diffuse target for the autoregressive prior. Thus, the E step can finally be written as:

E step: $z_i^1, \ldots, z_i^m \sim \mathrm{Multinomial}\left(-\lVert e_1 - z_e(x_i) \rVert_2^2, \ldots, -\lVert e_K - z_e(x_i) \rVert_2^2\right)$

The model parameters $\Theta$ are then updated to maximize this Monte-Carlo estimate in the M step, given by

M step: $c_j \leftarrow \frac{1}{m} \sum_{i=1}^N \sum_{l=1}^m \mathbb{1}[z_i^l = j], \quad e_j \leftarrow \frac{1}{m c_j} \sum_{i=1}^N \sum_{l=1}^m \mathbb{1}[z_i^l = j]\, z_e(x_i).$

Instead of exactly following the above M step update, we use the EMA version of this update, similar to the one described in Section 2.1.

When sending the embedding of the discrete latent to the decoder, instead of sending the posterior mode, $\arg\max_z P(z \mid x)$, as in hard EM and K-means, we send the average of the embeddings of the sampled latents:

$$z_q(x_i) = \frac{1}{m} \sum_{l=1}^m e_{z_i^l}. \qquad (14)$$

Since $m$ latent code embeddings are sent to the decoder in the forward pass, all of them are updated in the backward pass for a single training example. In hard EM training, only one of them is updated during training. Sending averaged embeddings also results in more stable training with the EM algorithm compared to VQ-VAE, as shown in Section 5.
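The Monte-Carlo E step and the averaged decoder input of Equation 14 can be sketched as follows (our illustration); it also forms the averaged one-hot targets used to train the latent predictor, as described next. Tensor shapes, the use of torch.multinomial, and the omission of straight-through gradient handling are our assumptions.

    import torch
    import torch.nn.functional as F

    def soft_em_e_step(z_e, codebook, m=10):
        """Sample m codes per position from a multinomial with logits
        -||e_j - z_e(x)||^2, then average their embeddings (Eqn. 14).
        z_e: (B, D), codebook: (K, D)."""
        logits = -torch.cdist(z_e, codebook) ** 2            # (B, K)
        probs = F.softmax(logits, dim=-1)
        samples = torch.multinomial(probs, m, replacement=True)  # (B, m)
        z_q = codebook[samples].mean(dim=1)                  # averaged embeddings
        # Soft targets for the latent predictor: average of m one-hot labels.
        soft_targets = F.one_hot(samples, codebook.shape[0]).float().mean(dim=1)
        return z_q, soft_targets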
To train the latent predictor model (Section 2.1) in this case, we use an approach similar to label smoothing (Pereyra et al., 2017): the latent predictor model is trained to minimize the cross entropy loss with the labels being the average of the one-hot labels of $z_i^1, \ldots, z_i^m$.

4 Other Related Work

Variational autoencoders were first introduced by Kingma & Welling (2014) for training continuous representations; unfortunately, training them for discrete latent variable models has proved challenging. One promising approach has been to use various gradient estimators for discrete latent variable models, starting with the REINFORCE estimator of Williams (1992), an unbiased, high-variance gradient estimator. Subsequent works on improving the variance of the REINFORCE estimator are REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017). An alternate approach towards gradient estimators is to use continuous relaxations of categorical distributions, e.g., the Gumbel-Softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016). These methods provide biased but low-variance gradients for training.

Machine translation using deep neural networks has been shown to achieve impressive results (Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014; Vaswani et al., 2017). The state-of-the-art models in Neural Machine Translation are all auto-regressive, which means that during decoding, the model consumes all previously generated tokens to predict the next one. Recently, there have been multiple efforts to speed up machine translation decoding. Gu et al. (2017) attempt to address this issue by using the Transformer model (Vaswani et al., 2017) together with the REINFORCE algorithm (Williams, 1992) to model the fertilities of words. The main drawback of the approach of Gu et al. (2017) is the need for extensive fine-tuning to make policy gradients work, as well as the non-generic nature of the solution. Lee et al. (2018) propose a non-autoregressive model using iterative refinement. Here, instead of decoding the target sentence in one shot, the output is successively refined to produce the final output. While the output is produced in parallel at each step, the refinement steps happen sequentially.

5 Experiments

In this section we report our experiments with VQ-VAE and EM on the English-German translation task, with the aim of improving the decoding speed of autoregressive translation models. Our model and generative process follow the architecture proposed in Kaiser et al. (2018) and are depicted in Figure 1. For all our experiments, we use the Adam (Kingma & Ba, 2014) optimizer and decay the learning rate exponentially after initial warm-up steps. Unless otherwise stated, the dimension of the hidden states of the encoder and the decoder is 512; see Table 4 for a comparison of models with lower dimension. For all configurations we select the optimal hyperparameters by using WMT'13 English-German as the validation set and reporting the BLEU score on the WMT'14 English-German test set.

5.1 Machine Translation

Figure 1: VQ-VAE model adapted to conditional supervised translation as described in Kaiser et al. (2018). We use x and y to denote the source and target sentence respectively. The encoder, the decoder and the latent predictor now additionally condition on the source sentence x.
4 Other Related Work

Variational autoencoders were first introduced by Kingma & Welling (2014) for training continuous representations; unfortunately, training them for discrete latent variable models has proved challenging. One promising approach has been to use various gradient estimators for discrete latent variable models, starting with the REINFORCE estimator of Williams (1992), an unbiased, high-variance gradient estimator. Subsequent works on improving the variance of the REINFORCE estimator are REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2017). An alternate approach towards gradient estimators is to use continuous relaxations of categorical distributions, e.g., the Gumbel-Softmax reparametrization trick (Jang et al., 2016; Maddison et al., 2016). These methods provide biased but low-variance gradients for training.

Machine translation using deep neural networks has been shown to achieve impressive results (Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014; Vaswani et al., 2017). The state-of-the-art models in Neural Machine Translation are all autoregressive, which means that during decoding, the model consumes all previously generated tokens to predict the next one. Recently, there have been multiple efforts to speed up machine translation decoding. Gu et al. (2017) attempt to address this issue by using the Transformer model (Vaswani et al., 2017) together with the REINFORCE algorithm (Williams, 1992) to model the fertilities of words. The main drawback of the approach of Gu et al. (2017) is the need for extensive fine-tuning to make policy gradients work, as well as the non-generic nature of the solution. Lee et al. (2018) propose a non-autoregressive model using iterative refinement. Here, instead of decoding the target sentence in one shot, the output is successively refined to produce the final output. While the output is produced in parallel at each step, the refinement steps happen sequentially.

5 Experiments

In this section we report our experiments with VQ-VAE and EM on the English-German translation task, with the aim of improving the decoding speed of autoregressive translation models. Our model and generative process follow the architecture proposed in Kaiser et al. (2018) and are depicted in Figure 1. For all our experiments, we use the Adam (Kingma & Ba, 2014) optimizer and decay the learning rate exponentially after the initial warm-up steps. Unless otherwise stated, the dimension of the hidden states of the encoder and the decoder is 512; see Table 4 for a comparison of models with lower dimension. For all configurations we select the optimal hyperparameters by using WMT'13 English-German as the validation set and reporting the BLEU score on the WMT'14 English-German test set.

5.1 Machine Translation

Figure 1: VQ-VAE model adapted to conditional supervised translation as described in Kaiser et al. (2018). We use x and y to denote the source and target sentence respectively. The encoder, the decoder and the latent predictor now additionally condition on the source sentence x.

In Neural Machine Translation with latent variables, we model $P(y, z \mid x)$, where $y$ and $x$ are the target and source sentence respectively. Our model architecture, depicted in Figure 1, is similar to the one in Kaiser et al. (2018). The encoder function is a series of strided convolutional layers with residual convolutional layers in between, and takes the target sentence $y$ as input. The source sentence $x$ is converted to a sequence of hidden states through multiple causal self-attention layers. In Kaiser et al. (2018), the encoder of the autoencoder additionally attends to this sequence of continuous representations of the source sentence. We use VQ-VAE as the discretization algorithm. The decoder, applied after the bottleneck layer, uses transposed convolution layers whose continuous output is fed to a transformer decoder with causal attention, which generates the output.

The results are summarized in Table 1. Our implementation of VQ-VAE achieves a significantly better BLEU score and faster decoding speed compared to Kaiser et al. (2018). We found that tuning the code-book size (number of clusters) and using 2^12 discrete latents achieves the best accuracy, a code-book 16 times smaller than the one used in Kaiser et al. (2018). Additionally, we see a large improvement in the performance of the model by using sequence-level distillation (Hinton et al., 2015; Kim & Rush, 2016), as has been observed previously in non-autoregressive models (Gu et al., 2017; Lee et al., 2018). Our teacher model is a base Transformer (Vaswani et al., 2017) that achieves a BLEU score of 28.1 and 27.0 on the WMT'14 test set using beam-search decoding and greedy decoding respectively. The distilled data is decoded from the base Transformer using a beam size of 4. Our VQ-VAE model trained with soft EM and distillation achieves a BLEU score of 26.7, without noisy parallel decoding (Gu et al., 2017). This performance is 1.4 BLEU points lower than the autoregressive model decoded with a beam size of 4, while being 4.1× faster. Importantly, we nearly match the same autoregressive model with beam size 1 (greedy decoding), with a 3.3× speedup.

The length of the sequence of discrete latent variables is shorter than that of the target sentence $y$. Specifically, at each compression step of the encoder we reduce its length by half. We denote by $n_c$ the compression factor for the latents, i.e., the number of steps for which we do this compression. In almost all our experiments, we use $n_c = 3$, reducing the length by 8×. We can decrease the decoding time further by increasing the number of compression steps. As shown in Table 1, by setting $n_c$ to 4, the decoding time drops to 58 milliseconds, achieving 25.4 BLEU, while a NAT model (Gu et al., 2017) with similar decoding speed achieves only 18.7 BLEU. Note that all NAT models also train with sequence-level knowledge distillation from an autoregressive teacher.
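The source of the speedup can be seen in a toy sketch of the two-stage generative process: the short latent sequence is sampled autoregressively from the Latent Predictor (Equation 6), after which the decoder emits every target token in a single parallel pass (Equation 7). The stand-in networks below are hypothetical placeholders so the sketch runs end to end; they are not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def fast_decode(x, latent_len, lp_step, dec, K):
    """Sample z autoregressively from P(z|x), then decode y in one shot."""
    z = []
    for _ in range(latent_len):           # only |y| / 2**n_c sequential steps
        logits = lp_step(x, z)            # P(z_i | z_<i, x): K-way logits
        z.append(int(np.argmax(logits)))  # greedy choice of the next code
        # (decoding could stop early once the learned EOS/PAD code appears;
        # see the EOS/PAD discussion in the appendix)
    return dec(x, z)                      # all target tokens emitted at once

# Hypothetical stand-ins; the real networks are the Latent Predictor and
# the transposed-convolution + Transformer decoder described above.
K, y_len, n_c = 2**12, 24, 3
toy_lp = lambda x, z: rng.normal(size=K)
toy_dec = lambda x, z: ["<tok>"] * y_len
y = fast_decode("ein Satz", y_len // 2**n_c, toy_lp, toy_dec, K)
```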
5.1.1 Analysis

Attention to Source Sentence Encoder: While the encoder of the discrete autoencoder in Kaiser et al. (2018) attends to the output of the encoder of the source sentence, we find that to be unnecessary, with both models achieving the same BLEU score with 2^12 latents. Removing this attention step results in more stable training (see Figure 3) and is the main reason why VQ-VAE works in our setup (see Table 1) without the use of Product Quantization (DVQ) (Kaiser et al., 2018). Note that the decoder of the discrete autoencoder in both Kaiser et al. (2018) and our work does not attend to the source sentence.

Size of Discrete Latent Variable code-book: Table 2 shows the BLEU score for different code-book sizes for models trained using VQ-VAE without distillation. While Kaiser et al. (2018) use 2^16 as their code-book size, we find that 2^12 gives the best performance.

Number of samples in Monte-Carlo EM update: While training with EM, we perform a Monte-Carlo update with a small number of samples (Section 3.2). Table 3 shows the impact of the number of samples on the final BLEU score.

VQ-VAE vs Other Discretization Techniques: We compare the Gumbel-Softmax of Jang et al. (2016); Maddison et al. (2016) and the improved semantic hashing discretization technique proposed in Kaiser et al. (2018) to VQ-VAE. When trained with sequence-level knowledge distillation, the model using Gumbel-Softmax reached 23.2 BLEU, the model using improved semantic hashing reached 24.1 BLEU, and the model using VQ-VAE reached 26.4 BLEU on WMT'14 English-German.

Model | nc | ns | BLEU | Latency | Speedup
Autoregressive Model (beam size=4) | - | - | 28.1 | 331 ms | 1×
Autoregressive Baseline (no beam-search) | - | - | 27.0 | 265 ms | 1.25×
NAT + distillation | - | - | 17.7 | 39 ms | 15.6×*
NAT + distillation + NPD=10 | - | - | 18.7 | 79 ms | 7.68×*
NAT + distillation + NPD=100 | - | - | 19.2 | 257 ms | 2.36×*
LT + Semhash | - | - | 19.8 | 105 ms | 3.15×
Our Results
VQ-VAE | 3 | - | 21.4 | 81 ms | 4.08×
VQ-VAE with EM | 3 | 5 | 22.4 | 81 ms | 4.08×
VQ-VAE + distillation | 3 | - | 26.4 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 10 | 26.7 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 4 | 10 | 25.4 | 58 ms | 5.71×

Table 1: BLEU score and decoding times for different models on the WMT'14 English-German translation dataset. The baseline is the autoregressive Transformer of Vaswani et al. (2017) with no beam search, NAT denotes the Non-Autoregressive Transformer of Gu et al. (2017), and LT + Semhash denotes the Latent Transformer from van den Oord et al. (2017) using the improved semantic hashing discretization technique of Kaiser & Bengio (2018). NPD refers to noisy parallel decoding as described in Gu et al. (2017). We use the notation nc to denote the compression factor for the latents, and the notation ns to denote the number of samples used to perform the Monte-Carlo approximation of the EM algorithm. Distillation refers to sequence-level knowledge distillation from Hinton et al. (2015); Kim & Rush (2016). We used a code-book of size 2^12 for VQ-VAE (both with and without EM) with a hidden dimension of size 512. Decoding is performed on a single CPU machine with an NVIDIA GeForce GTX 1080 with a batch size of 1. *Speedup reported for these items is relative to the decode time of 408 ms for an autoregressive Transformer from Gu et al. (2017).

6 Conclusion

We investigate an alternate training technique for VQ-VAE inspired by its connection to the EM algorithm. Training the discrete autoencoder with EM and combining it with sequence-level knowledge distillation allows us to develop a non-autoregressive machine translation model whose accuracy almost matches a greedy autoregressive baseline, while being 3.3 times faster at inference.
While sequence distillation is very important for training our best model, we find that the improvements from EM on harder tasks are quite significant. We hope that our results will inspire further research on using vector quantization for fast decoding of autoregressive sequence models.

References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473.

Leon Bottou and Yoshua Bengio. Convergence properties of the k-means algorithms. In Advances in Neural Information Processing Systems, pp. 585-592, 1995.

Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078.

Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), pp. 1-38, 1977.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.

Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, and David Duvenaud. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv preprint arXiv:1711.00123, 2017.

Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. Non-autoregressive neural machine translation. CoRR, abs/1711.02281, 2017. URL http://arxiv.org/abs/1711.02281.

Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. CoRR, abs/1611.01144, 2016. URL http://arxiv.org/abs/1611.01144.

Łukasz Kaiser and Samy Bengio. Discrete autoencoders for sequence models. CoRR, abs/1801.09797, 2018. URL http://arxiv.org/abs/1801.09797.

Łukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Parmar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. Fast decoding in sequence models using discrete latent variables. arXiv preprint arXiv:1803.03382, 2018.

Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527, 2016.

Yoon Kim and Alexander Rush. Sequence-level knowledge distillation. 2016.

D. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. Improved variational inference with inverse autoregressive flow. In Advances in Neural Information Processing Systems, pp. 4743-4751, 2016.

Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755, 2018.

Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901, 2018.
Percy Liang and Dan Klein. Online EM for unsupervised models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 611-619. Association for Computational Linguistics, 2009.

Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. arXiv preprint arXiv:1711.05101, 2017.

James MacQueen et al. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pp. 281-297. Oakland, CA, USA, 1967.

Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A continuous relaxation of discrete random variables. CoRR, abs/1611.00712, 2016. URL http://arxiv.org/abs/1611.00712.

Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, volume 2011, pp. 5, 2011.

Mohammad Norouzi and David J Fleet. Cartesian k-means. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, pp. 3017-3024. IEEE, 2013.

Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. arXiv, 2018. URL https://arxiv.org/abs/1802.05751.

Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017.

Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang, Dan Belov, and Nando de Freitas. Parallel multiscale autoregressive density estimation. arXiv preprint arXiv:1703.03664, 2017.

Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.

Masa-Aki Sato and Shin Ishii. On-line EM algorithm for the normalized gaussian network. Neural Computation, 12(2):407-432, 2000.

David Sculley. Web-scale k-means clustering. In Proceedings of the 19th International Conference on World Wide Web, pp. 1177-1178. ACM, 2010.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014. URL http://arxiv.org/abs/1409.3215.

Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. arXiv preprint arXiv:1703.00395, 2017.

George Tucker, Andriy Mnih, Chris J Maddison, John Lawson, and Jascha Sohl-Dickstein. Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models. In Advances in Neural Information Processing Systems, pp. 2624-2633, 2017.

Aaron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelcnn decoders. In Advances in Neural Information Processing Systems, pp. 4790-4798, 2016.

Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. CoRR, abs/1711.00937, 2017. URL http://arxiv.org/abs/1711.00937.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, 2017. URL http://arxiv.org/abs/1706.03762.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pp. 3156-3164. IEEE, 2015.

Greg CG Wei and Martin A Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85(411):699-704, 1990.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Reinforcement Learning, pp. 5-32. Springer, 1992.

A Image Reconstruction

Figure 2: VQ-VAE model as described in van den Oord et al. (2017) for image reconstruction. We use the notation x to denote the input image, with the output of the encoder $z_e(x) \in \mathbb{R}^D$ being used to perform nearest neighbor search to select the (sequence of) discrete latent variables. The selected discrete latent is used to train the latent predictor model, while the embedding $z_q(x)$ of the selected discrete latent is passed as input to the decoder.

Figure 3: Samples of original and reconstructed images from CIFAR-10 using EM trained with a code-book of size 2^8.

In this section we report additional experiments we performed using VQ-VAE and EM for the task of image reconstruction. We train a discrete autoencoder with VQ-VAE (van den Oord et al., 2017) and EM on the CIFAR-10 dataset, modeling the joint probability $P(x, z)$, where $x$ is the image and $z$ are the discrete latent codes. We use a field of 8×8×10 latents with a code-book of size 2^8, each code containing 512 dimensions. We maintain the same encoder and decoder as used in machine translation. For the encoder, we use 4 convolutional layers with kernel size 5×5 and strides 2×2, followed by 2 residual layers and a single dense layer. For the decoder, we use a single dense layer, 2 residual layers, and 4 deconvolutional layers. Figure 3 shows that our reconstructions are on par with hard EM training.

We also train discrete autoencoders on the SVHN dataset (Netzer et al., 2011), with both VQ-VAE (van den Oord et al., 2017) and EM. The autoencoder is similar to our CIFAR-10 model, where each 32×32×3 image is encoded into 640 discrete latents from a shared codebook of size 256. By contrasting the reconstructions from several training runs for VQ-VAE (left) and EM (right), we find that training with EM is more reliable and the reconstructions are of high quality (Figure 4).

Figure 4: On the left are reconstructions from a model trained with VQ-VAE (van den Oord et al., 2017), and the right figure shows reconstructions from EM training, our approach.

B Ablation Tables

Model | Code-book size | BLEU
VQ-VAE | 2^10 | 20.8
VQ-VAE | 2^12 | 21.6
VQ-VAE | 2^14 | 21.0
VQ-VAE | 2^16 | 21.8

Table 2: Results showing the impact of the discrete vocabulary size on the BLEU score for the WMT'14 English-German dataset. The hidden dimension is 512 for all runs.
Model | nc | ns | BLEU | Latency | Speedup
VQ-VAE with EM + distillation | 3 | 1 | 25.8 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 5 | 26.4 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 10 | 26.7 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 25 | 26.6 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 50 | 26.5 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 100 | 25.8 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 4 | 1 | 24.1 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 5 | 24.7 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 10 | 25.4 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 25 | 25.1 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 50 | 23.6 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 100 | 24.8 | 58 ms | 5.71×

Table 3: Results showing the impact of the number of samples used to perform the Monte-Carlo EM update on the BLEU score for the WMT'14 English-German dataset. The code-book size for all runs in this table is 2^12 × 512 (i.e., 2^12 codes of dimension 512).

Model | Hidden dimension | ns | BLEU | Latency | Speedup
VQ-VAE + distillation | 256 | - | 24.5 | 76 ms | 4.36×
VQ-VAE with EM + distillation | 256 | 10 | 21.9 | 76 ms | 4.36×
VQ-VAE with EM + distillation | 256 | 25 | 25.8 | 76 ms | 4.36×
VQ-VAE + distillation | 384 | - | 25.6 | 80 ms | 4.14×
VQ-VAE with EM + distillation | 384 | 10 | 22.2 | 80 ms | 4.14×
VQ-VAE with EM + distillation | 384 | 25 | 26.2 | 80 ms | 4.14×

Table 4: Results showing the impact of the dimension of the word embeddings and the hidden layers of the model on the BLEU score for the WMT'14 English-German dataset, with a discrete vocabulary of size 2^12.

C Additional Analysis

Gradient-based update vs EMA update of code-book: The original VQ-VAE paper (van den Oord et al., 2017) proposed a gradient-based update rule for learning the code-book, where the code-book entries are trained by minimizing $\|sg(z_e(x)) - z_q(x)\|^2$. However, it was found in Kaiser et al. (2018) that the EMA update worked better than this gradient-based loss. Note that if the gradient-based loss were minimized using SGD, then the update rule for the embeddings would be

$$e_j \leftarrow (1 - \eta)\, e_j + \eta\, \frac{\sum_i 1[z_q(x_i) = e_j]\, z_e(x_i)}{\sum_i 1[z_q(x_i) = e_j]}, \qquad (15)$$

for a learning rate $\eta$. This is quite similar to the EMA update rule of Equation 5, with the only difference being that the latter also maintains an EMA over the counts $c_j$. When using SGD with momentum or Adam, however, the update rule becomes quite different, since we now take the moving average of the gradient term itself before subtracting it from the current value of the embedding $e_j$. This is similar to the issue of using weight decay with Adam, where using the $\ell_2$ penalty in the loss function results in worse performance (Loshchilov & Hutter, 2017). A small sketch contrasting the two updates follows.
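The NumPy fragment below is an illustrative reading of Equations 5 and 15; the function names, the epsilon, and the treatment of empty clusters are our assumptions.

```python
import numpy as np

def sgd_equiv_update(e, z_e, assign, eta=0.1, eps=1e-6):
    """Equation 15: move e_j toward the mean of the encoder outputs
    currently assigned to it, with step size eta; empty clusters unchanged."""
    K = e.shape[0]
    one_hot = (assign[:, None] == np.arange(K)).astype(e.dtype)  # (N, K)
    n = one_hot.sum(0)                                           # (K,)
    mean = one_hot.T @ z_e / (n[:, None] + eps)                  # (K, D)
    return (1 - eta) * e + eta * np.where(n[:, None] > 0, mean, e)

def ema_update(e, counts, z_e, assign, decay=0.999, eps=1e-6):
    """Equations 4-5: the same interpolation, but additionally keeping an
    exponential moving average over the cluster counts c_j."""
    K = e.shape[0]
    one_hot = (assign[:, None] == np.arange(K)).astype(e.dtype)
    counts = decay * counts + (1 - decay) * one_hot.sum(0)       # (K,)
    sums = one_hot.T @ z_e                                       # (K, D)
    e = decay * e + (1 - decay) * sums / (counts[:, None] + eps)
    return e, counts
```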
Model Size: The effect of model size on BLEU score for models trained with EM and distillation is shown in Table 4.

Robustness of EM to Hyperparameters: While EM training gives a small performance improvement, we find that it also leads to more robust training for machine translation. Our experiments on image reconstruction on SVHN (Netzer et al., 2011) in Section A also highlight the robustness of EM training. The training approach from van den Oord et al. (2017) exhibits high variance in reconstruction quality, while EM is much more stable, resulting in good reconstructions in almost all training runs.

Figure 5: Comparison of VQ-VAE (green curve) vs EM with different numbers of samples (yellow and blue curves) on the WMT'14 English-German translation dataset with a code-book size of 2^14, with the encoder of the discrete autoencoder attending to the output of the encoder of the source sentence as in Kaiser et al. (2018). The y-axis denotes the teacher-forced BLEU score on the test set, which is used only for evaluation while training. Notice that the VQ-VAE run collapsed (green curve), while the EM runs (yellow and blue curves) exhibit more stability.

Emergence of EOS/PAD latent: We observe that all the latent sentences for a specific experiment with VQ-VAE or EM end with a fixed latent indicating the end of the sequence. Since we always fix the length of the latent sentence to be 2^nc times smaller than the true sentence, the model learns to pad the remainder of the latent sequence with this special code (see Table 5 for examples). Note that one can speed up decoding even further by stopping the Latent Predictor (LP) model as soon as it outputs this special code.

7 89 517 3773 760 760 760 760
607 1901 1901 3051 760 760 760 760
2388 15 850 2590 760 760 760 760
670 127 17 3773 760 760 760 760
2335 26 129 2986 760 760 760 760
10 45 1755 766 760 760 760 760
3773 1082 13 91 760 760 760 760
1790 38 270 554 760 760 760 760
2951 2015 91 2418 760 760 760 760
2951 27 760 760 760 760 760 760
463 201 3410 3051 760 760 760 760

Table 5: Example latent codes for sentences from the WMT'14 English-German dataset, highlighting the emergence of the EOS/PAD latent (760 in this case).

Denoising autoencoder: We also use word dropout with a dropout rate of 0.3 and word permutation with a shuffle rate of 0.5, as in Lample et al. (2018). On WMT English-German we did not notice any improvement from using these regularization techniques, but on the larger WMT English-French dataset, we observe that using a denoising autoencoder significantly improves performance, with a gain of 1.0 BLEU over VQ-VAE and 0.9 BLEU over EM (see Table 6). A sketch of this noising step is shown below.
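A minimal sketch of such a noising function, under our assumptions: we interpret the shuffle rate as the probability of permuting a sentence, and approximate the local permutation of Lample et al. (2018) by sorting positions after adding bounded uniform jitter; the jitter width k is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_sentence(tokens, drop_rate=0.3, shuffle_rate=0.5, k=3):
    """Denoising-autoencoder input noise (sketch): word dropout, then a
    local permutation via jittered sort keys. The mapping of the paper's
    shuffle rate onto this scheme is our assumption."""
    kept = [t for t in tokens if rng.random() >= drop_rate]  # word dropout
    if kept and rng.random() < shuffle_rate:                 # maybe permute
        keys = np.arange(len(kept)) + rng.uniform(0, k, size=len(kept))
        kept = [kept[i] for i in np.argsort(keys)]           # local shuffle
    return kept

print(noise_sentence("wir haben das Modell mit Rauschen trainiert".split()))
```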
Additional analysis on latents: In order to compute correlations between the discrete latents and n-grams in the original text, we computed Pointwise Mutual Information (PMI) and tf-idf scores, where the latents are treated as documents. However, we were unable to see any semantic patterns that stood out in this analysis.

D Preliminary Results on English-French

In this section we report preliminary results on the WMT English-French dataset without using knowledge distillation from an autoregressive teacher (Hinton et al., 2015; Kim & Rush, 2016). We use a Transformer base model from Vaswani et al. (2017). Our best non-autoregressive base model trained on non-distilled targets gets 30.0 BLEU, compared to the autoregressive base model with the same choice of hyperparameters, which gets 33.3 BLEU (see Table 6). As in the case of English-German, we anticipate that using knowledge distillation (Hinton et al., 2015) will likely close this gap.

Model | nc | ns | Code-book size | BLEU | Latency | Speedup
Autoregressive Baseline | - | - | - | 33.3 | 771 ms | 1×
Our Results
VQ-VAE | 3 | - | 2^12 | 29.0 | 215 ms | 3.58×
VQ-VAE with EM | 3 | 10 | 2^12 | 29.2 | 215 ms | 3.58×
VQ-VAE with reg. | 3 | - | 2^12 | 30.0 | 215 ms | 3.58×
VQ-VAE with EM, reg. | 3 | 10 | 2^12 | 29.9 | 215 ms | 3.58×
VQ-VAE with reg. | 3 | - | 2^14 | 29.0 | 228 ms | 3.38×
VQ-VAE with EM, reg. | 3 | 10 | 2^14 | 29.5 | 228 ms | 3.38×

Table 6: BLEU score and decoding times for different models on the WMT'13 English-French translation dataset. The baseline is the autoregressive Transformer of Vaswani et al. (2017) with no beam search. We use the notation nc to denote the compression factor for the latents, and the notation ns to denote the number of samples used to perform the Monte-Carlo approximation of the EM algorithm. Reg. refers to word dropout with rate 0.3 and word permutation with shuffle rate 0.5 as described in Section C. The hidden dimension of the codebook is 512. Decoding is performed on a single CPU machine with an NVIDIA GeForce GTX 1080 with a batch size of 1.
H1xD-S0dh7
Training procedure for VQ-VAE is equivalent to the EM algorithm
7: Good paper, accept
General: The paper presents an alternative view of the training procedure for the VQ-VAE. The authors notice that there is a close connection between the original training algorithm and the well-known EM algorithm. They then propose to use the soft EM algorithm. In the experiments the authors show that soft EM allows one to obtain significantly better results than the standard learning procedure on both image and text datasets. In general, the paper shows a neat link between the well-known EM algorithm and the learning method for the VQ-VAE. I like the manner in which the idea is presented. Additionally, the results are convincing. I believe that the paper will be interesting for the ICLR audience.

Pros:
+ The connection between the EM algorithm and the training procedure for the VQ-VAE is neat.
+ The paper is very well written; all concepts are clear and properly outlined.
+ The experiments are properly performed and all results are convincing.

Cons:
- The paper is rather incremental, but still interesting.
- The quality of Figures 1, 2 and 3 (especially Figure 3) is unacceptable.
- There is a typo in Table 6 (row 5: V-VAE → VQ-VAE).
- Two references are missing from the related work on training with discrete variables: REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2018).
- The paper style is not compliant with the ICLR style.

--REVISION-- I would like to thank the authors for their effort to improve the quality of the images. In my opinion the paper is nice and I maintain my initial score.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Towards a better understanding of Vector Quantized Autoencoders ### Paper Abstract Deep neural networks with discrete latent variables offer the promise of better symbolic reasoning, and learning abstractions that are more useful to new tasks. There has been a surge in interest in discrete latent variable models, however, despite several recent improvements, the training of discrete latent variable models has remained challenging and their performance has mostly failed to match their continuous counterparts. Recent work on vector quantized autoencoders (VQ-VAE) has made substantial progress in this direction, with its perplexity almost matching that of a VAE on datasets such as CIFAR-10. In this work, we investigate an alternate training technique for VQ-VAE, inspired by its connection to the Expectation Maximization (EM) algorithm. Training the discrete autoencoder with EM and combining it with sequence level knowledge distillation alows us to develop a non-autoregressive machine translation model whose accuracy almost matches a strong greedy autoregressive baseline Transformer, while being 3.3 times faster at inference. ### Paper Keywords ["machine translation", "vector quantized autoencoders", "non-autoregressive", "NMT"] ### Paper Content Under review as a conference paper at ICLR 2019Towards a better understanding of VectorQuantized AutoencodersAnonymous authorsPaper under double-blind reviewAbstractDeep neural networks with discrete latent variables offer the promise ofbetter symbolic reasoning, and learning abstractions that are more usefulto new tasks. There has been a surge in interest in discrete latent variablemodels, however, despiteseveralrecentimprovements, thetrainingofdiscretelatent variable models has remained challenging and their performance hasmostly failed to match their continuous counterparts. Recent work on vectorquantized autoencoders (VQ-VAE) has made substantial progress in thisdirection, with its perplexity almost matching that of a VAE on datasets suchas CIFAR-10. In this work, we investigate an alternate training techniquefor VQ-VAE, inspired by its connection to the Expectation Maximization(EM) algorithm. Training the discrete autoencoder with EM and combiningit with sequence level knowledge distillation alows us to develop a non-autoregressive machine translation model whose accuracy almost matchesa strong greedy autoregressive baseline Transformer, while being 3:3timesfaster at inference.1 IntroductionUnsupervised learning of meaningful representations is a fundamental problem in machinelearning since obtaining labeled data can often be very expensive. Continuous representationshave largely been the workhorse of unsupervised deep learning models of images (Goodfellowet al., 2014; van den Oord et al., 2016; Kingma et al., 2016; Salimans et al., 2017; Parmaret al., 2018), audio (Van Den Oord et al., 2016; Reed et al., 2017), and video (Kalchbrenneret al., 2016). However, it is often the case that datasets are more naturally modeled as asequence of discrete symbols rather than continuous ones. For example, language and speechare inherently discrete in nature and images are often concisely described by language, seee.g., Vinyals et al. (2015). 
Improved discrete latent variable models could also prove usefulfor learning novel data compression algorithms (Theis et al., 2017), while having far moreinterpretable representations of the data.We build on Vector Quantized Variational Autoencoder (VQ-VAE) (van den Oord et al.,2017), a recently proposed training technique for learning discrete latent variables. Themethod uses a learned code-book combined with nearest neighbor search to train the discretelatent variable model. The nearest neighbor search is performed between the encoder outputand the embedding of the latent code using the `2distance metric. VQ-VAE adopts thestandard latent variable model generative process, first sampling latent codes from a prior,P(z), which are then consumed by the decoder to generate data from P(xjz). In van denOord et al. (2017), the authors use both uniform and autoregressive priors for P(z). Theresulting discrete autoencoder obtains impressive results on unconditional image, speech, andvideo generation. In particular, on image generation, VQ-VAE was shown to perform almoston par with continuous VAEs on datasets such as CIFAR-10 (van den Oord et al., 2017).An extension of this method to conditional supervised generation, out-performs continuousautoencoders on WMT English-German translation task (Kaiser et al., 2018).The work of Kaiser et al. (2018) introduced the Latent Transformer, which set a new state-of-the-art in non-autoregressive Neural Machine Translation. However, additional trainingheuristics, namely, exponential moving averages (EMA) of cluster assignment counts, and1Under review as a conference paper at ICLR 2019product quantization (Norouzi & Fleet, 2013) were essential to achieve competitive resultswith VQ-VAE. In this work, we show that tuning for the code-book size can significantlyoutperform the results presented in Kaiser et al. (2018). We also exploit VQ-VAE’s connec-tion with the expectation maximization (EM) algorithm (Dempster et al., 1977), yieldingadditional improvements. With both improvements, we achieve a BLEU score of 22:4onEnglish to German translation, outperforming Kaiser et al. (2018) by 2:6BLEU. Knowledgedistillation (Hinton et al., 2015; Kim & Rush, 2016) provides significant gains with our bestmodels and EM, achieving 26:7BLEU, which almost matches the autoregressive transformermodel with no beam search at 27:0BLEU, while being 3:3faster.Our contributions can be summarized as follows:1.We show that VQ-VAE from van den Oord et al. 
(2017) can outperform previousstate-of-the-art without product quantization.2.Inspired by the EM algorithm, we introduce a new training algorithm for trainingdiscrete variational autoencoders, that outperforms the previous best result withdiscrete latent autoencoders for neural machine translation.3.Using EM training, and combining it sequence level knowledge distillation (Hintonet al., 2015; Kim & Rush, 2016), allows us to develop a non-autoregressive machinetranslation model whose accuracy almost matches a strong greedy autoregressivebaseline Transformer, while being 3:3times faster at inference.4.On the larger English-French dataset, we show that denoising discrete autoencodersgives us a significant improvement (1.0 BLEU) on top of our non-autoregressivebaseline (see Section D).2 VQ-VAE and the Hard EM AlgorithmThe connection between K-means, and hard EM, or the Viterbi EM algorithm is wellknown (Bottou & Bengio, 1995), where the former can be seen a special case of hard-EMstyle algorithm with a mixture-of-Gaussians model with identity covariance and uniformprior over cluster probabilities. In the following sections we briefly explain the VQ-VAEdiscrete autoencoder for completeness and it’s connection to classical EM.2.1 VQ-VAE discretization algorithmVQ-VAE models the joint distribution P(x;z)where are the model parameters, xis thedata point and zis the sequence of discrete latent variables or codes. Each position in theencoded sequence has its own set of latent codes. Given a data point, the discrete latentcode in each position is selected independently using the encoder output. For simplicity,we describe the procedure for selecting the discrete latent code ( zi) in one position giventhe data point ( xi). The encoder output ze(xi)2RDis passed through a discretizationbottleneck using a nearest-neighbor lookup on embedding vectors e2RKD. HereKis thenumber of latent codes (in a particular position of the discrete latent sequence) in the model.More specifically, the discrete latent variable assignment is given by,zi= arg minj2[K]kze(xi)ejk2(1)The selected latent variable’s embedding is passed as input to the decoder,zq(xi) =eziThe model is trained to minimize:L=lr+kze(xi)sg (zq(xi))k2; (2)wherelris the reconstruction loss of the decoder given zq(x)(e.g., the cross entropy loss),and, sg (:)is the stop gradient operator defined as follows:sg (x) =xforward pass0backward pass2Under review as a conference paper at ICLR 2019To train the embedding vectors e2RKD, van den Oord et al. (2017) proposed using agradient based loss functionksg (ze(xi))zq(xi)k2; (3)and also suggested an alternate technique of training the embeddings: by maintaining anexponential moving average (EMA) of all the encoder hidden states that get assigned toit. It was observed in Kaiser et al. (2018) that the EMA update for training the code-bookembedding, results in more stable training than using gradient-based methods. We analyzethis in more detail in Section 5.1.1.Specifically, an exponential moving average is maintained over the following two quantities:1) the embeddings ejfor everyj2[1;:::;K ]and, 2) the count cjmeasuring the number ofencoder hidden states that have ejas it’s nearest neighbor. The counts are updated in amini-batch of targets as:cj cj+ (1)Xi1[zq(xi) =ej]; (4)with the embedding ejbeing subsequently updated as:ej ej+ (1)Xi1[zq(xi) =ej]ze(xi)cj; (5)where 1[:]is the indicator function and is a decay parameter which we set to 0:999inour experiments. 
This amounts to doing stochastic gradient in the space of both code-bookembeddings and cluster assignments. These techniques have also been successfully used inminibatchK-means (Sculley, 2010) and online EM (Liang & Klein, 2009; Sato & Ishii, 2000).The generative process for our latent variable NMT model, P(y;zjx), begins by autoregres-sivelysampling a sequence of discrete latent codes from a model conditioned on the inputx,P(zjx) =jzjYi=1Pzijz1;:::;(i1);x; (6)which we refer to as the Latent Predictor model (Kaiser et al., 2018). The decoder thenconsumes this sequence of discrete latent variables to generate the target yall at once, whereP(yjz;x) =jyjYj=1P(yjjz;x): (7)The autoregressive learned prior prior is fit on the discrete latent variables produced bythe autoencoder. Our goal is to learn a sequence of latents, that is much shorter than thetargets,jzjjyj, thereby speeding up decoding significantly with no loss in accuracy. Thearchitecture of the encoder, the decoder, and the latent predictor model are described infurther detail in Section 5.2.2 Hard EM and the K-means algorithmIn this section we briefly recall the hard Expectation maximization (EM) algorithm (Dempsteret al., 1977). Given a set of data points (x1;:::;xN), the hard EM algorithm approximatelysolves the following optimization problem:= arg maxmaxz1;:::;z NP(x1;:::;xN;z1;:::;zN); (8)Hard EM performs coordinate descent over the following two coordinates: the modelparameters , and the hidden variables z1;:::;zN. In other words, hard EM consists ofrepeating the following two steps until convergence:1.E step: (z1;:::;zN) arg maxz1;:::;z NP(x1;:::;xN;z1;:::;zN),3Under review as a conference paper at ICLR 20192.M step: arg max P(x1;:::;xN;z1;:::;zN)A special case of the hard EM algorithm is K-means clustering (MacQueen et al., 1967;Bottou & Bengio, 1995) where the likelihood is modelled by a Gaussian with identitycovariance matrix. Here, the means of the KGaussians are the parameters to be estimated, =h1;:::;Ki; k2RD:With a uniform prior over the hidden variables ( P(zi) =1K), the marginal is given byP(xijzi) =N(zi;I)(xi). In this case, equation (8) is equivalent to:1;:::;K= arg max1;:::;Kminz1;:::;z NNXi=1kzixik22(9)Note that optimizing equation (9)is NP-hard, however one can find a local optima byapplying coordinate descent until convergence:1.E step: Cluster assignment is given by,zi arg minj2[K]jxi22; (10)2.M step: The means of the clusters are updated as,cj NXi=11[zi=j];j 1cjNXi=11[zi=j]xi: (11)We can now easily see the connections between the training updates of VQ-VAE and K-meansclustering. The encoder output ze(x)2RDcorresponds to the data point while the discretelatent variables corresponds to clusters. Given this, Equation 1 is equivalent to the E-step(Equation 10) and the EMA updates in Equation 4 and Equation 5 converge to the M-step(Equation 11) in the limit. The M-step in K-means overwrites the old values while theEMA updates interpolate between the old values and the M step update.3 VQ-VAE training with EMIn this section, we investigate a new training strategy for VQ-VAE using the EM algorithm.3.1 Expectation MaximizationFirst, we briefly describe the EM algorithm. While the hard EM procedure selects one clusteror latent variable assignment for a data point, here the data point is assigned to a mixtureof clusters. 
Now, the optimization objective is given by,= arg maxP(x1;:::;xN)= arg maxXz1;:::;z NP(x1;:::;xN;z1;:::;zN)Coordinate descent algorithm is again used to approximately solve the above optimizationalgorithm. The E and M step are given by:1.E step:(zi) P(zijxi); (12)2.M step: arg maxEzi[logP(xi;zi)] (13)4Under review as a conference paper at ICLR 20193.2 Vector Quantized Autoencoders trained with EMNow, we describe vector quantized autoencoders training using the EM algorithm. Asdiscussed in the previous section, the encoder output ze(x)2RDcorresponds to the datapoint while the discrete latent variables corresponds to clusters. The E step instead of hardassignment now produces a probability distribution over the set of discrete latent variables(Equation 12). Following VQ-VAE, we continue to assume a uniform prior over clusters,since we observe that training the cluster priors seemed to cause the cluster assignments tocollapse to only a few clusters. The probability distribution is modeled as a Gaussian withidentity covariance matrix,P(zijze(xi))/ekezize(xi)k22As an alternative to computing the full expectation term in the M step (Equation 13)we perform Monte-Carlo Expectation Maximization (Wei & Tanner, 1990) by drawingmsamplesz1i;;zmiMultinomialke1ze(xi)k22;:::;keKze(xi)k22, whereMultinomial (l1;:::;lK)refers to the K-way multinomial distribution with logits l1;:::;lK.This results in a less diffuse target for the autoregressive prior. Thus, the E step can befinally written as:E step: z1i;:::;zmi Multinomialke1ze(xi)k22;:::;keKze(xi)k22The model parameters are then updated to maximize this Monte-Carlo estimate in the Mstep given byM step: cj 1mNXi=1mXl=11zli=j;ej 1mcjNXi=1mXl=11zli=jze(xi):Instead of exactly following the above M step update, we use the EMA version of this updatesimilar to the one described in Section 2.1.When sending the embedding of the discrete latent to the decoder, instead of sending theposterior mode, argmaxzP(zjx), similar to hard EM and K-means, we send the average ofthe embeddings of the sampled latents:zq(xi) =1mmXl=1ezli: (14)Sincemlatent code embeddings are sent to the decoder in the forward pass, all of them areupdated in the backward pass for a single training example. In hard EM training, only oneof them is updated during training. Sending averaged embeddings also results in more stabletraining using the EM algorithm compared to VQ-VAE as shown in Section 5.To train the latent predictor model (Section 2.1) in this case, we use an approach similar tolabel smoothing (Pereyra et al., 2017): the latent predictor model is trained to minimize thecross entropy loss with the labels being the average of the one-hot labels of z1i;:::;zmi.4 Other Related WorkVariational autoencoders were first introduced by Kingma & Welling (2014) for training con-tinuous representations; unfortunately, training them for discrete latent variable models hasproved challenging. One promising approach has been to use various gradient estimators fordiscrete latent variable models, starting with the REINFORCE estimator of Williams (1992),an unbiased, high-variance gradient estimator. Subsequent work on improving the variance ofthe REINFORCE estimator are REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al.,2017). An alternate approach towards gradient estimators is to use continuous relaxations ofcategorical distributions, for e.g., the Gumbel-Softmax reparametrization trick (Jang et al.,5Under review as a conference paper at ICLR 20192016; Maddison et al., 2016). 
These methods provide biased but low variance gradients fortraining.Machine translation using deep neural networks have been shown to achieve impressive results(Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014; Vaswani et al., 2017). Thestate-of-the-art models in Neural Machine Translation are all auto-regressive, which meansthat during decoding, the model consumes all previously generated tokens to predict the nextone. Recently, there have been multiple efforts to speed-up machine translation decoding. Guet al. (2017) attempts to address this issue by using the Transformer model (Vaswani et al.,2017) together with the REINFORCE algorithm (Williams, 1992), to model the fertilitiesof words. The main drawback of the approach of Gu et al. (2017) is the need for extensivefine-tuning to make policy gradients work, as well as the non-generic nature of the solution.Lee et al. (2018) propose a non-autoregressive model using iterative refinement. Here, insteadof decoding the target sentence in one-shot, the output is successively refined to producethe final output. While the output is produced in parallel at each step, the refinement stepshappen sequentially.5 ExperimentsIn this section we report our experiments with VQ-VAE and EM on the English-Germantranslation task, with the aim of improving the decoding speed of autoregressive translationmodels. Our model and generative process follows the architecture proposed in Kaiser et al.(2018) and is depicted in Figure 1. For all our experiments, we use the Adam (Kingma &Ba, 2014) optimizer and decay the learning rate exponentially after initial warm-up steps.Unless otherwise stated, the dimension of the hidden states of the encoder and the decoderis512, see Table 4 for a comparison of models with lower dimension. For all configurationswe select the optimal hyperparameters by using WMT’13 English-German as the validationset and reporting the BLEU score on the WMT’14 English-German test set.5.1 Machine TranslationFigure 1: VQ-VAE model adapted to conditional supervised translation as described inKaiser et al. (2018). We use xandyto denote the source and target sentence respectively.The encoder, the decoder and the latent predictor now additionally condition on the sourcesentencex.In Neural Machine Translation with latent variables, we model P(y;zjx), whereyandxarethe target and source sentence respectively. Our model architecture, depicted in Figure 1,is similar to the one in Kaiser et al. (2018). The encoder function is a series of stridedconvolutional layers with residual convolutional layers in between and takes target sentence y6Under review as a conference paper at ICLR 2019as input. The source sentence xis converted to a sequence of hidden states through multiplecausal self-attention layers. In Kaiser et al. (2018), the encoder of the autoencoder attendsadditionally to this sequence of continuous representation of the source sentence. We useVQ-VAE as the discretization algorithm. The decoders, applied after the bottleneck layeruses transposed convolution layers whose continuous output is fed to a transformer decoderwith causal attention, which generates the output.The results are summarized in Table 1. Our implementation of VQ-VAE achieves a sig-nificantly better BLEU score and faster decoding speed compared to Kaiser et al. (2018).We found that tuning the code-book size (number of clusters) for using 212discrete latentsachieves the best accuracy which is 16 times smaller as compared to the code-book size inKaiser et al. (2018). 
Additionally, we see a large improvement in the performance of themodel by using sequence-level distillation (Hinton et al., 2015; Kim & Rush, 2016), as hasbeen observed previously in non-autoregressive models (Gu et al., 2017; Lee et al., 2018).Our teacher model is a base Transformer (Vaswani et al., 2017) that achieves a BLEU scoreof28:1and27:0on the WMT’14 test set using beam search decoding and greedy decodingrespectively. The distilled data is decoded from the base Transformer using a beam size of4. Our VQ-VAE model trained with soft EM and distillation, achieves a BLEU score of26:7, without noisy parallel decoding (Gu et al., 2017). This perforamce is 1:4bleu pointslower than an autoregressive model decoded with a beam size of 4, while being 4:1faster.Importantly, we nearly match the same autoregressive model with beam size 1(greedydecoding), with a 3:3speedup.The length of the sequence of discrete latent variables is shorter than that of target sentencey. Specifically, at each compression step of the encoder we reduce its length by half. Wedenote bync, the compression factor for the latents, i.e. the number of steps for which we dothis compression. In almost all our experiments, we use nc= 3reducing the length by 8. Wecan decrease the decoding time further by increasing the number of compression steps. Asshown in Table 1, by setting ncto 4, the decoding time drops to 58 milliseconds achieving25.4 BLEU while a NAT model (Gu et al., 2017) with similar decoding speed achieves only18.7 BLEU. Note that, all NAT models also train with sequence level knowledge distillationfrom an autoregressive teacher.5.1.1 AnalysisAttention to Source Sentence Encoder: While the encoder of the discrete autoencoderin Kaiser et al. (2018) attends to the output of the encoder of the source sentence, we findthat to be unnecessary, with both models achieving the same BLEU score with 212latents.Removing this attention step results in more stable training (see Figure 3) and is themain reason why VQ-VAE works in our setup (see Table 1) without the use of ProductQuantization (DVQ) (Kaiser et al., 2018). Note that the decoder of the discrete autoencoderin both Kaiser et al. (2018) and our work does not attend to the source sentence.Size of Discrete Latent Variable code-book: Table 2 shows the BLEU score fordifferent code-book sizes for models trained using VQ-VAE without distillation. While Kaiseret al. (2018) use 216as their code-book size, we find that 212gives the best performance.Number of samples in Monte-Carlo EM update: While training with EM, we per-form a Monte-Carlo update with a small number of samples (Section 3.2). Table 3 showsthe impact of number of samples on the final BLEU score.VQ-VAE vs Other Discretization Techniques: We compare the Gumbel-Softmax of(Jang et al., 2016; Maddison et al., 2016) and the improved semantic hashing discretizationtechnique proposed in Kaiser et al. (2018) to VQ-VAE. 
When trained with sequence levelknowledge distillation, the model using Gumbel-Softmax reached 23:2BLEU, the modelusing improved semantic hashing reached 24:1BLEU, and the model using VQ-VAE reached26:4BLEU on WMT’14 English-German.7Under review as a conference paper at ICLR 2019Model ncnsBLEU Latency SpeedupAutoregressive Model (beam size=4) --28.1 331ms 1Autoregressive Baseline (no beam-search) --27.0265 ms 1:25NAT + distillation --17.7 39 ms 15:6*NAT + distillation + NPD=10 --18.7 79 ms 7:68*NAT + distillation + NPD=100 --19.2257 ms 2:36*LT + Semhash --19.8105 ms 3:15Our ResultsVQ-VAE 3-21.4 81 ms 4:08VQ-VAE with EM 3522.4 81 ms 4:08VQ-VAE + distillation 3-26.4 81 ms 4:08VQ-VAE with EM + distillation 310 26.7 81 ms 4:08VQ-VAE with EM + distillation 41025.4 58 ms 5:71Table 1: BLEU score and decoding times for different models on the WMT’14 English-German translation dataset. The baseline is the autoregressive Transformer of Vaswaniet al. (2017) with no beam search, NAT denotes the Non-Autoregressive Transformerof Gu et al. (2017), and LT + Semhash denotes the Latent Transformer from van denOord et al. (2017) using the improved semantic hashing discretization technique ofKaiser & Bengio (2018). NPD refers to noisy parallel decoding as described in Gu et al.(2017). We use the notation ncto denote the compression factor for the latents, andthe notation nsto denote the number of samples used to perform the Monte-Carloapproximation of the EM algorithm. Distillation refers to sequence level knowledgedistillation from Hinton et al. (2015); Kim & Rush (2016). We used a code-book ofsize212for VQ-VAE (for with and without EM) with a hidden dimension of size 512.Decoding is performed on a single CPU machine with an NVIDIA GeForce GTX 1080with a batch size of 1*Speedup reported for these items are compared to the decode time of 408ms for anautoregressive Transformer from Gu et al. (2017).8Under review as a conference paper at ICLR 20196 ConclusionWe investigate an alternate training technique for VQ-VAE inspired by its connectionto the EM algorithm. Training the discrete autoencoder with EM and combining itwith sequence level knowledge distillation, allows us to develop a non-autoregressivemachine translation model whose accuracy almost matches a greedy autoregressivebaseline, while being 3:3times faster at inference. While sequence distillation is veryimportant for training our best model, we find that the improvements from EM onharder tasks is quite significant. We hope that our results will inspire further re-search on using vector quantization for fast decoding of autoregressive sequence models.ReferencesDzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translationby jointly learning to align and translate. CoRR, abs/1409.0473, 2014. URL http://arxiv.org/abs/1409.0473 . 6Leon Bottou and Yoshua Bengio. Convergence properties of the k-means algorithms. InAdvances in neural information processing systems , pp. 585–592, 1995. 2, 4Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk,and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder forstatistical machine translation. CoRR, abs/1406.1078, 2014. URL http://arxiv.org/abs/1406.1078 . 6ArthurPDempster, NanMLaird, andDonaldBRubin. Maximumlikelihoodfromincompletedata via the em algorithm. Journal of the royal statistical society. Series B (methodological) ,pp. 1–38, 1977. 
2, 3Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, SherjilOzair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances inneural information processing systems , pp. 2672–2680, 2014. 1Will Grathwohl, Dami Choi, Yuhuai Wu, Geoff Roeder, and David Duvenaud. Backpropa-gation through the void: Optimizing control variates for black-box gradient estimation.arXiv preprint arXiv:1711.00123 , 2017. 5Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. Non-autoregressive neural machine translation. CoRR, abs/1711.02281, 2017. URL http://arxiv.org/abs/1711.02281 . 6, 7, 8Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network.arXiv preprint arXiv:1503.02531 , 2015. 2, 7, 8, 16Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax.CoRR, abs/1611.01144, 2016. URL http://arxiv.org/abs/1611.01144 . 5, 7Łukasz Kaiser and Samy Bengio. Discrete autoencoders for sequence models. CoRR,abs/1801.09797, 2018. URL http://arxiv.org/abs/1801.09797 . 8Łukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Pamar, Samy Bengio, Jakob Uszkoreit, andNoam Shazeer. Fast decoding in sequence models using discrete latent variables. arXivpreprint arXiv:1803.03382 , 2018. 1, 2, 3, 6, 7, 14, 15Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, AlexGraves, and Koray Kavukcuoglu. Video pixel networks. arXiv preprint arXiv:1610.00527 ,2016. 1Yoon Kim and Alexander Rush. Sequence-level knowledge distillation. 2016. 2, 7, 8, 16D. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014. 59Under review as a conference paper at ICLR 2019Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXivpreprint arXiv:1412.6980 , 2014. 6Diederik P Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and MaxWelling. Improved variational inference with inverse autoregressive flow. In Advances inNeural Information Processing Systems , pp. 4743–4751, 2016. 1Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato.Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755 ,2018. 16Jason Lee, Elman Mansimov, and Kyunghyun Cho. Deterministic non-autoregressive neuralsequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901 , 2018. 6, 7Percy Liang and Dan Klein. Online em for unsupervised models. In Proceedings of humanlanguage technologies: The 2009 annual conference of the North American chapter ofthe association for computational linguistics , pp. 611–619. Association for ComputationalLinguistics, 2009. 3Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. arXiv preprintarXiv:1711.05101 , 2017. 14James MacQueen et al. Some methods for classification and analysis of multivariate obser-vations. In Proceedings of the fifth Berkeley symposium on mathematical statistics andprobability , volume 1, pp. 281–297. Oakland, CA, USA, 1967. 4Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: Acontinuous relaxation of discrete random variables. CoRR, abs/1611.00712, 2016. URLhttp://arxiv.org/abs/1611.00712 . 6, 7Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng.Reading digits in natural images with unsupervised feature learning. In NIPS workshopon deep learning and unsupervised feature learning , volume 2011, pp. 5, 2011. 12, 15Mohammad Norouzi and David J Fleet. Cartesian k-means. 
In Computer Vision and PatternRecognition (CVPR), 2013 IEEE Conference on , pp. 3017–3024. IEEE, 2013. 2Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and AlexanderKu. Image transformer. arXiv, 2018. URL https://arxiv.org/abs/1802.05751 . 1Gabriel Pereyra, George Tucker, Jan Chorowski, Łukasz Kaiser, and Geoffrey Hinton.Regularizing neural networks by penalizing confident output distributions. arXiv preprintarXiv:1701.06548 , 2017. 5Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Sergio Gómez Colmenarejo, Ziyu Wang,Dan Belov, and Nando de Freitas. Parallel multiscale autoregressive density estimation.arXiv preprint arXiv:1703.03664 , 2017. 1Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improvingthe pixelcnn with discretized logistic mixture likelihood and other modifications. arXivpreprint arXiv:1701.05517 , 2017. 1Masa-Aki Sato and Shin Ishii. On-line em algorithm for the normalized gaussian network.Neural computation , 12(2):407–432, 2000. 3David Sculley. Web-scale k-means clustering. In Proceedings of the 19th internationalconference on World wide web , pp. 1177–1178. ACM, 2010. 3Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neuralnetworks. In Advances in Neural Information Processing Systems , pp. 3104–3112, 2014.URL http://arxiv.org/abs/1409.3215 . 6Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compressionwith compressive autoencoders. arXiv preprint arXiv:1703.00395 , 2017. 110Under review as a conference paper at ICLR 2019George Tucker, Andriy Mnih, Chris J Maddison, John Lawson, and Jascha Sohl-Dickstein.Rebar: Low-variance, unbiased gradient estimates for discrete latent variable models. InAdvances in Neural Information Processing Systems , pp. 2624–2633, 2017. 5Aaron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, AlexGraves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generativemodel for raw audio. arXiv preprint arXiv:1609.03499 , 2016. 1Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al.Conditional image generation with pixelcnn decoders. In Advances in Neural InformationProcessing Systems , pp. 4790–4798, 2016. 1Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representationlearning. CoRR, abs/1711.00937, 2017. URL http://arxiv.org/abs/1711.00937 . 1, 2,3, 8, 12, 13, 14, 15Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, 2017. URLhttp://arxiv.org/abs/1706.03762 . 6, 7, 8, 16Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: Aneural image caption generator. In Computer Vision and Pattern Recognition (CVPR),2015 IEEE Conference on , pp. 3156–3164. IEEE, 2015. 1Greg CG Wei and Martin A Tanner. A monte carlo implementation of the em algorithmand the poor man’s data augmentation algorithms. Journal of the American statisticalAssociation , 85(411):699–704, 1990. 5Ronald J Williams. Simple statistical gradient-following algorithms for connectionist rein-forcement learning. In Reinforcement Learning , pp. 5–32. Springer, 1992. 5, 611Under review as a conference paper at ICLR 2019A Image ReconstructionFigure 2: VQ-VAE model as described in van den Oord et al. 
Figure 2: VQ-VAE model as described in van den Oord et al. (2017) for image reconstruction. We use the notation $x$ to denote the input image, with the output of the encoder $z_e(x) \in \mathbb{R}^D$ being used to perform nearest neighbor search to select the (sequence of) discrete latent variables. The selected discrete latent is used to train the latent predictor model, while the embedding $z_q(x)$ of the selected discrete latent is passed as input to the decoder.

Figure 3: Samples of original and reconstructed images from CIFAR-10 using EM trained with a code-book of size $2^8$.

In this section we report additional experiments we performed using VQ-VAE and EM for the task of image reconstruction. We train a discrete autoencoder with VQ-VAE (van den Oord et al., 2017) and EM on the CIFAR-10 dataset, modeling the joint probability $P(x, z)$, where $x$ is the image and $z$ are the discrete latent codes. We use a field of $8 \times 8 \times 10$ latents with a code-book of size $2^8$, each containing 512 dimensions. We maintain the same encoder and decoder as used in Machine Translation. For the encoder, we use 4 convolutional layers with kernel size $5 \times 5$ and strides $2 \times 2$, followed by 2 residual layers and a single dense layer. For the decoder, we use a single dense layer, 2 residual layers, and 4 deconvolutional layers (a sketch of this architecture is given at the end of this section). Figure 3 shows that our reconstructions are on par with hard EM training.

We also train discrete autoencoders on the SVHN dataset (Netzer et al., 2011), with both VQ-VAE (van den Oord et al., 2017) and EM. The autoencoder is similar to our CIFAR-10 model, where each $n_x = 32 \times 32 \times 3$ image is encoded into 640 discrete latents from a shared codebook of size 256. By contrasting the reconstructions from several training runs for VQ-VAE (left) and EM (right), we find that training with EM is more reliable and the reconstructions are of high quality (Figure 4).

Figure 4: On the left are reconstructions from a model trained with VQ-VAE (van den Oord et al., 2017), and the right figure shows reconstructions from EM training, our approach.
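The sketch referenced above is given here: a minimal PyTorch rendering of the convolutional encoder and decoder described in this section (4 conv layers with 5×5 kernels and stride 2, two residual blocks, and a dense layer, mirrored in the decoder). The channel width, the ReLU activations, the exact residual block, and the collapse of the 8×8 latent field into a single vector are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """A simple residual block; the paper does not specify the exact block used."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReLU(), nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(), nn.Conv2d(ch, ch, 1),
        )
    def forward(self, x):
        return x + self.body(x)

class Encoder(nn.Module):
    """4 conv layers (5x5 kernels, stride 2) -> 2 residual blocks -> dense layer."""
    def __init__(self, in_ch=3, ch=64, latent_dim=512):
        super().__init__()
        layers = []
        for i in range(4):
            layers += [nn.Conv2d(in_ch if i == 0 else ch, ch, 5, stride=2, padding=2), nn.ReLU()]
        layers += [ResBlock(ch), ResBlock(ch)]
        self.convs = nn.Sequential(*layers)
        self.fc = nn.Linear(ch * 2 * 2, latent_dim)  # a 32x32 input ends up as a 2x2 map
    def forward(self, x):
        return self.fc(self.convs(x).flatten(1))  # z_e(x), matched against the codebook

class Decoder(nn.Module):
    """Mirror image: dense layer -> 2 residual blocks -> 4 transposed conv layers."""
    def __init__(self, out_ch=3, ch=64, latent_dim=512):
        super().__init__()
        self.ch = ch
        self.fc = nn.Linear(latent_dim, ch * 2 * 2)
        body = [ResBlock(ch), ResBlock(ch)]
        for i in range(4):
            last = (i == 3)
            body += [nn.ConvTranspose2d(ch, out_ch if last else ch, 5, stride=2,
                                        padding=2, output_padding=1)]
            if not last:
                body += [nn.ReLU()]
        self.deconvs = nn.Sequential(*body)
    def forward(self, z_q):
        return self.deconvs(self.fc(z_q).view(-1, self.ch, 2, 2))

# quick shape check: 4 CIFAR-sized images in, reconstructions out
x = torch.randn(4, 3, 32, 32)
x_hat = Decoder()(Encoder()(x))  # -> (4, 3, 32, 32)
```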
B Ablation Tables

Model | Code-book size | BLEU
VQ-VAE | $2^{10}$ | 20.8
VQ-VAE | $2^{12}$ | 21.6
VQ-VAE | $2^{14}$ | 21.0
VQ-VAE | $2^{16}$ | 21.8

Table 2: Results showing the impact of the discrete vocabulary on the BLEU score for the WMT'14 English-German dataset. The hidden dimension is 512 for all runs.

Model | $n_c$ | $n_s$ | BLEU | Latency | Speedup
VQ-VAE with EM + distillation | 3 | 1 | 25.8 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 5 | 26.4 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 10 | 26.7 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 25 | 26.6 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 50 | 26.5 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 3 | 100 | 25.8 | 81 ms | 4.08×
VQ-VAE with EM + distillation | 4 | 1 | 24.1 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 5 | 24.7 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 10 | 25.4 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 25 | 25.1 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 50 | 23.6 | 58 ms | 5.71×
VQ-VAE with EM + distillation | 4 | 100 | 24.8 | 58 ms | 5.71×

Table 3: Results showing the impact of the number of samples used to perform the Monte-Carlo EM update on the BLEU score for the WMT'14 English-German dataset. The codebook size for all runs in this table is $2^{12} \times 512$.

Model | Hidden dimension | $n_s$ | BLEU | Latency | Speedup
VQ-VAE + distillation | 256 | – | 24.5 | 76 ms | 4.36×
VQ-VAE with EM + distillation | 256 | 10 | 21.9 | 76 ms | 4.36×
VQ-VAE with EM + distillation | 256 | 25 | 25.8 | 76 ms | 4.36×
VQ-VAE + distillation | 384 | – | 25.6 | 80 ms | 4.14×
VQ-VAE with EM + distillation | 384 | 10 | 22.2 | 80 ms | 4.14×
VQ-VAE with EM + distillation | 384 | 25 | 26.2 | 80 ms | 4.14×

Table 4: Results showing the impact of the dimension of the word embeddings and the hidden layers of the model on the BLEU score for the WMT'14 English-German dataset with a discrete vocabulary of size $2^{12}$.

C Additional Analysis

Gradient based update vs EMA update of code-book: The original VQ-VAE paper (van den Oord et al., 2017) proposed a gradient based update rule for learning the code-book, where the code-book entries are trained by minimizing $\|\mathrm{sg}(z_e(x)) - z_q(x)\|_2^2$. However, it was found in Kaiser et al. (2018) that the EMA update worked better than this gradient based loss. Note that if the gradient based loss was minimized using SGD, then the update rule for the embeddings is

$$e_j \leftarrow (1-\eta)\, e_j + \eta\, \frac{\sum_i \mathbb{1}[z_q(x_i) = e_j]\, z_e(x_i)}{\sum_i \mathbb{1}[z_q(x_i) = e_j]}, \qquad (15)$$

for a learning rate $\eta$. This is quite similar to the EMA update rule of Equation 5, with the only difference being that the latter also maintains an EMA over the counts $c_j$. When using SGD with momentum or Adam, the update rule becomes quite different however, since we now take the moving average of the gradient term itself, before subtracting it from the current value of the embedding $e_j$. This is similar to the issue of using weight decay with Adam, where using the $\ell_2$ penalty in the loss function results in worse performance (Loshchilov & Hutter, 2017).
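To make the contrast between the two update rules concrete, here is a small NumPy sketch of the SGD-style update of Equation 15 next to an EMA update that additionally tracks per-entry counts, as discussed above. Variable names, the step size, and the decay constant are illustrative assumptions.

```python
import numpy as np

def sgd_codebook_update(E, z_e, assign, lr=0.1):
    """Equation 15: move each used codebook entry e_j toward the mean of the
    encoder outputs currently assigned to it, with step size lr."""
    for j in range(E.shape[0]):
        mask = assign == j
        if mask.any():
            E[j] = (1 - lr) * E[j] + lr * z_e[mask].mean(axis=0)
    return E

def ema_codebook_update(E, counts, sums, z_e, assign, decay=0.999):
    """EMA variant: also keep exponential moving averages of the per-entry
    counts c_j and of the summed encoder outputs, then divide."""
    for j in range(E.shape[0]):
        mask = assign == j
        counts[j] = decay * counts[j] + (1 - decay) * mask.sum()
        sums[j] = decay * sums[j] + (1 - decay) * z_e[mask].sum(axis=0)
        if counts[j] > 1e-8:
            E[j] = sums[j] / counts[j]
    return E, counts, sums

# toy usage: 128 encoder outputs of dim 16, codebook with 8 entries
z_e = np.random.randn(128, 16)
E = np.random.randn(8, 16)
assign = np.argmin(((z_e[:, None] - E[None]) ** 2).sum(-1), axis=1)  # nearest entry
E = sgd_codebook_update(E, z_e, assign)
```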
Model Size: The effect of model size on the BLEU score for models trained with EM and distillation is shown in Table 4.

Robustness of EM to Hyperparameters: While EM training gives a small performance improvement, we find that it also leads to more robust training for machine translation. Our experiments on image reconstruction on SVHN (Netzer et al., 2011) in Section A also highlight the robustness of EM training. The training approach from van den Oord et al. (2017) exhibits high variance on reconstruction quality, while EM is much more stable, resulting in good reconstructions in almost all training runs.

Figure 5: Comparison of VQ-VAE (green curve) vs EM with different numbers of samples (yellow and blue curves) on the WMT'14 English-German translation dataset with a code-book size of $2^{14}$, with the encoder of the discrete autoencoder attending to the output of the encoder of the source sentence as in Kaiser et al. (2018). The y-axis denotes the teacher-forced BLEU score on the test set, which is used only for evaluation while training. Notice that the VQ-VAE run collapsed (green curve), while the EM runs (yellow and blue curves) exhibit more stability.

Emergence of EOS/PAD latent: We observe that all the latent sentences for a specific experiment with VQ-VAE or EM end with a fixed latent indicating the end of the sequence. Since we always fix the length of the latent sentence to be $2^{n_c}$ times smaller than the true sentence, the model learns to pad the remainder of the latent sequence with this special code (see Table 5 for examples). Note that one can speed up decoding even further by stopping the Latent Predictor (LP) model as soon as it outputs this special code.

7 89 517 3773 760 760 760 760
607 1901 1901 3051 760 760 760 760
2388 15 850 2590 760 760 760 760
670 127 17 3773 760 760 760 760
2335 26 129 2986 760 760 760 760
10 45 1755 766 760 760 760 760
3773 1082 13 91 760 760 760 760
1790 38 270 554 760 760 760 760
2951 2015 91 2418 760 760 760 760
2951 27 760 760 760 760 760 760
463 201 3410 3051 760 760 760 760

Table 5: Example latent codes for sentences from the WMT'14 English-German dataset highlighting the emergence of the EOS/PAD latent (760 in this case).

Denoising autoencoder: We also use word dropout with a dropout rate of 0.3 and word permutation with a shuffle rate of 0.5 as in Lample et al. (2018). On WMT English-German we did not notice any improvement from using these regularization techniques, but on the larger WMT English-French dataset, we observe that using a denoising autoencoder significantly improves performance, with a gain of 1.0 BLEU on VQ-VAE and 0.9 BLEU over EM (see Table 6).

Additional analysis on latents: In order to compute correlations between the discrete latents and n-grams in the original text, we computed Point-wise Mutual Information (PMI) and tf-idf scores where the latents are treated as documents. However, we were unable to see any semantic patterns that stood out in this analysis.

D Preliminary Results on English French

In this section we report preliminary results on the WMT English-French dataset without using knowledge distillation from an autoregressive teacher (Hinton et al., 2015; Kim & Rush, 2016). We use a Transformer base model from Vaswani et al. (2017). Our best non-autoregressive base model trained on non-distilled targets gets 30.0 BLEU compared to the autoregressive base model with the same choice of hyperparameters, which gets 33.3 BLEU (see Table 6). As in the case of English-German, we anticipate that using knowledge distillation (Hinton et al., 2015) will likely close this gap.

Model | $n_c$ | $n_s$ | Code-book size | BLEU | Latency | Speedup
Autoregressive Baseline | – | – | – | 33.3 | 771 ms | 1.00×
VQ-VAE | 3 | – | $2^{12}$ | 29.0 | 215 ms | 3.58×
VQ-VAE with EM | 3 | 10 | $2^{12}$ | 29.2 | 215 ms | 3.58×
VQ-VAE with reg. | 3 | – | $2^{12}$ | 30.0 | 215 ms | 3.58×
VQ-VAE with EM, reg. | 3 | 10 | $2^{12}$ | 29.9 | 215 ms | 3.58×
VQ-VAE with reg. | 3 | – | $2^{14}$ | 29.0 | 228 ms | 3.38×
VQ-VAE with EM, reg. | 3 | 10 | $2^{14}$ | 29.5 | 228 ms | 3.38×

Table 6: BLEU score and decoding times for different models on the WMT'13 English-French translation dataset. The baseline is the autoregressive Transformer of Vaswani et al. (2017) with no beam search. We use the notation $n_c$ to denote the compression factor for the latents, and the notation $n_s$ to denote the number of samples used to perform the Monte-Carlo approximation of the EM algorithm. Reg. refers to word dropout with rate 0.3 and word permutation with shuffle rate 0.5 as described in Section C. The hidden dimension of the codebook is 512. Decoding is performed on a single CPU machine with an NVIDIA GeForce GTX 1080 with a batch size of 1.<|im_end|>
<|im_start|>assistant
### Review Title
Training procedure for VQ-VAE is equivalent to the EM algorithm
### Review Text
General: The paper presents an alternative view on the training procedure for the VQ-VAE. The authors have noticed that there is a close connection between the original training algorithm and the well-known EM algorithm. They then propose to use the soft EM algorithm.
In the experiments the authors showed that soft EM allows one to obtain significantly better results than the standard learning procedure on both image and text datasets. In general, the paper shows a neat link between the well-known EM algorithm and the learning method for the VQ-VAE. I like the manner in which the idea is presented. Additionally, the results are convincing. I believe that the paper will be interesting for the ICLR audience.
Pros:
+ The connection between the EM algorithm and the training procedure for the VQ-VAE is neat.
+ The paper is very well written; all concepts are clear and properly outlined.
+ The experiments are properly performed and all results are convincing.
Cons:
- The paper is rather incremental, however still interesting.
- The quality of Figures 1, 2 and 3 (especially Figure 3) is unacceptable.
- There is a typo in Table 6 (row 5: V-VAE → VQ-VAE).
- I miss two references in the related work on training with discrete variables: REBAR (Tucker et al., 2017) and RELAX (Grathwohl et al., 2018).
- The paper style is not compliant with the ICLR style.
--REVISION-- I would like to thank the authors for their effort to improve the quality of the images. In my opinion the paper is nice and I sustain my initial score.
### Review Rating
7: Good paper, accept
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
D1E1h-K3jso
ICLR.cc/2021/Conference
2021
Learning from Noisy Data with Robust Representation Learning
["Junnan Li", "Caiming Xiong", "Steven Hoi"]
Learning from noisy data has attracted much attention, where most methods focus on label noise. In this work, we propose a new framework which simultaneously addresses three types of noise commonly seen in real-world data: label noise, out-of-distribution input, and input corruption. In contrast to most existing methods, we combat noise by learning robust representation. Specifically, we embed images into a low-dimensional subspace by training an autoencoder on the deep features. We regularize the geometric structure of the subspace with robust contrastive learning, which includes an unsupervised consistency loss and a supervised mixup prototypical loss. Furthermore, we leverage the structure of the learned subspace for noise cleaning, by aggregating information from neighboring samples. Experiments on multiple benchmarks demonstrate state-of-the-art performance of our method and robustness of the learned representation. Our code will be released.
["label noise", "out-of-distribution noise", "contrastive learning"]
ABSTRACT
Learning from noisy data has attracted much attention, where most methods focus on label noise. In this work, we propose a new framework which simultaneously addresses three types of noise commonly seen in real-world data: label noise, out-of-distribution input, and input corruption. In contrast to most existing methods, we combat noise by learning robust representation. Specifically, we embed images into a low-dimensional subspace by training an autoencoder on the deep features. We regularize the geometric structure of the subspace with robust contrastive learning, which includes an unsupervised consistency loss and a supervised mixup prototypical loss. Furthermore, we leverage the structure of the learned subspace for noise cleaning, by aggregating information from neighboring samples. Experiments on multiple benchmarks demonstrate state-of-the-art performance of our method and robustness of the learned representation. Our code will be released¹.

1 INTRODUCTION
Data in real life is noisy. However, deep models with remarkable performance are mostly trained on clean datasets with high-quality human annotations. Manual data cleaning and labeling is an expensive process that is difficult to scale. On the other hand, there exists an almost infinite amount of noisy data online. It is crucial that deep neural networks (DNNs) can harvest noisy training data. However, it has been shown that DNNs are susceptible to overfitting to noise (Zhang et al., 2017).

As shown in Figure 1, a real-world noisy image dataset often consists of multiple types of noise. Label noise refers to samples that are wrongly labeled as another class (e.g. flower labeled as orange). Out-of-distribution input refers to samples that do not belong to any known classes. Input corruption refers to image-level distortion (e.g. low brightness) that causes data shift between training and test.

Most of the methods in the literature focus on addressing the more detrimental label noise. Two dominant approaches include: (1) find clean samples as those with smaller loss and assign larger weights to them (Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019; Arazo et al., 2019); (2) relabel noisy samples using the model's predictions (Reed et al., 2015; Ma et al., 2018; Tanaka et al., 2018; Yi & Wu, 2019). The recently proposed DivideMix (Li et al., 2020a) integrates both approaches in a co-training framework, but it also increases computation cost. Previous methods that focus on addressing label noise do not consider out-of-distribution input or input corruption, which limits their performance in real-world scenarios. Furthermore, using a model's own prediction to relabel samples could cause confirmation bias, where the prediction error accumulates and harms performance.

We propose a new direction for effective learning from noisy data. Our method embeds images into noise-robust low-dimensional representations, and regularizes the geometric structure of the representations with contrastive learning. Specifically, our algorithmic contributions include:
• We propose noise-robust contrastive learning, which introduces two contrastive losses. The first is an unsupervised consistency contrastive loss. It enforces inputs with perturbations to have similar normalized embeddings, which helps learn robust and discriminative representation.
• Our second contrastive loss is a weakly-supervised mixup prototypical loss.
We compute class prototypes as normalized mean embeddings, and enforce each sample's embedding to be closer to its class prototype. Inspired by Mixup (Zhang et al., 2018), we construct virtual training samples as linear interpolation of inputs, and encourage the same linear relationship w.r.t the class prototypes.
• We train a linear autoencoder to reconstruct the high-dimensional features using low-dimensional embeddings. The autoencoder enables the high-dimensional features to maximally preserve the robustness of the low-dimensional embeddings, thus regularizing the classifier.
• We propose a new noise cleaning method which exploits the structure of the learned representations. For each sample, we aggregate information from its top-k neighbors to create a pseudo-label. A subset of training samples with confident pseudo-labels is selected to compute the weakly-supervised losses. This process can effectively clean both label noise and out-of-distribution (OOD) noise.

¹Code is in the supplementary material

Figure 1: Google search images from WebVision (Li et al., 2017) dataset with keyword "orange". [Figure panels show examples of input corruption, label noise, and out-of-distribution input.]

Our experimental contributions include:
• We experimentally show that our method is robust to label noise, OOD input, and input corruption. Experiments are performed on multiple datasets with controlled noise and real-world noise, where our method achieves state-of-the-art performance.
• We demonstrate that the proposed noise cleaning method can effectively clean a majority of label noise. It also learns a curriculum that gradually leverages more samples to compute the weakly-supervised losses as the pseudo-labels become more accurate.
• We validate the robustness of the learned low-dimensional representation by showing that (1) k-nearest neighbor classification outperforms the softmax classifier, and (2) OOD samples can be separated from in-distribution samples. The efficacy of the proposed autoencoder is also verified.

2 RELATED WORK
Label noise learning. Learning from noisy labels has been extensively studied in the literature. While some methods require access to a small set of clean samples (Xiao et al., 2015; Vahdat, 2017; Veit et al., 2017; Lee et al., 2018; Hendrycks et al., 2018), most methods focus on the more challenging scenario where no clean labels are available. These methods can be categorized into two major types. The first type performs label correction using predictions from the network (Reed et al., 2015; Ma et al., 2018; Tanaka et al., 2018; Yi & Wu, 2019). The second type tries to separate clean samples from corrupted samples, and trains the model on clean samples (Han et al., 2018; Arazo et al., 2019; Jiang et al., 2018; 2020; Wang et al., 2018; Chen et al., 2019; Lyu & Tsang, 2020). The recently proposed DivideMix (Li et al., 2020a) effectively combines label correction and sample selection with the Mixup (Zhang et al., 2018) data augmentation under a co-training framework. However, it costs 2× the computational resource of our method.

Different from existing methods, our method combats noise by learning noise-robust low-dimensional representations. We propose a more effective noise cleaning method by leveraging the structure of the learned representations. Furthermore, our model is robust not only to label noise, but also to out-of-distribution and corrupted input. A previous work has studied open-set noisy labels (Wang et al., 2018), but their method does not enjoy the same level of robustness as ours.

Contrastive learning.
Contrastive learning is at the core of recent self-supervised representation learning methods (Chen et al., 2020; He et al., 2019; Oord et al., 2018; Wu et al., 2018). In self-supervised contrastive learning, two randomly augmented images are generated for each input image. Then a contrastive loss is applied to pull embeddings from the same source image closer, while pushing embeddings from different source images apart. Recently, prototypical contrastive learning (PCL) (Li et al., 2020b) has been proposed, which uses cluster centroids as prototypes, and trains the network by pulling an image embedding closer to its assigned prototypes.

Figure 2: Our proposed framework for noise-robust contrastive learning. We project images into a low-dimensional subspace, and regularize the geometric structure of the subspace with (1) $\mathcal{L}_{cc}$: a consistency contrastive loss which enforces images with perturbations to have similar embeddings; (2) $\mathcal{L}_{pc\_mix}$: a prototypical contrastive loss augmented with mixup, which encourages the embedding for a linearly-interpolated input to have the same linear relationship w.r.t the class prototypes. The low-dimensional embeddings are also trained to reconstruct the high-dimensional features, which preserves the learned information and regularizes the classifier.

Different from previous methods, our method performs contrastive learning in the principal subspace of the high-dimensional feature space, by training a linear autoencoder. Furthermore, our supervised contrastive loss improves PCL (Li et al., 2020b) with Mixup (Zhang et al., 2018). Different from the original Mixup, where learning happens at the classification layer, our learning takes place in the low-dimensional subspace.

3 METHOD
Given a noisy training dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, where $x_i$ is an image and $y_i \in \{1, \ldots, C\}$ is its class label, we aim to train a network that is robust to the noise in training data (i.e. label noise, OOD input, input corruption) and achieves high accuracy on a clean test set. The proposed network consists of three components: (1) a deep encoder (a convolutional neural network) that encodes an image $x_i$ to a high-dimensional feature $v_i$; (2) a classifier (a fully-connected layer followed by softmax) that receives $v_i$ as input and outputs class predictions; (3) a linear autoencoder that projects $v_i$ into a low-dimensional embedding $z_i \in \mathbb{R}^d$. We show an illustration of our method in Figure 2, and a pseudo-code in Appendix B. Next, we delineate its details.
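For concreteness, the three components just listed can be sketched in PyTorch as follows. The ResNet-18 backbone matches the CIFAR experiments in Section 4.1, but the feature dimension and wiring here are illustrative assumptions rather than the authors' released code.

```python
import torch.nn as nn
import torchvision

class RobustNet(nn.Module):
    """Deep encoder + softmax classifier + linear autoencoder (bottleneck dim d)."""
    def __init__(self, feat_dim=512, num_classes=10, d=50):
        super().__init__()
        # backbone outputs the high-dimensional feature v_i (dimension assumed)
        self.encoder = torchvision.models.resnet18(num_classes=feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)   # softmax classifier on v_i
        self.proj = nn.Linear(feat_dim, d, bias=False)       # W_e: v_i -> z_i (low-D)
        self.decoder = nn.Linear(d, feat_dim, bias=False)    # W_d: z_i -> reconstructed v_i

    def forward(self, x):
        v = self.encoder(x)
        logits = self.classifier(v)
        z = self.proj(v)
        v_rec = self.decoder(z)
        return v, logits, z, v_rec
```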
3.1 CONTRASTIVE LEARNING IN ROBUST LOW-DIMENSIONAL SUBSPACE
Let $z_i = W_e v_i$ be the linear projection from high-dimensional features to low-dimensional embeddings, and $\hat{z}_i = z_i / \|z_i\|_2$ be the normalized embeddings. We aim to learn robust embeddings with two contrastive losses: an unsupervised consistency loss and a weakly-supervised mixup prototypical loss.

Unsupervised consistency contrastive loss. Following the NT-Xent (Chen et al., 2020) loss for self-supervised representation learning, our consistency contrastive loss enforces images with semantic-preserving perturbations to have similar embeddings. Specifically, given a minibatch of $b$ images, we apply weak augmentation and strong augmentation to each image, and obtain $2b$ inputs $\{x_i\}_{i=1}^{2b}$. Weak augmentation is a standard flip-and-shift augmentation strategy, while strong augmentation consists of color and brightness changes with details given in Section 4.1.

We project the inputs into the low-dimensional space to obtain their normalized embeddings $\{\hat{z}_i\}_{i=1}^{2b}$. Let $i \in \{1, \ldots, b\}$ be the index of a weakly-augmented input, and $j(i)$ be the index of the strongly-augmented input from the same source image. The consistency contrastive loss is defined as:

$$\mathcal{L}_{cc} = -\sum_{i=1}^{b} \log \frac{\exp(\hat{z}_i \cdot \hat{z}_{j(i)} / \tau)}{\sum_{k=1}^{2b} \mathbb{1}_{i \neq k} \exp(\hat{z}_i \cdot \hat{z}_k / \tau)}, \qquad (1)$$

where $\tau$ is a scalar temperature parameter. The consistency contrastive loss maximizes the inner product between the pair of positive embeddings $\hat{z}_i$ and $\hat{z}_{j(i)}$, while minimizing the inner product between $2(b-1)$ pairs of negative embeddings. By mapping different views (augmentations) of the same image to neighboring embeddings, the consistency contrastive loss encourages the network to learn discriminative representation that is robust to low-level image corruption.

Weakly-supervised mixup prototypical contrastive loss. Our second contrastive loss injects structural knowledge of classes into the embedding space. Let $I_c$ denote indices for the subset of images in $D$ labeled with class $c$. We calculate the class prototype as the normalized mean embedding:

$$z^c = \frac{1}{|I_c|} \sum_{i \in I_c} \hat{z}_i, \qquad \hat{z}^c = \frac{z^c}{\|z^c\|_2}, \qquad (2)$$

where $\hat{z}_i$ is the embedding of a center-cropped image, and the class prototypes are calculated at the beginning of each epoch.

The prototypical contrastive loss enforces an image embedding $\hat{z}_i$ to be more similar to its corresponding class prototype $\hat{z}^{y_i}$, in contrast to other class prototypes:

$$\mathcal{L}_{pc}(\hat{z}_i, y_i) = -\log \frac{\exp(\hat{z}_i \cdot \hat{z}^{y_i} / \tau)}{\sum_{c=1}^{C} \exp(\hat{z}_i \cdot \hat{z}^{c} / \tau)}. \qquad (3)$$

Since the label $y_i$ is noisy, we would like to regularize the encoder from memorizing training labels. Mixup (Zhang et al., 2018) has been shown to be an effective method against label noise (Arazo et al., 2019; Li et al., 2020a). Inspired by it, we create virtual training samples by linearly interpolating a sample (indexed by $i$) with another sample (indexed by $m(i)$) randomly chosen from the same minibatch:

$$x_i^m = \lambda x_i + (1 - \lambda) x_{m(i)}, \qquad (4)$$

where $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$.

Let $\hat{z}_i^m$ be the normalized embedding for $x_i^m$. The mixup version of the prototypical contrastive loss is defined as a weighted combination of the two $\mathcal{L}_{pc}$ w.r.t classes $y_i$ and $y_{m(i)}$. It enforces the embedding for the interpolated input to have the same linear relationship w.r.t. the class prototypes:

$$\mathcal{L}_{pc\_mix} = \sum_{i=1}^{2b} \lambda \mathcal{L}_{pc}(\hat{z}_i^m, y_i) + (1 - \lambda) \mathcal{L}_{pc}(\hat{z}_i^m, y_{m(i)}). \qquad (5)$$

Reconstruction loss. We also train a linear decoder $W_d$ to reconstruct the high-dimensional feature $v_i$ based on $z_i$. The reconstruction loss is defined as:

$$\mathcal{L}_{recon} = \sum_{i=1}^{2b} \|v_i - W_d z_i\|_2^2. \qquad (6)$$

There are several benefits of training the autoencoder. First, with an optimal linear autoencoder, $W_e$ will project $v_i$ into its low-dimensional principal subspace and can be understood as applying PCA (Baldi & Hornik, 1989). Thus the low-dimensional representation $z_i$ is intrinsically robust to input noise. Second, minimizing the reconstruction error is maximizing a lower bound of the mutual information between $v_i$ and $z_i$ (Vincent et al., 2010). Therefore, knowledge learned from the proposed contrastive losses can be maximally preserved in the high-dimensional representation, which helps regularize the classifier.
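Below is a minimal PyTorch sketch of the losses in Equations (1), (3)–(6), reconstructed directly from the formulas above. Tensor shapes, names, and the encoder interface are assumptions for readability, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(z_weak, z_strong, tau=0.3):
    """Eq. (1): anchors are the b weakly-augmented embeddings; the positive for
    anchor i is its strongly-augmented view j(i); the other 2(b-1) embeddings in
    the batch act as negatives. Inputs are L2-normalized, shape (b, d)."""
    b = z_weak.size(0)
    z = torch.cat([z_weak, z_strong], dim=0)                # (2b, d)
    sim = (z_weak @ z.t()) / tau                            # (b, 2b)
    sim[torch.arange(b), torch.arange(b)] = float('-inf')   # drop the self term (k = i)
    target = torch.arange(b, 2 * b, device=z.device)        # positive sits at column i + b
    return F.cross_entropy(sim, target)

def proto_loss(z, labels, prototypes, tau=0.3):
    """Eq. (3): per-sample prototypical loss; prototypes: (C, d), L2-normalized."""
    logits = (z @ prototypes.t()) / tau
    return F.cross_entropy(logits, labels, reduction='none')

def mixup_proto_loss(encoder, x, x_perm, y, y_perm, prototypes, alpha=8.0, tau=0.3):
    """Eqs. (4)-(5): interpolate inputs, then mix the two prototypical losses.
    `encoder` is assumed to map images to low-dimensional embeddings z."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x + (1 - lam) * x_perm
    z_mix = F.normalize(encoder(x_mix), dim=1)
    loss = lam * proto_loss(z_mix, y, prototypes, tau) \
         + (1 - lam) * proto_loss(z_mix, y_perm, prototypes, tau)
    return loss.mean()

def recon_loss(v, z, W_d):
    """Eq. (6): a linear decoder W_d (shape (D, d)) reconstructs features v from z."""
    return ((v - z @ W_d.t()) ** 2).sum(dim=1).mean()
```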
Classification loss. Given the softmax output from the classifier, $p(y; x_i)$, we define the classification loss as the cross-entropy loss. Note that it is only applied to the weakly-augmented inputs:

$$\mathcal{L}_{ce} = -\sum_{i=1}^{b} \log p(y_i; x_i). \qquad (7)$$

Figure 3: Curriculum learned by the proposed label correction method for training on CIFAR datasets with 50% sym. noise. (a) Accuracy of pseudo-labels w.r.t the clean training labels. (b) Number of samples in the weakly-supervised subset $D_{sup}^t$. (c) Label noise ratio in the weakly-supervised subset.

The overall training objective is to minimize a weighted sum of all losses:

$$\mathcal{L} = \mathcal{L}_{ce} + \omega_{cc} \mathcal{L}_{cc} + \omega_{pc} \mathcal{L}_{pc\_mix} + \omega_{recon} \mathcal{L}_{recon}. \qquad (8)$$

For all experiments, we fix $\omega_{cc} = 1$, $\omega_{recon} = 1$, and change $\omega_{pc}$ only across datasets.

3.2 NOISE CLEANING WITH SMOOTH NEIGHBORS
After warming up the model by training with the noisy labels $\{y_i\}_{i=1}^{n}$ for $t_0$ epochs, we aim to clean the noise by generating a soft pseudo-label $q_i$ for each training sample. Different from previous methods that perform label correction purely using the model's softmax prediction, our method exploits the structure of the low-dimensional subspace by aggregating information from top-$k$ neighboring samples, which helps alleviate the confirmation bias problem.

At the $t$-th epoch, for each sample $x_i$, let $p_i^t$ be the classifier's softmax prediction, and let $q_i^{t-1}$ be its soft label from the previous epoch. We calculate the soft label for the current epoch as:

$$q_i^t = \frac{1}{2} p_i^t + \frac{1}{2} \sum_{j=1}^{k} w_{ij}^t q_j^{t-1}, \qquad (9)$$

where $w_{ij}^t$ represents the normalized affinity between a sample and its neighbor and is defined as $w_{ij}^t = \frac{\exp(\hat{z}_i^t \cdot \hat{z}_j^t / \tau)}{\sum_{j'=1}^{k} \exp(\hat{z}_i^t \cdot \hat{z}_{j'}^t / \tau)}$. We set $k = 200$ in all experiments.

The soft label defined by Eqn. (9) is the minimizer of the following quadratic loss function:

$$J(q_i^t) = \sum_{j=1}^{k} w_{ij}^t \left\| q_i^t - q_j^{t-1} \right\|_2^2 + \left\| q_i^t - p_i^t \right\|_2^2. \qquad (10)$$

The first term is a smoothness constraint which encourages the soft label to take a similar value as its neighbors' labels, whereas the second term attempts to maintain the model's class prediction.

We construct a weakly-supervised subset which contains (1) clean samples whose soft label score for the original class $y_i$ is higher than a threshold $\eta_0$, and (2) pseudo-labeled samples whose maximum soft label score exceeds a threshold $\eta_1$. For pseudo-labeled samples, we convert their soft labels into hard labels by taking the class with the maximum score:

$$D_{sup}^t = \{x_i, y_i \mid q_i^t(y_i) > \eta_0\} \cup \{x_i, \hat{y}_i^t = \arg\max_c q_i^t(c) \mid \max_c q_i^t(c) > \eta_1,\ c \in \{1, \ldots, C\}\}. \qquad (11)$$

Given the weakly-supervised subset, we modify the classification loss $\mathcal{L}_{ce}$, the mixup prototypical contrastive loss $\mathcal{L}_{pc\_mix}$, and the calculation of prototypes $\hat{z}^c$, such that they only use samples from $D_{sup}^t$. The unsupervised losses (i.e. $\mathcal{L}_{cc}$ and $\mathcal{L}_{recon}$) still operate on all training samples.
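The noise-cleaning step of Equations (9) and (11) can be sketched as follows. This NumPy version uses a brute-force neighbor search for clarity (the paper's implementation uses faiss for this step, see Section 4.1); variable names and thresholds are assumptions.

```python
import numpy as np

def clean_labels(z_hat, p, q_prev, y, tau=0.3, k=200, eta0=0.1, eta1=0.9):
    """z_hat: (n, d) normalized embeddings; p: (n, C) softmax predictions;
    q_prev: (n, C) soft labels from the previous epoch; y: (n,) noisy labels."""
    sim = z_hat @ z_hat.T                           # cosine similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude the sample itself
    nbr = np.argsort(-sim, axis=1)[:, :k]           # top-k neighbors per sample
    w = np.exp(sim[np.arange(len(sim))[:, None], nbr] / tau)
    w /= w.sum(axis=1, keepdims=True)               # normalized affinities (Eq. 9)
    q = 0.5 * p + 0.5 * (w[:, :, None] * q_prev[nbr]).sum(axis=1)

    # Eq. (11): keep samples whose original label stays confident, plus
    # confidently pseudo-labeled samples relabeled with their argmax class.
    clean = q[np.arange(len(q)), y] > eta0
    pseudo = (q.max(axis=1) > eta1) & ~clean
    labels = np.where(pseudo, q.argmax(axis=1), y)
    keep = clean | pseudo
    return q, keep, labels
```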
Learning curriculum. Our iterative noise cleaning method learns an effective training curriculum, which gradually increases the size of $D_{sup}^t$ as the pseudo-labels become more accurate. To demonstrate such a curriculum, we analyse the noise-cleaning statistics for training our model on the CIFAR-10 and CIFAR-100 datasets with 50% label noise (experimental details explained in the next section). In Figure 3 (a), we show the accuracy of the soft pseudo-labels w.r.t the clean training labels (only used for analysis purposes). Our method can significantly reduce the ratio of label noise from 50% to 5% (for CIFAR-10) and 17% (for CIFAR-100). Figure 3 (b) shows the size of $D_{sup}^t$ as a percentage of the total number of training samples, and Figure 3 (c) shows the effective label noise ratio within the weakly-supervised subset $D_{sup}^t$. Our method maintains a low noise ratio in the weakly-supervised subset, while gradually increasing its size to utilize more samples for the weakly-supervised losses.

Method | CIFAR-10 Sym 20% | CIFAR-10 Sym 50% | CIFAR-10 Asym 40% | CIFAR-100 Sym 20% | CIFAR-100 Sym 50%
Cross-Entropy (Li et al., 2020a) | 82.7 | 57.9 | 72.3 | 61.8 | 37.3
Forward (Patrini et al., 2017) | 83.1 | 59.4 | 83.1 | 61.4 | 37.3
Co-teaching+ (Yu et al., 2019) | 88.2 | 84.1 | – | 64.1 | 45.3
Mixup (Zhang et al., 2018) | 92.3 | 77.6 | – | 66.0 | 46.6
P-correction (Yi & Wu, 2019) | 92.0 | 88.7 | 88.1 | 68.1 | 56.4
MLNT (Li et al., 2019) | 92.0 | 88.8 | 88.6 | 67.7 | 58.0
M-correction (Arazo et al., 2019) | 93.8 | 91.9 | 86.3 | 73.4 | 65.4
DivideMix (Li et al., 2020a) | 95.0 | 93.7 | 91.4 | 74.8 | 72.1
DivideMix (reproduced) | 95.1±0.1 | 93.6±0.2 | 91.3±0.8 | 75.1±0.2 | 72.1±0.3
Ours (classifier) | 95.8±0.1 | 94.3±0.2 | 91.9±0.8 | 79.1±0.1 | 74.8±0.4
Ours (knn) | 95.9±0.1 | 94.5±0.1 | 92.4±0.9 | 79.4±0.1 | 75.0±0.4

Table 1: Comparison with state-of-the-art methods on CIFAR datasets with label noise. Numbers indicate average test accuracy (%) over the last 10 epochs. We report results over 3 independent runs with randomly-generated label noise. Results for previous methods are copied from Arazo et al. (2019); Li et al. (2020a). We re-run DivideMix (without ensemble) using the publicly available code on the same noisy data as ours.

4 EXPERIMENT
In this section, we validate the proposed method on multiple benchmarks with controlled noise and real-world noise. Our method achieves state-of-the-art performance across all benchmarks. For fair comparison, we compare with DivideMix (Li et al., 2020a) without ensemble. In Appendix A, we report the result of our method with co-training and ensemble, which further improves performance.

4.1 EXPERIMENTS ON CONTROLLED NOISY LABELS
Dataset. Following Tanaka et al. (2018); Li et al. (2020a), we corrupt the training data of CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009) with two types of label noise: symmetric and asymmetric. Symmetric noise is injected by randomly selecting a percentage of samples and changing their labels to random labels. Asymmetric noise is class-dependent, where labels are only changed to similar classes (e.g. dog↔cat, deer→horse). We experiment with multiple noise ratios: sym 20%, sym 50%, and asym 40% (see results for sym 80% and 90% in Appendix A). Note that the asymmetric noise ratio cannot exceed 50% because certain classes would become theoretically indistinguishable.
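The two noise-injection schemes just described are easy to reproduce; a short NumPy sketch follows. The asymmetric class map covers only the pairs implied by the examples in the text (cat↔dog, deer→horse) plus the standard truck→automobile and bird→airplane flips, and should be treated as an assumption about the full mapping.

```python
import numpy as np

def inject_symmetric_noise(labels, ratio, num_classes, seed=0):
    """Flip a `ratio` fraction of labels to uniformly random classes."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(ratio * len(labels)), replace=False)
    labels[idx] = rng.integers(0, num_classes, size=len(idx))
    return labels

# Assumed CIFAR-10 asymmetric map (class indices):
# truck->automobile, bird->airplane, deer->horse, cat<->dog.
ASYM_MAP = {9: 1, 2: 0, 4: 7, 3: 5, 5: 3}

def inject_asymmetric_noise(labels, ratio, seed=0):
    """Flip a `ratio` fraction of labels to a fixed, semantically similar class."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(ratio * len(labels)), replace=False)
    for i in idx:
        labels[i] = ASYM_MAP.get(labels[i], labels[i])
    return labels
```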
Implementation details. Same as previous works (Arazo et al., 2019; Li et al., 2020a), we use PreAct ResNet-18 (He et al., 2016) as our encoder model. We set the dimensionality of the bottleneck layer as $d = 50$. Our model is trained using SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 200 epochs. We set the initial learning rate as 0.02 and use a cosine decay schedule. We apply standard crop and horizontal flip as the weak augmentation. For strong augmentation, we use AugMix (Hendrycks et al., 2020), though other methods (e.g. SimAug (Chen et al., 2020)) work equally well. For all CIFAR experiments, we fix the hyper-parameters as $\omega_{cc}=1$, $\omega_{pc}=5$, $\omega_{recon}=1$, $\tau=0.3$, $\alpha=8$, $\eta_1=0.9$. For CIFAR-10, we activate noise cleaning at epoch $t_0=5$, and set $\eta_0=0.1$ (sym.) or 0.4 (asym.). For CIFAR-100, we activate noise cleaning at epoch $t_0=15$, and set $\eta_0=0.02$. We use faiss-gpu (Johnson et al., 2017) for efficient knn search in the low-dimensional subspace, which finishes within 1 second.

Results. Table 1 shows the comparison with existing methods. Our method outperforms previous methods across all label noise settings. On the more challenging CIFAR-100, we achieve 3-4% accuracy improvement compared to the second-best method DivideMix. Moreover, our method is more computationally efficient than DivideMix, which needs co-training for noise filtering.

In order to demonstrate the advantage of the proposed low-dimensional embeddings, we perform k-nearest neighbor (knn) classification ($k = 200$), by projecting test images into normalized embeddings. Compared to the trained classifier, knn achieves higher accuracy, which verifies the robustness of the learned low-dimensional representations.
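A small sketch of this knn evaluation with faiss is shown below. The similarity-weighted voting over neighbor labels is an assumption, since the paper does not specify how the $k$ neighbors' labels are aggregated.

```python
import faiss
import numpy as np

def knn_classify(train_z, train_y, test_z, num_classes, k=200):
    """train_z/test_z: L2-normalized float32 embeddings; train_y: int labels.
    With normalized vectors, inner product equals cosine similarity."""
    index = faiss.IndexFlatIP(train_z.shape[1])
    index.add(train_z)
    sims, nbrs = index.search(test_z, k)   # (n_test, k) similarities and ids
    votes = np.zeros((len(test_z), num_classes))
    for c in range(num_classes):
        votes[:, c] = ((train_y[nbrs] == c) * sims).sum(axis=1)  # weighted vote
    return votes.argmax(axis=1)
```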
4.2 EXPERIMENTS ON CONTROLLED NOISY LABELS WITH NOISY IMAGES
Dataset. We further corrupt a noisy CIFAR-10 dataset (sym. 50%) by injecting two types of input noise: out-of-distribution (OOD) images and input corruption. For OOD noise, we follow Wang et al. (2018) and add 20k images from either one of two other datasets: CIFAR-100 and SVHN (Netzer et al., 2011), enlarging the training set to 70k. A random CIFAR-10 label is assigned to each OOD image. For input corruption, we follow Hendrycks & Dietterich (2019) and corrupt each image in CIFAR-10 with a noise randomly chosen from the following four types: Fog, Snow, Motion blur and Gaussian noise. Examples of both types of input noise are shown in Figure 4. We follow the same implementation details as the CIFAR-10 experiments described in Section 4.1.

Figure 4: Examples of input noise injected to CIFAR-10. [Figure panels show out-of-distribution images from CIFAR-100 and SVHN, and input corruption with Gaussian noise, fog, snow, and motion blur.]

CIFAR-10 50% sym. noise | CE | Iterative (Wang et al., 2018) | GCE (Zhang & Sabuncu, 2018) | DivideMix (Li et al., 2020a) | Ours (cls.) | Ours (knn)
+ CIFAR-100 20k | 53.6 | 87.2 | 87.3 | 89.0 | 91.5 | 93.1±0.3
+ SVHN 20k | 58.1 | 88.6 | 88.8 | 91.9 | 93.3 | 93.9±0.2
+ Image Corruption | 53.8 | 87.7 | 87.9 | 89.8 | 91.4 | 91.6±0.2

Table 2: Comparison with state-of-the-art methods on datasets with label noise and input noise. Numbers indicate average test accuracy (%) over the last 10 epochs. We report results over 3 independent runs with randomly-generated noise. We re-run previous methods using publicly available code with the same noisy data and model architecture as ours.

Results. Table 2 shows the results, where our method consistently outperforms existing methods by a substantial margin. We observe that OOD images from a similar domain (CIFAR-100) are more harmful than OOD images from a more different domain (SVHN). This is because noisy images that are closer to the test data distribution are more likely to distort the decision boundary in a way that negatively affects test performance. Nevertheless, performing knn classification using the learned embeddings demonstrates high robustness to input noise.

In Figure 5, we show the t-SNE (Maaten & Hinton, 2008) visualization of the low-dimensional embeddings for all training samples. As training progresses, our model learns to separate OOD samples (represented as gray points) from in-distribution samples, and cluster samples of the same class together despite their noisy labels.

Figure 5: t-SNE visualization of low-dimensional embeddings for CIFAR-10 images (color represents the true class) + OOD images (gray points) from CIFAR-100 or SVHN. The model is trained on noisy CIFAR-10 (50k images with 50% label noise) and 20k OOD images with random labels. Our method can effectively learn to (1) cluster CIFAR-10 images according to their true class, despite their noisy labels; (2) separate OOD samples from in-distribution samples, such that their harm is reduced.

4.3 EXPERIMENTS ON REAL-WORLD NOISY DATA
Dataset and implementation details. We verify our method on two real-world noisy datasets: WebVision (Li et al., 2017) and Clothing1M (Xiao et al., 2015). WebVision contains images crawled from the web using the same concepts from ImageNet ILSVRC12 (Deng et al., 2009). Following previous works (Chen et al., 2019; Li et al., 2020a), we perform experiments on the first 50 classes of the Google image subset. Clothing1M consists of images collected from online shopping websites where labels were generated from surrounding texts. Note that we do not use the additional clean set for training. For both experiments, we use the same model architecture as previous methods. More implementation details are given in the appendix.

Results. We report the results for WebVision in Table 3 and Clothing1M in Table 4, where we achieve state-of-the-art performance on both datasets. Our method achieves competitive performance on WebVision even without performing noise cleaning, which demonstrates the robustness of the learned representation. Appendix D shows examples of noisy images that are cleaned by our method.

Method | WebVision top1 | WebVision top5 | ILSVRC12 top1 | ILSVRC12 top5
Forward (Patrini et al., 2017) | 61.1 | 82.7 | 57.4 | 82.4
Decoupling (Malach & Shalev-Shwartz, 2017) | 62.5 | 84.7 | 58.3 | 82.3
D2L (Ma et al., 2018) | 62.7 | 84.0 | 57.8 | 81.4
MentorNet (Jiang et al., 2018) | 63.0 | 81.4 | 57.8 | 79.9
Co-teaching (Han et al., 2018) | 63.6 | 85.2 | 61.5 | 84.7
INCV (Chen et al., 2019) | 65.2 | 85.3 | 61.0 | 85.0
DivideMix (Li et al., 2020a) | 75.9 | 90.1 | 73.3 | 89.2
Ours (w/o noise cleaning) | 75.5 | 90.2 | 72.0 | 90.0
Ours (classifier) | 76.3 | 91.5 | 73.3 | 91.2
Ours (knn) | 77.8 | 91.3 | 74.4 | 90.9

Table 3: Comparison with state-of-the-art methods trained on WebVision (mini). Accuracy (%).

Method | CE | Forward | Joint-Opt | MLNT | MentorMix | SL | DivideMix | Ours (cls.) | Ours (knn)
Accuracy | 69.21 | 69.84 | 72.16 | 73.47 | 74.30 | 74.45 | 74.48 | 74.84 | 74.97

Table 4: Comparison with state-of-the-art methods on the Clothing1M dataset.

4.4 ABLATION STUDY
Effect of the proposed components. In order to study the effect of the proposed components, we remove each of them and report the accuracy of the classifier (knn) across four benchmarks. As shown in Table 5, the mixup prototypical contrastive loss ($\mathcal{L}_{pc\_mix}$) is most crucial to the model's performance. The consistency contrastive loss ($\mathcal{L}_{cc}$) has a stronger effect with corrupted input or a larger number of classes. We also experiment with removing mixup and using the standard prototypical contrastive loss, and using standard data augmentation (crop and horizontal flip) instead of AugMix.
The proposed method still achieves state-of-the-art results with standard data augmentation.

Method | CIFAR-10 Sym 50% | + CIFAR-100 20k | + Image Corruption | CIFAR-100 Sym 50%
w/o $\mathcal{L}_{pc\_mix}$ | 85.9 (86.1) | 79.7 (81.5) | 81.6 (81.7) | 65.6 (65.9)
w/o $\mathcal{L}_{cc}$ | 93.7 (93.8) | 91.3 (91.5) | 89.4 (89.5) | 71.9 (71.8)
w/o $\mathcal{L}_{recon}$ | 93.3 (94.0) | 90.7 (92.9) | 90.2 (91.0) | 73.2 (73.9)
w/o mixup | 89.5 (89.9) | 85.4 (87.0) | 84.7 (84.9) | 69.3 (69.7)
w/ standard aug. | 94.1 (94.3) | 90.8 (92.9) | 90.5 (90.7) | 74.5 (75.0)
DivideMix | 93.6 | 89.0 | 89.8 | 72.1
Ours | 94.3 (94.5) | 91.5 (93.1) | 91.4 (91.6) | 74.8 (75.0)

Table 5: Effect of the proposed components. We show the accuracy of the classifier (knn) on four benchmarks with different noise. Note that DivideMix (Li et al., 2020a) also performs mixup.

Effect of bottleneck dimension. We vary the dimensionality of the bottleneck layer, $d$, and examine the performance change in Table 6. Our model is in general not very sensitive to changes in $d$.

bottleneck dimension | d=25 | d=50 | d=100 | d=200
CIFAR-10 Sym 50% | 93.4 | 94.3 | 94.2 | 93.7
CIFAR-100 Sym 50% | 73.8 | 74.8 | 74.4 | 73.8

Table 6: Classifier's test accuracy (%) with different low dimensions.

5 CONCLUSION
This paper proposes noise-robust contrastive learning, a new method to combat noise in training data by learning robust representation. We demonstrate our model's state-of-the-art performance with extensive experiments on multiple noisy datasets. For future work, we are interested in adapting our method to other domains such as NLP or speech. We would also like to explore the potential of our method for learning transferable representations that could be useful for down-stream tasks.
HwPVdjHOvMc
An extention of contrastive learning for training noise-robust deep networks.
6: Marginally above acceptance threshold
#######################################################################
Summary: The paper proposes noise-robust contrastive learning to combat label noise, out-of-distribution input and input corruption simultaneously. In particular, this paper embeds images into low-dimensional representations by training an autoencoder, and regularizes the geometric structure of the representations by contrastive learning. Furthermore, this paper introduces a new noise cleaning method based on the structure of the representations. Training samples with confident pseudo-labels are selected for supervised learning to clean both label noise and out-of-distribution noise. The effectiveness of the proposed method has been evaluated on multiple simulated and real-world noisy datasets.
#######################################################################
Reasons for score: Overall, I vote for a weak acceptance. The proposed noise-robust contrastive learning introduces two contrastive losses: an unsupervised consistency loss and a supervised mixup prototypical loss. My major concern is about the clarity of the paper and some additional issues (see cons below). Hopefully the authors can address my concerns in the rebuttal period.
#######################################################################
Pros:
1. The paper tackles one important issue in deep learning: learning from noisy data.
2. For me, the proposed supervised mixup prototypical contrastive loss is novel for learning with noisy data. Specifically, it injects structural knowledge of classes into the embedding space by combining the mixup technique and the prototypical contrastive loss. The design is reasonable and interesting.
3. This paper provides comprehensive experiments, including both qualitative analysis and quantitative results, to show the effectiveness of the proposed framework. In particular, the proposed method outperforms several state-of-the-art robust learning methods in learning with label noise, out-of-distribution input and input corruption.
#######################################################################
Cons:
1. For the motivation, it would be better to provide more details about it, which seems not very clear to me. Particularly, it is unclear why the contrastive loss is used in two forms in the paper. Will the functionality of the unsupervised contrastive loss be achieved in the supervised prototypical loss? Additionally, the prototypical contrastive loss in equation (1) is an InfoNCE with normalized mean embeddings as the prototypes, which seems different from the ProtoNCE in the original paper [1]. It is better to clarify the differences in formulation and training strategy, and the reasoning behind the design of the supervised prototypical loss in this paper. [1] Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, and Steven C.H. Hoi. Prototypical Contrastive Learning of Unsupervised Representations, In ICLR, 2020.
2. As the key contribution of this paper, the mixup prototypical contrastive loss combines the mixup technique and the prototypical contrastive loss. In the appendix, the authors have provided an ablation study to show the effect of the proposed losses, and it shows that the mixup prototypical loss is most crucial to the model's performance. Here, the authors utilize mixup in two ways: the first is to create virtual training samples, and the other is to define the mixup version of the prototypical contrastive loss as a weighted combination of two prototypical contrastive losses with respect to the true class and the virtual class. However, the individual effects of mixup augmentation, the prototypical contrastive loss, and the mixup prototypical contrastive loss remain unclear. Since mixup has been shown to be an effective method against label noise, it would be more convincing if the authors could study the individual effect of each component of the proposed loss in the rebuttal period.
3. The proposed method uses many data augmentations, e.g., standard crop and horizontal flip as weak augmentation and AugMix as strong augmentation in the unsupervised consistency contrastive loss, and the mixup technique in the supervised prototypical contrastive loss. I am concerned about the fairness of the experimental comparison. It is unclear to me if the authors have applied the same data augmentation to all the compared methods.
#######################################################################
Questions during rebuttal period: Please address and clarify the cons above.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning from Noisy Data with Robust Representation Learning ### Paper Abstract Learning from noisy data has attracted much attention, where most methods focus on label noise. In this work, we propose a new framework which simultaneously addresses three types of noise commonly seen in real-world data: label noise, out-of-distribution input, and input corruption. In contrast to most existing methods, we combat noise by learning robust representation. Specifically, we embed images into a low-dimensional subspace by training an autoencoder on the deep features. We regularize the geometric structure of the subspace with robust contrastive learning, which includes an unsupervised consistency loss and a supervised mixup prototypical loss. Furthermore, we leverage the structure of the learned subspace for noise cleaning, by aggregating information from neighboring samples. Experiments on multiple benchmarks demonstrate state-of-the-art performance of our method and robustness of the learned representation. Our code will be released. ### Paper Keywords ["label noise", "out-of-distribution noise", "contrastive learning"] ### Paper Content ABSTRACTLearning from noisy data has attracted much attention, where most methods focuson label noise. In this work, we propose a new framework which simultaneouslyaddresses three types of noise commonly seen in real-world data: label noise, out-of-distribution input, and input corruption. In contrast to most existing methods,we combat noise by learning robust representation. Specifically, we embed imagesinto a low-dimensional subspace by training an autoencoder on the deep features.We regularize the geometric structure of the subspace with robust contrastivelearning, which includes an unsupervised consistency loss and a supervised mixupprototypical loss. Furthermore, we leverage the structure of the learned subspace fornoise cleaning, by aggregating information from neighboring samples. Experimentson multiple benchmarks demonstrate state-of-the-art performance of our methodand robustness of the learned representation. Our code will be released1.1I NTRODUCTIONData in real life is noisy . However, deep models with remarkable performance are mostly trainedon clean datasets with high-quality human annotations. Manual data cleaning and labeling is anexpensive process that is difficult to scale. On the other hand, there exists almost infinite amount ofnoisy data online. It is crucial that deep neural networks (DNNs) could harvest noisy training data.However, it has been shown that DNNs are susceptible to overfitting to noise (Zhang et al., 2017).As shown in Figure 1, a real-world noisy image dataset often consists of multiple types of noise.Label noise refers to samples that are wrongly labeled as another class ( e.g. flower labeled as orange).Out-of-distribution input refers to samples that do not belong to any known classes. Input corruptionrefers to image-level distortion ( e.g. low brightness) that causes data shift between training and test.Most of the methods in literature focus on addressing the more detrimental label noise. 
Two dominantapproaches include: (1) find clean samples as those with smaller loss and assign larger weights tothem (Han et al., 2018; Yu et al., 2019; Shen & Sanghavi, 2019; Arazo et al., 2019); (2) relabel noisysamples using model’s predictions (Reed et al., 2015; Ma et al., 2018; Tanaka et al., 2018; Yi & Wu,2019). The recently proposed DivideMix (Li et al., 2020a) integrates both approaches in a co-trainingframework, but it also increases computation cost. Previous methods that focus on addressing labelnoise do not consider out-of-distribution input or input corruption, which limits their performance inreal-world scenarios. Furthermore, using a model’s own prediction to relabel samples could causeconfirmation bias, where the prediction error accumulates and harms performance.We propose a new direction for effective learning from noisy data. Our method embeds imagesinto noise-robust low-dimensional representations, and regularizes the geometric structure of therepresentations with contrastive learning. Specifically, our algorithmic contributions include:•We propose noise-robust contrastive learning, which introduces two contrastive losses. The first isan unsupervised consistency contrastive loss. It enforces inputs with perturbations to have similarnormalized embeddings, which helps learn robust and discriminative representation.•Our second contrastive loss is a weakly-supervised mixup prototypical loss. We compute classprototypes as normalized mean embeddings, and enforces each sample’s embedding to be closer to1Code is in the supplementary material1Under review as a conference paper at ICLR 2021Input CorruptionLabel NoiseOut-of-distribution InputFigure 1: Google search images from WebVision (Li et al., 2017) dataset with keyword “orange”.its class prototype. Inspired by Mixup (Zhang et al., 2018), we construct virtual training samples aslinear interpolation of inputs, and encourage the same linear relationship w.r.tthe class prototypes.•We train a linear autoencoder to reconstruct the high-dimensional features using low-dimensionalembeddings. The autoendoer enables the high-dimensional features to maximally preserve therobustness of the low-dimensional embeddings, thus regularizing the classifier.•We propose a new noise cleaning method which exploits the structure of the learned representations.For each sample, we aggregate information from its top- kneighbors to create a pseudo-label.A subset of training samples with confident pseudo-labels are selected to compute the weakly-supervised losses. This process can effectively clean both label noise and out-of-distribution (OOD)noise.Ourexperimental contributions include:•We experimentally show that our method is robust to label noise, OOD input, and input corruption.Experiments are performed on multiple datasets with controlled noise and real-world noise, whereour method achieves state-of-the-art performance.•We demonstrate that the proposed noise cleaning method can effectively clean a majority oflabel noise. It also learns a curriculum that gradually leverages more samples to compute theweakly-supervised losses as the pseudo-labels become more accurate.•We validate the robustness of the learned low-dimensional representation by showing (1) k-nearestneighbor classification outperforms the softmax classifier. (2) OOD samples can be separated fromin-distribution samples. The efficacy of the proposed autoencoder is also verified.2R ELATED WORKLabel noise learning. 
Learning from noisy labels have been extensively studied in the literature.While some methods require access to a small set of clean samples (Xiao et al., 2015; Vahdat,2017; Veit et al., 2017; Lee et al., 2018; Hendrycks et al., 2018), most methods focus on the morechallenging scenario where no clean labels are available. These methods can be categorized into twomajor types. The first type performs label correction using predictions from the network (Reed et al.,2015; Ma et al., 2018; Tanaka et al., 2018; Yi & Wu, 2019). The second type tries to separate cleansamples from corrupted samples, and trains the model on clean samples (Han et al., 2018; Arazoet al., 2019; Jiang et al., 2018; 2020; Wang et al., 2018; Chen et al., 2019; Lyu & Tsang, 2020). Therecently proposed DivideMix (Li et al., 2020a) effectively combines label correction and sampleselection with the Mixup (Zhang et al., 2018) data augmentation under a co-training framework.However, it cost 2⇥the computational resource of our method.Different from existing methods, our method combats noise by learning noise-robust low-dimensionalrepresentations. We propose a more effective noise cleaning method by leveraging the structure ofthe learned representations. Furthermore, our model is robust not only to label noise, but also toout-of-distribution and corrupted input. A previous work has studied open-set noisy labels (Wanget al., 2018), but their method does not enjoy the same level of robustness as ours.Contrastive learning. Contrastive learning is at the core of recent self-supervised representationlearning methods (Chen et al., 2020; He et al., 2019; Oord et al., 2018; Wu et al., 2018). In self-supervised contrastive learning, two randomly augmented images are generated for each input image.Then a contrastive loss is applied to pull embeddings from the same source image closer, whilepushing embeddings from different source images apart. Recently, prototypical contrastive learning(PCL) (Li et al., 2020b) has been proposed, which uses cluster centroids as prototypes, and trains thenetwork by pulling an image embedding closer to its assigned prototypes.2Under review as a conference paper at ICLR 2021CNNSoftmaxLow-DHigh-DWeakly-augmentedCNNStrongly-augmentednormalizeLow-DHigh-D0.2×+0.8×Class prototypeInterpolated embeddingnormalizeShared weightsL$%&'(L$%&'(L&%Loss function“Wolf”L&&L)&_+,-InterpolatedInput Subspace projectionFigure 2: Our proposed framework for noise-robust contrastive learning. We project images into a low-dimensional subspace, and regularize the geometric structure of the subspace with (1) Lcca consistency con-trastive loss which enforces images with perturbations to have similar embeddings; (2) Lpcmix: a prototypicalcontrastive loss augmented with mixup, which encourages the embedding for a linearly-interpolated input tohave the same linear relationship w.r.tthe class prototypes. The low-dimensional embeddings are also trained toreconstruct the high-dimensional features, which preserves the learned information and regularizes the classifier.Different from previous methods, our method performs contrastive learning in the principal subspaceof the high-dimensional feature space, by training a linear autoencoder. Furthermore, our supervisedcontrastive loss improves PCL (Li et al., 2020b) with Mixup (Zhang et al., 2018). 
Different from theoriginal Mixup where learning happens at the classification layer, our learning takes places in thelow-dimensional subspace.3M ETHODGiven a noisy training dataset D={(xi,yi)}ni=1, where xiis an image and yi2{1,. . . ,C }is itsclass label. We aim to train a network that is robust to the noise in training data ( i.e. label noise, OODinput, input corruption) and achieves high accuracy on a clean test set. The proposed network consistsof three components: (1) a deep encoder (a convolutional neural network) that encodes an imagexito a high-dimensional feature vi; (2) a classifier (a fully-connected layer followed by softmax)that receives vias input and outputs class predictions; (3) a linear autoencoder that projects viintoa low-dimensional embedding zi2Rd. We show an illustration of our method in Figure 2, and apseudo-code in appendix B. Next, we delineate its details.3.1 C ONTRASTIVE LEARNING IN ROBUST LOW -DIMENSIONAL SUBSPACELetzi=Wevibe the linear projection from high-dimensional features to low-dimensional embed-dings, and ˆzi=zi/kzik2be the normalized embeddings. We aim to learn robust embeddings withtwo contrastive losses: unsupervised consistency loss and weakly-supervised mixup prototypical loss.Unsupervised consistency contrastive loss . Following the NT-Xent (Chen et al., 2020) loss for self-supervised representation learning, our consistency contrastive loss enforces images with semantic-preserving perturbations to have similar embeddings. Specifically, given a minibatch of bimages,we apply weak-augmentation and strong-augmentation to each image, and obtain 2binputs {xi}2bi=1.Weak augmentation is a standard flip-and-shift augmentation strategy, while strong augmentationconsists of color and brightness changes with details given in Section 4.1.We project the inputs into the low-dimensional space to obtain their normalized embeddings {ˆzi}2bi=1.Leti2{1,. . . ,b }be the index of a weakly-augmented input, and j(i)be the index of the strong-3Under review as a conference paper at ICLR 2021augmented input from the same source image, the consistency contrastive loss is defined as:Lcc=bXi=1logexp( ˆzi·ˆzj(i)/⌧)P2bk=1 i6=kexp( ˆzi·ˆzk/⌧), (1)where ⌧is a scalar temperature parameter. The consistency contrastive loss maximizes the innerproduct between the pair of positive embeddings ˆziandˆzj(i), while minimizing the inner productbetween 2(b1)pairs of negative embeddings. By mapping different views (augmentations) of thesame image to neighboring embeddings, the consistency contrastive loss encourages the network tolearn discriminative representation that is robust to low-level image corruption.Weakly-supervised mixup prototypical contrastive loss . Our second contrastive loss injects struc-tural knowledge of classes into the embedding space. Let Icdenote indices for the subset of imagesinDlabeled with class c, we calculate the class prototype as the normalized mean embedding:zc=1|Ic|Xi2I cˆzi,ˆzc=zckzck2, (2)where ˆziis the embedding of a center-cropped image, and the class prototypes are calculated at thebeginning of each epoch.The prototypical contrastive loss enforces an image embedding ˆzito be more similar to its corre-sponding class prototype ˆzyi, in contrast to other class prototypes:Lpc(ˆzi,yi)=logexp( ˆzi·ˆzyi/⌧)PCc=1exp( ˆzi·ˆzc/⌧). (3)Since the label yiis noisy, we would like to regularize the encoder from memorizing training labels.Mixup (Zhang et al., 2018) has been shown to be an effective method against label noise (Arazo et al.,2019; Li et al., 2020a). 
Since the label $y_i$ is noisy, we would like to regularize the encoder from memorizing training labels. Mixup (Zhang et al., 2018) has been shown to be an effective method against label noise (Arazo et al., 2019; Li et al., 2020a). Inspired by it, we create virtual training samples by linearly interpolating a sample (indexed by $i$) with another sample (indexed by $m(i)$) randomly chosen from the same minibatch:

$$x^m_i = \lambda x_i + (1-\lambda)\, x_{m(i)}, \quad (4)$$

where $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$.

Let $\hat{z}^m_i$ be the normalized embedding for $x^m_i$. The mixup version of the prototypical contrastive loss is defined as a weighted combination of the two $\mathcal{L}_{pc}$ w.r.t. classes $y_i$ and $y_{m(i)}$. It enforces the embedding for the interpolated input to have the same linear relationship w.r.t. the class prototypes:

$$\mathcal{L}_{pc\text{-}mix} = \sum_{i=1}^{2b} \lambda\,\mathcal{L}_{pc}(\hat{z}^m_i, y_i) + (1-\lambda)\,\mathcal{L}_{pc}(\hat{z}^m_i, y_{m(i)}). \quad (5)$$
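A minimal sketch of the mixup prototypical loss (Eqs. 4-5); `embed` stands for the encoder plus the normalized low-dimensional projection, and all names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def mixup_prototypical_loss(x, labels, embed, prototypes, alpha=8.0, tau=0.3):
    """x: (n, ...) input batch; prototypes: (C, d) normalized class prototypes."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))                 # the random pairing m(i)
    x_mix = lam * x + (1 - lam) * x[perm]            # Eq. (4)
    z = F.normalize(embed(x_mix), dim=1)             # normalized mixed embedding
    logits = z @ prototypes.t() / tau
    # Eq. (5): weighted combination of the two prototypical losses
    return lam * F.cross_entropy(logits, labels) + \
           (1 - lam) * F.cross_entropy(logits, labels[perm])
```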
Reconstruction loss. We also train a linear decoder $W_d$ to reconstruct the high-dimensional feature $v_i$ based on $z_i$. The reconstruction loss is defined as:

$$\mathcal{L}_{recon} = \sum_{i=1}^{2b} \|v_i - W_d z_i\|_2^2. \quad (6)$$

There are several benefits to training the autoencoder. First, with an optimal linear autoencoder, $W_e$ will project $v_i$ into its low-dimensional principal subspace, and can be understood as applying PCA (Baldi & Hornik, 1989). Thus the low-dimensional representation $z_i$ is intrinsically robust to input noise. Second, minimizing the reconstruction error is maximizing a lower bound of the mutual information between $v_i$ and $z_i$ (Vincent et al., 2010). Therefore, knowledge learned from the proposed contrastive losses can be maximally preserved in the high-dimensional representation, which helps regularize the classifier.

Classification loss. Given the softmax output from the classifier, $p(y; x_i)$, we define the classification loss as the cross-entropy loss. Note that it is only applied to the weakly-augmented inputs:

$$\mathcal{L}_{ce} = -\sum_{i=1}^{b} \log p(y_i; x_i). \quad (7)$$

[Figure 3: Curriculum learned by the proposed label correction method for training on CIFAR datasets with 50% sym. noise. (a) Accuracy of pseudo-labels w.r.t. clean training labels. (b) Number of samples in the weakly-supervised subset $\mathcal{D}^t_{sup}$. (c) Label noise ratio in the weakly-supervised subset.]

The overall training objective is to minimize a weighted sum of all losses:

$$\mathcal{L} = \mathcal{L}_{ce} + \omega_{cc}\mathcal{L}_{cc} + \omega_{pc}\mathcal{L}_{pc\text{-}mix} + \omega_{recon}\mathcal{L}_{recon}. \quad (8)$$

For all experiments, we fix $\omega_{cc} = 1$, $\omega_{recon} = 1$, and change $\omega_{pc}$ only across datasets.

3.2 NOISE CLEANING WITH SMOOTH NEIGHBORS

After warming up the model by training with the noisy labels $\{y_i\}_{i=1}^{n}$ for $t_0$ epochs, we aim to clean the noise by generating a soft pseudo-label $q_i$ for each training sample. Different from previous methods that perform label correction purely using the model's softmax prediction, our method exploits the structure of the low-dimensional subspace by aggregating information from the top-$k$ neighboring samples, which helps alleviate the confirmation bias problem.

At the $t$-th epoch, for each sample $x_i$, let $p^t_i$ be the classifier's softmax prediction and $q^{t-1}_i$ be its soft label from the previous epoch. We calculate the soft label for the current epoch as:

$$q^t_i = \frac{1}{2}\, p^t_i + \frac{1}{2} \sum_{j=1}^{k} w^t_{ij}\, q^{t-1}_j, \quad (9)$$

where $w^t_{ij}$ represents the normalized affinity between a sample and its neighbor and is defined as $w^t_{ij} = \frac{\exp(\hat{z}^t_i \cdot \hat{z}^t_j/\tau)}{\sum_{j=1}^{k} \exp(\hat{z}^t_i \cdot \hat{z}^t_j/\tau)}$. We set $k = 200$ in all experiments.

The soft label defined by Eqn. (9) is the minimizer of the following quadratic loss function:

$$J(q^t_i) = \sum_{j=1}^{k} w^t_{ij}\, \|q^t_i - q^{t-1}_j\|_2^2 + \|q^t_i - p^t_i\|_2^2. \quad (10)$$

The first term is a smoothness constraint which encourages the soft label to take a similar value as its neighbors' labels, whereas the second term attempts to maintain the model's class prediction.

We construct a weakly-supervised subset which contains (1) clean samples whose soft label score for the original class $y_i$ is higher than a threshold $\eta_0$, and (2) pseudo-labeled samples whose maximum soft label score exceeds a threshold $\eta_1$. For pseudo-labeled samples, we convert their soft labels into hard labels by taking the class with the maximum score:

$$\mathcal{D}^t_{sup} = \{x_i, y_i \mid q^t_i(y_i) > \eta_0\} \cup \{x_i, \hat{y}^t_i = \arg\max_c q^t_i(c) \mid \max_c q^t_i(c) > \eta_1,\ c \in \{1, \ldots, C\}\}. \quad (11)$$

Given the weakly-supervised subset, we modify the classification loss $\mathcal{L}_{ce}$, the mixup prototypical contrastive loss $\mathcal{L}_{pc\text{-}mix}$, and the calculation of prototypes $\hat{z}^c$, such that they only use samples from $\mathcal{D}^t_{sup}$. The unsupervised losses (i.e. $\mathcal{L}_{cc}$ and $\mathcal{L}_{recon}$) still operate on all training samples.
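A hedged sketch of the noise cleaning step (Eqs. 9-11). Names and shapes are assumptions; the paper uses faiss for the knn search, whereas this brute-force version keeps the math explicit, and it resolves the overlap between the two sets in Eq. (11) by giving priority to the clean set.

```python
import torch
import torch.nn.functional as F

def refine_soft_labels(p, q_prev, z, k=200, tau=0.3):
    """p: (n, C) softmax predictions; q_prev: (n, C) previous soft labels;
    z: (n, d) normalized embeddings. Implements Eq. (9)."""
    sim = z @ z.t()                              # (n, n) cosine similarities
    topk_sim, topk_idx = sim.topk(k, dim=1)      # top-k neighbors per sample
    w = F.softmax(topk_sim / tau, dim=1)         # normalized affinities w_ij
    neighbor_q = q_prev[topk_idx]                # (n, k, C)
    return 0.5 * p + 0.5 * (w.unsqueeze(2) * neighbor_q).sum(dim=1)

def build_weakly_supervised_subset(q, y, eta0=0.1, eta1=0.9):
    """Masks for the clean and pseudo-labeled samples of Eq. (11)."""
    clean = q.gather(1, y.unsqueeze(1)).squeeze(1) > eta0
    max_score, pseudo_y = q.max(dim=1)           # hard label = argmax class
    pseudo = (max_score > eta1) & ~clean
    return clean, pseudo, pseudo_y
```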
Learning curriculum. Our iterative noise cleaning method learns an effective training curriculum, which gradually increases the size of $\mathcal{D}^t_{sup}$ as the pseudo-labels become more accurate. To demonstrate such a curriculum, we analyse the noise cleaning statistics for training our model on CIFAR-10 and CIFAR-100 datasets with 50% label noise (experimental details explained in the next section). In Figure 3 (a), we show the accuracy of the soft pseudo-labels w.r.t. clean training labels (only used for analysis purposes). Our method can significantly reduce the ratio of label noise from 50% to 5% (for CIFAR-10) and 17% (for CIFAR-100). Figure 3 (b) shows the size of $\mathcal{D}^t_{sup}$ as a percentage of the total number of training samples, and Figure 3 (c) shows the effective label noise ratio within the weakly-supervised subset $\mathcal{D}^t_{sup}$. Our method maintains a low noise ratio in the weakly-supervised subset, while gradually increasing its size to utilize more samples for the weakly-supervised losses.

4 EXPERIMENT

In this section, we validate the proposed method on multiple benchmarks with controlled noise and real-world noise. Our method achieves state-of-the-art performance across all benchmarks. For fair comparison, we compare with DivideMix (Li et al., 2020a) without ensemble. In Appendix A, we report the result of our method with co-training and ensemble, which further improves performance.

4.1 EXPERIMENTS ON CONTROLLED NOISY LABELS

Dataset. Following Tanaka et al. (2018); Li et al. (2020a), we corrupt the training data of CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009) with two types of label noise: symmetric and asymmetric. Symmetric noise is injected by randomly selecting a percentage of samples and changing their labels to random labels. Asymmetric noise is class-dependent, where labels are only changed to similar classes (e.g. dog ↔ cat, deer → horse). We experiment with multiple noise ratios: sym 20%, sym 50%, and asym 40% (see results for sym 80% and 90% in Appendix A). Note that the asymmetric noise ratio cannot exceed 50% because certain classes would become theoretically indistinguishable.

Implementation details. Same as previous works (Arazo et al., 2019; Li et al., 2020a), we use PreAct ResNet-18 (He et al., 2016) as our encoder model. We set the dimensionality of the bottleneck layer as d = 50. Our model is trained using SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. The network is trained for 200 epochs. We set the initial learning rate as 0.02 and use a cosine decay schedule. We apply standard crop and horizontal flip as the weak augmentation. For strong augmentation, we use AugMix (Hendrycks et al., 2020), though other methods (e.g. SimAug (Chen et al., 2020)) work equally well. For all CIFAR experiments, we fix the hyper-parameters as $\omega_{cc}=1$, $\omega_{pc}=5$, $\omega_{recon}=1$, $\tau=0.3$, $\alpha=8$, $\eta_1=0.9$. For CIFAR-10, we activate noise cleaning at epoch $t_0=5$, and set $\eta_0=0.1$ (sym.) or 0.4 (asym.). For CIFAR-100, we activate noise cleaning at epoch $t_0=15$, and set $\eta_0=0.02$. We use faiss-gpu (Johnson et al., 2017) for efficient knn search in the low-dimensional subspace, which finishes within 1 second.

Results. Table 1 shows the comparison with existing methods. Our method outperforms previous methods across all label noise settings. On the more challenging CIFAR-100, we achieve 3-4% accuracy improvement compared to the second-best method DivideMix. Moreover, our method is more computationally efficient than DivideMix, which needs co-training for noise filtering.

| Method | CIFAR-10 Sym 20% | CIFAR-10 Sym 50% | CIFAR-10 Asym 40% | CIFAR-100 Sym 20% | CIFAR-100 Sym 50% |
| Cross-Entropy (Li et al., 2020a) | 82.7 | 57.9 | 72.3 | 61.8 | 37.3 |
| Forward (Patrini et al., 2017) | 83.1 | 59.4 | 83.1 | 61.4 | 37.3 |
| Co-teaching+ (Yu et al., 2019) | 88.2 | 84.1 | - | 64.1 | 45.3 |
| Mixup (Zhang et al., 2018) | 92.3 | 77.6 | - | 66.0 | 46.6 |
| P-correction (Yi & Wu, 2019) | 92.0 | 88.7 | 88.1 | 68.1 | 56.4 |
| MLNT (Li et al., 2019) | 92.0 | 88.8 | 88.6 | 67.7 | 58.0 |
| M-correction (Arazo et al., 2019) | 93.8 | 91.9 | 86.3 | 73.4 | 65.4 |
| DivideMix (Li et al., 2020a) | 95.0 | 93.7 | 91.4 | 74.8 | 72.1 |
| DivideMix (reproduced) | 95.1±0.1 | 93.6±0.2 | 91.3±0.8 | 75.1±0.2 | 72.1±0.3 |
| Ours (classifier) | 95.8±0.1 | 94.3±0.2 | 91.9±0.8 | 79.1±0.1 | 74.8±0.4 |
| Ours (knn) | 95.9±0.1 | 94.5±0.1 | 92.4±0.9 | 79.4±0.1 | 75.0±0.4 |

Table 1: Comparison with state-of-the-art methods on CIFAR datasets with label noise. Numbers indicate average test accuracy (%) over the last 10 epochs. We report results over 3 independent runs with randomly-generated label noise. Results for previous methods are copied from Arazo et al. (2019); Li et al. (2020a). We re-run DivideMix (without ensemble) using the publicly available code on the same noisy data as ours.

| CIFAR-10, 50% sym. noise | CE | Iterative (Wang et al., 2018) | GCE (Zhang & Sabuncu, 2018) | DivideMix (Li et al., 2020a) | Ours (cls.) | Ours (knn) |
| + CIFAR-100 20k | 53.6 | 87.2 | 87.3 | 89.0 | 91.5 | 93.1±0.3 |
| + SVHN 20k | 58.1 | 88.6 | 88.8 | 91.9 | 93.3 | 93.9±0.2 |
| + Image Corruption | 53.8 | 87.7 | 87.9 | 89.8 | 91.4 | 91.6±0.2 |

Table 2: Comparison with state-of-the-art methods on datasets with label noise and input noise. Numbers indicate average test accuracy (%) over the last 10 epochs. We report results over 3 independent runs with randomly-generated noise. We re-run previous methods using publicly available code with the same noisy data and model architecture as ours.

In order to demonstrate the advantage of the proposed low-dimensional embeddings, we perform k-nearest neighbor (knn) classification (k = 200), by projecting test images into normalized embeddings. Compared to the trained classifier, knn achieves higher accuracy, which verifies the robustness of the learned low-dimensional representations.

4.2 EXPERIMENTS ON CONTROLLED NOISY LABELS WITH NOISY IMAGES

Dataset. We further corrupt a noisy CIFAR-10 dataset (sym. 50%) by injecting two types of input noise: out-of-distribution (OOD) images and input corruption. For OOD noise, we follow Wang et al. (2018) and add 20k images from either one of two other datasets: CIFAR-100 and SVHN (Netzer et al., 2011), enlarging the training set to 70k. A random CIFAR-10 label is assigned to each OOD image. For input corruption, we follow Hendrycks & Dietterich (2019) and corrupt each image in CIFAR-10 with a noise randomly chosen from the following four types: Fog, Snow, Motion blur and Gaussian noise. Examples of both types of input noise are shown in Figure 4.
We follow the same implementation details as the CIFAR-10 experiments described in Section 4.1.

[Figure 4: Examples of input noise injected to CIFAR-10: out-of-distribution images (CIFAR-100, SVHN) and input corruption (Fog, Snow, Motion blur, Gaussian noise).]

Results. Table 2 shows the results, where our method consistently outperforms existing methods by a substantial margin. We observe that OOD images from a similar domain (CIFAR-100) are more harmful than OOD images from a more different domain (SVHN). This is because noisy images that are closer to the test data distribution are more likely to distort the decision boundary in a way that negatively affects test performance. Nevertheless, performing knn classification using the learned embeddings demonstrates high robustness to input noise.

In Figure 5, we show the t-SNE (Maaten & Hinton, 2008) visualization of the low-dimensional embeddings for all training samples. As training progresses, our model learns to separate OOD samples (represented as gray points) from in-distribution samples, and cluster samples of the same class together despite their noisy labels.

[Figure 5: t-SNE visualization of low-dimensional embeddings for CIFAR-10 images (color represents the true class) + OOD images (gray points) from CIFAR-100 or SVHN. The model is trained on noisy CIFAR-10 (50k images with 50% label noise) and 20k OOD images with random labels. Our method can effectively learn to (1) cluster CIFAR-10 images according to their true class, despite their noisy labels; (2) separate OOD samples from in-distribution samples, such that their harm is reduced.]

| Method | WebVision top1 | WebVision top5 | ILSVRC12 top1 | ILSVRC12 top5 |
| Forward (Patrini et al., 2017) | 61.1 | 82.7 | 57.4 | 82.4 |
| Decoupling (Malach & Shalev-Shwartz, 2017) | 62.5 | 84.7 | 58.3 | 82.3 |
| D2L (Ma et al., 2018) | 62.7 | 84.0 | 57.8 | 81.4 |
| MentorNet (Jiang et al., 2018) | 63.0 | 81.4 | 57.8 | 79.9 |
| Co-teaching (Han et al., 2018) | 63.6 | 85.2 | 61.5 | 84.7 |
| INCV (Chen et al., 2019) | 65.2 | 85.3 | 61.0 | 85.0 |
| DivideMix (Li et al., 2020a) | 75.9 | 90.1 | 73.3 | 89.2 |
| Ours (w/o noise cleaning) | 75.5 | 90.2 | 72.0 | 90.0 |
| Ours (classifier) | 76.3 | 91.5 | 73.3 | 91.2 |
| Ours (knn) | 77.8 | 91.3 | 74.4 | 90.9 |

Table 3: Comparison with state-of-the-art methods trained on WebVision (mini). Accuracy (%) on the WebVision and ILSVRC12 test sets.

| Method | CE | Forward | Joint-Opt | MLNT | MentorMix | SL | DivideMix | Ours (cls.) | Ours (knn) |
| Accuracy | 69.21 | 69.84 | 72.16 | 73.47 | 74.30 | 74.45 | 74.48 | 74.84 | 74.97 |

Table 4: Comparison with state-of-the-art methods on the Clothing1M dataset.

4.3 EXPERIMENTS ON REAL-WORLD NOISY DATA

Dataset and implementation details. We verify our method on two real-world noisy datasets: WebVision (Li et al., 2017) and Clothing1M (Xiao et al., 2015). WebVision contains images crawled from the web using the same concepts from ImageNet ILSVRC12 (Deng et al., 2009). Following previous works (Chen et al., 2019; Li et al., 2020a), we perform experiments on the first 50 classes of the Google image subset. Clothing1M consists of images collected from online shopping websites, where labels were generated from surrounding texts. Note that we do not use the additional clean set for training. For both experiments, we use the same model architecture as previous methods. More implementation details are given in the appendix.

Results. We report the results for WebVision in Table 3 and Clothing1M in Table 4, where we achieve state-of-the-art performance on both datasets. Our method achieves competitive performance on WebVision even without performing noise cleaning, which demonstrates the robustness of the learned representation.
Appendix D shows examples of noisy images that are cleaned by our method.

4.4 ABLATION STUDY

Effect of the proposed components. In order to study the effect of the proposed components, we remove each of them and report the accuracy of the classifier (knn) across four benchmarks. As shown in Table 5, the mixup prototypical contrastive loss ($\mathcal{L}_{pc\text{-}mix}$) is most crucial to the model's performance. The consistency contrastive loss ($\mathcal{L}_{cc}$) has a stronger effect with corrupted input or a larger number of classes. We also experiment with removing mixup and using the standard prototypical contrastive loss, and with using standard data augmentation (crop and horizontal flip) instead of AugMix. The proposed method still achieves state-of-the-art results with standard data augmentation.

| Variant | CIFAR-10 Sym 50% | + CIFAR-100 20k | + Image Corruption | CIFAR-100 Sym 50% |
| w/o L_pc-mix | 85.9 (86.1) | 79.7 (81.5) | 81.6 (81.7) | 65.6 (65.9) |
| w/o L_cc | 93.7 (93.8) | 91.3 (91.5) | 89.4 (89.5) | 71.9 (71.8) |
| w/o L_recon | 93.3 (94.0) | 90.7 (92.9) | 90.2 (91.0) | 73.2 (73.9) |
| w/o mixup | 89.5 (89.9) | 85.4 (87.0) | 84.7 (84.9) | 69.3 (69.7) |
| w/ standard aug. | 94.1 (94.3) | 90.8 (92.9) | 90.5 (90.7) | 74.5 (75.0) |
| DivideMix | 93.6 | 89.0 | 89.8 | 72.1 |
| Ours | 94.3 (94.5) | 91.5 (93.1) | 91.4 (91.6) | 74.8 (75.0) |

Table 5: Effect of the proposed components. We show the accuracy of the classifier (knn) on four benchmarks with different noise. Note that DivideMix (Li et al., 2020a) also performs mixup.

Effect of bottleneck dimension. We vary the dimensionality of the bottleneck layer, d, and examine the performance change in Table 6. Our model is in general not very sensitive to the change of d.

| Bottleneck dimension | d=25 | d=50 | d=100 | d=200 |
| CIFAR-10 Sym 50% | 93.4 | 94.3 | 94.2 | 93.7 |
| CIFAR-100 Sym 50% | 73.8 | 74.8 | 74.4 | 73.8 |

Table 6: Classifier's test accuracy (%) with different low dimensions.

5 CONCLUSION

This paper proposes noise-robust contrastive learning, a new method to combat noise in training data by learning robust representations. We demonstrate our model's state-of-the-art performance with extensive experiments on multiple noisy datasets. For future work, we are interested in adapting our method to other domains such as NLP or speech. We would also like to explore the potential of our method for learning transferable representations that could be useful for downstream tasks.<|im_end|> <|im_start|>assistant ### Review Title An extension of contrastive learning for training noise-robust deep networks. ### Review Text ####################################################################### Summary: The paper proposes noise-robust contrastive learning to combat label noise, out-of-distribution input and input corruption simultaneously. In particular, this paper embeds images into low-dimensional representations by training an autoencoder, and regularizes the geometric structure of the representations by contrastive learning. Furthermore, this paper introduces a new noise cleaning method based on the structure of the representations. Training samples with confident pseudo-labels are selected for supervised learning to clean both label noise and out-of-distribution noise. The effectiveness of the proposed method has been evaluated on multiple simulated and real-world noisy datasets. ####################################################################### Reasons for score: Overall, I vote for a weak acceptance. The proposed noise-robust contrastive learning introduces two contrastive losses: unsupervised consistency loss and supervised mixup prototypical loss.
My major concern is about the clarity of the paper and some additional issues (see cons below). Hopefully the authors can address my concerns in the rebuttal period. ####################################################################### Pros: 1. The paper tackles one important issue in deep learning: learning from noisy data. 2. For me, the proposed supervised mixup prototypical contrastive loss is novel for learning with noisy data. Specifically, it injects structural knowledge of classes into the embedding space by combining the mixup technique and the prototypical contrastive loss. The design is reasonable and interesting. 3. This paper provides comprehensive experiments, including both qualitative analysis and quantitative results, to show the effectiveness of the proposed framework. In particular, the proposed method outperforms several state-of-the-art robust learning methods in learning with label noise, out-of-distribution input and input corruption. ####################################################################### Cons: 1. For the motivation, it would be better to provide more details, as it seems not very clear to me. In particular, it is unclear why the contrastive loss is used twice in the paper. Would the functionality of the unsupervised contrastive loss be achieved by the supervised prototypical loss? Additionally, the prototypical contrastive loss in equation (1) is an InfoNCE with normalized mean embeddings as the prototypes, which seems different from the ProtoNCE in the original paper [1]. It would be better to clarify the differences in formulation and training strategy, and the reasoning behind the design of the supervised prototypical loss in this paper. [1] Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, and Steven C.H. Hoi. Prototypical Contrastive Learning of Unsupervised Representations. In ICLR, 2020. 2. As the key contribution of this paper, the mixup prototypical contrastive loss combines the mixup technique and the prototypical contrastive loss. In the appendix, the authors have provided an ablation study to show the effect of the proposed losses, and it shows that the mixup prototypical loss is most crucial to the model's performance. Here, the authors utilize mixup in two ways: first to create virtual training samples, and second to define the mixup version of the prototypical contrastive loss as a weighted combination of two prototypical contrastive losses with respect to the true class and the virtual class. However, the individual effects of mixup augmentation, the prototypical contrastive loss, and the mixup prototypical contrastive loss are unclear. Since mixup has been shown to be an effective method against label noise, it would be more convincing if the authors could study the individual effect of each component of the proposed loss in the rebuttal period. 3. The proposed method uses many data augmentations, e.g., standard crop and horizontal flip as weak augmentation and AugMix as strong augmentation in the unsupervised consistency contrastive loss, and the mixup technique in the supervised prototypical contrastive loss. I am concerned about fairness in the experimental comparison. It is unclear to me whether the authors applied the same data augmentations to all the compared methods. ####################################################################### Questions during rebuttal period: Please address and clarify the cons above. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
SJdCUMZAW
ICLR.cc/2018/Conference
2018
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation
["Ivo Popov", "Nicolas Heess", "Timothy P. Lillicrap", "Roland Hafner", "Gabriel Barth-Maron", "Matej Vecerik", "Thomas Lampe", "Tom Erez", "Yuval Tassa", "Martin Riedmiller"]
Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches. Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning. We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find high-performance control policies. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.
["Reinforcement learning", "robotics", "dexterous manipulation", "off-policy learning"]
ABSTRACT

Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches. Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning. We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find high-performance control policies that successfully achieve precise stacking behaviour in >95% of 1000 randomly initialized configurations. Further, our results on data efficiency hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.

1 INTRODUCTION

Dexterous manipulation is a fundamental challenge in robotics. Researchers have long sought a way to enable robots to robustly and flexibly interact with fixed and free objects of different shapes, materials, and surface properties in the context of a broad range of tasks and environmental conditions. Such flexibility is very difficult to achieve with manually designed controllers. The recent resurgence of neural networks and "deep learning" has inspired hope that these methods will be as effective in the control domain as they are for perception. Indeed, recent work has used neural networks to learn solutions to a variety of control problems (Lillicrap et al., 2016; Schulman et al., 2016; Gu et al., 2016c; Schulman et al., 2015; Heess et al., 2015; Levine & Abbeel, 2014).

While the flexibility and generality of learning approaches is promising for robotics, these methods typically require a large amount of data that grows with the complexity of the task. What is feasible on a simulated system, where hundreds of millions of control steps are possible (Mnih et al., 2016; Schulman et al., 2016), does not necessarily transfer to real robot applications due to unrealistic learning times. One solution to this problem is to restrict the generality of the controller by incorporating task specific knowledge, e.g. in the form of dynamic movement primitives (Schaal, 2006), or in the form of strong teaching signals, e.g. kinesthetic teaching of trajectories (Muelling et al., 2013). Recent works have had success learning flexible neural network policies directly on real robots (e.g. (Levine et al., 2015; Gu et al., 2016a; Yahya et al., 2016)), but tasks as complex as precise grasping-and-stacking remain daunting.

In this paper we investigate in simulation the possibility of learning precise manipulation skills end-to-end with a general purpose model-free deep reinforcement learning algorithm. We assess the feasibility of performing analogous experiments on real robotics hardware and provide guidance with respect to the choice of learning algorithm, experimental setup, and the performance that we can hope to achieve.

We consider the task of picking up a Lego brick from the table and stacking it onto a second nearby brick using a robotic arm and gripper. This task involves contact-rich interactions between the robotic arm and two freely moving objects. It also requires mastering several sub-skills (reaching, grasping, lifting, and stacking). Each of these sub-skills is challenging in its own right as they require both precision (for instance, successful stacking requires accurate alignment of the two bricks) as well as robust generalization over a large state space (e.g.
different initial positions of the bricks and the initial configuration of the arm). Finally, there exist non-trivial and long-ranging dependencies between the solutions for different sub-tasks: for instance, the ability to successfully stack the brick depends critically on having picked up the brick in a sensible way beforehand.

[Figure 1: Simulation rendering of the Lego task in different completion stages (also corresponding to different subtasks): (a) starting state, (b) reaching, (c) grasping, and (d) stacking]

This paper makes several contributions: 1. We build on the Deep Deterministic Policy Gradient (DDPG; (Lillicrap et al., 2016)), a general purpose model-free reinforcement learning algorithm for continuous actions, and extend it in two ways: firstly, we improve the data efficiency of the algorithm by scheduling updates of the network parameters independently of interactions with the environment. Secondly, we overcome the computational and experimental bottlenecks of single-machine single-robot learning by introducing a distributed version of DDPG which allows data collection and network training to be spread out over multiple computers and robots. 2. We show how to use these straightforward algorithmic developments to solve a complex, multi-stage manipulation problem. We further propose two broadly applicable strategies that allow us to reliably find solutions to complex tasks and further reduce the amount of environmental interaction. The first of these strategies is a recipe for designing effective shaping rewards for compositional tasks, while the second biases the distribution of initial states to achieve an effect akin to a form of apprenticeship learning.

In combination these contributions allow us to reliably learn robust policies for the full stacking task from scratch in less than 10 million environment transitions. This corresponds to less than 10 hours of interaction time on 16 robots. In addition, we show that when states from demonstration trajectories are used as the start states for learning trials the full task can be learned with 1 million transitions (i.e. less than 1 hour of interaction on 16 robots). To our knowledge our results provide the first demonstration of end-to-end learning for a complex manipulation problem involving multiple freely moving objects. They also suggest that it may be possible to learn such non-trivial manipulation skills directly on real robots.

2 RELATED WORK

Reinforcement learning (RL) approaches solve tasks through repeated interactions with the environment guided by a reward signal of success or failure (Sutton & Barto, 1998). A distinction is often made between value-based and policy search methods. The latter have been routinely applied in robotics, in part because they straightforwardly handle continuous and high-dimensional action spaces (Deisenroth et al., 2013), and applications include manipulation (Peters & Schaal, 2006; Kalakrishnan et al., 2011; Pastor et al., 2011; van Hoof et al., 2015; Levine et al., 2015; Gu et al., 2016a; Yahya et al., 2016; Gupta et al., 2016), locomotion e.g.
(Kohl & Stone, 2004; Matsubara et al., 2006), and a range of other challenges such as helicopter flight (Bagnell & Schneider, 2001). However, policy search methods can scale poorly with the number of parameters that need to be estimated, requiring restricted policy classes, which in turn might not be powerful enough for solving complex tasks.

One exception are guided policy search methods (GPS) (Levine et al., 2015; Yahya et al., 2016). These employ a teacher algorithm to locally optimize trajectories which are then summarized by a neural network policy. They gain data-efficiency by employing aggressive local policy updates and extensive training of their neural network policy. The teacher can use model-based (Levine et al., 2015) or model-free (Yahya et al., 2016) trajectory optimization. The former can struggle with strong discontinuities in the dynamics, and both rely on access to a well defined and fully observed state space.

Alternatively, model-free value function approaches enable effective reuse of data and do not require full access to the state space or to a model of the environment. The use of rich function approximators such as neural networks in value function methods dates back many years, e.g. (Webros, 1990; Tesauro, 1995; Hunt et al., 1992; Hafner & Riedmiller, 2007), and recent success with deep learning has driven the development of new end-to-end training methods for challenging control problems (Mnih et al., 2015; Gu et al., 2016b;c; Lillicrap et al., 2016). Closely related to the ideas followed in this paper, (Gu et al., 2016a) demonstrates that value-based methods using neural network approximators can be used for relatively simple robotic manipulation tasks in the real world (Gu et al., 2016c). This work also followed a recent trend towards the use of experimental rigs that allow parallelized data collection, e.g. (Pinto & Gupta, 2015), via the use of multiple robots from which experience is gathered simultaneously (Levine et al., 2016; Gu et al., 2016a; Yahya et al., 2016).

Finally, the use of demonstration data has played an important role in robot learning, both as a means to obtain suitable cost functions (Boularias et al., 2011; Kalakrishnan et al., 2013; Finn et al., 2016; Gupta et al., 2016) but also to bootstrap and thus speed up learning. For the latter, kinesthetic teaching is widely used (Peters & Schaal, 2006; Kalakrishnan et al., 2011; Pastor et al., 2011; Yahya et al., 2016), though the need for a human operator to be able to guide the robot through the full movement can be limiting.

3 BACKGROUND

In this section we explain the learning problem and summarize the DDPG algorithm. We explain its relationship to other Q-function based RL algorithms in the Appendix.

The RL problem consists of an agent interacting with an environment in a sequential manner to maximize the expected sum of rewards. At time $t$ the agent observes the state $x_t$ of the system and produces a control $u_t = \pi(x_t; \theta)$ according to policy $\pi$ with parameters $\theta$. This leads the environment to transition to a new state $x_{t+1}$ according to the dynamics $x_{t+1} \sim p(\cdot \mid x_t, u_t)$, and the agent receives a reward $r_t = r(x_t, u_t)$.
The goal is to maximize the expected sum of discounted rewards

$$J(\theta) = \mathbb{E}_{\rho}\Big[\sum_t \gamma^{t-1} r(x_t, u_t)\Big],$$

where $\rho$ is the distribution over trajectories $\tau = (x_0, u_0, x_1, u_1, \ldots)$ induced by the current policy: $\rho(\tau) = p(x_0) \prod_{t>0} p(x_t \mid x_{t-1}, \pi(x_{t-1}; \theta))$.

DPG (Silver et al., 2014) is a policy gradient algorithm for continuous action spaces that improves the deterministic policy function $\pi$ via backpropagation of the action-value gradient from a learned approximation to the Q-function. Specifically, DPG maintains a parametric approximation $Q(x_t, u_t; \phi)$ to the action value function $Q^\pi(x_t, u_t)$ associated with $\pi$, and $\phi$ is chosen to minimize

$$\mathbb{E}_{(x_t, u_t, x_{t+1}) \sim \tilde{\rho}}\big[(Q(x_t, u_t; \phi) - y_t)^2\big], \quad (1)$$

where $y_t = r(x_t, u_t) + \gamma Q(x_{t+1}, \pi(x_{t+1}))$. $\tilde{\rho}$ is usually close to the marginal transition distribution induced by $\pi$ but often not identical. For instance, during learning $u_t$ may be chosen to be a noisy version of $\pi(x_t; \theta)$, e.g. $u_t = \pi(x_t; \theta) + \epsilon$ where $\epsilon \sim \mathcal{N}(0, \sigma^2)$, and $\tilde{\rho}$ is then the transition distribution induced by this noisy policy. The policy parameters are then updated according to

$$\Delta\theta \propto \mathbb{E}_{(x,u) \sim \tilde{\rho}}\Big[\frac{\partial}{\partial u} Q(x, u; \phi)\, \frac{\partial}{\partial \theta} \pi(x; \theta)\Big]. \quad (2)$$

DDPG (Lillicrap et al., 2016) incorporates experience replay and target networks into the original DPG algorithm: Experience is collected into a buffer and updates to $\theta$ and $\phi$ (eqs. 1, 2) are computed using mini-batch updates with samples from this buffer. A second set of "target networks" is maintained with parameters $\theta'$ and $\phi'$. These are used to compute $y_t$ in eqn. (1) and their parameters are slowly updated towards the current parameters $\theta$, $\phi$. Both measures significantly improve the stability of DDPG.

The use of a Q-function facilitates off-policy learning. This decouples the collection of experience data from the updates of the policy and value networks, which allows us to make many parameter update steps per step in the environment, ensuring that the networks are well fit to the data that is currently available.
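A minimal single-update sketch of the critic and actor updates (Eqs. 1-2) in PyTorch; network classes, optimizers and the replay-buffer batch format are assumptions for illustration.

```python
import torch

def dpg_update(batch, actor, critic, target_actor, target_critic,
               actor_opt, critic_opt, gamma=0.99):
    x, u, r, x_next = batch                          # minibatch of transitions
    # Critic: regress Q(x, u) toward the bootstrapped target y_t (Eq. 1),
    # with the target networks providing the slowly-moving bootstrap value
    with torch.no_grad():
        y = r + gamma * target_critic(x_next, target_actor(x_next))
    critic_loss = ((critic(x, u) - y) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    # Actor: follow the action-value gradient (Eq. 2) by maximizing Q(x, pi(x))
    actor_loss = -critic(x, actor(x)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```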
4 TASK AND EXPERIMENTAL SETUP

The full task that we consider in this paper is to use the arm to pick up one Lego brick from the table and stack it onto the remaining brick. This "composite" task can be decomposed into several subtasks, including grasping and stacking. We consider the full task as well as the two sub-tasks in isolation:

| Task | Starting state | Reward |
| Grasp | Both bricks on table | Brick 1 above table |
| StackInHand | Brick 1 in gripper | Bricks stacked |
| Stack | Both bricks on table | Bricks stacked |

In every episode the arm starts in a random configuration with an appropriate positioning of gripper and brick. We implement the experiments in a physically plausible simulation in MuJoCo (Todorov et al., 2012) with the simulated arm being closely matched to a real-world Jaco arm setup in our lab. Episodes are terminated after 150 steps of 50ms of physical simulation time. The agent thus has 7.5 seconds to perform the task. Unless otherwise noted we give a reward of one upon successful completion of the task and zero otherwise.

The observation contains information about the angles and angular velocities of the 6 joints of the arm and 3 fingers of the gripper, as well as the position and orientation of the two bricks and relative distances of the two bricks to the pinch position of the gripper (roughly the position where the fingertips would meet if the fingers are closed). The 9-dimensional continuous action directly sets the velocities of the arm and finger joints. In experiments not reported in this paper we have tried using observations with only the raw state of the brick and the arm configuration (i.e. without the vector between the end-effector and brick). This increased the number of environment interactions needed roughly by a factor of two to three.

For each experimental condition we optimize the learning rate and train and measure the performance of 10 agents with different random initial network parameters. After every 30 training episodes the agent is evaluated for 10 episodes. We used the mean performance at each evaluation phase as the performance measure presented in all plots. In the plots the line shows the mean performance across agents and the shaded regions correspond to the range between the worst and best performing one. In all plots the x-axis represents the number of environment transitions seen so far at an evaluation point (in millions) and the y-axis represents episode return.

A video of the full setup and examples of policies solving the component and full tasks can be found here: https://www.youtube.com/watch?v=7vmXOGwLq24.

5 ASYNCHRONOUS DPG WITH VARIABLE REPLAY STEPS

In this section we study two methods for extending the DDPG algorithm and find that they can have a significant effect on data and computation efficiency, in some cases making the difference between finding a solution to a task or not.

Multiple mini-batch replay steps. Deep neural networks can require many steps of gradient descent to converge. In a supervised learning setting this affects purely computation time. In reinforcement learning, however, neural network training is interleaved with the acquisition of interaction experience, giving rise to a complex interaction. To gain a better understanding of this effect we modified the original DDPG algorithm as described in (Lillicrap et al., 2016) to perform a fixed but configurable number of mini-batch updates per step in the environment. In (Lillicrap et al., 2016) one update was performed after each new interaction step.

We refer to DDPG with a configurable number of update steps as DPG-R and tested the impact of this modification on the two primitive tasks Grasp and StackInHand. The results are shown in Fig. 2 (left). The number of update steps has a dramatic effect on the amount of experience data required. After one million interactions the original version of DDPG with a single update step (blue traces) appears to have made no progress towards a successful policy for stacking, and only a small number of controllers have learned to grasp. Increasing the number of updates per interaction to 5 greatly improves the results (green traces), and with 40 updates (purple) the first successful policies for stacking and grasping are obtained after 200,000 and 300,000 interactions respectively (corresponding to 1,300 and 2,000 episodes). Notably, for both tasks we continue to see a reduction in total environment interaction up to 40 update steps, the maximum used in the experiment.

One possible explanation for this effect is the interaction alluded to above: insufficient training may lead to a form of underfitting of the policy. Since the policy is then used for exploration, this affects the quality of the data collected in the next iteration, which in turn has an effect on training in future iterations, leading to overall slow learning.

We have observed in various experiments (not shown) that other aspects of the network architecture (layer sizes, non-linearities) can similarly affect learning speed. Finally, it is important to note that one cannot replicate the effect of multiple replay steps simply by increasing the learning rate.
In practice we find that attempts to do so make training unstable.

[Figure 2: Left: (a,b) Mean episode return as a function of number of transitions seen (in millions) of DPG-R (single worker) on the Grasp (left) and StackInHand (right) task with 1 (blue), 5 (green), 10 (red), 20 (yellow) and 40 (purple) mini-batch updates per environment step. Right: (c,d) Mean episode return as a function of number of transitions seen (in millions) of ADPG-R (16 workers) on the Grasp (c) and StackInHand (d) task. Same colors as in (a,b).]

Asynchronous DPG. Increasing the number of update steps relative to the number of environment interactions greatly improves the data efficiency but also dramatically increases compute time. When the overall run time is dominated by the network updates it may scale linearly with the number of replay steps. In this setting experiments can quickly become impractical and parallelizing computation can provide a solution. Similarly, in a robotics setup the overall run time is typically dominated by the collection of interactions. In this case it is desirable to be able to collect experience from multiple robots simultaneously (e.g. as in (Yahya et al., 2016; Gu et al., 2016a)).

We therefore develop an asynchronous version of DPG that allows parallelization of training and environment interaction by combining multiple instances of a DPG-R actor and critic that each share their network parameters and can be configured to either share or have independent experience replay buffers. This is inspired by the A3C algorithm proposed in (Mnih et al., 2016), and also analogous to (Gu et al., 2016a; Yahya et al., 2016): We employ asynchronous updates whereby each worker has its own copy of the parameters and uses it for computing gradients which are then applied to a shared parameter instance without any synchronization. We use the Adam optimizer (Kingma & Ba, 2014) with local non-shared first-order statistics and a single shared instance of second-order statistics. The pseudo code of the asynchronous DPG-R is shown in algorithm box 1.

Algorithm 1: (A)DPG-R algorithm
  Initialize global shared critic and actor network parameters: θ^{Q''} and θ^{μ''}
  Pseudo code for each learner thread:
  Initialize critic network Q(s, a | θ^Q) and policy network μ(s | θ^μ) with weights θ^Q and θ^μ.
  Initialize target networks Q' and μ' with weights: θ^{Q'} ← θ^Q, θ^{μ'} ← θ^μ
  Initialize replay buffer R
  for episode = 1, M do
    Receive initial observation state s_1
    for t = 1, T do
      Select action a_t = μ(s_t | θ^μ) + N_t according to the current policy and exploration noise
      Perform action a_t, observe reward r_t and new state s_{t+1}
      Store transition (s_t, a_t, r_t, s_{t+1}) in R
      for update = 1, R do
        Sample a random minibatch of N transitions (s_i, a_i, r_i, s_{i+1}) from R
        Set y_i = r_i + γ Q'(s_{i+1}, μ'(s_{i+1} | θ^{μ'}) | θ^{Q'})
        Perform asynchronous update of the shared critic parameters by minimizing the loss:
          L = (1/N) Σ_i (y_i − Q(s_i, a_i | θ^Q))^2
        Perform asynchronous update of the shared policy parameters using the sampled gradient:
          ∇_{θ^{μ''}} ≈ (1/N) Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i} ∇_{θ^μ} μ(s | θ^μ)|_{s=s_i}
        Copy the shared parameters to the local ones: θ^Q ← θ^{Q''}, θ^μ ← θ^{μ''}
        Every S update steps, update the target networks: θ^{Q'} ← θ^Q, θ^{μ'} ← θ^μ
      end for
    end for
  end for
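To make the structure of Algorithm 1 concrete, the following is a compact Python skeleton of the per-worker interaction loop; `env`, `buffer`, `networks` and the `dpg_update` routine sketched earlier are assumed to exist, and the key change from vanilla DDPG is the configurable number of replay steps R per environment step.

```python
def dpg_r_episode(env, buffer, replay_steps_R, batch_size, networks):
    s = env.reset()
    done = False
    while not done:
        a = networks.actor_with_noise(s)        # exploration policy pi(s) + noise
        s_next, r, done = env.step(a)
        buffer.add(s, a, r, s_next)
        # R mini-batch updates per single environment step (R = 1 recovers DDPG)
        for _ in range(replay_steps_R):
            batch = buffer.sample(batch_size)
            dpg_update(batch, *networks.all())  # critic + actor update as above
        s = s_next
```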
[Figure 3: Data-efficiency and computational efficiency of ADPG-R. Left: Performance of 16 workers vs a single worker in terms of environment transitions (x-axis is millions of transitions; total for all workers) for Grasp and StackInHand tasks. Right: Performance as a function of "wallclock" time (per-worker). Both with best replay step and learning rate selection.]

Figure 2 (right) compares the performance of ADPG-R for different numbers of update steps and 16 workers (all workers performing both data collection and computing updates). Similar to Fig. 2 (left) we find that increasing the ratio of update steps per environment step improves data efficiency, although the effect appears to be somewhat less pronounced than for DPG-R.

Figure 3 (left) directly compares the single-worker and asynchronous versions of DPG-R. In both cases we choose the best performing number of replay steps and learning rate. As we can see, the use of multiple workers does not affect overall data efficiency for StackInHand, but it is reduced roughly by half for Grasp, with the note that the single worker still hasn't quite converged.

Figure 3 (right) plots the same data but as a function of environment steps per worker. This measure corresponds to the optimal wall clock efficiency that we can achieve, under the assumption that communication time between workers is negligible compared to environment interaction and gradient computation (this usually holds up to a certain degree of parallelization). The theoretical wall clock time for 16 workers is about 16x lower for StackInHand and roughly 8x lower for Grasp.

Overall these results show that distributing neural network training and data collection across multiple computers and robots can be an extremely effective way of reducing the overall run time of experiments and thus making it feasible to run more challenging experiments. We make extensive use of asynchronous DPG for the remaining experiments.

6 COMPOSITE SHAPING REWARDS

The reward function in the previous section was a "sparse" or "pure" reward, where a reward of 1 was given for states that correspond to successful task completion (brick lifted above 3cm for grasp; bricks stacked for stack) and 0 otherwise. For this reward to be useful it is necessary that the agent enters the goal region at least some of the time. While possible for each of the two subtasks in isolation, this is highly unlikely for the full task: without further guidance naïve random exploration is very unlikely to lead to a successful grasp-and-stack, as we experimentally verify in Figure 4.

One solution are informative shaping rewards that provide a learning signal even for simple exploration strategies, e.g. by embedding information about the value function in the reward function. This is a convenient way of embedding prior knowledge about the solution and is a widely and successfully used approach for simple problems. For complex sequential or compositional tasks such as the one we are interested in here, however, a suitable reward function is often non-obvious and may require considerable effort and experimentation. In this section we propose and analyze several reward functions for the full Stack task, and provide a general recipe that can be applied to other tasks with compositional structure.

Shaping rewards are often defined using a distance from or progress towards a goal state. Analogously, our composite (shaping) reward functions return an increasing reward as the agent completes components of the full task. They are either piece-wise constant or smoothly varying across different regions of the state space that correspond to completed subtasks.
In the case of Stack we use the following reward components (see the Appendix):

Sparse reward components:

| Subtask | Description | Reward |
| Reach Brick 1 | hypothetical pinch site position of the fingers is in a box around the first brick position | 0.125 |
| Grasp Brick 1 | the first brick is located at least 3cm above the table surface, which is only possible if the arm is holding the brick | 0.25 |
| Stack Brick 1 | bricks stacked | 1.00 |

Smoothly varying reward components:

| Reaching to brick 1 | distance of the pinch site to the first brick - non-linear bounded | [0, 0.125] |
| Reaching to stack | while grasped: distance of the first brick to the stacking site of the second brick - non-linear bounded | [0.25, 0.5] |

These reward components can be combined in different ways. We consider three different composite rewards in addition to the original sparse task reward:

Grasp shaping: Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.25 when brick 1 has been grasped and a reward of 1.0 after completion of the full task.

Reach and grasp shaping: Reach brick 1, Grasp brick 1 and Stack brick 1, i.e. the agent receives a reward of 0.125 when close to brick 1, a reward of 0.25 when brick 1 has been grasped, and a reward of 1.0 after completion of the full task.

Full composite shaping: the sparse reward components as before in combination with the distance-based smoothly varying components.

A full description of the reward functions is provided in the Appendix.

Figure 4 shows the results of learning with the above reward functions (blue traces). No progress on the full task is made when learning with the sparse reward only. Grasp shaping allows the agent to learn to grasp but learning is very slow. Reach and grasp shaping substantially reduces the time to successful grasping but learning does not progress beyond. Only with Full composite shaping, i.e. with an additional intermediate reward component as in continuous reach, grasp, stack, is the full stacking task solved.

The actual reward functions given above are specific to the stacking task. But the general principle, a piecewise-constant sequence of rewards that increases as components of the task are completed, augmented with simple smoothly varying rewards that guide towards completion of individual subtasks, should be widely applicable. It is important to note that the above reward functions do not describe all aspects of the task solution: we do not tell the agent how to grasp or stack but merely to bring the arm into a position where grasping (stacking) can be discovered from exploration and the sparse reward component. This eases the burden on the designer and is less likely to change the optimal solution in unwanted ways.
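A hedged sketch of this composite shaping recipe in Python; the distance scale and the smooth bounded term are illustrative assumptions, not the paper's exact functions (those are in the paper's appendix).

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D positions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def composite_stack_reward(pinch_pos, brick1_pos, brick2_top, grasped, stacked):
    """Piecewise-constant subtask rewards plus smooth distance-based terms."""
    def smooth(d, scale=0.1):
        return math.exp(-(d / scale) ** 2)        # bounded, non-linear in distance
    if stacked:
        return 1.0                                 # full task completed
    if grasped:                                    # brick 1 held >= 3cm above table
        return 0.25 + 0.25 * smooth(dist(brick1_pos, brick2_top))   # in [0.25, 0.5]
    return 0.125 * smooth(dist(pinch_pos, brick1_pos))              # in [0, 0.125]
```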
7 LEARNING FROM INSTRUCTIVE STATES

In the previous section we described a strategy for designing effective compositional reward functions that alleviates the burden of exploration. However, designing such rewards can still be error prone and we did indeed encounter several unexpected failure cases as shown in the supplemental video (https://www.youtube.com/watch?v=7vmXOGwLq24) and detailed in the Appendix. Furthermore, suitable rewards may rely on privileged information not easily available in a real robotics setup. In this section we describe a second, complementary strategy for embedding prior knowledge into the training process and improving exploration.

Specifically we propose to let the distribution of states at which the learning agent is initialized at the beginning of an episode reflect the compositional nature of the task: e.g., instead of initializing the agent at the beginning of the full task with both bricks on the table, we can initialize the agent occasionally with the brick already in its hand, and thus prepared for stacking in the same way as when learning the subtask StackInHand in section 5.

More generally, we can initialize episodes with states taken from anywhere along or close to successful trajectories. Suitable states can be either manually defined (as in section 5), or they can be obtained from a human demonstrator or a previously trained agent that can partially solve the task. This can be seen as a form of apprenticeship learning in which we provide teacher information by influencing the state visitation distribution. Unlike many other forms of imitation or apprenticeship learning, however, this approach requires neither complete trajectories nor demonstrator actions.
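A minimal sketch of episode initialization from instructive states; the trajectory format and the mixing probability are assumptions for illustration.

```python
import random

def sample_start_state(demo_trajectories, default_reset, p_instructive=0.5):
    """With probability p_instructive, reset the episode to a state sampled
    uniformly from along successful demonstration trajectories; otherwise use
    the standard reset (e.g. both bricks on the table)."""
    if demo_trajectories and random.random() < p_instructive:
        traj = random.choice(demo_trajectories)   # one successful trajectory
        return random.choice(traj)                # any state along it
    return default_reset()
```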
[Figure 4: Effect of different reward shaping strategies and starting state distributions for the composite Stack task. Left to right: (a) No reward shaping; (b,c,d) reward shaping as explained in main text. Colors indicate starting states: Both bricks on the table (blue); manually defined initial states (green); and initial states continuously on solution trajectories (red). On all plots, x-axis is millions of transitions of total experience and y-axis is mean episode return. Policies with mean return over 100 robustly perform the full Stack from different starting states. Without reward shaping and basic start states only (a, blue) there is no learning progress. Instructive start states allow learning even with very uninformative sparse rewards indicating only overall task success (a, red).]

We perform experiments with two methods for generating the starting states. The first one uses the manually defined initial states from section 5 (both bricks located on the table or in states where the first brick is already in the gripper as if the agent had just performed a successful grasp). The second method initializes the learning agent at start states sampled randomly from successful demonstration trajectories (derived from agents previously trained end-to-end on the compositional reward).

The results of these experiments are shown in Figure 4. Green traces show results for the four reward functions from section 6 in combination with the manually defined start states (from section 5). While there is still no learning for the sparse reward case, results obtained with all other reward functions are improved. In particular, even for the second simplest reward function (Grasp shaping) we obtain some controllers that can solve the full task. Learning with the full composite shaping reward is faster and more robust than without the use of instructive states.

The leftmost plot of Figure 4 (red trace) shows results for the case where the episode is initialized anywhere along trajectories from a pre-trained controller (which was obtained using full composite shaping; rightmost blue curve). We use this start state distribution in combination with the basic sparse reward for the overall case (Stack without shaping). Episodes were configured to be 50 steps, which we found to be better suited to this setup with assisted exploration. During testing we still used episodes with 150 steps as before (so that the traces are comparable). We can see a large improvement in performance in comparison to the two-state method variant, even in the absence of any shaping rewards. We can learn a robust policy for all seeds within a total of 1 million environment transitions — less than 1 hour of interaction time on 16 simulated robots.

These results suggest that an appropriate start state distribution not only speeds up learning, it also allows simpler reward functions to be used. In our final experiment we found that the simplest reward function (i.e. only indicating overall experimental success) was sufficient to solve the task. In this case the robustness of trained policies to starting state variation is also encouraging. Over 1000 test trials we obtain 99.2% success for Grasp, 98.2% for StackInHand, and 95.5% for the full Stack task.

8 CONCLUSION

We have introduced two extensions to the DDPG algorithm which make it a practical method for learning robust policies for complex continuous control tasks. We have shown that by decoupling the frequency of network updates from the environment interaction we can dramatically improve data-efficiency. Parallelizing data acquisition and learning substantially reduces wall clock time.

In addition, we presented two methods that help to guide the learning process towards good solutions and thus reduce the pressure on exploration strategies and speed up learning. In combination these contributions allow us to solve a challenging manipulation problem end-to-end, suggesting that many hard control problems lie within the reach of modern learning methods.

It is of course challenging to judge the transfer of results in simulation to the real world. We have taken care to design a physically realistic simulation, and in initial experiments, which we have performed both in simulation and on the physical robot, we generally find a good correspondence of performance and learning speed between simulation and real world. This makes us optimistic that performance numbers may also hold when going to the real world. A second limitation of our simulated setup is that it currently uses information about the state of the environment that would require additional instrumentation of the experimental setup, e.g. to determine the position of the two bricks in the work space. These are issues that need to be addressed with care as experiments move to robotics hardware in the lab. Nevertheless, the algorithms and techniques presented here offer important guidance for the application of deep reinforcement learning methods to dexterous manipulation on a real robot.
SkHZuZqxf
Multiple minor extensions
3: Clear rejection
The title is too generic and even a bit misleading. Dexterous manipulation usually refers to more complex skills, like in-hand manipulation or using the fingers to turn an object, and not simple pick and place tasks. Reinforcement learning methods generally aim to be data-efficient, and the method does not seem designed specifically for dexterous manipulation (which is actually a positive point, as it is more general). The paper presents two extensions for DDPG: multiple network updates per physical interaction, and asynchronous updates from multiple robots. As the authors themselves state, these contributions are fairly straightforward, and the contributions are largely based on prior works. The authors do evaluate the methods with different parameter settings to see the effects on learning performance. The simulation environment is fairly basic and seems unrealistic. The hand always starts close to the blocks, which are close together, so the inverse kinematics will be close to linear. The blocks are always oriented in the same direction and they can connect easily with no need to squeeze or wiggle them together. The task seems more difficult from the description in the paper, and the authors should describe the environment in more detail. Does the robot learn to flip the blocks over such that they can be stacked? The videos show the blocks turning over accidentally, but then the robot seems to give up. Having the robot learn to turn the blocks would make for a more challenging task and a better policy. The paper's third contribution is a recipe for constructing shaped reward functions for composite tasks. The method relies on a predefined task structure (reach-grasp-stack) and is very similar to reward shaping already used in many other reinforcement learning for manipulation papers. A comparison of different methods for defining the rewards and a more formal description of the reward generation procedure would improve the impact of this section. The authors should also consider using tasks with longer sequences of actions, e.g., stacking four blocks. The fourth and final listed contribution is learning from demonstrated states. Providing the robot with prior knowledge and easier partial tasks will result in faster learning. This result is not surprising. It is not clear, though, how applicable this approach is for a real robot system. It effectively assumes that the robot can grasp the block and pick it up, such that it can learn the stacking part, while simultaneously still learning how to grasp the block and pick it up. For testing the real robot applicability, the authors should try having the robot learn the task without simulation resets. What are the actual benefits of using deep learning in this scenario? The authors mention skill representations, such as dynamic motor primitives, which employ significantly more prior knowledge than a deep network. However, as demonstrations of the task are provided, the task is divided into steps, the locations of the objects and finger tips are given, a suitable reward function is provided, and the generalization is only over the object positions, why not train a set of DMPs and optimize them with some additional reinforcement learning? The authors should consider adding a Cartesian DMP policy as a benchmark, as well as discussing the benefits of the proposed approach given the prior knowledge.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
ryMxXPFex
ICLR.cc/2017/conference
2017
Discrete Variational Autoencoders
["Jason Tyler Rolfe"]
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
["Deep learning", "Unsupervised Learning"]
ABSTRACT
Probabilistic models with discrete latent variables naturally capture datasets composed of discrete classes. However, they are difficult to train efficiently, since backpropagation through discrete variables is generally not possible. We present a novel method to train a class of probabilistic models with discrete latent variables using the variational autoencoder framework, including backpropagation through the discrete latent variables. The associated class of probabilistic models comprises an undirected discrete component and a directed hierarchical continuous component. The discrete component captures the distribution over the disconnected smooth manifolds induced by the continuous component. As a result, this class of models efficiently learns both the class of objects in an image, and their specific realization in pixels, from unsupervised data; and outperforms state-of-the-art methods on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
1 INTRODUCTION
Unsupervised learning of probabilistic models is a powerful technique, facilitating tasks such as denoising and inpainting, and regularizing supervised tasks such as classification (Hinton et al., 2006; Salakhutdinov & Hinton, 2009; Rasmus et al., 2015). Many datasets of practical interest are projections of underlying distributions over real-world objects into an observation space; the pixels of an image, for example. When the real-world objects are of discrete types subject to continuous transformations, these datasets comprise multiple disconnected smooth manifolds. For instance, natural images change smoothly with respect to the position and pose of objects, as well as scene lighting. At the same time, it is extremely difficult to directly transform the image of a person to one of a car while remaining on the manifold of natural images.
It would be natural to represent the space within each disconnected component with continuous variables, and the selection amongst these components with discrete variables. In contrast, most state-of-the-art probabilistic models use exclusively discrete variables — as do DBMs (Salakhutdinov & Hinton, 2009), NADEs (Larochelle & Murray, 2011), sigmoid belief networks (Spiegelhalter & Lauritzen, 1990; Bornschein et al., 2016), and DARNs (Gregor et al., 2014) — or exclusively continuous variables — as do VAEs (Kingma & Welling, 2014; Rezende et al., 2014) and GANs (Goodfellow et al., 2014).[1] Moreover, it would be desirable to apply the efficient variational autoencoder framework to models with discrete values, but this has proven difficult, since backpropagation through discrete variables is generally not possible (Bengio et al., 2013; Raiko et al., 2015).
We introduce a novel class of probabilistic models, comprising an undirected graphical model defined over binary latent variables, followed by multiple directed layers of continuous latent variables. This class of models captures both the discrete class of the object in an image, and its specific continuously deformable realization. Moreover, we show how these models can be trained efficiently using the variational autoencoder framework, including backpropagation through the binary latent variables. We ensure that the evidence lower bound remains tight by incorporating a hierarchical approximation to the posterior distribution of the latent variables, which can model strong correlations.
Since these models efficiently marry the variational autoencoder framework with discrete latent variables, we call them discrete variational autoencoders (discrete VAEs).
[Footnote 1: Spike-and-slab RBMs (Courville et al., 2011) use both discrete and continuous latent variables.]
1.1 VARIATIONAL AUTOENCODERS ARE INCOMPATIBLE WITH DISCRETE DISTRIBUTIONS
Conventionally, unsupervised learning algorithms maximize the log-likelihood of an observed dataset under a probabilistic model. Even stochastic approximations to the gradient of the log-likelihood generally require samples from the posterior and prior of the model. However, sampling from undirected graphical models is generally intractable (Long & Servedio, 2010), as is sampling from the posterior of a directed graphical model conditioned on its leaf variables (Dagum & Luby, 1993).
In contrast to the exact log-likelihood, it can be computationally efficient to optimize a lower bound on the log-likelihood (Jordan et al., 1999), such as the evidence lower bound (ELBO, $\mathcal{L}(x, \theta, \phi)$; Hinton & Zemel, 1994):
$$\mathcal{L}(x, \theta, \phi) = \log p(x|\theta) - \mathrm{KL}[q(z|x,\phi) \,\|\, p(z|x,\theta)], \quad (1)$$
where $q(z|x,\phi)$ is a computationally tractable approximation to the posterior distribution $p(z|x,\theta)$. We denote the observed random variables by $x$, the latent random variables by $z$, the parameters of the generative model by $\theta$, and the parameters of the approximating posterior by $\phi$. The variational autoencoder (VAE; Kingma & Welling, 2014; Rezende et al., 2014; Kingma et al., 2014) regroups the evidence lower bound of Equation 1 as:
$$\mathcal{L}(x, \theta, \phi) = -\underbrace{\mathrm{KL}[q(z|x,\phi) \,\|\, p(z|\theta)]}_{\text{KL term}} + \underbrace{\mathbb{E}_q[\log p(x|z,\theta)]}_{\text{autoencoding term}}. \quad (2)$$
In many cases of practical interest, such as Gaussian $q(z|x)$ and $p(z)$, the KL term of Equation 2 can be computed analytically. Moreover, a low-variance stochastic approximation to the gradient of the autoencoding term can be obtained using backpropagation and the reparameterization trick, so long as samples from the approximating posterior $q(z|x)$ can be drawn using a differentiable, deterministic function $f(x, \rho, \phi)$ of the combination of the inputs, the parameters, and a set of input- and parameter-independent random variables $\rho \sim D$. For instance, samples can be drawn from a Gaussian distribution with mean and variance determined by the input, $\mathcal{N}(m(x,\phi), v(x,\phi))$, using $f(x, \rho, \phi) = m(x,\phi) + \sqrt{v(x,\phi)} \cdot \rho$, where $\rho \sim \mathcal{N}(0,1)$. When such an $f(x, \rho, \phi)$ exists,
$$\frac{\partial}{\partial \phi} \mathbb{E}_{q(z|x,\phi)}[\log p(x|z,\theta)] \approx \frac{1}{N} \sum_{\rho \sim D} \frac{\partial}{\partial \phi} \log p(x \,|\, f(x, \rho, \phi), \theta). \quad (3)$$
The reparameterization trick can be generalized to a large set of distributions, including nonfactorial approximating posteriors. We address this issue carefully in Appendix A, where we find that an analog of Equation 3 holds. Specifically, $D_i$ is the uniform distribution between 0 and 1, and
$$f(x, \rho, \phi) = \mathbf{F}^{-1}(\rho), \quad (4)$$
where $\mathbf{F}$ is the conditional-marginal cumulative distribution function (CDF) defined by:
$$\mathbf{F}_i(x) = \int_{x'_i = -\infty}^{x} p(x'_i \,|\, x_1, \ldots, x_{i-1}) \, dx'_i. \quad (5)$$
However, this generalization is only possible if the inverse of the conditional-marginal CDF exists and is differentiable.
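To make the continuous case concrete, the following is a minimal PyTorch sketch of the reparameterization trick of Equations 3 and 4; the function names and the exponential-distribution example are our own illustration, not code from the paper.
```python
import torch

def gaussian_reparam(m, log_v):
    # Gaussian instance of Equation 3: z = f(x, rho, phi) = m + sqrt(v) * rho,
    # with rho ~ N(0, 1) independent of the inputs and parameters.
    rho = torch.randn_like(m)
    return m + torch.exp(0.5 * log_v) * rho

def inverse_cdf_reparam(inv_cdf, shape):
    # General form of Equation 4: z = F^{-1}(rho) with rho ~ U[0, 1].
    # Differentiable only when F^{-1} exists and is differentiable, which
    # fails for the piecewise-constant CDF of a discrete distribution.
    rho = torch.rand(shape)
    return inv_cdf(rho)

# Example: Exponential(rate 1) samples via F^{-1}(rho) = -log(1 - rho).
z = inverse_cdf_reparam(lambda rho: -torch.log(1.0 - rho), (5,))
```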
A formulation comparable to Equation 3 is not possible for discrete distributions, such as restricted Boltzmann machines (RBMs) (Smolensky, 1986):
$$p(z) = \frac{1}{Z_p} e^{-E_p(z)} = \frac{1}{Z_p} e^{z^\top W z + b^\top z}, \quad (6)$$
where $z \in \{0,1\}^n$, $Z_p$ is the partition function of $p(z)$, and the lateral connection matrix $W$ is triangular. Any approximating posterior that only assigns nonzero probability to a discrete domain corresponds to a CDF that is piecewise-constant. That is, the range of the CDF is a proper subset of the interval $[0,1]$. The domain of the inverse CDF is thus also a proper subset of $[0,1]$, and its derivative is not defined, as required in Equations 3 and 4.[2]
[Footnote 2: This problem remains even if we use the quantile function, $F_p^{-1}(\rho) = \inf\{z \in \mathbb{R} : \int_{z'=-\infty}^{z} p(z')\,dz' \geq \rho\}$, the derivative of which is either zero or infinite if $p$ is a discrete distribution.]
In the following sections, we present the discrete variational autoencoder (discrete VAE), a hierarchical probabilistic model consisting of an RBM,[3] followed by multiple directed layers of continuous latent variables. This model is efficiently trainable using the variational autoencoder formalism, as in Equation 3, including backpropagation through its discrete latent variables.
1.2 RELATED WORK
Recently, there have been many efforts to develop effective unsupervised learning techniques by building upon variational autoencoders. Importance weighted autoencoders (Burda et al., 2016), Hamiltonian variational inference (Salimans et al., 2015), normalizing flows (Rezende & Mohamed, 2015), and variational Gaussian processes (Tran et al., 2016) improve the approximation to the posterior distribution. Ladder variational autoencoders (Sønderby et al., 2016) increase the power of the architecture of both approximating posterior and prior. Neural adaptive importance sampling (Du et al., 2015) and reweighted wake-sleep (Bornschein & Bengio, 2015) use sophisticated approximations to the gradient of the log-likelihood that do not admit direct backpropagation. Structured variational autoencoders use conjugate priors to construct powerful approximating posterior distributions (Johnson et al., 2016).
It is easy to construct a stochastic approximation to the gradient of the ELBO that admits both discrete and continuous latent variables, and only requires computationally tractable samples. Unfortunately, this naive estimate is impractically high-variance, leading to slow training and poor performance (Paisley et al., 2012). The variance of the gradient can be reduced somewhat using the baseline technique, originally called REINFORCE in the reinforcement learning literature (Mnih & Gregor, 2014; Williams, 1992; Mnih & Rezende, 2016), which we discuss in greater detail in Appendix B.
Prior efforts by Makhzani et al. (2015) to use multimodal priors with implicit discrete variables governing the modes did not successfully align the modes of the prior with the intrinsic clusters of the dataset. Rectified Gaussian units allow spike-and-slab sparsity in a VAE, but the discrete variables are also implicit, and their prior factorial and thus unimodal (Salimans, 2016). Graves (2016) computes VAE-like gradient approximations for mixture models, but the component models are assumed to be simple factorial distributions. In contrast, discrete VAEs generalize to powerful multimodal priors on the discrete variables, and a wider set of mappings to the continuous units.
The generative model underlying the discrete variational autoencoder resembles a deep belief network (DBN; Hinton et al., 2006). A DBN comprises a sigmoid belief network, the top layer of which is conditioned on the visible units of an RBM. In contrast to a DBN, we use a bipartite Boltzmann machine, with both sides of the bipartite split connected to the rest of the model. Moreover, all hidden layers below the bipartite Boltzmann machine are composed of continuous latent variables with a fully autoregressive layer-wise connection architecture.
Each layer $j$ receives connections from all previous layers $i < j$, with connections from the bipartite Boltzmann machine mediated by a set of smoothing variables. However, these architectural differences are secondary to those in the gradient estimation technique. Whereas DBNs are traditionally trained by unrolling a succession of RBMs, discrete variational autoencoders use the reparameterization trick to backpropagate through the evidence lower bound.
2 BACKPROPAGATING THROUGH DISCRETE LATENT VARIABLES BY ADDING CONTINUOUS LATENT VARIABLES
When working with an approximating posterior over discrete latent variables, we can effectively smooth the conditional-marginal CDF (defined by Equation 5 and Appendix A) by augmenting the latent representation with a set of continuous random variables. The conditional-marginal CDF over the new continuous variables is invertible and its inverse is differentiable, as required in Equations 3 and 4. We redefine the generative model so that the conditional distribution of the observed variables given the latent variables only depends on the new continuous latent space. This does not alter the fundamental form of the model, or the KL term of Equation 2; rather, it can be interpreted as adding a noisy nonlinearity, like dropout (Srivastava et al., 2014) or batch normalization with a small minibatch (Ioffe & Szegedy, 2015), to each latent variable in the approximating posterior and the prior. The conceptual motivation for this approach is discussed in Appendix C.
[Footnote 3: Strictly speaking, the prior contains a bipartite Boltzmann machine, all the units of which are connected to the rest of the model. In contrast to a traditional RBM, there is no distinction between the "visible" units and the "hidden" units. Nevertheless, we use the familiar term RBM in the sequel, rather than the more cumbersome "fully hidden bipartite Boltzmann machine."]
[Figure 1: Graphical models of the smoothed approximating posterior $q(\zeta, z|x, \phi)$ (a) and prior $p(x, \zeta, z|\theta)$ (b), and the network realizing the autoencoding term of the ELBO from Equation 2 (c). Continuous latent variables $\zeta_i$ are smoothed analogs of discrete latent variables $z_i$, and insulate $z$ from the observed variables $x$ in the prior (b). This facilitates the marginalization of the discrete $z$ in the autoencoding term of the ELBO, resulting in a network (c) in which all operations are deterministic and differentiable given independent stochastic input $\rho \sim U[0,1]$.]
Specifically, as shown in Figure 1a, we augment the latent representation in the approximating posterior with continuous random variables $\zeta$,[4] conditioned on the discrete latent variables $z$ of the RBM:
$$q(\zeta, z \,|\, x, \phi) = r(\zeta \,|\, z) \cdot q(z \,|\, x, \phi), \quad \text{where} \quad r(\zeta \,|\, z) = \prod_i r(\zeta_i \,|\, z_i).$$
The support of $r(\zeta|z)$ for all values of $z$ must be connected, so the marginal distribution $q(\zeta|x,\phi) = \sum_z r(\zeta|z) \cdot q(z|x,\phi)$ has a constant, connected support so long as $0 < q(z|x,\phi) < 1$. We further require that $r(\zeta|z)$ is continuous and differentiable except at the endpoints of its support, so the inverse conditional-marginal CDF of $q(\zeta|x,\phi)$ is differentiable in Equations 3 and 4, as we discuss in Appendix A.
[Footnote 4: We always use a variant of z for latent variables. This is $\zeta$ (zeta), or Greek z. The discrete latent variables $z$ can conveniently be thought of as English z.]
As shown in Figure 1b, we correspondingly augment the prior with $\zeta$:
$$p(\zeta, z \,|\, \theta) = r(\zeta \,|\, z) \cdot p(z \,|\, \theta),$$
where $r(\zeta|z)$ is the same as for the approximating posterior.
Finally, we require that the conditional distribution over the observed variables only depends on $\zeta$:
$$p(x \,|\, \zeta, z, \theta) = p(x \,|\, \zeta, \theta). \quad (7)$$
The smoothing distribution $r(\zeta|z)$ transforms the model into a continuous function of the distribution over $z$, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient.
Given this expansion, we can simplify Equations 3 and 4 by dropping the dependence on $z$ and applying Equation 16 of Appendix A, which generalizes Equation 3:
$$\frac{\partial}{\partial \phi} \mathbb{E}_{q(\zeta, z|x,\phi)}[\log p(x \,|\, \zeta, z, \theta)] \approx \frac{1}{N} \sum_{\rho \sim U(0,1)^n} \frac{\partial}{\partial \phi} \log p\left(x \,\big|\, \mathbf{F}^{-1}_{q(\zeta|x,\phi)}(\rho), \theta\right). \quad (8)$$
If the approximating posterior is factorial, then each $\mathbf{F}_i$ is an independent CDF, without conditioning or marginalization.
As we shall demonstrate in Section 2.1, $\mathbf{F}^{-1}_{q(\zeta|x,\phi)}(\rho)$ is a function of $q(z=1|x,\phi)$, where $q(z=1|x,\phi)$ is a deterministic probability value calculated by a parameterized function, such as a neural network. The autoencoder implicit in Equation 8 is shown in Figure 1c. Initially, input $x$ is passed into a deterministic feedforward network $q(z=1|x,\phi)$, for which the final nonlinearity is the logistic function. Its output $q$, along with an independent random variable $\rho \sim U[0,1]$, is passed into the deterministic function $\mathbf{F}^{-1}_{q(\zeta|x,\phi)}(\rho)$ to produce a sample of $\zeta$. This, along with the original input $x$, is finally passed to $\log p(x|\zeta,\theta)$. The expectation of this log probability with respect to $\rho$ is the autoencoding term of the VAE formalism, as in Equation 2. Moreover, conditioned on the input and the independent $\rho$, this autoencoder is deterministic and differentiable, so backpropagation can be used to produce a low-variance, computationally-efficient approximation to the gradient.
2.1 SPIKE-AND-EXPONENTIAL SMOOTHING TRANSFORMATION
As a concrete example consistent with sparse coding, consider the spike-and-exponential transformation from binary $z$ to continuous $\zeta$:
$$r(\zeta_i \,|\, z_i = 0) = \begin{cases} \infty, & \text{if } \zeta_i = 0 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=0)}(\zeta') = 1$$
$$r(\zeta_i \,|\, z_i = 1) = \begin{cases} \frac{\beta e^{\beta \zeta_i}}{e^{\beta} - 1}, & \text{if } 0 \le \zeta_i \le 1 \\ 0, & \text{otherwise} \end{cases} \qquad F_{r(\zeta_i|z_i=1)}(\zeta') = \left. \frac{e^{\beta \zeta}}{e^{\beta} - 1} \right|_0^{\zeta'} = \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1},$$
where $F_p(\zeta') = \int_0^{\zeta'} p(\zeta)\,d\zeta$ is the CDF of probability distribution $p$ in the domain $[0,1]$. This transformation from $z_i$ to $\zeta_i$ is invertible: $\zeta_i = 0 \Leftrightarrow z_i = 0$, and $\zeta_i > 0 \Leftrightarrow z_i = 1$ almost surely.[5]
We can now find the CDF for $q(\zeta|x,\phi)$ as a function of $q(z=1|x,\phi)$ in the domain $[0,1]$, marginalizing out the discrete $z$:
$$F_{q(\zeta|x,\phi)}(\zeta') = (1 - q(z=1|x,\phi)) \cdot F_{r(\zeta_i|z_i=0)}(\zeta') + q(z=1|x,\phi) \cdot F_{r(\zeta_i|z_i=1)}(\zeta') = q(z=1|x,\phi) \cdot \left( \frac{e^{\beta \zeta'} - 1}{e^{\beta} - 1} - 1 \right) + 1.$$
To evaluate the autoencoder of Figure 1c, and through it the gradient approximation of Equation 8, we must invert the conditional-marginal CDF $F_{q(\zeta|x,\phi)}$:
$$F^{-1}_{q(\zeta|x,\phi)}(\rho) = \begin{cases} \frac{1}{\beta} \cdot \log\left[ \left( \frac{\rho + q - 1}{q} \right) \cdot \left( e^{\beta} - 1 \right) + 1 \right], & \text{if } \rho \ge 1 - q \\ 0, & \text{otherwise,} \end{cases} \quad (9)$$
where we use the substitution $q(z=1|x,\phi) \rightarrow q$ to simplify notation. For all values of the independent random variable $\rho \sim U[0,1]$, the function $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ rectifies the input $q(z=1|x,\phi)$ if $q \le 1 - \rho$ in a manner analogous to a rectified linear unit (ReLU), as shown in Figure 2a. It is also quasi-sigmoidal, in that $F^{-1}$ is increasing but concave-down if $q > 1 - \rho$. The effect of $\rho$ on $F^{-1}$ is qualitatively similar to that of dropout (Srivastava et al., 2014), depicted in Figure 2b, or the noise injected by batch normalization (Ioffe & Szegedy, 2015) using small minibatches, shown in Figure 2c.
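As an illustration, Equation 9 can be implemented directly as a differentiable sampling function. The sketch below is our own (in PyTorch, with an arbitrary choice of $\beta$); it shows that gradients flow from a sample of $\zeta$ back to the encoder output $q$.
```python
import math
import torch

def spike_and_exp_inverse_cdf(q, rho, beta=3.0):
    # Equation 9: F^{-1}_{q(zeta|x,phi)}(rho). Returns 0 (the spike) when
    # rho < 1 - q, and a quasi-sigmoidal increasing function of q otherwise.
    arg = (rho + q - 1.0) / q * (math.exp(beta) - 1.0) + 1.0
    zeta = torch.log(arg.clamp(min=1.0)) / beta  # clamp guards log(<=0) off-branch
    return torch.where(rho >= 1.0 - q, zeta, torch.zeros_like(zeta))

logits = torch.randn(4, requires_grad=True)      # stand-in encoder output
q = torch.sigmoid(logits)                        # q(z = 1 | x, phi)
zeta = spike_and_exp_inverse_cdf(q, torch.rand(4))
zeta.sum().backward()                            # gradients reach the encoder
```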
Other expansions to the continuous space are possible. In Appendix D.1, we consider the case where both $r(\zeta_i|z_i=0)$ and $r(\zeta_i|z_i=1)$ are linear functions of $\zeta$; in Appendix D.2, we develop a spike-and-slab transformation; and in Appendix E, we explore a spike-and-Gaussian transformation where the continuous $\zeta$ is directly dependent on the input $x$ in addition to the discrete $z$.
[Footnote 5: In the limit $\beta \rightarrow \infty$, $\zeta_i = z_i$ almost surely, and the continuous variables can effectively be removed from the model. This trick can be used after training with finite $\beta$ to produce a model without smoothing variables $\zeta$.]
[Figure 2: Inverse CDF of the spike-and-exponential smoothing transformation for $\rho \in \{0.2, 0.5, 0.8\}$; $\beta = 1$ (dotted), $\beta = 3$ (solid), and $\beta = 5$ (dashed) (a). Rectified linear unit with dropout rate 0.5 (b). Shift (red) and scale (green) noise from batch normalization; with magnitude $-0.3$ (dashed), $+0.3$ (dotted), or $0$ (solid blue); before a rectified linear unit (c). In all cases, the abscissa is the input and the ordinate is the output of the effective transfer function. The novel stochastic nonlinearity $F^{-1}_{q(\zeta|x,\phi)}(\rho)$ from Figure 1c, of which (a) is an example, is qualitatively similar to the familiar stochastic nonlinearities induced by dropout (b) or batch normalization (c).]
3 ACCOMMODATING EXPLAINING-AWAY WITH A HIERARCHICAL APPROXIMATING POSTERIOR
When a probabilistic model is defined in terms of a prior distribution $p(z)$ and a conditional distribution $p(x|z)$, the observation of $x$ often induces strong correlations in the posterior $p(z|x)$ due to phenomena such as explaining-away (Pearl, 1988). Moreover, we wish to use an RBM as the prior distribution (Equation 6), which itself may have strong correlations. In contrast, to maintain tractability, many variational approximations use a product of independent approximating posterior distributions (e.g., mean-field methods, but also Kingma & Welling (2014); Rezende et al. (2014)).
To accommodate strong correlations in the posterior distribution while maintaining tractability, we introduce a hierarchy into the approximating posterior $q(z|x)$ over the discrete latent variables. Specifically, we divide the latent variables $z$ of the RBM into disjoint groups, $z_1, \ldots, z_k$,[6] and define the approximating posterior via a directed acyclic graphical model over these groups:
$$q(z_1, \zeta_1, \ldots, z_k, \zeta_k \,|\, x, \phi) = \prod_{1 \le j \le k} r(\zeta_j \,|\, z_j) \cdot q(z_j \,|\, \zeta_{i<j}, x, \phi), \quad \text{where}$$
$$q(z_j \,|\, \zeta_{i<j}, x, \phi) = \frac{e^{g_j(\zeta_{i<j}, x, \phi)^\top z_j}}{\prod_{z \in z_j} \left( 1 + e^{g_z(\zeta_{i<j}, x, \phi)} \right)}, \quad (10)$$
$z_j \in \{0,1\}^n$, and $g_j(\zeta_{i<j}, x, \phi)$ is a parameterized function of the inputs and preceding $\zeta_i$, such as a neural network. The corresponding graphical model is depicted in Figure 3a, and the integration of such hierarchical approximating posteriors into the reparameterization trick is discussed in Appendix A. If each group $z_j$ contains a single variable, this dependence structure is analogous to that of a deep autoregressive network (DARN; Gregor et al., 2014), and can represent any distribution. However, the dependence of $z_j$ on the preceding discrete variables $z_{i<j}$ is always mediated by the continuous variables $\zeta_{i<j}$.
[Footnote 6: The continuous latent variables $\zeta$ are divided into complementary disjoint groups $\zeta_1, \ldots, \zeta_k$.]
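A minimal sketch of this autoregressive construction follows; the module layout, layer sizes, and use of a single linear map for each $g_j$ are our own simplifications, and it reuses the spike-and-exponential inverse CDF sketched above.
```python
import torch
import torch.nn as nn

class HierarchicalPosterior(nn.Module):
    # Equation 10: group j of the RBM units is sampled conditioned on x and
    # on the smoothed samples zeta_{i<j} of all earlier groups (Figure 3b).
    def __init__(self, x_dim, group_size, n_groups, beta=3.0):
        super().__init__()
        self.beta = beta
        self.g = nn.ModuleList(
            nn.Linear(x_dim + j * group_size, group_size)
            for j in range(n_groups))

    def forward(self, x):
        zetas = []
        for g_j in self.g:
            inp = torch.cat([x] + zetas, dim=-1)
            q_j = torch.sigmoid(g_j(inp))    # q(z_j = 1 | zeta_{i<j}, x, phi)
            rho = torch.rand_like(q_j)
            # uses spike_and_exp_inverse_cdf from the earlier sketch
            zetas.append(spike_and_exp_inverse_cdf(q_j, rho, self.beta))
        return torch.cat(zetas, dim=-1)      # full smoothed sample zeta
```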
However, the final logistic function is made explicit in Equation 10 to simplify Equation 12. For each successive layer $j$ of the autoencoder, input $x$ and all previous $\zeta_{i<j}$ are passed into the network computing $q(z_j = 1 \mid \zeta_{i<j}, x, \phi)$. Its output $q_j$, along with an independent random variable $\rho \sim U[0,1]$, is passed to the deterministic function $F^{-1}_{q(\zeta_j|\zeta_{i<j},x,\phi)}(\rho)$ to produce a sample of $\zeta_j$. Once all $\zeta_j$ have been recursively computed, the full $\zeta$, along with the original input $x$, is finally passed to $\log p(x \mid \zeta, \theta)$. The expectation of this log probability with respect to $\rho$ is again the autoencoding term of the VAE formalism, as in Equation 2.

[Figure 3: Graphical model of the hierarchical approximating posterior (a) and the network realizing the autoencoding term of the ELBO (b) from Equation 2. Discrete latent variables $z_j$ only depend on the previous $z_{i<j}$ through their smoothed analogs $\zeta_{i<j}$. The autoregressive hierarchy allows the approximating posterior to capture correlations and multiple modes. Again, all operations in (b) are deterministic and differentiable given the stochastic input $\rho$.]

In Appendix F, we show that the gradients of the remaining KL term of the ELBO (Equation 2) can be estimated stochastically using:

$$\frac{\partial}{\partial\theta}\, \mathrm{KL}[q \,\|\, p] = \mathbb{E}_{q(z_1|x,\phi)}\!\left[\cdots \mathbb{E}_{q(z_k|\zeta_{i<k},x,\phi)}\!\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right]\cdots\right] - \mathbb{E}_{p(z|\theta)}\!\left[\frac{\partial E_p(z,\theta)}{\partial\theta}\right] \quad \text{and} \quad (11)$$

$$\frac{\partial}{\partial\phi}\, \mathrm{KL}[q \,\|\, p] = \mathbb{E}_{\rho}\!\left[\left(g(x,\phi) - b\right)^{\top} \frac{\partial q}{\partial\phi} - z^{\top} W\, \frac{1-z}{1-q}\, \frac{\partial q}{\partial\phi}\right]. \quad (12)$$

In particular, Equation 12 is substantially lower variance than the naive approach to calculate $\frac{\partial}{\partial\phi}\, \mathrm{KL}[q \,\|\, p]$, based upon REINFORCE.
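The recursive computation of Figure 3b amounts to a short loop over the groups. The sketch below reuses the spike_and_exp_inverse_cdf helper from Section 2.1 and assumes the per-layer networks $g_j$ are supplied as callables; it illustrates the sampling path of Equation 10 and is not the authors' implementation.

```python
import numpy as np

def sample_hierarchical_posterior(x, g_layers, beta=3.0, rng=None):
    """Sample zeta_1, ..., zeta_k from the hierarchical approximating posterior
    of Equation 10, following the autoencoder of Figure 3b.

    g_layers -- list of callables; g_layers[j] maps the concatenation of x and
                all previously sampled zeta_{i<j} to the logits g_j of
                q(z_j = 1 | zeta_{i<j}, x, phi)
    """
    rng = rng or np.random.default_rng()
    zetas = []
    for g_j in g_layers:
        inputs = np.concatenate([x] + zetas)          # x and all earlier zeta_{i<j}
        q_j = 1.0 / (1.0 + np.exp(-g_j(inputs)))      # the explicit logistic of Eq. 10
        rho = rng.uniform(size=q_j.shape)
        zetas.append(spike_and_exp_inverse_cdf(q_j, rho, beta))
    return np.concatenate(zetas)                       # full zeta, fed to log p(x|zeta, theta)
```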
4 MODELLING CONTINUOUS DEFORMATIONS WITH A HIERARCHY OF CONTINUOUS LATENT VARIABLES

We can make both the generative model and the approximating posterior more powerful by adding additional layers of latent variables below the RBM. While these layers can be discrete, we focus on continuous variables, which have proven to be powerful in generative adversarial networks (Goodfellow et al., 2014) and traditional variational autoencoders (Kingma & Welling, 2014; Rezende et al., 2014). When positioned below and conditioned on a layer of discrete variables, continuous variables can build continuous manifolds, from which the discrete variables can choose. This complements the structure of the natural world, where a percept is determined first by a discrete selection of the types of objects present in the scene, and then by the position, pose, and other continuous attributes of these objects.

Specifically, we augment the latent representation with continuous random variables $\mathfrak{z}$ [Footnote 7: We always use a variant of z for latent variables. $\mathfrak{z}$ is Fraktur z, or German z.], and define both the approximating posterior and the prior to be layer-wise fully autoregressive directed graphical models. We use the same autoregressive variable order for the approximating posterior as for the prior, as in DRAW (Gregor et al., 2015), variational recurrent neural networks (Chung et al., 2015), the deep VAE of Salimans (2016), and ladder networks (Rasmus et al., 2015; Sønderby et al., 2016). We discuss the motivation for this ordering in Appendix G.

[Figure 4: Graphical models of the approximating posterior (a) and prior (b) with a hierarchy of continuous latent variables. The shaded regions in parts (a) and (b) expand to Figures 3a and 1b, respectively. The continuous latent variables $\mathfrak{z}$ build continuous manifolds, capturing properties like position and pose, conditioned on the discrete latent variables $z$, which can represent the discrete types of objects in the image.]

The directed graphical models of the approximating posterior and prior are defined by:

$$q(\mathfrak{z}_0, \ldots, \mathfrak{z}_n \mid x, \phi) = \prod_{0 \le m \le n} q(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, x, \phi) \quad \text{and} \quad p(\mathfrak{z}_0, \ldots, \mathfrak{z}_n \mid \theta) = \prod_{0 \le m \le n} p(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, \theta). \quad (13)$$

The full set of latent variables associated with the RBM is now denoted by $\mathfrak{z}_0 = \{z_1, \zeta_1, \ldots, z_k, \zeta_k\}$. However, the conditional distributions in Equation 13 only depend on the continuous $\zeta_j$. Each $\mathfrak{z}_{m \ge 1}$ denotes a layer of continuous latent variables, and Figure 4 shows the resulting graphical model.

The ELBO decomposes as:

$$\mathcal{L}(x, \theta, \phi) = \mathbb{E}_{q(\mathfrak{z}|x,\phi)}\left[\log p(x \mid \mathfrak{z}, \theta)\right] - \sum_m \mathbb{E}_{q(\mathfrak{z}_{l<m}|x,\phi)}\left[\mathrm{KL}\left[q(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, x, \phi) \,\|\, p(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, \theta)\right]\right]. \quad (14)$$

If both $q(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, x, \phi)$ and $p(\mathfrak{z}_m \mid \mathfrak{z}_{l<m}, \theta)$ are Gaussian, then their KL divergence has a simple closed form, which is computationally efficient if the covariance matrices are diagonal. Gradients can be passed through the $q(\mathfrak{z}_{l<m} \mid x, \phi)$ using the traditional reparameterization trick, described in Section 1.1.
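For reference, the closed-form KL divergence between diagonal-covariance Gaussians used in each layer-wise term of Equation 14 is a standard identity; a minimal sketch, with argument names of our choosing:

```python
import numpy as np

def diag_gaussian_kl(mu_q, logstd_q, mu_p, logstd_p):
    """KL[q || p] between diagonal-covariance Gaussians, summed over
    dimensions, as used for each layer-wise term of Equation 14."""
    var_q = np.exp(2.0 * logstd_q)
    var_p = np.exp(2.0 * logstd_p)
    return np.sum(logstd_p - logstd_q
                  + (var_q + (mu_q - mu_p) ** 2) / (2.0 * var_p)
                  - 0.5)
```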
5 RESULTS

Discrete variational autoencoders comprise a smoothed RBM (Section 2) with a hierarchical approximating posterior (Section 3), followed by a hierarchy of continuous latent variables (Section 4). We parameterize all distributions with neural networks, except the smoothing distribution $r(\zeta \mid z)$ discussed in Section 2. Like NVIL (Mnih & Gregor, 2014) and VAEs (Kingma & Welling, 2014; Rezende et al., 2014), we define all approximating posteriors $q$ to be explicit functions of $x$, with parameters $\phi$ shared between all inputs $x$. For distributions over discrete variables, the neural networks output the parameters of a factorial Bernoulli distribution using a logistic final layer, as in Equation 10; for the continuous $\mathfrak{z}$, the neural networks output the mean and log-standard deviation of a diagonal-covariance Gaussian distribution using a linear final layer. Each layer of the neural networks parameterizing the distributions over $z$, $\mathfrak{z}$, and $x$ consists of a linear transformation, batch normalization (Ioffe & Szegedy, 2015) (but see Appendix H.2), and a rectified-linear pointwise nonlinearity (ReLU). We stochastically approximate the expectation with respect to the RBM prior $p(z \mid \theta)$ in Equation 11 using block Gibbs sampling on persistent Markov chains, analogous to persistent contrastive divergence (Tieleman, 2008). We minimize the ELBO using ADAM (Kingma & Ba, 2015) with a decaying step size.

The hierarchical structure of Section 4 is very powerful, and overfits without strong regularization of the prior, as shown in Appendix H. In contrast, powerful approximating posteriors do not induce significant overfitting. To address this problem, we use conditional distributions over the input $p(x \mid \zeta, \theta)$ without any deterministic hidden layers, except on Omniglot. Moreover, all other neural networks in the prior have only one hidden layer, the size of which is carefully controlled. On statically binarized MNIST, Omniglot, and Caltech-101, we share parameters between the layers of the hierarchy over $\mathfrak{z}$. We present the details of the architecture in Appendix H.

We train the resulting discrete VAEs on the permutation-invariant MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2013), and Caltech-101 Silhouettes (Marlin et al., 2010) datasets. [Footnote 8: We use the partitioned, preprocessed Omniglot dataset of Burda et al. (2016), available from https://github.com/yburda/iwae/tree/master/datasets/OMNIGLOT.] For MNIST, we use both the static binarization of Salakhutdinov & Murray (2008) and dynamic binarization. Estimates of the log-likelihood of these models, computed using the method of Burda et al. (2016) with $10^4$ importance-weighted samples, are listed in Table 1. [Footnote 9: The importance-weighted estimate of the log-likelihood is a lower bound, except for the log partition function of the RBM. We describe our unbiased estimation method for the partition function in Appendix H.1.] The reported log-likelihoods for discrete VAEs are the average of 16 runs; the standard deviations of these log-likelihoods are 0.08, 0.04, 0.05, and 0.11 for dynamically and statically binarized MNIST, Omniglot, and Caltech-101 Silhouettes, respectively. Removing the RBM reduces the test set log-likelihood by 0.09, 0.37, 0.69, and 0.66.

MNIST (dynamic binarization)          LL
  DBN                               -84.55
  IWAE                              -82.90
  Ladder VAE                        -81.74
  Discrete VAE                      -80.15

MNIST (static binarization)         ELBO       LL
  HVI                               -88.30   -85.51
  DRAW                              -87.40
  NAIS NADE                                  -83.67
  Normalizing flows                 -85.10
  Variational Gaussian process               -81.32
  Discrete VAE                      -84.58   -81.01

Omniglot                              LL
  IWAE                             -103.38
  Ladder VAE                       -102.11
  RBM                              -100.46
  DBN                              -100.45
  Discrete VAE                      -97.43

Caltech-101 Silhouettes               LL
  IWAE                             -117.2
  RWS SBN                          -113.3
  RBM                              -107.8
  NAIS NADE                        -100.0
  Discrete VAE                      -97.6

Table 1: Test set log-likelihood of various models on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets. For the discrete VAE, the reported log-likelihood is estimated with $10^4$ importance-weighted samples (Burda et al., 2016). For comparison, we also report performance of some recent state-of-the-art techniques. Full names and references are listed in Appendix I.
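The importance-weighted log-likelihood estimate used for Table 1 follows Burda et al. (2016). A schematic version is below; it assumes the per-sample log-densities have already been computed, and it omits the RBM's log partition function, which must be estimated separately (Appendix H.1).

```python
import numpy as np

def iw_log_likelihood_estimate(log_p_joint, log_q):
    """Estimate log p(x) from S importance samples (Burda et al., 2016).

    log_p_joint -- array of shape (S,), log p(x, latents_s | theta)
    log_q       -- array of shape (S,), log q(latents_s | x, phi),
                   for latents_s drawn from q(. | x, phi)
    """
    log_w = log_p_joint - log_q                   # log importance weights
    m = np.max(log_w)                             # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))
```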
We further analyze the performance of discrete VAEs on dynamically binarized MNIST: the largest of the datasets, requiring the least regularization. Figure 5 shows the generative output of a discrete VAE as the Markov chain over the RBM evolves via block Gibbs sampling. The RBM is held constant across each sub-row of five samples, and variation amongst these samples is due to the layers of continuous latent variables. Given a multimodal distribution with well-separated modes, Gibbs sampling passes through the large, low-probability space between the modes only infrequently. As a result, consistency of the digit class over many successive rows in Figure 5 indicates that the RBM prior has well-separated modes. The RBM learns distinct, separated modes corresponding to the different digit types, except for 3/5 and 4/9, which are either nearby or overlapping; at least tens of thousands of iterations of single-temperature block Gibbs sampling are required to mix between the modes. We present corresponding figures for the other datasets, and results on simplified architectures, in Appendix J.

[Figure 5: Evolution of samples from a discrete VAE trained on dynamically binarized MNIST, using persistent RBM Markov chains. We perform 100 iterations of block-Gibbs sampling on the RBM between successive rows. Each horizontal group of 5 uses a single, shared sample from the RBM, but independent continuous latent variables, and shows the variation induced by the continuous layers as opposed to the RBM. The long vertical sequences in which the digit ID remains constant demonstrate that the RBM has well-separated modes, each of which corresponds to a single (or occasionally two) digit IDs, despite being trained in a wholly unsupervised manner.]

The large mixing time of block Gibbs sampling on the RBM suggests that training may be constrained by sample quality. Figure 6a shows that performance improves as we increase the number of iterations of block Gibbs sampling performed per minibatch on the RBM prior, $p(z \mid \theta)$ in Equation 11. [Footnote 10: All models in Figure 6 use only 10 layers of continuous latent variables, for computational efficiency.] This suggests that a further improvement may be achieved by using a more effective sampling algorithm, such as parallel tempering (Swendsen & Wang, 1986).

[Figure 6: Log likelihood versus the number of iterations of block Gibbs sampling per minibatch (a), the number of units in the RBM (b), and the number of layers in the approximating posterior over the RBM (c). Better sampling (a) and hierarchical approximating posteriors (c) support better performance, but the network is robust to the size of the RBM (b).]

Commensurate with the small number of intrinsic classes, a moderately sized RBM yields the best performance on MNIST. As shown in Figure 6b, the log-likelihood plateaus once the number of units in the RBM reaches at least 64. Presumably, we would need a much larger RBM to model a dataset like Imagenet, which has many classes and complicated relationships between the elements of various classes.

The benefit of the hierarchical approximating posterior over the RBM, introduced in Section 3, is apparent from Figure 6c. The reduction in performance when moving from 4 to 8 layers in the approximating posterior may be due to the fact that each additional hierarchical layer over the approximating posterior adds three layers to the encoder neural network: there are two deterministic hidden layers for each stochastic latent layer. As a result, expanding the number of RBM approximating posterior layers significantly increases the number of parameters that must be trained, and increases the risk of overfitting.
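The persistent block Gibbs chains analyzed in Figure 6a exploit the bipartite structure of the prior: conditioned on one side of the bipartite split, the other side is factorial. The sketch below writes the energy of Equation 6 in explicitly bipartite form, $E(z) = -(z_1^{\top} W z_2 + b_1^{\top} z_1 + b_2^{\top} z_2)$; the variable names and the batched-chain layout are our own choices.

```python
import numpy as np

def block_gibbs_step(z1, z2, W, b1, b2, rng):
    """One block Gibbs iteration on a bipartite Boltzmann machine, applied to
    a batch of persistent chains (rows of z1, z2); used to approximate the
    expectation under p(z | theta) in Equation 11.

    z1 -- (n_chains, n1) binary states of one side of the bipartite split
    z2 -- (n_chains, n2) binary states of the other side
    W  -- (n1, n2) coupling matrix; b1, b2 -- biases
    """
    p1 = 1.0 / (1.0 + np.exp(-(z2 @ W.T + b1)))   # p(z1 = 1 | z2), factorial
    z1 = (rng.uniform(size=p1.shape) < p1).astype(float)
    p2 = 1.0 / (1.0 + np.exp(-(z1 @ W + b2)))     # p(z2 = 1 | z1), factorial
    z2 = (rng.uniform(size=p2.shape) < p2).astype(float)
    return z1, z2
```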
6 CONCLUSION

Datasets consisting of a discrete set of classes are naturally modeled using discrete latent variables. However, it is difficult to train probabilistic models over discrete latent variables using efficient gradient approximations based upon backpropagation, such as variational autoencoders, since it is generally not possible to backpropagate through a discrete variable (Bengio et al., 2013).

We avoid this problem by symmetrically projecting the approximating posterior and the prior into a continuous space. We then evaluate the autoencoding term of the evidence lower bound exclusively in the continuous space, marginalizing out the original discrete latent representation. At the same time, we evaluate the KL divergence between the approximating posterior and the true prior in the original discrete space; due to the symmetry of the projection into the continuous space, it does not contribute to the KL term. To increase representational power, we make the approximating posterior over the discrete latent variables hierarchical, and add a hierarchy of continuous latent variables below them. The resulting discrete variational autoencoder achieves state-of-the-art performance on the permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.

ACKNOWLEDGEMENTS

Zhengbing Bian, Fabian Chudak, and Arash Vahdat helped run experiments. Jack Raymond provided the library used to estimate the log partition function of RBMs. Mani Ranjbar wrote the cluster management system, and a custom GPU acceleration library used for an earlier version of the code. We thank Evgeny Andriyash, William Macready, and Aaron Courville for helpful discussions; and one of our anonymous reviewers for identifying the problem addressed in Appendix D.3.
r16qB307e
Rich set of ideas on how to make VAEs work better.
8: Top 50% of accepted papers, clear accept
This is an interesting paper on how to handle reparameterization in VAEs when you have discrete variables. The idea is to introduce a smoothing transformation that is shared between the generative model and the recognition model (leading to cancellations). A second contribution is to introduce an RBM as the prior model P(z) and to use autoregressive connections in the generative and recognition models. The whole package becomes a bit entangled and complex, and it is hard to figure out what causes the claimed good performance. Experiments that study these contributions separately would have been nice. The framework does become a little complex, but this should not be a problem if nice software is delivered that can be used in a plug-and-play mode. Overall, the paper is very rich with ideas, so I think it would be a great contribution to the conference.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
J3OUycKwz-
ICLR.cc/2021/Conference
2021
Mapping the Timescale Organization of Neural Language Models
["Hsiang-Yun Sherry Chien", "Jinhan Zhang", "Christopher Honey"]
In the human brain, sequences of language input are processed within a distributed and hierarchical architecture, in which higher stages of processing encode contextual information over longer timescales. In contrast, in recurrent neural networks which perform natural language processing, we know little about how the multiple timescales of contextual information are functionally organized. Therefore, we applied tools developed in neuroscience to map the “processing timescales” of individual units within a word-level LSTM language model. This timescale-mapping method assigned long timescales to units previously found to track long-range syntactic dependencies. Additionally, the mapping revealed a small subset of the network (less than 15% of units) with long timescales and whose function had not previously been explored. We next probed the functional organization of the network by examining the relationship between the processing timescale of units and their network connectivity. We identified two classes of long-timescale units: “controller” units composed a densely interconnected subnetwork and strongly projected to the rest of the network, while “integrator” units showed the longest timescales in the network, and expressed projection profiles closer to the mean projection profile. Ablating integrator and controller units affected model performance at different positions within a sentence, suggesting distinctive functions of these two sets of units. Finally, we tested the generalization of these results to a character-level LSTM model and models with different architectures. In summary, we demonstrated a model-free technique for mapping the timescale organization in recurrent neural networks, and we applied this method to reveal the timescale and functional organization of neural language models.
["natural language processing", "LSTM", "timescale", "hierarchy", "temporal context"]
ABSTRACT

In the human brain, sequences of language input are processed within a distributed and hierarchical architecture, in which higher stages of processing encode contextual information over longer timescales. In contrast, in recurrent neural networks which perform natural language processing, we know little about how the multiple timescales of contextual information are functionally organized. Therefore, we applied tools developed in neuroscience to map the “processing timescales” of individual units within a word-level LSTM language model. This timescale-mapping method assigned long timescales to units previously found to track long-range syntactic dependencies. Additionally, the mapping revealed a small subset of the network (less than 15% of units) with long timescales and whose function had not previously been explored. We next probed the functional organization of the network by examining the relationship between the processing timescale of units and their network connectivity. We identified two classes of long-timescale units: “controller” units composed a densely interconnected subnetwork and strongly projected to the rest of the network, while “integrator” units showed the longest timescales in the network, and expressed projection profiles closer to the mean projection profile. Ablating integrator and controller units affected model performance at different positions within a sentence, suggesting distinctive functions of these two sets of units. Finally, we tested the generalization of these results to a character-level LSTM model and models with different architectures. In summary, we demonstrated a model-free technique for mapping the timescale organization in recurrent neural networks, and we applied this method to reveal the timescale and functional organization of neural language models.

[Footnote 1: The code and dataset to reproduce the experiment can be found at https://github.com/sherrychien/LSTM_timescales]

1 INTRODUCTION

Language processing requires tracking information over multiple timescales. To be able to predict the final word “timescales” in the previous sentence, one must consider both the short-range context (e.g. the adjective “multiple”) and the long-range context (e.g. the subject “language processing”). How do humans and neural language models encode such multi-scale context information? Neuroscientists have developed methods to study how the human brain encodes information over multiple timescales during sequence processing. By parametrically varying the timescale of intact context, and measuring the resultant changes in the neural response, a series of studies (Lerner et al., 2011; Xu et al., 2005; Honey et al., 2012) showed that higher-order regions are more sensitive to long-range context change than lower-order sensory regions. These studies indicate the existence of a “hierarchy of processing timescales” in the human brain. More recently, Chien & Honey (2020) used a time-resolved method to investigate how the brain builds a shared representation, when two groups of people processed the same narrative segment preceded by different contexts. By directly mapping the time required for individual brain regions to converge on a shared representation in response to shared input, we confirmed that higher-order regions take longer to build a shared representation.
Altogether, these and other lines of investigation suggest that sequence processing in the brain is supported by a distributed and hierarchical structure: sensory regions have short processing timescales and are primarily influenced by the current input and its short-range context, while higher-order cortical regions have longer timescales and track longer-range dependencies (Hasson et al., 2015; Honey et al., 2012; Chien & Honey, 2020; Lerner et al., 2011; Baldassano et al., 2017; Runyan et al., 2017; Fuster, 1997).

How are processing timescales organized within recurrent neural networks (RNNs) trained to perform natural language processing? Long short-term memory networks (LSTMs) (Hochreiter & Schmidhuber, 1997) have been widely investigated in terms of their ability to successfully solve sequential prediction tasks. However, long-range dependencies have usually been studied with respect to a particular linguistic function (e.g. subject-verb number agreement; Linzen et al. 2016; Gulordava et al. 2018; Lakretz et al. 2019), and there has been less attention on the broader question of how sensitivity to prior context – broadly construed – is functionally organized within these RNNs. Therefore, drawing on prior work in the neuroscience literature, here we demonstrate a model-free approach to mapping processing timescales in RNNs. We focused on existing language models that were trained to predict upcoming tokens at the word level (Gulordava et al., 2018) and at the character level (Hahn & Baroni, 2019). The timescale mapping of both models revealed that the higher layers of LSTM language models contained a small subset of units which exhibit long-range sequence dependencies; this subset includes previously reported units (e.g. a “syntax” unit; Lakretz et al., 2019) as well as previously unreported units.

After mapping the timescales of individual units, we asked: does the processing timescale of each unit in the network relate to its functional role, as measured by its connectivity? The question is motivated by neuroscience studies which have shown that in the human brain, higher-degree nodes tend to exhibit slower dynamics and longer context dependence than lower-degree nodes (Baria et al., 2013). More generally, the primate brain exhibits a core-periphery structure in which a relatively small number of “higher order” and high-degree regions (in the prefrontal cortex, in default-mode regions and in so-called “limbic” zones) maintain a large number of connections with one another, and exert a powerful influence over large-scale cortical dynamics (Hagmann et al., 2008; Mesulam, 1998; Gu et al., 2015). Inspired by the relationships between timescales and network structure in the brain, we set out to test corresponding hypotheses in RNNs: (1) Do units with longer timescales tend to have higher degree in neural language models? and (2) Do neural language models also exhibit a “core network” composed of functionally influential high-degree units? Using an exploratory network-theoretic approach, we found that units with longer timescales tend to have more projections to other units.
Furthermore, we identified a set of medium-to-long timescale “controller” units which exhibit distinct and strong projections to control the state of other units, and a set of long-timescale “integrator” units which showed influence on predicting words where the long context is relevant. In summary, these findings advance our understanding of the timescale distribution and functional organization of LSTM language models, and provide a method for identifying important units representing long-range contextual information in RNNs.

2 RELATED WORK
Linguistic Context in LSTMs. How do LSTMs encode linguistic context at multiple timescales? Prior work suggested that the units sensitive to information that requires long-range dependencies are sparse. By ablating one unit at a time, Lakretz et al. (2019) found two units that encode information required for processing long-range subject-verb number agreement (one for singular and one for plural information encoding). They further identified several long-range “syntax units” whose activation was associated with syntactic tree depth. Overall, Lakretz et al. (2019) suggests that a sparse subset of units tracks long-range dependencies related to subject-verb agreement and syntax. If this pattern is general – i.e. if there are very few nodes tracking long-range dependencies in general – this may limit the capacity of the models to process long sentences with high complexity, for reasons similar to those that may limit human sentence processing (Lakretz et al., 2020). To test whether long-range nodes are sparse in general, we require a model-free approach for mapping the context dependencies of every unit in the language network.

Whole-network context dependence. Previous work by Khandelwal et al. (2018) investigated the duration of prior context that LSTM language models use to support word prediction. Context dependence was measured by permuting the order of words preceding the preserved context, and observing the increase in model perplexity when the preserved context gets shorter. Khandelwal et al. (2018) found that up to 200 word-tokens of prior context were relevant to the model perplexity, but that the precise ordering of words only mattered within the most recent 50 tokens. The token-based context-permutation method employed in this study was analogous to the approach used to measure context dependence in human brain responses to movies (Hasson et al., 2008) and to auditory narratives (Lerner et al., 2011).

Inspired by the findings of Khandelwal et al. (2018) and Lakretz et al. (2019), in the present study we set out to map the context dependence across all of the individual units in the LSTM model. This enabled us to relate the timescales to the effects of node-specific ablation and to the network architecture itself.
In addition, our context manipulations included both context-swapping (substituting alternative meaningful contexts) and context-shuffling (permuting the words in the prior context to disrupt inter-word structure), which allowed us to better understand how individual words and syntactically structured word-sequences contribute to the context representation of individual hidden units.

3 METHODS
3.1 LANGUAGE MODELS AND CORPUS
We evaluated the internal representations generated by a pre-trained word-level LSTM language model (WLSTM, Gulordava et al., 2018) as well as a pre-trained character-level LSTM model (CLSTM, Hahn & Baroni, 2019) as they processed sentences sampled from the 427,804-word (1,965,719-character) novel corpus Anna Karenina by Leo Tolstoy (Tolstoy, 2016), translated from Russian to English by Constance Garnett.

For the WLSTM, we used the model made available by Gulordava et al. (2018). The WLSTM has a 650-dimensional embedding layer, two 650-dimensional hidden layers and an output layer with vocabulary size 50,000. The model was trained and tested on Wikipedia sentences and was not fine-tuned to the novel corpus. Therefore, we only used sentences with low perplexity from the novel in our main timescale analysis. We performed the same analysis using the Wikipedia test set from Gulordava et al. (2018) and obtained similar results (see Section 5.3, Figure A.4A, Appendix A.2.1). For the CLSTM, we used the model made available by Hahn & Baroni (2019). The CLSTM has a 200-dimensional embedding layer, three 1024-dimensional hidden layers and an output layer with vocabulary size 63. The model was trained on Wikipedia data with all characters lower-cased and whitespace removed. We tested the model with the same sentences sampled from Anna Karenina as for the WLSTM model, and we obtained bits-per-character (BPC) similar to what Hahn & Baroni (2019) reported in their original work.

3.2 TEMPORAL CONTEXT CONSTRUCTION PARADIGM
In order to determine the processing timescales of cell state vectors and individual units, we modified the “temporal context construction” method developed by Chien & Honey (2020). The internal representations of the model were compared across two conditions: (1) the Intact Context condition and (2) the Random Context condition. In both conditions, the model processed the same shared sequence of words (for example, segment B), but the preceding context differed across the two conditions. In the Intact Context condition, the model processed segment B (the shared segment) preceded by segment A, which was the actual preceding context from the original text. In the current study, for example, segments A and B are connected by “, and” within long sentences from the novel corpus (Figure 1A), to ensure temporal dependencies between A and B. In the Random Context condition, however, the model processed the same shared input (segment B), but the context was replaced by segment X, a context segment randomly sampled from the rest of the corpus. Segment X was therefore not usually coherently related to segment B. For the WLSTM timescale analysis, we chose long sentences in the Intact Context condition that satisfied the following constraints: (1) mean perplexity across all words in the sentence < 200, (2) the shared segment was longer than 25 words, and (3) the context segment was longer than 10 words. 77 sentences are included as trials in our analyses.
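As an illustration (not the authors' released code), a minimal sketch of how such Intact/Random trial pairs could be assembled is given below; the helper names are hypothetical, and the perplexity filter from constraint (1) above is omitted for brevity:

```python
import random

def build_trials(sentences, n_random=30, min_context=10, min_shared=25):
    """Pair each eligible long sentence's shared segment with its intact
    context and with randomly sampled replacement contexts."""
    trials = []
    candidates = [s.split() for s in sentences]
    for sent in sentences:
        if ", and" not in sent:
            continue  # segments A and B must be joined by ", and"
        context, shared = sent.split(", and", 1)
        context_toks, shared_toks = context.split(), ("and" + shared).split()
        if len(context_toks) <= min_context or len(shared_toks) <= min_shared:
            continue
        # Random Context condition: sample sufficiently long context
        # segments drawn from elsewhere in the corpus.
        pool = [toks for s, toks in zip(sentences, candidates)
                if s is not sent and len(toks) > min_context]
        trials.append({"intact_context": context_toks,
                       "shared": shared_toks,
                       "random_contexts": random.sample(pool, n_random)})
    return trials
```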
In the Random Context condition, we preserved the same shared segments and randomly sampled 30 context segments (each longer than 10 words) from other parts of the novel. For the CLSTM timescale analysis, we used the same 77 long sentences in the Intact Context condition, and randomly sampled 25 context segments (with length > 33 characters) for the Random Context condition.

Figure 1: Method for mapping processing timescales of individual units. A. Example sentences for the model to process in the Intact Context and Random Context conditions. In the Intact Context condition, the shared segment is preceded by an intact context from the corpus; in the Random Context condition, this preceding context segment is replaced by randomly sampled context segments. B. Schematic hidden-state activation of the neural network. When the model starts to process the shared segment, which is preceded by different contexts in the two conditions, the hidden-unit activation difference (i.e. the mean absolute difference in unit activation between the two conditions) decreases over time at different rates. The expected decreasing patterns of the activation difference for a long-timescale unit and a short-timescale unit are shown schematically in the green and red curves, respectively.

In brief, the model processes the same input (the shared segment) with different preceding context (the intact vs. random context). We can now measure the context dependence of individual units by examining how the cell state activations differ between the two conditions while the network is processing the shared segments with identical input. Any difference in internal representations must arise from the context manipulation, since the current input is the same. A decrease in activation difference over time implies that the units exposed to the Intact context and the Random context start to build a similar representation as they process the shared input. For a long-timescale unit, whose current state depends on information in the far-preceding context, the activation difference is preserved across contexts (Figure 1B, green curve), even while the unit is processing the shared input. On the other hand, for a short-timescale unit, whose activation is driven largely by the current input, the activation difference drops quickly as the unit processes the shared input (Figure 1B, red curve).

4 HIERARCHICAL ORGANIZATION OF TIMESCALES ACROSS LAYERS
Do higher levels of the LSTM model exhibit greater context dependence? Lakretz et al. (2019) observed that long-range functional units were more common in higher layers, and in general, higher levels of hierarchical language models exhibit longer-range context dependence (Jain et al., 2019; Jain & Huth, 2018). Therefore, to validate our stimuli and the sensitivity of our methods, we first compared the processing timescales of different hidden layers in both of the LSTMs, by correlating the cell state vectors, column by column, between the Intact condition and the Random condition.

We found that, in both models, the layers showed near-zero correlation when processing the different contexts, and the correlation increased as they began to process the shared input. In the WLSTM, the correlation increased more slowly for second-level cell state vectors than for first-level cell state vectors. Thus, the representation of the second-level cell state is more sensitive to the different context than the first level.
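A minimal sketch of this layer-level correlation analysis, assuming the cell states of one layer have already been extracted for both conditions (array names and shapes are illustrative, not from the authors' code):

```python
import numpy as np

def layer_context_correlation(intact_states, random_states):
    """Mean Pearson correlation between Intact- and Random-condition cell
    states at each token position of the shared segment.

    intact_states, random_states: float arrays of shape
    (n_trials, n_tokens, n_units) for one LSTM layer.
    """
    n_trials, n_tokens, _ = intact_states.shape
    corrs = np.empty((n_trials, n_tokens))
    for trial in range(n_trials):
        for t in range(n_tokens):
            corrs[trial, t] = np.corrcoef(intact_states[trial, t],
                                          random_states[trial, t])[0, 1]
    return corrs.mean(axis=0)  # one correlation curve for this layer
```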
Similarly, for the CLSTM model, the third-level cell state exhibited longer-lasting context sensitivity than the lower levels (Figure 2). This observation of longer context dependence at higher stages of processing is consistent with prior machine learning analyses (Lakretz et al., 2019; Jain & Huth, 2018) and is also analogous to what is seen in the human brain (Hasson et al., 2015; Chien & Honey, 2020; Lerner et al., 2011; Jain et al., 2019). Based on the finding of longer context dependence in higher layers, we examined single units in the highest-level hidden layer, i.e. the second level of the WLSTM (n=650) and the third level of the CLSTM (n=1024).

Figure 2: Context effect measured by cell-state vector correlation at different layers in the word-level LSTM (WLSTM) and character-level LSTM (CLSTM). A. Correlation curves of the WLSTM cell-state vectors across the Intact Context condition and Random Context condition as a function of input token. In both models, the correlation increased as the models began to process the shared segment. Higher-level cell states exhibited a slower increase in correlation, compared to lower-level cell states, indicating that the higher levels retain more of the prior context information for longer. B. As for A, but applied to the three levels of the CLSTM. Similar to the WLSTM, the higher-level cell state of the CLSTM showed more context sensitivity than the lower-level cell states.

5 PROCESSING TIMESCALES OF INDIVIDUAL UNITS WITHIN LSTM LAYERS
5.1 QUANTIFYING SINGLE UNIT TIMESCALES
We examined the absolute single-unit activation difference when processing the shared segments preceded by different contexts. As expected, most of the hidden units showed different activation when the input tokens were different (i.e. while processing the non-shared context in the Intact Context and Random Context conditions). However, once the shared input tokens begin (at t = 0), the Intact-Random activation differences drop (Figure A.1A, A.1B).

We used the rate at which the curves drop to quantify the processing timescale, as this is a measure of how quickly the responses align across different context conditions. To quantify the timescale of individual units, we fit the activation difference curves with a logistic function:

Y(x) = \frac{L}{1 + e^{-k(x - x_0)}} + d \quad (1)

As shown in Figure A.1A and Figure A.1B, the logistic function fit the raw activation difference curves. We then computed the “timescale” of each unit as the time-to-half-maximum of the logistic decay. In particular, for the WLSTM we used the activation difference Y(0) at the beginning of the shared segment and the activation difference Y(24) at the end of the shared segment (Y(79) for the CLSTM) to calculate the time-to-half-maximum of unit i as:

\text{timescale}_i = \left\lceil Y^{-1}\!\left(\frac{Y_i(0) + Y_i(24)}{2}\right) \right\rceil \quad (2)

where the inverse function Y^{-1}(y) identifies the smallest integer t for which Y(t) < y. We included 635 units in the WLSTM and 1012 units in the CLSTM for further analysis, after excluding the units which could not be accurately fit by a logistic function (see Appendix A.1).

5.2 DISTRIBUTION OF UNIT TIMESCALES IN LSTM LANGUAGE MODELS
The results showed that of the 635 WLSTM units whose processing timescale we mapped, approximately 70% of the units were insensitive to long-range context (processing timescale < 3 words): their activation difference dropped immediately at the onset of the shared segment. In contrast, only approximately 13% of the units had timescales > 7 words (Figure A.2A). Figure 3A shows the absolute activation difference of all units in the WLSTM sorted by timescale (long to short).
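A sketch of the Section 5.1 fitting procedure (Equations 1-2) for one unit, assuming diff_curve is that unit's mean absolute Intact-vs-Random activation difference over the shared segment; the initialization values here are our assumptions, not the authors' exact settings:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0, d):
    # Equation (1); a decaying curve corresponds to a negative k.
    return L / (1.0 + np.exp(-k * (x - x0))) + d

def unit_timescale(diff_curve):
    """Fit Eq. (1) to one unit's activation-difference curve (1-D array)
    and return its time-to-half-maximum, as in Eq. (2)."""
    t = np.arange(len(diff_curve))
    p0 = [diff_curve.max() - diff_curve.min(),  # amplitude L
          -1.0,                                 # decay rate k
          len(t) / 4.0,                         # midpoint x0
          diff_curve.min()]                     # baseline d
    params, _ = curve_fit(logistic, t, diff_curve, p0=p0, maxfev=10000)
    fitted = logistic(t, *params)
    half_max = (fitted[0] + fitted[-1]) / 2.0
    below = np.nonzero(fitted < half_max)[0]
    # Smallest t at which the fitted curve falls below half-maximum.
    return int(below[0]) if below.size else len(t)
```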
Some of the longer-timescale units continued to exhibit a large activation difference even when processing the shared segments for more than 20 tokens.

As we were testing the same word-level LSTM previously studied by Lakretz et al. (2019), we began by examining the timescales of hidden-state units that were already known to be involved in processing context-dependent language information: a “singular number unit” 988, a “plural number unit” 776, and a “syntax unit” 1150. We found that, compared to other units, both “number” units had medium timescales (∼3 words, ranked 129 of 635 units), while the “syntax” unit had a long timescale (∼7 words, ranked 64 of 635 units) (Figure A.1).

We repeated the timescale mapping in the CLSTM model, and again identified a small subset of long-timescale units (Figure 3B, Figure A.2B). Although there were more units overall in the CLSTM, over 63% of the units were insensitive to the context (timescale < 3 characters). Fewer than 15% of the units exhibited a timescale > 10 characters, and the unit with the longest timescale only dropped to its half-maximum activation difference after ∼50 characters of shared input.

5.3 TIMESCALE VARIATION ACROSS DATASETS AND CONTEXT CONDITIONS
To ensure that the timescales we measured were robust across datasets, we conducted the same analysis on the WLSTM using the Wikipedia test dataset used in Gulordava et al. (2018). The mapped timescales were highly correlated (r=0.82, p < 0.001) across the Anna Karenina dataset and the Wikipedia dataset (Appendix A.2.1, Figure A.4A).

Similarly, to confirm that the measured timescales were not specific to testing at the “, and” conjunction point, we also measured timescales at an alternative segmentation point, and found that the timescales were largely preserved (r=0.83, p < 0.001), notwithstanding a small set of notable exceptions (Appendix A.2.2, Figure A.4B).

Although we measured the timescales of context dependence using “token distance”, these measures are not invariant to changes in the “syntactic distance”. For example, if one were to replace a comma with a full stop, the token distance would be unaltered but the syntactic distance could be greatly altered. Indeed, we found that most units showed little context dependence when the preceding context segment ended with a full stop, which served as a clear signal for the end of a sentence (Appendix A.2.3, Figure A.4C).

Finally, we examined whether the contextual information retained by the language models (and the associated timescale measurements) was sensitive to linguistic structure in the context, or whether it was primarily driven simply by the presence or absence of individual words. To this end, we generated text for the Random Context condition by shuffling the order of words from the Intact segment. We found that while the presence of individual words did play an important role in determining the context representations (and thus the timescales), several units showed a longer timescale when the prior context was composed of coherently structured language (Appendix A.2.4, Figure A.4D).

Figure 3: Timescale organization in the word-level LSTM (WLSTM) and character-level LSTM (CLSTM) language models. A. Absolute activation difference for each WLSTM hidden unit over time, with units (rows) sorted by timescale. A small set of long-timescale units (top) sustain an activation difference during shared segment processing, but most (bottom) are context-insensitive short-timescale units.
B. Absolute activation difference for each CLSTM unit over time, with units sorted by timescale. Similar to the WLSTM, a small set of long-timescale CLSTM hidden units maintain long-range contextual information.

6 CONNECTIVITY OF MEDIUM-TO-LONG-TIMESCALE UNITS IN LSTMS
Having mapped the timescales of each processing unit, we next asked: how does the processing timescale of a unit relate to its functional role within the network? More specifically, are units with longer timescales also units with high degree in the connectivity network? To answer these questions, we analyzed (1) the projection strength of each unit and (2) the similarity of the overall projection pattern (hidden-to-gates) across different units. The projection patterns were defined using the direct weight projections from one hidden unit at time t to the input and forget gates of other hidden units at time t+1.

In LSTMs, the amount of contextual (c_{t-1}) and input (\tilde{c}_t) information stored in the cell state (c_t) is determined by the forget gate (f_t) and input gate (i_t) activations (Eq. 3); the activations of the gates i_t and f_t are in turn determined by the current input at time t and the hidden units at time t-1 through the weight matrices U and W (Eqs. 4, 5):

c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \quad (3)
i_t = \sigma(U_i x_t + W_i h_{t-1} + b_i) \quad (4)
f_t = \sigma(U_f x_t + W_f h_{t-1} + b_f) \quad (5)

Here, we were interested in understanding how contextual information over different timescales is projected from the hidden units to the input and forget gates of other units, further influencing the update of the cell states. Thus, we analyzed the network connectivity focusing on the weight matrices W_i and W_f within the highest layer of the WLSTM or CLSTM.

6.1 STRONG PROJECTIONS FROM LONG-TIMESCALE HIDDEN UNITS TO GATE UNITS
Units with longer processing timescales made a larger number of strong projections (|z-score| > 5, Appendix A.3) to the input and forget gates of other units in both the WLSTM (r=0.31, p < 0.001, Figure 4A) and the CLSTM (r=0.24, p < 0.001, Figure A.5A). Furthermore, we found that the “syntax” unit (Unit 1150) reported by Lakretz et al. (2019) in the WLSTM possessed the largest number of strong projections to the input and forget gates of all other units, and the major recipients of projections from Unit 1150 were medium- to long-timescale units (Figure 4B).

6.2 IDENTIFYING CONTROLLER UNITS IN LSTM LANGUAGE MODELS
The presence of strong projections from the “syntax” unit to other long-timescale units motivated us to further explore whether high-degree, long-timescale units in the LSTM also densely interconnect to form a “core network”, perhaps analogous to what is seen in the brain (Hagmann et al., 2008; Mesulam, 1998; Baria et al., 2013). If so, this set of units may have an especially important role in controlling how prior context is updated and how it is used to gate current processing, analogous to the controller system in the brain (Gu et al., 2015). To identify these putative “controller units”, we binarized the network by identifying the top 258 projection weights in the weight matrices (see Appendix A.3), which provided the edges for a network analysis. We then used k-core analysis (Batagelj & Zaversnik, 2003) to identify the “main network core” (the core with the largest degree) of the network (Figure A.3). At the maximal k = 5, the k-core analysis yielded a set of densely interconnected nodes, composed of many long-timescale and medium-timescale units (Figure A.3; also labeled in red in Figure 4A). We (tentatively) refer to this set as the “controller” set of the network.
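A sketch of this connectivity analysis in Python, assuming the recurrent weights follow PyTorch's torch.nn.LSTM layout (input, forget, cell, output gate blocks stacked row-wise); the z-scoring and edge-selection details are simplified stand-ins for the procedure in Appendix A.3:

```python
import numpy as np
import networkx as nx

def controller_core(W_hh, hidden_size, z_thresh=5.0, top_edges=258):
    """Count strong hidden-to-gate projections per unit and extract the
    main k-core of the binarized projection network.

    W_hh: (4*hidden_size, hidden_size) recurrent weight matrix of one layer;
    column j holds unit j's outgoing projections.
    """
    W_i = W_hh[:hidden_size, :]                  # hidden -> input gates
    W_f = W_hh[hidden_size:2 * hidden_size, :]   # hidden -> forget gates
    W = np.concatenate([W_i, W_f], axis=0)       # (2*hidden, hidden)
    z = (W - W.mean()) / W.std()
    strong_out = (np.abs(z) > z_thresh).sum(axis=0)  # strong projections per unit

    # Binarize on the strongest weights, then take the maximal k-core.
    cutoff = np.sort(np.abs(W).ravel())[-top_edges]
    rows, cols = np.nonzero(np.abs(W) >= cutoff)
    G = nx.Graph()
    # Edge: source unit (column) to the unit whose gate it targets (row).
    G.add_edges_from((c, r % hidden_size) for r, c in zip(rows, cols))
    core = nx.k_core(G)  # with k unspecified, networkx returns the main core
    return strong_out, sorted(core.nodes)
```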
Performing the same k-core analysis on the CLSTM model, we observed that the main core network was again composed of disproportionately many medium- and long-timescale “controller” units (Figure A.5A).

Figure 4: Timescale and connectivity organization in a word-level LSTM. A. Long-timescale units exhibited stronger projections from the hidden state at time t to the forget gate and input gate at time t+1. B. Strength of hidden-to-forget-gate and hidden-to-input-gate projections from a high-degree “syntax” unit to all other units. The units receiving strong projections (|z-score| > 5) are labeled. C. Ablating the two sets of long-timescale units has different impacts on LSTM performance. Specifically, ablating “controller” units impaired overall word prediction (upper panel), while ablating “integrator” units impaired prediction of words in the later part of sentences (bottom panel). D. Multi-dimensional scaling representation of network connectivity. The distance between two nodes indicates the similarity of their hidden-to-gate connection patterns. The size of each node indicates its degree (the number of strong projections from that node to the gate units). An edge between nodes indicates a significant hidden-to-gate projection between them.

6.3 DISTINCTIVE ROLES OF LONG-TIMESCALE CONTROLLER AND INTEGRATOR UNITS
We used multi-dimensional scaling (MDS) to visualize the similarity of projection patterns across LSTM units. We recovered a 2-dimensional MDS embedding, in which the inter-unit distances were defined based on the similarity of the units' hidden-to-gate projection patterns (i.e., the similarity of values in the unthresholded LSTM weight matrices W_i and W_f). We visualized the MDS solution as a graph structure, in which each node is a unit and the edges reflect the connection properties of that unit. Figure 4D shows the resulting 2-D space, with units color-coded by their timescale.

“Controller units” (labeled in Figure 4D) were positioned around the periphery of the MDS space, suggesting that these units expressed projection patterns that were distinct from other “controller” units and also from the rest of the network. In contrast, we observed several long-timescale units positioned in the center of the MDS space, suggesting that the projection patterns of these units were similar to the mean projection pattern. We refer to this MDS-central set as the “integrator units” (labeled in green in Figure 4A). Similar to the WLSTM, the projection patterns of the “controller units” in the CLSTM were distinct from other units in the network, according to the MDS results (Figure A.5C). However, we did not observe “integrator units” positioned in the center of the MDS space of the CLSTM.

Are the “controller” and “integrator” units particularly important for the model's ability to predict the next token? To test the functional importance of these subsets of units, we conducted group ablation analyses (see Appendix A.4). Ablating controller units reduced the accuracy of token prediction overall, while ablating integrator units only reduced prediction accuracy for the last words of sentences (Figure 4C).
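A sketch of the MDS step using scikit-learn, assuming correlation distance between projection patterns as the dissimilarity measure (the paper describes only projection-pattern similarity, so this metric is our assumption):

```python
import numpy as np
from sklearn.manifold import MDS

def embed_projection_patterns(W_i, W_f):
    """2-D MDS embedding of units based on their hidden-to-gate projections.

    W_i, W_f: (n_units, n_units) unthresholded hidden-to-input-gate and
    hidden-to-forget-gate weight matrices; column j holds unit j's projections.
    """
    patterns = np.concatenate([W_i, W_f], axis=0).T  # one row per source unit
    dist = 1.0 - np.corrcoef(patterns)               # correlation distance
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(dist)  # scatter these coordinates, colored by timescale
```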
The ablation results confirm that the putative controller and integrator nodes are functionally significant, with distinctive roles in the WLSTM language model.

Finally, to test whether the timescale and connectivity analyses generalize to different model architectures, we conducted preliminary analyses on a Gated Recurrent Unit (GRU) language model (Cho et al., 2014) and on another word-level LSTM model with a smaller hidden size (100 units) per layer. The models were trained using similar parameter settings as in Gulordava et al. (2018) until they converged, without any model-specific optimization. We found a similar sparsity of long-timescale units in both models, but did not observe the same relationship between timescales and connectivity (Appendix A.5; A.6; Figures A.7; A.8; A.9; A.10).

7 DISCUSSION
We demonstrated a new method for mapping the timescale organization in recurrent neural language models. Using this method, we mapped the timescale distributions of units within word-level and character-level LSTM language models, and identified a small set of units with long timescales. We then used network analyses to understand the relationship between the timescale of a unit and its connectivity profile, and we distinguished two subsets of long-timescale units with seemingly distinctive functions. Altogether, we proposed methods combining timescale and connectivity analyses for discovering the timescale and functional organization of language models.

The units with longer processing timescales included some units whose role in long-range language dependencies had already been established (Lakretz et al., 2019), but almost all of the long-timescale units are of unknown function. The timescale-mapping procedure described here provides a model-free method for identifying nodes necessary for long-range linguistic and discursive processes (e.g. tracking whether a series of words constitutes an assertion or a question). Future studies of these neural language models could focus on the specific linguistic information tracked by the long-timescale units, especially the “controller” units, which control the information flow of other units in the network.

The current study measured unit timescales using a simple token distance, and so the method may be applied to understanding recurrent neural nets beyond language models. It will be insightful for future studies to investigate whether the processing timescales characterized via token distance are comparable to those measured using functional measures, such as syntactic distance. Relatedly, while we explored the timescale variance under several context conditions, a more thorough investigation will be needed to examine how the timescales of individual units may vary at different positions within a sentence, both in terms of token location and syntactic location.

Processing timescales may exhibit an analogous hierarchical organization in LSTMs and in the human cerebral cortex: in both cases, a subset of nodes with high degree and high inter-connectivity express unusually long timescales. More detailed testing of this apparent correspondence is required, however, because units within an LSTM layer are not spatially embedded and constrained as in biological brains, and thus the LSTM units do not express a spatially graded timescale topography.

ACKNOWLEDGMENTS
C.J.H. and H.-Y.S.C. gratefully acknowledge the support of the National Institutes of Mental Health (grant R01MH119099).
QxnC9gJQbz2
Great to see some methods from neuroscience applied to interpretability research for a relevant question; results and setup could be improved
3: Clear rejection
_**Update after author response**_: I think this is a very promising paper, and I am really excited about seeing techniques from neuroscience employed to answer questions about neural network models. The authors have further conducted several additional experiments after reviewer comments, which I appreciate. However, my most fundamental concern -- the mismatch between the method and the way that it is validated -- unfortunately still stands, which is why I would encourage the authors to further pursue this line of work, but recommend rejecting it for ICLR.

**Summary** This paper proposes to apply time-scale methods from neuroscience to investigate the timescale organisation in neural language models. More specifically, the authors test the timescale of individual units in a word- and character-level LSTM by comparing the units' activation values on the same sentence, but with different contexts. Using this method, the authors first show that the higher layers on average have longer timescales. Then, for all units, they fit a logistic function to the "recovery" curves and use the half-times of these curves as an indication of the time scale of these units. They test the syntax unit and two long-distance units found by Lakretz et al. and show that the number units have similar time-scales, while the syntax unit has a longer time scale. Lastly, the authors analyse the connectivity between the longer time scale units and find that the units with longer processing timescales make a larger number of strong projections. Within these units, the authors identify two sets of units in the word-level LSTM: "controller units", which play a role in how the connectivity of the network is updated, and "integrator units", which instead integrate information.

**Strong points**
- Neuroscience has long been asking questions about the brain that are very similar to the questions we now ask about neural networks; cross-pollination between these fields is extremely important, and this paper contributes to this
- Aside from the main technique, the paper introduces some interesting and useful methods, such as projectivity analysis and k-core analysis. I think these methods can be useful for other researchers as well
- Time scale analysis of LSTMs is a very relevant and interesting topic that deserves more attention than it is currently getting

*Concerns*
- My main concern is that there seems to be a mismatch between the "language time scales" on which the authors operate: their experiment is designed to investigate the impact of extra-sentential context, but the Lakretz et al. results they keep coming back to concern syntactic phenomena that are only relevant *within* a sentence, which is a different scale. In other words, the units found by the authors of this paper are long-distance when it comes to integrating context, but the syntax and number units found by Lakretz et al. are not really related to that: they model relationships *within* sentences. Theoretically speaking, they should be reset at the beginning of every new sentence and they should thus be completely independent of the preceding content. That the authors find this to be untrue is interesting, but inconsistent with what Lakretz et al. describe these units as doing. Since this is not addressed at all in the paper, it makes the results in general a bit difficult to interpret.
_**Update after author response**: In their response the authors clarified that they have only analysed single sentences, where two distinct subsentences are combined with a conjunction. This, unfortunately, does not make a difference for the argument: whether two sentences are split by a full stop or instead concatenated with "and" does not make any difference for the argument above, since the subject-verb agreement relationships modelled by the units the authors examine do not cross these boundaries either. Furthermore, in their response the authors state that they find that the context representations of units were 'reset' at sentence boundaries, as I asked before. I appreciate that the authors did these additional experiments, but I find the result somewhat worrisome: since the units they are looking at are syntactic units that encode number across long-distance subject-verb relationships, they should be reset both when a new sentence starts and when a new conjunct with a new relationship starts. In terms of SV relationships, there should be no difference between "The boy kicked the ball and the girl caught it" and "The boy kicked the ball. The girl caught it." That the authors do find a difference points to a potential flaw in methodology._
- Relatedly, the authors report that the syntax unit is a long-distance unit, while the number units are not. This is not consistent with what they say in the related work section, but also not with the results reported by Lakretz et al., who hypothesise that the syntax units represent the depth of the syntactic dependency. This is something that changes with every new incoming word, whereas the number units are the ones that have to keep their activation constant across time.
- While, as I said before, I think it is great that the authors try to bring methods from neuroscience into the field, I do think that in this case the main method they propose is only very marginally different from earlier work (in particular Khandelwal et al.). Perhaps it would make more sense to put a bit more stress on the rest of the methods as well (btw, also Lakretz et al. do connectivity analysis).
- The results are a bit underexplained, and understanding them requires many back-and-forths to the appendix. I would have appreciated a bit more motivated interpretation of several aspects. For instance: why is there such a large difference in activation differences across units in the "pre-shared segment" part, and is this related to the half-time (it seems so from the plots)? What is the difference between character and word-level models in terms of expectations (we'd expect there to be an additional level of time-hierarchy, perhaps)? How does assessing activation differences differ from assessing correlations, in terms of conclusions? These things should, in my opinion, all be worked out a bit better.
- Lastly, there are a few unsupported claims, the most important of which is that their method recovers the previously discovered units of Lakretz et al., while (as far as I understand) they actually only *use* their method to analyse those neurons, but did not find them independently. (For other suggestions and comments, see below.)

To summarise, while I think the idea is very nice and definitely worth working out further, I do think that some work is needed to make this a publishable paper.
*Suggestions/comments for authors*

_Typographic_:
- If you use quotes in latex, you should use different ones for left (`) and right ('), for them to appear correctly (check for instance line three in the introduction)
- To prevent additional spaces after abbreviations like e.g. and i.e., put a backslash: "e.g.\ "
- Lerner et al --> put all references within parentheses
- Introduction switches from present tense to past tense in the last paragraph
- "we measure the time-taken for the effect of this prior context to "decay" (see Methods)" --> I don't really understand what this means; you measure how long it takes for these changes to not be measurable anymore?
- Try to avoid double parentheses with abbreviations, e.g.: (WLSTM Gulordava et al. (2018)) should be: (WLSTM, Gulordava et al; 2018). You can do this with \citep[text before][text after]{citation}.
- "has an 650-dimensional" --> "has a 650-dimensional"
- "without fine-tuning to the novel" --> I first thought this sentence was unfinished until I read back and realised that "the novel" is your corpus. This is a bit confusing; perhaps you could rephrase.
- "how the cell state activation differ" --> "how the cell state activations differ"
- "we will see that the activation difference drop quickly" --> drops quickly / see the activation difference drop quickly
- There are several references that were published at ACL* conferences that are listed as arxiv papers in the reference list (Lakretz et al, Gulordava et al, Khandelwal et al)

_Content_
- I would say that the conclusion that "Overall, prior works suggests that a small subset of units track long-range dependencies" is rather overstated: Lakretz et al. found that the units representing long-distance number information were sparse, but this does not imply that long-range information in general is represented sparsely. Their method also focusses quite exclusively on finding sparsely distributed properties, as more distributed properties cannot be found with ablation. Furthermore, this is just one study, focusing on one syntactic aspect. I would suggest rephrasing this a bit.
- Lakretz et al. actually identified several syntax units, but only one of them was interpretable.
- I find it a bit confusing that in 3.2, second paragraph, you first talk about comparing cell state activation, then say that you compare hidden state activations and then talk again about the cell state activation
- Figure 1 C & D: I don't think these figures add much to the paper, for the following reasons: i) they show only individual units and no average, making it difficult to interpret the values; ii) while, as pointed out in 5.1, the *rate* of decay is what matters most, the cut-off point is not indicated in the figure, which puts stress on irrelevant aspects: the actual difference between the two lines.
- I would appreciate having Figure A.1 in the main text; it is important for the story.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Mapping the Timescale Organization of Neural Language Models ### Paper Abstract In the human brain, sequences of language input are processed within a distributed and hierarchical architecture, in which higher stages of processing encode contextual information over longer timescales. In contrast, in recurrent neural networks which perform natural language processing, we know little about how the multiple timescales of contextual information are functionally organized. Therefore, we applied tools developed in neuroscience to map the “processing timescales” of individual units within a word-level LSTM language model. This timescale-mapping method assigned long timescales to units previously found to track long-range syntactic dependencies. Additionally, the mapping revealed a small subset of the network (less than 15% of units) with long timescales and whose function had not previously been explored. We next probed the functional organization of the network by examining the relationship between the processing timescale of units and their network connectivity. We identified two classes of long-timescale units: “controller” units composed a densely interconnected subnetwork and strongly projected to the rest of the network, while “integrator” units showed the longest timescales in the network, and expressed projection profiles closer to the mean projection profile. Ablating integrator and controller units affected model performance at different positions within a sentence, suggesting distinctive functions of these two sets of units. Finally, we tested the generalization of these results to a character-level LSTM model and models with different architectures. In summary, we demonstrated a model-free technique for mapping the timescale organization in recurrent neural networks, and we applied this method to reveal the timescale and functional organization of neural language models ### Paper Keywords ["natural language processing", "LSTM", "timescale", "hierarchy", "temporal context"] ### Paper Content ABSTRACTIn the human brain, sequences of language input are processed within a dis-tributed and hierarchical architecture, in which higher stages of processing en-code contextual information over longer timescales. In contrast, in recurrent neu-ral networks which perform natural language processing, we know little abouthow the multiple timescales of contextual information are functionally organized.Therefore, we applied tools developed in neuroscience to map the “processingtimescales” of individual units within a word-level LSTM language model. Thistimescale-mapping method assigned long timescales to units previously found totrack long-range syntactic dependencies. Additionally, the mapping revealed asmall subset of the network (less than 15% of units) with long timescales andwhose function had not previously been explored. We next probed the functionalorganization of the network by examining the relationship between the processingtimescale of units and their network connectivity. We identified two classes oflong-timescale units: “controller” units composed a densely interconnected sub-network and strongly projected to the rest of the network, while “integrator” unitsshowed the longest timescales in the network, and expressed projection profilescloser to the mean projection profile. 
Ablating integrator and controller units af-fected model performance at different positions within a sentence, suggesting dis-tinctive functions of these two sets of units. Finally, we tested the generalizationof these results to a character-level LSTM model and models with different archi-tectures. In summary, we demonstrated a model-free technique for mapping thetimescale organization in recurrent neural networks, and we applied this methodto reveal the timescale and functional organization of neural language models.11 I NTRODUCTIONLanguage processing requires tracking information over multiple timescales. To be able to predictthe final word “timescales” in the previous sentence, one must consider both the short-range context(e.g. the adjective “multiple”) and the long-range context (e.g. the subject “language processing”).How do humans and neural language models encode such multi-scale context information? Neuro-scientists have developed methods to study how the human brain encodes information over multipletimescales during sequence processing. By parametrically varying the timescale of intact context,and measuring the resultant changes in the neural response, a series of studies (Lerner et al., 2011;Xu et al., 2005; Honey et al., 2012) showed that higher-order regions are more sensitive to long-range context change than lower-order sensory regions. These studies indicate the existence of a“hierarchy of processing timescales” in the human brain. More recently, Chien & Honey (2020)used a time-resolved method to investigate how the brain builds a shared representation, when twogroups of people processed the same narrative segment preceded by different contexts. By directlymapping the time required for individual brain regions to converge on a shared representation inresponse to shared input, we confirmed that higher-order regions take longer to build a shared repre-sentation. Altogether, these and other lines of investigation suggest that sequence processing in the1The code and dataset to reproduce the experiment can be found at https://github.com/sherrychien/LSTM_timescales1Published as a conference paper at ICLR 2021brain is supported by a distributed and hierarchical structure: sensory regions have short process-ing timescales and are primarily influenced by the current input and its short-range context, whilehigher-order cortical regions have longer timescales and track longer-range dependencies (Hassonet al., 2015; Honey et al., 2012; Chien & Honey, 2020; Lerner et al., 2011; Baldassano et al., 2017;Runyan et al., 2017; Fuster, 1997).How are processing timescales organized within recurrent neural networks (RNNs) trained to per-form natural language processing? Long short-term memory networks (LSTMs) (Hochreiter &Schmidhuber, 1997) have been widely investigated in terms of their ability to successfully solve se-quential prediction tasks. However, long-range dependencies have usually been studied with respectto a particular linguistic function (e.g. subject-verb number agreement, Linzen et al. 2016; Gulor-dava et al. 2018; Lakretz et al. 2019), and there has been less attention on the broader question ofhow sensitivity to prior context – broadly construed – is functionally organized within these RNNs.Therefore, drawing on prior work in the neuroscience literature, here we demonstrate a model-freeapproach to mapping processing timescale in RNNs. 
We focused on existing language models thatwere trained to predict upcoming tokens at the word level (Gulordava et al., 2018) and at the char-acter level (Hahn & Baroni, 2019). The timescale organization of these two models both revealedthat the higher layers of LSTM language models contained a small subset of units which exhibitlong-range sequence dependencies; this subset includes previously reported units (e.g. a “syntax”unit, Lakretz et al., 2019) as well as previously unreported units.After mapping the timescales of individual units, we asked: does the processing timescales of eachunit in the network relate to its functional role, as measured by its connectivity? The question is mo-tivated by neuroscience studies which have shown that in the human brain, higher-degree nodes tendto exhibit slower dynamics and longer context dependence than lower-degree nodes (Baria et al.,2013). More generally, the primate brain exhibits a core periphery structure in which a relativelysmall number of “higher order” and high-degree regions (in the prefrontal cortex, in default-moderegions and in so-called “limbic” zones) maintain a large number of connections with one another,and exert a powerful influence over large-scale cortical dynamics (Hagmann et al., 2008; Mesulam,1998; Gu et al., 2015). Inspired by the relationships between timescales and network structure inthe brain, we set out to test corresponding hypotheses in RNNs: (1) Do units with longer-timescalestend to have higher degree in neural language models? and (2) Do neural language models also ex-hibit a “core network” composed of functionally influential high-degree units? Using an exploratorynetwork-theoretic approach, we found that units with longer timescales tend to have more projec-tions to other units. Furthermore, we identified a set of medium-to-long timescale “controller” unitswhich exhibit distinct and strong projections to control the state of other units, and a set of long-timescale “integrator units” which showed influence on predicting words where the long context isrelevant. In summary, these findings advance our understanding of the timescale distribution andfunctional organization of LSTM language models, and provide a method for identifying importantunits representing long-range contextual information in RNNs.2 R ELATED WORKLinguistic Context in LSTMs . How do LSTMs encode linguistic context at multiple timescales?Prior work suggested that the units sensitive to information that requires long-range dependenciesare sparse. By ablating one unit at a time, Lakretz et al. (2019) found two units that encode infor-mation required for processing long-range subject-verb number agreement (one for singular and onefor plural information encoding). They further identified several long-range “syntax units” whoseactivation was associated with syntactic tree-depth. Overall, Lakretz et al. (2019) suggests that asparse subset of units tracks long-range dependencies related to subject-verb agreement and syntax.If this pattern is general – i.e. if there are very few nodes tracking long-range dependencies in gen-eral – this may limit the capacity of the models to process long sentences with high complexity, forreasons similar to those that may limit human sentence processing (Lakretz et al., 2020). To testwhether long-range nodes are sparse in general, we require a model-free approach for mapping thecontext dependencies of every unit in the language network.Whole-network context dependence . Previous work by Khandelwal et al. 
(2018) investigated theduration of prior context that LSTM language models use to support word prediction. Context-dependence was measured by permuting the order of words preceding the preserved context, and2Published as a conference paper at ICLR 2021observing the increase in model perplexity when the preserved context gets shorter. Khandelwalet al. (2018) found that up to 200 word-tokens of prior context were relevant to the model perplexity,but that the precise ordering of words only mattered within the most recent 50 tokens. The token-based context-permutation method employed in this study was analogous to the approach used tomeasure context-dependence in human brain responses to movies (Hasson et al., 2008) and to audi-tory narratives (Lerner et al., 2011).Inspired by the findings of Khandelwal et al. (2018) and Lakretz et al. (2019), in the present study weset out to map the context-dependence across all of the individual units in the LSTM model. This en-abled us to relate the timescales to the effects of node-specific ablation and the network architectureitself. In addition, our context manipulations included both context-swapping (substituting alterna-tive meaningful contexts) and context-shuffling (permuting the words in the prior context to disruptinter-word structure), which allowed us to better understand how individual words and syntacticallystructured word-sequences contribute to the the context representation of individual hidden units.3 M ETHODS3.1 L ANGUAGE MODELS AND CORPUSWe evaluated the internal representations generated by a pre-trained word-level LSTM languagemodel (WLSTM, Gulordava et al., 2018) as well as a pre-trained character-level LSTM model(CLSTM, Hahn & Baroni, 2019) as they processed sentences sampled from the 427804-word(1965719-character) novel corpus: Anna Karenina by Leo Tolstoy (Tolstoy, 2016), translated fromRussian to English by Constance Garnett.For the WLSTM, we used the model made available by Gulordava et al. (2018). The WLSTM hasa 650-dimensional embedding layer, two 650-dimensional hidden layers and an output layer withvocabulary size 50,000. The model was trained and tested on Wikipedia sentences and was notfine-tuned to the novel corpus. Therefore, we only used sentences with low perplexity from thenovel in our main timescale analysis. We performed the same analysis using the Wikipedia test setfrom Gulordava et al. (2018) and obtained similar results (See Section 5.3, Figure A.4A, AppendixA.2.1). For the CLSTM, we used the model made available by Hahn & Baroni (2019). The CLSTMhas a 200-dimensional embedding layer, three 1024-dimensional hidden layers and an output layerwith vocabulary size 63. The model was trained on Wikipedia data with all characters lower-casedand whitespace removed. We tested the model with sentences sampled from Anna Karenina as theWLSTM model, and we obtained bits-per-character (BPC) similar to what Hahn & Baroni (2019)reported in their original work.3.2 T EMPORAL CONTEXT CONSTRUCTION PARADIGMIn order to determine the processing timescales of cell state vectors and individual units, we mod-ified the “temporal context construction” method developed by Chien & Honey (2020). Thus, theinternal representations of the model were compared across two conditions: (1) the Intact Contextcondition and (2) the Random Context condition. In both conditions, the model was processing thesame shared sequence of words (for example, segment B), but the preceding sentence differed acrossthe two conditions. 
In the Intact Context condition, the model processed segment B (the shared seg-ment) preceded by segment A, which was the actual preceding context from the original text. In thecurrent study, for example, segment A and B are connected by “, and” within long sentences fromthe novel corpus (Figure 1A), to ensure the temporal dependencies between A and B. In the RandomContext condition, however, the model processed the same shared input (segment B), but the contextwas replaced by segment X, which was a randomly sampled context segment from the rest of thecorpus. Segment X was therefore not usually coherently related to segment B. For the WLSTMtimescale analysis, we chose long sentences in the Intact Context condition that satisfied the follow-ing constraints: (1) mean perplexity across all words in the sentence <200, (2) the shared segmentwas longer than 25 words, and (3) the context segment was longer than 10 words. 77 sentences areincluded as trials in our analyses. In the Random Context condition, we preserved the same sharedsegments and randomly sampled 30 context segments (each longer than 10 words) from other partsof the novel. For the CLSTM timescale analysis, we used the same 77 long sentences in the Intact3Published as a conference paper at ICLR 2021Context condition, and randomly sampled 25 context segments (with length >33 characters) for theRandom Context condition.Figure 1: Method for mapping processing timescales of individual units. A.Example sentences forthe model to process in the Intact Context and Random Context condition. In the Intact Contextcondition, the shared segment is preceded by an intact context from the corpus; while in the Ran-dom Context condition, this preceding context segment is replaced by randomly sampled contextsegments. B.Schematic hidden state activation of the neural network. When the model starts toprocess the shared segment preceded by different context between the two context conditions, thehidden unit activation difference (i.e. the mean absolute difference of unit activation between thetwo conditions) decreases over time with different rates. The expected decreasing pattern of acti-vation difference of a long-timescale unit and a short-timescale unit are shown schematically in thegreen and red curves, respectively.In brief, the model is processing the same input (the shared segment) with different preceding context(the intact vs. random context). We can now measure the context dependence of individual unitsby examining how the cell state activations differ between the two conditions, while the network isprocessing the shared segments with identical input. Any difference in internal representations mustarise from the context manipulation, since the current input is the same. A decrease in activationdifference over time implies that the units exposed in the Intact context and Random context startto build a similar representation as they process the shared input. For a long-timescale unit, whosecurrent state is dependent on information in the far-preceding context, we will see that the activationdifference is preserved across contexts (Figure 1B, green curve), even while the unit is processingthe shared input. 
On the other hand, for a short-timescale unit whose activation is driven largely by the current input, we will see that the activation difference drops quickly (Figure 1B, red curve) as the unit processes the shared input.

4 HIERARCHICAL ORGANIZATION OF TIMESCALES ACROSS LAYERS

Do higher levels of the LSTM model exhibit greater context-dependence? Lakretz et al. (2019) observed that long-range functional units were more common in higher layers, and in general, higher levels of hierarchical language models exhibit longer-range context-dependence (Jain et al., 2019; Jain & Huth, 2018). Therefore, to validate our stimuli and the sensitivity of our methods, we first compared the processing timescales of different hidden layers in both of the LSTMs, by correlating the cell state vectors, column by column, between the Intact condition and Random condition.

We found that both layers showed near-zero correlation when processing the different contexts, and the correlation increased as they began to process the shared input. In the WLSTM, the correlation increased more slowly for second-level cell state vectors than for first-level cell state vectors. Thus, the representation of the second-level cell state is more sensitive to the different context than the first level. Similarly, for the CLSTM model, the third-level cell state exhibited longer-lasting context sensitivity than lower levels (Figure 2). This observation of longer context-dependence in higher stages of processing is consistent with prior machine learning analyses (Lakretz et al., 2019; Jain & Huth, 2018) and is also analogous to what is seen in the human brain (Hasson et al., 2015; Chien & Honey, 2020; Lerner et al., 2011; Jain et al., 2019). Based on the finding of longer context dependence in higher layers, we examined single units in the highest-level hidden units, i.e. the second level of the WLSTM (n=650) and the third level of the CLSTM (n=1024).

Figure 2: Context effect measured by cell-state vector correlation at different layers in the word-level LSTM (WLSTM) and character-level LSTM (CLSTM). A. Correlation curves of the WLSTM cell-state vectors across the Intact Context condition and Random Context condition as a function of input token. In both models, the correlation increased as the models began to process the shared segment. Higher-level cell states exhibited a slower increase in correlation, compared to lower-level cell states, indicating that the higher levels retain more of the prior context information for longer. B. As for A, but applied to the three levels of the CLSTM. Similar to the WLSTM, the higher-level cell state of the CLSTM showed more context sensitivity than the lower-level cell state.

5 PROCESSING TIMESCALES OF INDIVIDUAL UNITS WITHIN LSTM LAYERS

5.1 QUANTIFYING SINGLE UNIT TIMESCALES

We examined the absolute single unit activation difference when processing the shared segments preceded by different context. As expected, most of the hidden units showed different activation when the input tokens were different (i.e. while processing the non-shared context in the Intact Context and Random Context conditions). However, once the shared input tokens begin (at t = 0), the Intact-Random activation differences drop (Figure A.1A, A.1B).

We used the rate at which the curves drop to quantify the processing timescale, as this is a measure of how quickly the responses align across different context conditions.
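To make this quantification concrete, the following is a minimal sketch of the pipeline: computing the per-unit Intact-vs-Random activation differences and extracting a time-to-half-maximum from a logistic fit (the fit itself is formalized in Eqs. 1–2 below). The array shapes, the averaging over random-context samples, and the exact half-maximum convention are illustrative assumptions, not the authors' released implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, L, k, x0, d):
    # Logistic decay of the activation difference: Y(x) = L / (1 + exp(k * (x - x0))) + d
    return L / (1.0 + np.exp(k * (x - x0))) + d

def unit_timescales(act_intact, act_random, n_shared=25):
    """act_intact, act_random: (trials, time, units) cell-state activations under
    the two context conditions; the final n_shared timesteps are the shared segment."""
    diff = np.abs(act_intact - act_random).mean(axis=0)   # (time, units) mean |difference|
    shared = diff[-n_shared:]                             # restrict to the shared segment
    x = np.arange(n_shared)
    timescales = np.full(shared.shape[1], np.nan)
    for u in range(shared.shape[1]):
        y = shared[:, u]
        try:
            popt, _ = curve_fit(logistic, x, y,
                                p0=[y[0] - y[-1], 1.0, 2.0, y[-1]], maxfev=5000)
        except RuntimeError:
            continue                                      # exclude units that cannot be fit
        yfit = logistic(x, *popt)
        half = (yfit[0] - yfit[-1]) / 2.0
        # time-to-half-maximum: first timestep where the fitted decay has dropped
        # below half of its initial-minus-final amplitude (an assumed convention)
        below = np.where(yfit - yfit[-1] < half)[0]
        timescales[u] = below[0] if below.size else n_shared
    return timescales
```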
To quantify the timescale of individual units, we fit the activation difference curves with a logistic function:

Y(x) = L / (1 + e^{k(x − x0)}) + d    (1)

As shown in Figure A.1A and Figure A.1B, the logistic function fit the raw activation difference curves. We then computed the "timescale" of each unit as the time-to-half-maximum of the logistic decay. In particular, for the WLSTM we used the activation difference Y(0) at the beginning of the shared segment, and at the end of the shared segment Y(24) (Y(79) for the CLSTM), to calculate the time-to-half-maximum of unit i as:

timescale_i = ⌈Y⁻¹((Y_i(0) − Y_i(24)) / 2)⌉    (2)

where the inverse function Y⁻¹(y) identifies the largest integer t for which Y(t) < y. We included 635 units in the WLSTM and 1012 units in the CLSTM for further analysis, after excluding the units which could not be accurately fit by a logistic function (see Appendix A.1).

5.2 DISTRIBUTION OF UNIT TIMESCALES IN LSTM LANGUAGE MODELS

The results showed that of the 635 WLSTM units whose processing timescale we mapped, approximately 70% of the units were insensitive to long-range context (processing timescale < 3 words): their activation difference dropped immediately at onset of the shared segment. In contrast, only approximately 13% of the units had timescales > 7 words (Figure A.2A). Figure 3A shows the absolute activation difference of all units in the WLSTM sorted by timescale (long to short). Some of the longer-timescale units continued to exhibit a large activation difference even when processing the shared segments for more than 20 tokens.

As we were testing the same word-level LSTM previously studied by Lakretz et al. (2019), we began by examining the timescales of hidden-state units that were already known to be involved in processing context-dependent language information: a "singular number unit" 988, a "plural number unit" 776, and a "syntax unit" 1150. We found that, compared to other units, both "number" units had medium timescales (∼3 words, ranked ∼129 of 635 units), while the "syntax" unit had a long timescale (7 words, ranked 64 of 635 units) (Figure A.1).

We repeated the timescale mapping in the CLSTM model, and again identified a small subset of long-timescale units (Figure 3B, Figure A.2B). Although there were overall more units in the CLSTM, over 63% of the units were insensitive to the context (timescale < 3 characters). Fewer than 15% of the units exhibited timescales > 10 characters, and the unit with the longest timescale only dropped to its half-maximum activation-difference after 50 characters of shared input.

5.3 TIMESCALE VARIATION ACROSS DATASETS AND CONTEXT CONDITIONS

To ensure that the timescales we measured were robust across datasets, we conducted the same analysis on the WLSTM using the Wikipedia testing dataset used in Gulordava et al. (2018). The mapped timescales were highly correlated (r=0.82, p < 0.001) across the Anna Karenina dataset and the Wikipedia dataset (Appendix A.2.1, Figure A.4A).

Similarly, to confirm that the timescales measured were not specific to our testing using the ", and" conjunction point, we also measured timescales at an alternative segmentation point, and found that the timescales were largely preserved (r=0.83, p < 0.001), notwithstanding a small set of notable exceptions (Appendix A.2.2, Figure A.4B).

Although we measured the timescales of context dependence using "token distance", these measures are not invariant to changes in the "syntactic distance".
For example, if one were to replace a comma with a "full stop", then the token distance would be unaltered but the syntactic distance could be greatly altered. Indeed, we found that most units showed little context dependence when the preceding context segment ended with a "full stop", which served as a clear signal for the end of a sentence (Appendix A.2.3, Figure A.4C).

Finally, we examined whether the contextual information retained by the language models (and the associated timescale measurement) was sensitive to linguistic structure in the context, or whether it was primarily driven simply by the presence or absence of individual words. To this end, we generated text for the Random Context condition by shuffling the order of words from the Intact segment. We found that while the presence of individual words did play an important role in determining the context representations (and thus the timescales), several units showed a longer timescale when the prior context was composed of coherently structured language (Appendix A.2.4, Figure A.4D).

Figure 3: Timescale organization in the word-level LSTM (WLSTM) and character-level LSTM (CLSTM) language models. A. Absolute activation difference for each WLSTM hidden unit over time, with units (rows) sorted by timescales. A small set of long-timescale units (top) sustain an activation difference during shared segment processing, but most (bottom) are context-insensitive short-timescale units. B. Absolute activation difference for each CLSTM unit over time, with units sorted by timescales. Similar to the WLSTM, a small set of long-timescale CLSTM hidden units maintain long-range contextual information.

6 CONNECTIVITY OF MEDIUM- TO LONG-TIMESCALE UNITS IN LSTMS

Having mapped the timescales of each processing unit, we next asked: how does the processing timescale of a unit relate to its functional role within the network? More specifically, are units with longer timescales also units with high degree in the connectivity network? To answer these questions, we analyzed (1) the projection strength of each unit and (2) the similarity of the overall projection pattern (hidden-to-gates) across different units. The projection patterns were defined using the direct weight projections from one hidden unit at time t to the input and forget gates of other hidden units at time t + 1.

In LSTMs, the amount of contextual (c_{t−1}) and input (c̃_t) information stored in the cell state (c_t) is determined by the forget gate (f_t) and input gate (i_t) activation (Eq. 3); and the activations of the gates i_t and f_t are determined by the current input at time t and the hidden units at time t − 1 through the weight matrices U and W (Eqs. 4, 5):

c_t = f_t ⊙ c_{t−1} + i_t ⊙ c̃_t    (3)
i_t = σ(U_i x_t + W_i h_{t−1} + b_i)    (4)
f_t = σ(U_f x_t + W_f h_{t−1} + b_f)    (5)

Here, we were interested in understanding how the contextual information over different timescales is projected from the hidden units to the input and forget gates of other units, and further influences the update of the cell states. Thus, we analyzed the network connectivity focusing on the weight matrices W_i and W_f within the highest layer of the WLSTM or CLSTM.

6.1 STRONG PROJECTIONS FROM LONG-TIMESCALE HIDDEN UNITS TO GATE UNITS

Units with longer processing timescales made a larger number of strong projections (|z-score| > 5, Appendix A.3) to the input and forget gates of other units in both the WLSTM (r=0.31, p < 0.001, Figure 4A) and CLSTM models (r=0.24, p < 0.001, Figure A.5A).
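A minimal sketch of this projection-strength analysis follows, assuming the model is a standard torch.nn.LSTM (whose weight_hh_l{k} matrix stacks the input, forget, cell, and output gate blocks in that order); the |z-score| > 5 threshold follows the text, while the z-scoring over all hidden-to-gate weights is our assumption:

```python
import numpy as np
import torch

def strong_projection_counts(lstm, layer=1, z_thresh=5.0):
    """Count, for each hidden unit, its strong outgoing hidden-to-gate projections."""
    W = getattr(lstm, f"weight_hh_l{layer}").detach().cpu().numpy()  # (4H, H)
    H = W.shape[1]
    W_i, W_f = W[:H], W[H:2 * H]           # hidden-to-input-gate, hidden-to-forget-gate
    proj = np.concatenate([W_i, W_f], axis=0)   # (2H, H); column j = projections sent by unit j
    z = (proj - proj.mean()) / proj.std()       # z-score over all hidden-to-gate weights
    return (np.abs(z) > z_thresh).sum(axis=0)   # strong projections per source unit

# Usage sketch: correlate projection degree with the previously mapped timescales
# degree = strong_projection_counts(model.rnn, layer=1)
# r = np.corrcoef(degree, timescales)[0, 1]
```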
Furthermore, we found that the "syntax" unit (Unit 1150) reported by Lakretz et al. (2019) in the WLSTM model possessed the largest number of strong projections to the input and forget gates of all other units, and the major recipients from Unit 1150 were medium- to long-timescale units (Figure 4B).

6.2 IDENTIFYING CONTROLLER UNITS IN LSTM LANGUAGE MODELS

The presence of strong projections from the "syntax" unit to other long-timescale units motivated us to further explore whether high-degree, long-timescale units in the LSTM also densely interconnect to form a "core network", perhaps analogous to what is seen in the brain (Hagmann et al., 2008; Mesulam, 1998; Baria et al., 2013). If so, this set of units may have an especially important role in controlling how prior context is updated and how it is used to gate current processing, analogous to the controller system in the brain (Gu et al., 2015). To identify these putative "controller units", we binarized the network by identifying the top 258 projection weights from the weight matrices (see Appendix A.3), which provided the edges for a network analysis. We then used k-core analysis (Batagelj & Zaversnik, 2003) to identify the "main network core" (the core with the largest degree) of the network (Figure A.3). At the maximal k = 5, the k-core analysis yielded a set of densely interconnected nodes, composed of many long-timescale and medium-timescale units (Figure A.3; also labeled in red in Figure 4A). We (tentatively) refer to this set as the "controller" set of the network. Performing the same k-core analyses on the CLSTM model, we observed that the main core network was again composed of disproportionately many medium- and long-timescale "controller" units (Figure A.5A).

Figure 4: Timescale and connectivity organization in a word-level LSTM. A. Long-timescale units exhibited stronger projections from the hidden state at time t to the forget gate and input gate at time t + 1. B. Strength of hidden-forget gate and hidden-input gate projections from a high-degree "syntax" unit to all other units. The units receiving strong projections (|z-score| > 5) are labeled. C. Ablating the two sets of long-timescale units results in different impacts on LSTM performance. Specifically, ablating "controller" units impaired overall word prediction (upper panel), while ablating "integrator" units impaired prediction of words in the later part of the sentences (bottom panel). D. Multi-dimensional scaling representation of network connectivity. The distance between two nodes indicates the similarity of their hidden-to-gate connection patterns. The size of each node indicates its degree (the number of strong projections from that node to the gate units). An edge between nodes indicates a significant hidden-to-gate projection between them.

6.3 DISTINCTIVE ROLES OF LONG-TIMESCALE CONTROLLER AND INTEGRATOR UNITS

We used multi-dimensional scaling (MDS) to visualize the similarity of projection patterns across LSTM units. We recovered a 2-dimensional MDS embedding, in which the inter-unit distances were defined based on the similarity of their hidden-to-gate projection patterns (i.e., similarity of values in the unthresholded LSTM weight matrices W_i and W_f).
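The two network analyses of this section can be sketched as follows, reusing the stacked hidden-to-gate matrix from the previous snippet. The top-258 binarization follows the text; the graph construction details and the Euclidean dissimilarity fed to MDS are our assumptions:

```python
import networkx as nx
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.manifold import MDS

def core_and_embedding(proj, top_n=258):
    """proj: (2H, H) stacked hidden-to-input/forget-gate weights; column j holds
    the projections sent by hidden unit j (as in the previous sketch)."""
    H = proj.shape[1]
    # Section 6.2: binarize to the top_n strongest projections, then take the main core
    rows, cols = np.unravel_index(np.argsort(np.abs(proj).ravel())[-top_n:], proj.shape)
    G = nx.Graph()
    G.add_nodes_from(range(H))
    G.add_edges_from((c, r % H) for r, c in zip(rows, cols) if c != r % H)  # skip self-loops
    controller_candidates = sorted(nx.k_core(G).nodes())  # k=None -> maximal-k (main) core
    # Section 6.3: 2-D MDS over dissimilarities of the unthresholded projection patterns
    patterns = proj.T                                  # one hidden-to-gate pattern per unit
    dissim = cdist(patterns, patterns)                 # Euclidean distance, an assumed choice
    xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
    return controller_candidates, xy
```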
We visualized the MDS solution as agraph structure, in which each node is a unit, and the edges reflect connection properties of that unit.Figure 4D shows the resulting 2-D space, with units color-coded by their timescale.“Controller units” (labeled on Figure 4D) were positioned around the periphery of the MDS space,suggesting that these units expressed projection patterns that were distinct from other “controller”units and also from the rest of the network. In contrast, we observed several long-timescale units8Published as a conference paper at ICLR 2021positioned in the center of the MDS space, suggesting that the projection patterns of these units weresimilar to the mean projection pattern. We refer to this more MDS-central set as the “integrator units”(labeled in green in Figure 4A). Similar to the WLSTM, the projection patterns of the “controllerunits” in the CLSTM were distinct from other units in the network, according to the MDS results(Figure A.5C). However, we did not observe “integrator units” positioned in the center of the MDSspace of the CLSTM.Are the “controller” and “integrator” units particularly important for the model’s ability to predictthe next token? To test the functional importance of these subsets of units, we conducted groupablation analyses (See Appendix A.4). Ablating controller units reduced the accuracy of tokenprediction overall, while ablating integrator units only reduced prediction accuracy for the last wordsof the sentences (Figure 4C). The results confirm that the putative controller and integrator nodesare functionally significant, with distinctive roles in the WLSTM language model.Finally, to test the generalization of the timescale and connectivity analyses to a different modelarchitecture, we conducted preliminary analyses on a Gated Recurrent Unit (GRU) language model(Cho et al., 2014) and another word-level LSTM model with a smaller hidden size (100 units) perlayer. The models were trained using similar parameter settings as in Gulordava et al. (2018) un-til they converged without any model-specific optimization. We found similar sparsity of long-timescale units in both models, but did not observe the same relationship between timescales andconnectivity (Appendix A.5; A.6; Figure A.7; A.8; A.9; A.10).7 D ISCUSSIONWe demonstrated a new method for mapping the timescale organization in recurrent neural languagemodels. Using this method, we mapped the timescale distributions of units within word-level andcharacter-level LSTM language models, and identified a small set of units with long timescales. Wethen used network analyses to understand the relationship between the timescale of a unit and itsconnectivity profile, and we distinguished two subsets of long-timescale units with seemingly dis-tinctive functions. Altogether, we proposed methods combining timescale and connectivity analysesfor discovering timescale and functional organization in language models.The units with longer processing timescales included some units whose role in long-range lan-guage dependencies had already been established (Lakretz et al., 2019), but almost all of the longtimescale units are of unknown function. The timescale mapping procedure described here providesa model-free method for identifying nodes necessary for long-range linguistic and discursive pro-cesses (e.g. tracking whether a series of words constitutes an assertion or a question). 
Future studies of these neural language models could focus on the specific linguistic information tracked by the long-timescale units, especially the "controller" units which control the information flow of other units in the network.

The current study measured unit timescales using a simple token distance, and so the method may be applied to understanding recurrent neural nets beyond language models. It will be insightful for future studies to investigate whether the processing timescales characterized via token distance are comparable to those measured using functional measures, such as syntactic distance. Relatedly, while we explored the timescale variance under several context conditions, a more thorough investigation will be needed to examine how the timescales of individual units may vary at different positions within a sentence, both in terms of token location and syntactic location.

Processing timescales may exhibit an analogous hierarchical organization in LSTMs and in the human cerebral cortex: in both cases, a subset of nodes with high degree and high inter-connectivity express unusually long timescales. More detailed testing of this apparent correspondence is required, however, because units within an LSTM layer are not spatially embedded and constrained as in biological brains, and thus the LSTM units do not express a spatially graded timescale topography.

ACKNOWLEDGMENTS

C.J.H. and H.-Y.S.C. gratefully acknowledge the support of the National Institutes of Mental Health (grant R01MH119099)<|im_end|> <|im_start|>assistant ### Review Title Great to see some methods from neuroscience applied to interpretability research for a relevant question, results and setup could be improved ### Review Text _**Update after author response**_: I think this is a very promising paper, and I am really excited about seeing techniques from neuroscience employed to answer questions about neural network models. The authors have further conducted several additional experiments after reviewer comments, which I appreciate. However, my most fundamental concern -- the mismatch between the method and the way that it is validated -- unfortunately still stands, which is why I would encourage the authors to further pursue this line of work, but recommend rejecting it for ICLR. **Summary** This paper proposes to apply time-scale methods from neuroscience to investigate the timescale organisation in neural language models. More specifically, the authors test the timescale of individual units in a word- and character-level LSTM by comparing the units' activation values on the same sentence, but with different contexts. Using this method, the authors first show that the higher layers on average have longer timescales. Then, for all units, they fit a logistic function to the "recovery" curves and use the half-times of these curves as an indication of the time scale of these units. They test the syntax unit and two long-distance units found by Lakretz et al and show that the number units have similar time-scales, while the syntax unit has a longer time scale. Lastly, the authors analyse the connectivity between the longer time scale units and find that the units with longer processing timescales make a larger number of strong projections. Within these units, the authors identify two sets of units in the word-level LSTM: "controller units", that play a role in how the connectivity of the network is updated, and "integrator units", that instead integrate information.
**Strong points** - Neuroscience has long been asking questions about the brain that are very similar to the questions we now ask about neural networks, cross-pollination between these fields is extremely important, and this paper contributes to this - Aside from the main technique, the paper introduces some interesting and useful methods, such as projectivity analysis and k-core analysis. I think these methods can be useful for other researchers as well - Time scale analysis of LSTMs is a very relevant and interesting topic, that deserves more attention than it is currently getting *Concerns* - My main concern is that there seems to be a mismatch between the "language time scales" on which the authors operate: their experiment is designed to investigate the impact of extra-sentential context, but the Lakretz et al results they keep coming back to concern syntactic phenomena that are only relevant *within* a sentence, which is a different scale. In other words, the units found by the authors of this paper are long-distance when it comes to integrating context, but the syntax and number units found by Lakretz et al are not really related to that: they model relationships *within* sentences. Theoretically speaking, they should be reset at the beginning of every new sentence and they should thus be completely independent from the content. That the authors find this to be untrue is interesting, but inconsistent with what Lakretz et al describe these units do. Since this is not addressed at all in the paper, it makes the results in general a bit difficult to interpret. _**Update after author response**: In their response the authors clarified that they have only analysed single sentences, where two distinct subsentences are combined with a conjunction. This, unfortunately, does not make a difference for the argument: whether two sentences are split by a full stop or instead concatenated with "and" does not make any difference for the argument above, since the subject-verb agreement relationships that the units the authors look at model do not cross these boundaries either. Furthermore, in their response the authors state that they find that the context representations of units were 'reset' at sentence boundaries, as I asked before. I appreciate that the authors did these additional experiments, but I find the result somewhat worrisome: since the units they are looking at are syntactic units that encode number across long distance subject verb relationships, they should be reset both when a new sentence starts, as well as when a new conjunct with a new relationship starts. In terms of SV relationships, there should be no difference between "The boy kicked the ball and the girl caught it" and "The boy kicked the ball. The girl caught it." That the authors do find a difference points to a potential flaw in methodology._
- While, as I said before, I think it is great that the authors try to bring methods from neuroscience into the field, I do think that in this case the main method they propose is only very marginally different from earlier work (in particular Khandelwal et al). Perhaps it would make more sense to put a bit more stress on the rest of the methods as well (btw, also Lakretz et al do connectivity analysis). - The results are a bit underexplained, and understanding them requires many back and forths to the appendix. I would have appreciated a bit more motivated interpretation of several aspects. For instance: why is there such a large difference in activation differences in different units in the "pre-shared segment" part, and is this related to the half-time (it seems so from the plots)? What is the difference between character and word-level models in terms of expectations (we'd expect there to be an additional level of time-hierarchy, perhaps?) How do assessing activation differences and correlations differ in terms of conclusions? These things should, in my opinion, all be worked out a bit better. - Lastly, there are a few unsupported claims, the most important of which is that their method recovers the previously discovered units of Lakretz et al, while (as far as I understand), they actually only *use* their method to analyse those neurons, but did not find them independently. (for other suggestions and comments, see below). To summarise, while I think the idea is very nice and definitely worth working out further, I do think that some work is needed to make this a publishable paper. *Suggestions/comments for authors* _Typographic_: - If you use quotes in latex, you should use different ones for left (`) and right ('), for them to appear correctly (check for instance line three in the introduction) - To prevent additional spaces after abbreviations like e.g. and i.e., put a backslash: "e.g.\ " - Lerner et al --> put all references within parentheses - Introduction switches from present tense to past tense in the last paragraph - "we measure the time-taken for the effect of this prior context to "decay" (see Methods)" --> I don't really understand what this means, you measure how long it takes for these changes to not be measurable anymore? - Try to avoid double parentheses with abbreviations, e.g.: (WLSTM Gulordava et al. (2018)) should be: (WLSTM, Gulordava et al; 2018). You can do this with \citep[text before][text after]{citation}. - "has an 650-dimensional" --> "has a 650-dimensional" - "without fine-tuning to the novel" --> I first thought this sentence was unfinished until I read back and realised that "the novel" is your corpus. This is a bit confusing; perhaps you could rephrase. - "how the cell state activation differ" --> "how the cell state activations differ" - There are several references that were published at ACL* conferences that are listed as arxiv papers in the reference list (Lakretz et al, Gulordava et al, Khandelwal et al)
Their method also focusses quite exclusively on finding sparsely distributed properties, as more distributed properties cannot be found with ablation. Furthermore, this is just one study, focusing on one syntactic aspect. I would suggest rephrasing this a bit. - Lakretz et al. actually identified several syntax units, but only one of them was interpretable. - I find it a bit confusing that in 3.2, second paragraph, you first talk about comparing cell state activation, then say that you compare hidden state activations and then talk again about the cell state activation - Figure 1 C & D: I don't think these figures add much to the paper, for the following reasons i) They show only individual units and no average, making it difficult to interpret the values ii) while, as pointed out in 5.1, the *rate* of decay is the most important, the cut-off point is not indicated in the figure, which puts a stress on irrelevant aspects: the actual difference between the two lines. - I would appreciate having Figure A.1 in the main text, it is important for the story. ### Review Rating 3: Clear rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
QAMjwGiU9pk
CUHK.edu.hk/2021/Course/IERG5350
2021
Using Enhanced Gaussian Cross-Entropy in Imitation Learning to Digging the First Diamond in Minecraft
["Yingjie CAI", "Xiao Zhang"]
Although state-of-the-art reinforcement learning (RL) systems have led to breakthroughs in many difficult tasks, the sample inefficiency of standard reinforcement learning methods still precludes their application to extremely complex tasks. This limitation prevents many reinforcement learning systems from being applied to real-world problems, in which environment samples are expensive. To address this problem, MineRL (13) provides an ideal development environment to facilitate research on leveraging fewer human demonstrations with more efficient reinforcement learning systems. Based on the MineRL environment, we propose an enhanced Gaussian cross-entropy (EGCE) loss for imitation learning problems to achieve ideal performance. On the ObtainDiamond task, our EGCE achieves about a 7.7% improvement over a strong baseline imitation learning pipeline. The demo video is available here.
["reinforcement learning ObtainDiamond"]
Using Enhanced Gaussian Cross-Entropy in Imitation Learning to Digging the First Diamond in Minecraft
Yingjie Cai & Xiao Zhang
Department of Electronic Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong
{caiyingjie,xzhang9411}@link.cuhk.edu.hk

Abstract

Although state-of-the-art reinforcement learning (RL) systems have led to breakthroughs in many difficult tasks, the sample inefficiency of standard reinforcement learning methods still precludes their application to extremely complex tasks. This limitation prevents many reinforcement learning systems from being applied to real-world problems, in which environment samples are expensive. To address this problem, MineRL (13) provides an ideal development environment to facilitate research on leveraging fewer human demonstrations with more efficient reinforcement learning systems. Based on the MineRL environment, we propose an enhanced Gaussian cross-entropy (EGCE) loss for imitation learning problems to achieve ideal performance. On the ObtainDiamond task, our EGCE achieves about a 7.7% improvement over a strong baseline imitation learning pipeline. The demo video is available here.

1 Introduction

As deep reinforcement learning (DRL) methods are applied to increasingly difficult and complex problems, the number of samples used for training increases: AlphaGo uses about 5 million games of self-play (27), while AlphaStar uses over 200 years of StarCraft II (3). Moreover, OpenAI Five spent more than 11,000 years of Dota 2 gameplay (4).

1.1 Task Formulation

Although there exist many data augmentation methods and specially designed real-world environments for limited numbers of trials, these methods remain insufficiently sample efficient for a large part of complex real-world domains. Therefore an effective environment is necessary for such situations. MineRL (13) provides a large-scale dataset containing over 60 million state-action pairs of human demonstrations, along with several related tasks designed within the Minecraft game environment.

The main task in MineRL (13) is solving the ObtainDiamond environment (12; 14) in Minecraft. Minecraft is a 3D, first-person, open-world game centered around resource gathering and item/structure creation. These structures and items have prerequisite tools and materials required for their creation. As a result, many items require the completion of a series of natural subtasks. Solving the ObtainDiamond environment consists of controlling an embodied agent to obtain a diamond by navigating the complex item hierarchy of Minecraft. In solving this task, a learning algorithm has direct access to a 64×64 pixel observation from the perspective of the embodied Minecraft agent, and a set of discrete observations consisting of each item required for obtaining a diamond that the agent has in its possession. The action space is the Cartesian product of continuous view adjustment (turning and pitching), binary movement commands (left/right, forward/backward), and discrete actions for placing blocks, crafting items, smelting items, and mining/hitting enemies.

Figure 1: These stages exhibit the different periods through which an agent/player obtains the first diamond in the Minecraft ObtainDiamond environment.

An agent receives reward for completing the full task of obtaining a diamond. The full task of obtaining a diamond can be decomposed into a sequence of prerequisite subtasks of increasing difficulty. An agent also receives reward for the first time it accomplishes each subtask in the sequence.
An agent receives twice the reward received for accomplishing the previous subtask (starting from a reward of 1). The exception to this rule is achieving the full ObtainDiamond task by obtaining a diamond: accomplishing the final task is worth four times as much as completing the previous subtask.

1.2 Environments and Dataset

Figure 2: Setting an environment in MineRL.

MineRL (13) defines one primary ObtainDiamond environment, and six other auxiliary environments (Navigate, Treechop, ObtainCookedMeat, ObtainBed, and ObtainIronPickaxe) that encompass a significant portion of human Minecraft play. By training agents in these environment domains, many of the hardest challenges in reinforcement learning, such as sparse rewards, long reward horizons, and efficient hierarchical planning, can be fully exposed. The main task is solving the ObtainDiamond environment, where the agent begins in a random initial location without any items or structures, and is tasked with obtaining one diamond. The agent receives a high reward for obtaining a diamond as well as smaller rewards for obtaining prerequisite items. Episodes end once the agent dies, successfully obtains a diamond, or reaches the maximum step limit of 18,000 frames (about 800 seconds). The ObtainDiamond environment is a difficult environment. Diamonds only exist in a small portion of the world and are 2–10 times rarer than other ores in Minecraft. Additionally, obtaining a diamond requires many prerequisite items. Hence it is almost impossible for an agent to obtain a diamond via any random exploration policy.

Milestone         Reward   Milestone        Reward
log               1        furnace          32
planks            2        stone_pickaxe    32
stick             4        iron_ore         64
crafting_table    4        iron_ingot       128
wooden_pickaxe    8        iron_pickaxe     256
stone             16       diamond          1024
Table 1: Rewards for achieving sub-goals and the main goal (diamond) for ObtainDiamond.

Baseline Method   Reward   Type
SQIL (22)         2.94     Imitation Learning
Rainbow (15)      0.35     Reinforcement Learning
DQFD (29)         0.24     Imitation Learning with Reinforcement Learning
PDDDQN (26)       0.11     Reinforcement Learning
Table 2: Some baselines for the ObtainDiamond environment.

The MineRL (13) dataset consists of over 60 million state-action-reward tuples of recorded human expert demonstrations over all the seven environments. Each trajectory is contiguously sampled every Minecraft game tick (at 20 game ticks per second). Each state is comprised of an RGB video frame of the player's point-of-view and a comprehensive set of features from the game-state at that tick: player inventory, item collection events, distances to objectives, player attributes (health, level, achievements), and details about the current GUI the player has open. The action recorded at each tick consists of: all the keyboard presses, the change in view pitch and mouse movements, player GUI interactions, and agglomerative actions such as item crafting. Human trajectories are accompanied by automatically generated annotations. All environments include metrics that indicate the quality of the demonstration, such as timestamped rewards, number of no-ops, number of deaths, and total score. Additionally, trajectory meta-data includes timestamped markers for hierarchical labelings; e.g. when a house-like structure is built or certain objectives such as chopping down a tree are met.

1.3 Evaluation Metrics and Some Baselines

After training, the resulting agent is evaluated on the average score over 500 episodes. Scores are computed as the sum of the milestone rewards achieved by the agent in a given episode, as outlined in Table 1.
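As a concrete illustration of this scoring rule (the item sequence in the usage example is hypothetical; the milestone values come from Table 1):

```python
# Milestone rewards from Table 1
MILESTONES = {
    "log": 1, "planks": 2, "stick": 4, "crafting_table": 4,
    "wooden_pickaxe": 8, "stone": 16, "furnace": 32, "stone_pickaxe": 32,
    "iron_ore": 64, "iron_ingot": 128, "iron_pickaxe": 256, "diamond": 1024,
}

def episode_score(obtained_items):
    """Sum milestone rewards, counting only the first time each item is obtained."""
    seen = set()
    score = 0
    for item in obtained_items:
        if item in MILESTONES and item not in seen:
            seen.add(item)
            score += MILESTONES[item]
    return score

# e.g. episode_score(["log", "log", "planks", "stick"]) == 1 + 2 + 4 == 7
```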
A milestone is reached when an agent obtains the first instance of the specified item. Ties are broken by the number of episodes required to achieve the last milestone.

Some baselines for ObtainDiamond, from the MineRL 2020 challenge (12), are listed in Table 2. For all these methods, K-means is applied to cluster the action space using the human demonstration data before training the agent. The implementations are included in the PFRL agents library. According to these baselines and our experience, imitation learning is more effective in the ObtainDiamond environment because human experts can provide efficient samples for the agent.

2 Related Work of Imitation Learning

In imitation learning (IL), instead of trying to learn from the sparse rewards or manually specifying a reward function, an expert (typically a human) provides us with a set of demonstrations. The agent then tries to learn the optimal policy that produces similar behaviors by following and imitating the expert's decisions. The IL setting is composed of the environment, which is essentially a Markov Decision Process (MDP). Generally, the environment has a set of states, denoted by S, a set of actions A, a transition model P(s′|s, a) (the probability that taking action a in state s leads to state s′), and an unknown reward function R(s, a). The demonstrations are composed of state and action sequences, i.e., demonstration trajectories of the form:

τ_e = {(s_t, a_t)}    (1)

where the actions are based on the expert's optimal policy. Generally, imitation learning can be roughly split into two main categories: behavioral cloning (BC) and inverse reinforcement learning (IRL).

2.1 Behavioral Cloning

The simplest imitation learning algorithm is behavioural cloning (BC) (28; 24; 6), which focuses on learning the expert's policy using supervised learning given demonstration trajectories τ_e. The first application of behaviour cloning was ALVINN (21), where a vehicle equipped with sensors learned to map the sensor inputs into steering angles and drive autonomously. More specifically, the demonstrations are divided into state-action pairs, treated as i.i.d. examples, and then learned by supervised learning with suitable loss functions. Behavioral cloning has been widely used in the context of autonomous driving (5) and control of aerial vehicles (10). Behavioural cloning can work excellently in cases where it requires only demonstration data to learn an imitation policy directly, with no need for further interaction between the environment and the agent. However, some applications break the i.i.d. assumption, and errors made in different states add up. A mistake made by the agent can easily put it into a state that the expert has never visited and the agent has never trained on. In such states, the behaviour is undefined, and this can lead to catastrophic failures (23).

2.2 Inverse Reinforcement Learning

Another major category of imitation learning methods is based on inverse reinforcement learning (IRL). The main idea of IRL is to learn the reward function of the environment based on expert demonstrations, and then use reinforcement learning to find the optimal policy (25; 19). This can often yield better imitation results than direct behavioral cloning.

Instead of directly learning a mapping from states to actions using the demonstration data, IRL-based methods iteratively alternate between using the demonstrations to infer a hidden reward function and using RL with the inferred reward function to learn an imitating policy.
Specifically, starting with a group of expert demonstrations (assumed to be optimal), one tries to estimate the parameterized reward function that would cause the expert's behavior. The following process is repeated until a good enough policy is found: first, the reward function parameters are updated, and then the reinforcement learning problem is solved (given the reward function, find the optimal policy). Finally, the newly learned policy is compared with the expert's policy (18).

IRL-based techniques have been used for a variety of tasks such as maneuvering a helicopter (1) and object manipulation (8). Using RL to optimize the policy given the inferred reward function requires the agent to interact with its environment, which can be costly from a time and safety perspective. Moreover, the IRL step typically requires the agent to solve an MDP in the inner loop of iterative reward optimization (1; 30), which can be extremely costly from a computational perspective. However, recently, a number of methods have been developed which do not make this requirement (8; 16; 9). One of these approaches is generative adversarial imitation learning (GAIL) (16), which uses an architecture similar to generative adversarial networks (GANs) (11), and the associated algorithm can be thought of as trying to induce an imitator state-action occupancy measure that is similar to that of the demonstrator.

Depending on the actual problem, the general IRL algorithm comes in two main variants: the model-based and the model-free methods.

In the model-based case, the reward function is linear. In each iteration, the full RL problem needs to be solved; to do this efficiently, the environment's (the MDP's) state space is assumed to be small. The state transition dynamics of the environment are also assumed to be known. This is needed so that the learned policy can be compared with the expert's one effectively.

The model-free method covers a more general case. Here, the state space of the MDP is assumed to be large or continuous, so in each iteration only a single step of the RL problem is solved. In this case, the state transition dynamics of the environment are unknown, but we assume that a simulator or the environment itself can be accessed. Therefore, comparing our policy to the expert's one is trickier.

In both cases, however, learning the reward function is ambiguous. The reason for this is that many reward functions can correspond to the same optimal policy (the expert's policy). To this end, the maximum entropy principle proposed by Ziebart (32) can be utilized, selecting the trajectory distribution with the largest entropy.

3 Method

We found that all demonstration samples in ObtainDiamond fail at digging a diamond, and the main reward comes from the Treechop and ObtainIronPickaxe tasks. The provided demonstration samples are therefore mostly about these tasks.

In an environment that can be described as a time series, the agent gets an observation s and then takes an action a according to a specified policy π(s). This can be annotated with the time step index t: the agent receives an observation s_t, then according to the policy π(s_t), the agent chooses action a_t. After the action, the agent gets a reward r_t as well as the next observation s_{t+1}.
In total, the target for the agent is to find a policy π(s) that obtains the most reward R = Σ_{t=0}^{T} r_t over an episode.

Since we already have demonstration samples that contain human trajectories, imitating these behaviors is a good way to obtain the policy π(s). Accordingly, we can train a mapping between received observations from demonstration samples and expert actions. This well-trained mapping will give a certain action according to the observation. The pipeline is therefore a classification task: different actions are different classes, while the observation is the input feature that needs to be classified.

Now we input an observation s into a deep network model f(·) producing a vector f(s) ∈ R^n. For all N kinds of actions, each of them has a prototype vector w_a ∈ R^n. In (2), the outputs of the last fully connected layer of the model are fed into a softmax function. Here we propose an enhanced mixed Gaussian softmax distribution:

π(s, a^(i)) = p(a^(i) | s) = e^{g(s, a^(i))} / Σ_{k=1}^{N} e^{g(s, a^(k))},    (2)

where g(s, a) = e^{−λ ||f(s) − w_a||²} is the RBF kernel related to the Gaussian distance, and λ is the scaling parameter. This policy is trained with a modified cross-entropy loss function:

L(s) = − Σ_{k=1}^{N} q(a^(k)) log π(s, a^(k)).    (3)

Here q(a^(k)) is an ε-smoothed target: if the expert in the demonstration samples chooses a^(i) as the action, then q(a^(i)) = ε, and for k ≠ i, q(a^(k)) = (1 − ε)/(N − 1). Thus there are two parameters, λ and ε, in our objective function.
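To make the objective concrete, here is a minimal sketch of the EGCE loss of Eqs. 2–3 (our reconstruction of the garbled equations; the class and variable names, tensor shapes, and the embedding network producing f(s) are assumptions, with λ = 25 and ε = 0.9 as used in Table 3):

```python
import torch
import torch.nn as nn

class EGCELoss(nn.Module):
    """Enhanced Gaussian cross-entropy (Eqs. 2-3): a softmax over RBF-kernel
    similarities between the state embedding f(s) and per-action prototypes w_a,
    trained against an epsilon-smoothed target distribution."""
    def __init__(self, n_actions, embed_dim, lam=25.0, eps=0.9):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_actions, embed_dim))  # w_a
        self.lam, self.eps, self.n = lam, eps, n_actions

    def forward(self, f_s, expert_actions):
        # g(s, a) = exp(-lam * ||f(s) - w_a||^2) for every action prototype
        d2 = torch.cdist(f_s, self.prototypes).pow(2)   # (batch, n_actions)
        g = torch.exp(-self.lam * d2)
        log_pi = torch.log_softmax(g, dim=1)            # log pi(s, a) of Eq. 2
        # eps-smoothed target: eps on the expert action, (1-eps)/(N-1) elsewhere
        q = torch.full_like(log_pi, (1.0 - self.eps) / (self.n - 1))
        q.scatter_(1, expert_actions.unsqueeze(1), self.eps)
        return -(q * log_pi).sum(dim=1).mean()          # Eq. 3, batch-averaged

# Usage sketch: loss = EGCELoss(n_actions=112, embed_dim=256)(f(s_batch), a_batch)
```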
4 Experiment

More training details and results are described in this section. We first introduce the action and state space, and the architecture of the network with its training settings. Then we compare the proposed EGCE against a strong baseline.

4.1 Action and State Space

The environment observations are the main components of the states, which are 64×64 RGB images. In the ObtainIronPickaxe task, an additional vector state is provided, which contains information about the following: collected resources and hand tools, as well as the item currently held. The held item is encoded as a one-hot vector, and the items are encoded as multi-hot vectors (the quantity in the sub-vector is equal to the number of corresponding products in the inventory).

Our action space has three parts. The first one contains eight binary actions related to movement in the environment. The eight actions are forward, backward, left, right, jump, sprint, attack, sneak. Multiple movement actions can be used at the same time step, generating 256 combinations. The second part is the pitch control and continuous yaw of the agent's camera position. The last part is the set of actions related to the crafting, equipping and placement of items. Some items need to be placed on the ground before they can be used, such as crafting tables.

Therefore, the simple combination of the actions creates a huge action space. Here we follow (5), who first proposed quantizing the continuous control, and use 22.5 degrees for each direction. After quantizing the motion of the camera, there are 1280 possible motion combinations. We only allow up to 3 simultaneous actions, and remove any superfluous actions, such as rotating left and right at the same time. This leaves 112 different movements in total. A more complete description of motion and state encoding is available in the supplementary materials.

4.2 Architecture and Training Setup

The proposed policy neural network consists of three parts. The first part is a convolutional perception module for the image; the second part is a fully connected layer whose input is concatenated with the vector part of the state after the last convolutional layer. The last layer has a softmax or linear output for the cross-entropy or margin loss. We investigated the relationship between network size and performance by testing three different networks for the perception part: the DQN architecture with 3 convolutional layers (17), the Impala architecture with 6 residual blocks (7), and a Deep Impala architecture with 8 residual blocks and Fixup initialization (20; 31). In the last architecture tested, the channel sizes of all convolutional layers and fully connected layers are doubled.

We used the Adam optimizer to train the network, with a learning rate of 6.25×10⁻⁵ and a weight decay of 10⁻⁵, for a maximum of 3×10⁶ steps. From the demonstration dataset, we used the human trajectories that successfully achieve the environment goals within the time limits of the respective tasks (i.e., ObtainDiamond and ObtainIronPickaxe). We also deleted all states where the human had not taken any action. For the complete network architecture, please refer to the supplementary materials.

In addition, we also tested various augmentations, such as contrast, rectangle removal, horizontal flipping (where the left and right actions are also flipped), sharpness and brightness adjustments. In addition to assessing policy performance, we tested two additional performance indicators: training losses and testing losses on unseen human trajectories, to assess their relevance to actual policy performance.

4.3 Additional Data Incorporation

We use the human trajectories from the ObtainDiamond and ObtainIronPickaxe tasks in the default training settings. In order to create more realistic observations for the additional data, we first sample random states from ObtainDiamond trajectories where the rewards have not yet been reached. Vector observations from the sampled states are then used to complete the Treechop observations. The process ends once all of the Treechop states have been completed.

4.4 Results

In this section, we compare imitation learning performance with different losses. Following (2), we record 8 snapshots of the Deep Impala network (7). Image flipping is the only applied data augmentation. The reward is the average of 40 episodes for each snapshot, and the best-performing snapshot is recorded as the run's result. According to the results in Table 3, our proposed enhanced Gaussian cross-entropy (EGCE) obtains the highest reward among the three losses applied in imitation learning: the average reward of EGCE without TreeChop data reaches 50.30, which surpasses the original cross entropy in (2) by about 7.7%. This implies that EGCE can actually improve the performance of the imitation learning pipeline.

Methods         Reward   Settings
Cross Entropy   46.72    -
Cross Entropy   65.36    with TreeChop data
Margin Loss     42.48    -
Margin Loss     34.21    with TreeChop data
EGCE (Ours)     50.30    λ = 25, ε = 0.9
EGCE (Ours)     66.56    λ = 25, ε = 0.9, with TreeChop data
Table 3: Results of the existing solutions and the proposed method.

5 Conclusion

The MineRL ObtainDiamond task is a very challenging domain. Applying imitation learning with demonstration samples can achieve ideal performance. Since our proposed enhanced Gaussian cross-entropy significantly improves the reward, the weak correlation between loss values and policy performance can be partly relieved by a well-designed loss. The proposed EGCE stabilizes the loss and establishes a strong connection with policy performance.

References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. page 1, 2004.
[2] A. Amiranashvili, N. Dorka, W. Burgard, V. Koltun, and T. Brox. Scaling imitation learning in Minecraft, 2020.
[3] K. Arulkumaran, A. Cully, and J. Togelius. AlphaStar: An evolutionary computation perspective. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 314–315, 2019.
[4] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Dębiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
[6] S. Daftry, J. A. Bagnell, and M. Hebert. Learning transferable policies for monocular reactive MAV control. In International Symposium on Experimental Robotics, pages 3–11. Springer, 2016.
[7] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
[8] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. arXiv: Learning, 2016.
[9] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv: Learning, 2017.
[10] A. Giusti, J. Guzzi, D. C. Cireşan, F.-L. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 1(2):661–667, 2015.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. pages 2672–2680, 2014.
[12] W. H. Guss, M. Y. Castro*, S. Devlin*, B. Houghton*, N. S. Kuno*, C. Loomis*, S. Milani*, S. Mohanty*, K. Nakata*, R. Salakhutdinov*, J. Schulman*, S. Shiroshita*, N. Topin*, A. Ummadisingu*, and O. Vinyals*. NeurIPS 2020 competition: The MineRL competition on sample efficient reinforcement learning using human priors. NeurIPS Competition Track, 2020.
[13] W. H. Guss, C. Codel, K. Hofmann, B. Houghton, N. Kuno, S. Milani, S. Mohanty, D. P. Liebana, R. Salakhutdinov, N. Topin, et al. The MineRL competition on sample efficient reinforcement learning using human priors. arXiv preprint arXiv:1904.10079, 2019.
[14] W. H. Guss, C. Codel, K. Hofmann, B. Houghton, N. Kuno, S. Milani, S. Mohanty, D. P. Liebana, R. Salakhutdinov, N. Topin, et al. NeurIPS 2019 competition: The MineRL competition on sample efficient reinforcement learning using human priors. arXiv preprint arXiv:1904.10079, 2019.
[15] M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298, 2017.
[16] J. Ho and S. Ermon. Generative adversarial imitation learning. pages 4565–4573, 2016.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[18] G. Neu and C. Szepesvari. Apprenticeship learning using inverse reinforcement learning and gradient methods. pages 295–302, 2007.
[19] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. 67(2):663–670, 2000.
[20] A. Nichol.
Competing in the obstacle tower challenge, 2019.
[21] D. A. Pomerleau. ALVINN: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pages 305–313, 1989.
[22] S. Reddy, A. D. Dragan, and S. Levine. SQIL: Imitation learning via reinforcement learning with sparse rewards. arXiv preprint arXiv:1905.11108, 2019.
[23] S. Ross and D. Bagnell. Efficient reductions for imitation learning. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 661–668, 2010.
[24] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635, 2011.
[25] S. Russell. Learning agents for uncertain environments (extended abstract). pages 101–103, 1998.
[26] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
[27] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354–359, 2017.
[28] F. Torabi, G. Warnell, and P. Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018.
[29] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. arXiv preprint arXiv:1509.06461, 2015.
[30] M. Wulfmeier, P. Ondruska, and I. Posner. Maximum entropy deep inverse reinforcement learning. arXiv: Learning, 2015.
[31] H. Zhang, Y. N. Dauphin, and T. Ma. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321, 2019.
[32] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. pages 1433–1438, 2008.
RzdKbEWWN0s
Summary: This paper focuses on improving performance on an imitation learning problem. The main contribution of this paper is to use an enhanced Gaussian cross-entropy loss for imitation learning, based on the MineRL environment.
7: Good paper, accept
Originality: This paper modifies the cross-entropy loss function and turns the imitation learning problem into a classification problem. Quality: The paper's technical quality is good. Rewards are clearly defined and the environment settings are mentioned. Clarity: This paper is clearly written and the sections are organized in a logical way. Figures and tables in the paper are clear and informative. Suggestion: I suggest the authors provide more comparison experiments on how to choose the two parameters of the loss function.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Using Enhanced Gaussian Cross-Entropy in Imitation Learning to Digging the First Diamond in Minecraft ### Paper Abstract Although state-of-the-art reinforcement learning (RL) systems have led to breakthroughs in many difficult tasks, the sample inefficiency of standard reinforcement learning methods still precludes their application to extremely complex tasks. Such a limitation prevents many reinforcement learning systems from being applied to real-world problems, in which environment samples are expensive. To address this problem, MineRL (13) provides an ideal development environment to facilitate research on leveraging fewer human demonstrations with more efficient reinforcement learning systems. Based on the MineRL environment, we propose an enhanced Gaussian cross-entropy (EGCE) loss for imitation learning problems to achieve strong performance. In the ObtainDiamond task, our EGCE achieves about a 7.7% improvement over a strong baseline imitation learning pipeline. The demo video is available here. ### Paper Keywords ["reinforcement learning ObtainDiamond"] ### Paper Content Using Enhanced Gaussian Cross-Entropy in Imitation Learning to Digging the First Diamond in Minecraft
Yingjie Cai & Xiao Zhang
Department of Electronic Engineering
The Chinese University of Hong Kong
Shatin, Hong Kong
{caiyingjie,xzhang9411}@link.cuhk.edu.hk

Abstract

Although state-of-the-art reinforcement learning (RL) systems have led to breakthroughs in many difficult tasks, the sample inefficiency of standard reinforcement learning methods still precludes their application to extremely complex tasks. Such a limitation prevents many reinforcement learning systems from being applied to real-world problems, in which environment samples are expensive. To address this problem, MineRL (13) provides an ideal development environment to facilitate research on leveraging fewer human demonstrations with more efficient reinforcement learning systems. Based on the MineRL environment, we propose an enhanced Gaussian cross-entropy (EGCE) loss for imitation learning problems to achieve strong performance. In the ObtainDiamond task, our EGCE achieves about a 7.7% improvement over a strong baseline imitation learning pipeline. The demo video is available here.

1 Introduction

As deep reinforcement learning (DRL) methods are applied to increasingly difficult and complex problems, the number of samples used for training increases: AlphaGo uses about 5 million games of self-play (27), while AlphaStar uses over 200 years of StarCraft II (3). Moreover, OpenAI Five spends more than 11,000 years of Dota 2 gameplay (4).

1.1 Task Formulation

Although there exist many data augmentation methods and specially designed real-world environments for limited numbers of trials, these methods remain insufficiently sample efficient for a large part of complex real-world domains. Therefore, an effective environment is necessary for such situations. MineRL (13) provides a large-scale dataset that contains over 60 million state-action pairs of human demonstrations, and several related tasks are designed within the Minecraft game environment.

The main task in MineRL (13) is solving the ObtainDiamond environment (12; 14) in Minecraft. Minecraft is a 3D, first-person, open-world game centered around resource gathering and item/structure creation. These structures and items have prerequisite tools and materials required for their creation.
As a result, many items require the completion of a series of natural subtasks. Solving the ObtainDiamond environment consists of controlling an embodied agent to obtain a diamond by navigating the complex item hierarchy of Minecraft. In solving this task, a learning algorithm has direct access to a 64×64 pixel observation from the perspective of the embodied Minecraft agent, and a set of discrete observations consisting of each item required for obtaining a diamond that the agent has in its possession. The action space is the Cartesian product of continuous view adjustment (turning and pitching), binary movement commands (left/right, forward/backward), and discrete actions for placing blocks, crafting items, smelting items, and mining/hitting enemies. An agent receives a reward for completing the full task of obtaining a diamond. The full task can be decomposed into a sequence of prerequisite subtasks of increasing difficulty. An agent also receives a reward the first time it accomplishes each subtask in the sequence, receiving twice the reward received for accomplishing the previous subtask (starting from a reward of 1). The exception to this rule is achieving the full ObtainDiamond task by obtaining a diamond: accomplishing the final task is worth four times as much as completing the previous subtask.

[Figure 1: These stages exhibit the different periods that an agent/player goes through to get the first diamond in the Minecraft ObtainDiamond environment.]

1.2 Environments and Dataset

[Figure 2: Setting an environment in MineRL.]

MineRL (13) defines one primary ObtainDiamond environment and six other auxiliary environments (Navigate, Treechop, ObtainCookedMeat, ObtainBed, and ObtainIronPickaxe) that encompass a significant portion of human Minecraft play. By training agents in these environment domains, many of the hardest challenges in reinforcement learning, such as sparse rewards, long reward horizons, and efficient hierarchical planning, are fully exposed. The main task is solving the ObtainDiamond environment, where the agent begins in a random initial location without any items or structures and is tasked with obtaining one diamond. The agent receives a high reward for obtaining a diamond as well as smaller rewards for obtaining prerequisite items. Episodes end once the agent dies, successfully obtains a diamond, or reaches the maximum step limit of 18,000 frames (about 800 seconds). The ObtainDiamond environment is difficult: diamonds exist only in a small portion of the world and are 2–10 times rarer than other ores in Minecraft. Additionally, obtaining a diamond requires many prerequisite items. Hence it is almost impossible for an agent to obtain a diamond via any random exploration policy.

Table 1: Rewards for achieving sub-goals and the main goal (diamond) for ObtainDiamond.
Milestone | Reward | Milestone | Reward
log | 1 | furnace | 32
planks | 2 | stone_pickaxe | 32
stick | 4 | iron_ore | 64
crafting_table | 4 | iron_ingot | 128
wooden_pickaxe | 8 | iron_pickaxe | 256
stone | 16 | diamond | 1024

Table 2: Some baselines for the ObtainDiamond environment.
Baseline Method | Reward | Type
SQIL (22) | 2.94 | Imitation Learning
Rainbow (15) | 0.35 | Reinforcement Learning
DQFD (29) | 0.24 | Imitation Learning with Reinforcement Learning
PDDDQN (26) | 0.11 | Reinforcement Learning

The MineRL (13) dataset consists of over 60 million state-action-reward tuples of recorded human expert demonstrations over all seven environments. Each trajectory is contiguously sampled every Minecraft game tick (at 20 game ticks per second).
Each state comprises an RGB video frame of the player's point of view and a comprehensive set of features from the game state at that tick: player inventory, item collection events, distances to objectives, player attributes (health, level, achievements), and details about the current GUI the player has open. The action recorded at each tick consists of all the keyboard presses, the change in view pitch and mouse movements, player GUI interactions, and agglomerative actions such as item crafting. Human trajectories are accompanied by automatically generated annotations. All environments include metrics that indicate the quality of the demonstration, such as timestamped rewards, number of no-ops, number of deaths, and total score. Additionally, trajectory meta-data includes timestamped markers for hierarchical labelings, e.g. when a house-like structure is built or certain objectives such as chopping down a tree are met.

1.3 Evaluation Metrics and Some Baselines

After training, the resulting agent is evaluated on the average score over 500 episodes. Scores are computed as the sum of the milestone rewards achieved by the agent in a given episode, as outlined in Table 1. A milestone is reached when an agent obtains the first instance of the specified item. Ties are broken by the number of episodes required to achieve the last milestone.

Here are some baselines for ObtainDiamond; results from the MineRL 2020 challenge (12) are listed in Table 2. For all these methods, K-means is applied to cluster the action space using the human demonstration data before training the agent. The implementations are included in the PFRL agents library. According to these baselines and experience, imitation learning is more effective in the ObtainDiamond environment because human experts can provide efficient samples for the agent.

2 Related Work on Imitation Learning

In imitation learning (IL), instead of trying to learn from sparse rewards or manually specifying a reward function, an expert (typically a human) provides us with a set of demonstrations. The agent tries to learn the optimal policy that produces similar behaviors by following and imitating the expert's decisions. IL is set in an environment that is essentially a Markov Decision Process (MDP). Generally, the environment has a set of states, denoted by S, a set of actions A, a transition model P(s′|s, a), which gives the probability that an action a in state s leads to state s′, and an unknown reward function R(s, a). The demonstrations are composed of state and action sequences, i.e. demonstration trajectories of the form

τ_e = {(s_t, a_t)},   (1)

where the actions are based on the expert's optimal policy. Generally, imitation learning can be roughly split into two main categories: behavioral cloning (BC) and inverse reinforcement learning (IRL).

2.1 Behavioral Cloning

The simplest imitation learning algorithm is behavioral cloning (BC) (28; 24; 6), which focuses on learning the expert's policy using supervised learning given the demonstration trajectories τ_e. The first application of behavioral cloning is ALVINN (21), where a vehicle equipped with sensors learned to map the sensor inputs to steering angles and drive autonomously. More specifically, the demonstrations are divided into state-action pairs, treated as i.i.d. examples, and then learned by supervised learning with suitable loss functions. Behavioral cloning has been widely used in the context of autonomous driving (5) and control of aerial vehicles (10).
Behavioral cloning can work excellently in some cases, since it requires only demonstration data to learn an imitation policy directly and needs no further interaction between the environment and the agent. However, some applications break the i.i.d. assumption, and errors made in different states add up. A mistake made by the agent can easily put it into a state that the expert has never visited and the agent has never trained on. In such states, the behavior is undefined, and this can lead to catastrophic failures (23).

2.2 Inverse Reinforcement Learning

Another major category of imitation learning methods is based on inverse reinforcement learning (IRL). The main idea of IRL is to learn the reward function of the environment based on expert demonstrations, and then use reinforcement learning to find the optimal policy (25; 19). This can often yield better imitation results than direct behavioral cloning.

Instead of directly learning a mapping from states to actions using the demonstration data, IRL-based methods iteratively alternate between using the demonstrations to infer a hidden reward function and using RL with the inferred reward function to learn an imitating policy. Specifically, one starts with a group of expert demonstrations (assumed to be optimal) and then tries to estimate the parameterized reward function that would cause the expert's behavior. The following process is repeated until a good enough policy is found: first, update the reward function parameters; then solve the reinforcement learning problem (given the reward function, find the optimal policy); finally, compare the newly learned policy with the expert's policy (18).

IRL-based techniques have been used for a variety of tasks such as maneuvering a helicopter (1) and object manipulation (8). Using RL to optimize the policy given the inferred reward function requires the agent to interact with its environment, which can be costly from a time and safety perspective. Moreover, the IRL step typically requires the agent to solve an MDP in the inner loop of iterative reward optimization (1; 30), which can be extremely costly from a computational perspective. Recently, however, a number of methods have been developed that do not have this requirement (8; 16; 9). One of these approaches is generative adversarial imitation learning (GAIL) (16), which uses an architecture similar to generative adversarial networks (GANs) (11); the associated algorithm can be thought of as trying to induce an imitator state-action occupancy measure that is similar to that of the demonstrator.

Depending on the actual problem, there are two main kinds of IRL methods: model-based and model-free. In the model-based case, the reward function is linear. In each iteration, the full RL problem needs to be solved, so to do this efficiently, the environment's (the MDP's) state space is assumed to be small. The state transition dynamics of the environment are also assumed to be known; this is needed so that the learned policy can be compared with the expert's one effectively. The model-free method is the more general case. Here the state space of the MDP is large or continuous, so in each iteration only a single step of the RL problem is solved. In this case, the state transition dynamics of the environment are unknown, but a simulator or the environment itself is assumed to be accessible.
Therefore, comparing our policy to the expert's one is trickier. In both cases, however, learning the reward function is ambiguous, because many reward functions can correspond to the same optimal policy (the expert's policy). To this end, the maximum entropy principle proposed by Ziebart (32) can be utilized by selecting the trajectory distribution with the largest entropy.

3 Method

We have found that all demonstration samples in ObtainDiamond fail to dig up a diamond, and that the main reward comes from the Treechop and ObtainIronPickaxe subtasks. Therefore, the provided demonstration samples are mostly about these tasks.

In an environment that can be described as a time series, the agent gets an observation s and then takes an action a according to a specified policy π(s). This can be annotated with the time step index t: the agent receives an observation s_t, then, according to the policy π(s_t), the agent chooses action a_t. After the action, the agent gets a reward r_t as well as the next observation s_{t+1}. In total, the target for the agent is to find a policy π(s) that obtains the most reward R = Σ_{t=0}^{T} r_t over an episode.

Since we already have demonstration samples that contain human trajectories, imitating these behaviors is a good way to obtain the policy π(s). Accordingly, we can train a mapping between the observations in the demonstration samples and the expert actions; this well-trained mapping gives a certain action for a given observation. The pipeline is therefore a classification task: different actions correspond to different classes, while the observation is the input feature to be classified.

We input an observation s into a deep network model f(·) to obtain a vector f(s) ∈ R^n. Each of the N kinds of actions has a prototype vector w_a ∈ R^n. In (2), the outputs of the model's last fully connected layer are fed into a softmax function. Here we propose an enhanced mixed Gaussian softmax distribution:

π(s, a^(i)) = p(a^(i) | s) = exp(g(s, a^(i))) / Σ_{k=1}^{N} exp(g(s, a^(k))),   (2)

where g(s, a) = −γ‖f(s) − w_a‖² is the RBF kernel score related to the Gaussian distance and γ is the scaling parameter. This policy is trained with a modified cross-entropy loss function:

L(s) = −Σ_{k=1}^{N} q(a^(k)) log π(s, a^(k)).   (3)

Here q(a^(k)) is an ε-smoothed target: if the expert in the demonstration samples chooses a^(i) as the action, then q(a^(i)) = ε, and for k ≠ i, q(a^(k)) = (1 − ε)/(N − 1). Thus there are two parameters, γ and ε, in our objective function.

4 Experiment

More training details and results are described in this section. We first introduce the action and state space and the architecture of the network with its training settings. We then compare the proposed EGCE against a strong baseline.

4.1 Action and State Space

The environment observations are the main components of the states and are 64×64 RGB images. In the ObtainIronPickaxe task, an additional vector state is provided, which contains information about collected resources and hand tools, as well as the item currently held. The held item is encoded as a one-hot vector, and the inventory items are encoded as multi-hot vectors (the quantity in each sub-vector equals the number of corresponding items in the inventory).

Our action space has three parts. The first contains eight binary actions related to movement in the environment: forward, backward, left, right, jump, sprint, attack, and sneak. Multiple movement actions can be used at the same time step, generating 256 combinations. The second part is the continuous yaw and pitch control of the agent's camera position.
The last part contains the actions related to the crafting, equipping, and placement of items. Some items, such as crafting tables, need to be placed on the ground before they can be used. The simple combination of these actions therefore creates a huge action space. Here we follow (5), which first proposed quantizing the continuous control, and use 22.5 degrees for each direction. After quantizing the motion of the camera, there are 1280 possible motion combinations. We only allow up to 3 simultaneous actions and remove any superfluous actions, such as rotating left and right at the same time, leaving 112 different movements in total. A more complete description of the motion and state encoding is available in the supplementary materials.

4.2 Architecture and Training Setup

The proposed policy neural network consists of three parts: a convolutional perception module for the image; a fully connected layer whose input is concatenated with the vector part of the state after the last layer; and a final layer with softmax or linear output for the cross-entropy or margin loss. We investigated the relationship between network size and performance by testing three different networks for the perception part: a DQN architecture with 3 convolutional layers (17), an Impala architecture with 6 residual blocks (7), and a Deep Impala architecture with 8 residual blocks and Fixup initialization (20; 31). In the last architecture tested, the channel sizes of all convolutional and fully connected layers are doubled.

We used the Adam optimizer to train the network, with a learning rate of 6.25 × 10⁻⁵, a weight decay of 10⁻⁵, and a maximum of 3 × 10⁶ steps. From the demonstration dataset, we used the human trajectories that successfully achieve the environment goals within the time limits of the respective tasks (i.e., ObtainDiamond and ObtainIronPickaxe). We also delete all states where the human did not take any action. For the complete network architecture, please refer to the supplementary materials.

In addition, we tested various augmentations, such as contrast, rectangle removal, horizontal flipping (where the left and right actions are also flipped), and sharpness and brightness adjustments. Besides assessing policy performance, we tested two additional performance indicators, training loss and test loss on unseen human trajectories, to assess their relevance to actual policy performance.

4.3 Additional Data Incorporation

We use the human trajectories from the ObtainDiamond and ObtainIronPickaxe tasks in the default training settings. In order to create more realistic observations for the additional data, we first sample random states from ObtainDiamond trajectories where the reward has not yet been reached. The vector observations of the sampled states are then used to complete the Treechop observations. The process ends once all of the Treechop states are fully observed.

4.4 Results

In this section, we compare imitation learning performance with different losses. Following (2), we record 8 snapshots of the Deep Impala network (7). Image flipping is the only data augmentation applied. The reward is averaged over 40 episodes for each snapshot, and the best-performing snapshot is then reported as the final result.
According to the results in Table 3, our proposed enhanced Gaussian cross-entropy (EGCE) obtains the highest reward among the three losses applied in imitation learning: the average reward of EGCE without Treechop data reaches 50.30, which surpasses the original cross entropy in (2) by about 7.7%. This implies that EGCE can actually improve the performance of the imitation learning pipeline.

5 Conclusion

The MineRL ObtainDiamond task is a very challenging domain. Applying imitation learning with demonstration samples can achieve good performance. Since our proposed enhanced Gaussian cross-entropy significantly improves the reward, this implies that the weak correlation between loss values and policy performance can be partly relieved by a well-designed loss. The proposed EGCE can stabilize the loss and set up a strong connection with policy performance.

Table 3: Results of the existing solutions and the proposed method.
Method | Reward | Settings
Cross Entropy | 46.72 | -
Cross Entropy | 65.36 | with Treechop data
Margin Loss | 42.48 | -
Margin Loss | 34.21 | with Treechop data
EGCE (Ours) | 50.30 | γ = 25, ε = 0.9
EGCE (Ours) | 66.56 | γ = 25, ε = 0.9, with Treechop data

References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In ICML, page 1, 2004.
[2] A. Amiranashvili, N. Dorka, W. Burgard, V. Koltun, and T. Brox. Scaling imitation learning in Minecraft, 2020.
[3] K. Arulkumaran, A. Cully, and J. Togelius. Alphastar: An evolutionary computation perspective. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 314–315, 2019.
[4] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Dębiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316, 2016.
[6] S. Daftry, J. A. Bagnell, and M. Hebert. Learning transferable policies for monocular reactive mav control. In International Symposium on Experimental Robotics, pages 3–11. Springer, 2016.
[7] L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
[8] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. arXiv preprint, 2016.
[9] J. Fu, K. Luo, and S. Levine. Learning robust rewards with adversarial inverse reinforcement learning. arXiv preprint, 2017.
[10] A. Giusti, J. Guzzi, D. C. Cireşan, F.-L. He, J. P. Rodríguez, F. Fontana, M. Faessler, C. Forster, J. Schmidhuber, G. Di Caro, et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 1(2):661–667, 2015.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[12] W. H. Guss, M. Y. Castro*, S. Devlin*, B. Houghton*, N. S. Kuno*, C. Loomis*, S. Milani*, S. Mohanty*, K. Nakata*, R. Salakhutdinov*, J. Schulman*, S. Shiroshita*, N. Topin*, A. Ummadisingu*, and O. Vinyals*. NeurIPS 2020 competition: The MineRL competition on sample efficient reinforcement learning using human priors. NeurIPS Competition Track, 2020.
[13] W. H. Guss, C. Codel, K. Hofmann, B. Houghton, N. Kuno, S. Milani, S.
Mohanty, D. P. Liebana, R. Salakhutdinov, N. Topin, et al. The MineRL competition on sample efficient reinforcement learning using human priors. arXiv preprint arXiv:1904.10079, 2019.
[14] W. H. Guss, C. Codel, K. Hofmann, B. Houghton, N. Kuno, S. Milani, S. Mohanty, D. P. Liebana, R. Salakhutdinov, N. Topin, et al. NeurIPS 2019 competition: The MineRL competition on sample efficient reinforcement learning using human priors. arXiv preprint arXiv:1904.10079, 2019.
[15] M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning. arXiv preprint arXiv:1710.02298, 2017.
[16] J. Ho and S. Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573, 2016.
[17] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[18] G. Neu and C. Szepesvari. Apprenticeship learning using inverse reinforcement learning and gradient methods. pages 295–302, 2007.
[19] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In ICML, pages 663–670, 2000.
[20] A. Nichol. Competing in the Obstacle Tower Challenge, 2019.
[21] D. A. Pomerleau. Alvinn: An autonomous land vehicle in a neural network. In Advances in Neural Information Processing Systems, pages 305–313, 1989.
[22] S. Reddy, A. D. Dragan, and S. Levine. SQIL: Imitation learning via reinforcement learning with sparse rewards. arXiv preprint arXiv:1905.11108, 2019.
[23] S. Ross and D. Bagnell. Efficient reductions for imitation learning. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 661–668, 2010.
[24] S. Ross, G. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 627–635, 2011.
[25] S. Russell. Learning agents for uncertain environments (extended abstract). pages 101–103, 1998.
[26] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
[27] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
[28] F. Torabi, G. Warnell, and P. Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018.
[29] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning. arXiv preprint arXiv:1509.06461, 2015.
[30] M. Wulfmeier, P. Ondruska, and I. Posner. Maximum entropy deep inverse reinforcement learning. arXiv preprint, 2015.
[31] H. Zhang, Y. N. Dauphin, and T. Ma. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321, 2019.
[32] B. D. Ziebart, A. L. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.<|im_end|> <|im_start|>assistant ### Review Title Summary: This paper focuses on improving the performance of the imitation learning problem. The main contribution of this paper is an enhanced Gaussian cross-entropy loss for the imitation learning problem in the MineRL environment.
### Review Text Originality: This paper modifies the cross-entropy loss function and turns the imitation learning problem into a classification problem. Quality: The paper's technical quality is good. Rewards are clearly defined and the environment settings are described. Clarity: This paper is clearly written and the sections are organized in a logical way. Figures and tables in the paper are clear and informative. Suggestion: I suggest that the authors provide more comparison experiments on how to choose the two parameters of the loss function. ### Review Rating 7: Good paper, accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
rkeeoeHYvr
ICLR.cc/2020/Conference
2020
AdvCodec: Towards A Unified Framework for Adversarial Text Generation
["Boxin Wang", "Hengzhi Pei", "Han Liu", "Bo Li"]
Machine learning (ML), especially deep neural networks (DNNs), has been widely applied to real-world applications. However, recent studies show that DNNs are vulnerable to carefully crafted adversarial examples which deviate from the original data only by a small magnitude of perturbation. While there has been great interest in generating imperceptible adversarial examples in continuous data domains (e.g. image and audio) to explore the model vulnerabilities, generating adversarial text in the discrete domain is still challenging. The main contribution of this paper is to propose a general targeted attack framework AdvCodec for adversarial text generation which addresses the challenge of the discrete input space and can be easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree based autoencoder to encode discrete text data into a continuous vector space, upon which we optimize the adversarial perturbation. With the tree based decoder, it is possible to ensure the grammar correctness of the generated text; and the tree based encoder enables the flexibility of making manipulations on different levels of text, such as the sentence (AdvCodec(Sent)) and word (AdvCodec(Word)) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary targeted attacks. To demonstrate the effectiveness of the proposed method, we consider the two most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results show that AdvCodec has successfully attacked both tasks. In particular, our attack causes a BERT-based sentiment classifier's accuracy to drop from 0.703 to 0.006, and a BERT-based QA model's F1 score to drop from 88.62 to 33.21 (with the best targeted attack F1 score at 46.54). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models.
["adversarial text generation", "tree-autoencoder", "human evaluation"]
ABSTRACT

While there has been great interest in generating imperceptible adversarial examples in continuous data domains (e.g. image and audio) to explore model vulnerabilities, generating adversarial text in the discrete domain is still challenging. The main contribution of this paper is to propose a general targeted attack framework AdvCodec for adversarial text generation which addresses the challenge of the discrete input space and is easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree based autoencoder to encode discrete text data into a continuous vector space, upon which we optimize the adversarial perturbation. A tree based decoder is then applied to ensure the grammar correctness of the generated text. It also enables the flexibility of making manipulations on different levels of text, such as the sentence (AdvCodec(Sent)) and word (AdvCodec(Word)) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary targeted attacks. To demonstrate the effectiveness of the proposed method, we consider the two most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results and human studies show that AdvCodec generated adversarial text can successfully attack the neural models without misleading the human. In particular, our attack causes a BERT-based sentiment classifier's accuracy to drop from 0.703 to 0.006, and a BERT-based QA model's F1 score to drop from 88.62 to 33.21 (with the best targeted attack F1 score at 46.54). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models.

1 INTRODUCTION

Recent studies have demonstrated that deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples (Goodfellow et al., 2015; Papernot et al., 2016; Eykholt et al., 2017; Moosavi-Dezfooli et al., 2016). While there are a lot of successful attacks proposed in the continuous data domain including images, audios, and videos, how to effectively generate adversarial examples in the discrete text domain still remains a hard problem. There are several challenges for generating adversarial text: 1) most existing gradient-based adversarial attack approaches are not directly applicable to the discrete structured data; 2) it is less clear how to appropriately measure the naturalness of the generated text compared to the original ones; 3) the manipulation space of text is limited, and it is unclear whether generating a new appended sentence or manipulating individual words will affect human judgements.

So far, existing works on adversarial text generation either leverage heuristic solutions such as genetic algorithms (Jin et al., 2019) to search for potential adversarial sentences, or are limited to attacking specific NLP tasks (Cheng et al., 2018; Lei et al., 2018). In addition, effective targeted attacks have not been achieved by current attacks for any task. In this paper, we aim to provide more insights towards solving these challenges by proposing a unified optimization framework AdvCodec to generate adversarial text against general NLP tasks. In particular, the core component of AdvCodec is a tree based autoencoder which converts discrete text tokens into a continuous semantic embedding, upon which the adversarial perturbation will be optimized regarding the chosen adversarial target.
Finally, a tree based decoder will decode the generated adversarial continuous embedding vector back to the sentence level based on the tree grammar rules, aiming to preserve both the original semantic meaning and linguistic coherence. An iterative process can be applied here to ensure the attack success rate.

In addition to the general adversarial text generation framework AdvCodec, this paper also aims to explore several scientific questions: 1) Since AdvCodec allows the flexibility of manipulating different hierarchies of the tree structures, which level is more effective for attacks and which preserves better grammatical correctness? 2) Is it possible to achieve targeted attacks for general NLP tasks such as sentiment classification and QA, given the limited degree of freedom for manipulation? 3) Is it possible to perform blackbox attacks on general NLP tasks? 4) Is BERT robust in practice? 5) Do these adversarial examples affect human reader performance?

To address the above questions, we explore two types of tree based autoencoders at the word (AdvCodec(Word)) and sentence (AdvCodec(Sent)) levels. For each encoding scenario, we generate adversarial text against different sentiment classification and QA models. Compared with the state-of-the-art adversarial text generation methods, our approach achieves significantly higher untargeted and targeted attack success rates. In addition, we consider both whitebox and blackbox settings for each attack to evaluate the model vulnerabilities. Within each attack setting, we evaluate the attack strategies of appending an additional adversarial sentence or adding scattered adversarial words to a paragraph, to assess the quantitative attack effectiveness. To provide a thorough adversarial text quality assessment, we also perform 7 groups of human studies to evaluate the quality of the generated adversarial text compared with the baseline methods, and whether humans can still get the ground truth answers for these tasks based on the adversarial text. We find that: 1) both word and sentence level attacks can achieve high attack success rates, while the sentence level manipulation can take the global grammatical constraints into account and generate high quality adversarial sentences; 2) various targeted attacks on general NLP tasks are possible (e.g. when attacking QA, we can ensure the target to be a specific answer or a specific location within a sentence); 3) the transferability based blackbox attacks are successful in NLP tasks, and transferring adversarial text from stronger models (in terms of performance) to weaker ones is more successful; 4) although BERT has achieved state-of-the-art performance, we observe that its performance drops are also larger than those of other standard models when confronted with adversarial examples, which indicates BERT is not robust under the adversarial setting; 5) most human readers are not sensitive to our adversarial examples and can still give the right answers when confronted with the adversary-injected paragraphs.

In summary, our main contribution lies in: (1) we propose a general adversarial text generation framework AdvCodec that addresses the challenge of discrete text input to achieve targeted attacks against general NLP tasks (e.g.
sentiment classification and QA) while preserving the semantic meaning and linguistic coherence; (2) we propose a novel tree-based text autoencoder that ensures the grammar correctness of the generated text; (3) we conduct extensive experiments and successfully attack different sentiment classifiers and QA models with significantly higher attack success rates than the state-of-the-art baseline methods; (4) we also perform comprehensive ablation studies, including evaluating the attack scenarios of appending an adversarial sentence or adding scattered adversarial words, as well as appending the adversarial sentence at different positions within a paragraph, and draw several interesting conclusions; (5) we leverage extensive human studies to show that the adversarial text generated by AdvCodec is natural and effective at attacking neural models, while barely affecting human judgement.

2 RELATED WORK

A large body of work on adversarial examples focuses on perturbing the continuous input space. Though some progress has been made on generating adversarial perturbations in the discrete space, several challenges still remain unsolved. For example, Zhao et al. (2017) exploit the generative adversarial network (GAN) to generate natural adversarial text. However, this approach cannot explicitly control the quality of the generated instances. Most existing methods (Liang et al., 2017; Samanta & Mehta, 2017; Jia & Liang, 2017; Li et al., 2018; Jin et al., 2019) apply heuristic strategies to synthesize adversarial text: 1) first identify the features (e.g. characters, words, and sentences) that have an influence on the prediction, 2) follow different search strategies to perturb these features with the constructed perturbation candidates (e.g. typos, synonyms, antonyms, frequent words). For instance, Liang et al. (2017) employ the loss gradient ∇L to select important characters and phrases to perturb, while Samanta & Mehta (2017) use typos, synonyms, and important adverbs/adjectives as candidates for insertion and replacement. Once the influential features are obtained, the strategies to apply the perturbation generally include insertion, deletion, and replacement. Such adversarial text generation approaches cannot guarantee the grammar correctness of the generated text. For instance, text generated by Liang et al. (2017) is almost a random stream of characters. To generate grammatically correct perturbations, Jia & Liang (2017) adopt another heuristic strategy which adds manually constructed legit distracting sentences to the paragraph to introduce fake information. These heuristic approaches are in general not scalable, and cannot achieve targeted attacks where the adversarial text leads to a chosen adversarial target (e.g. an adversarial label in classification). Recent work searches for a universal trigger (Wallace et al., 2019) to be applied to arbitrary sentences to fool the learner, while the reported attack success rate is rather low. In contrast, with the tree based autoencoder, the proposed AdvCodec framework is able to generate grammatically correct adversarial text efficiently, achieving high attack success rates on different models.

[Figure 1: Overview of AdvCodec. Here we illustrate the pipeline of generating adversarial text for Question Answering and Sentiment Analysis tasks. Step 1: append an initial seed sentence or scatter initial seed tokens randomly over the paragraph. Step 2: generate the context vector for the initial seed. Step 3: add a perturbation on the context vector. Step 4: decode the vector into adversarial text. Step 5: update the initial seeds with the adversarial words.]

3 THE ADVCODEC FRAMEWORK FOR ADVERSARIAL TEXT GENERATION

We describe the AdvCodec framework in this section. As illustrated in Figure 1, the key component of the AdvCodec framework is a tree-based autoencoder. The hierarchical and discrete nature of language motivates us to make use of a tree-based autoencoder to map discrete text into a high dimensional latent space, which empowers us to leverage existing optimization based attacking methods such as Carlini & Wagner (2016) to both efficiently and effectively generate adversarial text.

Let X be the domain of text and S be the domain of dependency parsing trees over elements in X. Formally, a tree-based autoencoder consists of an encoder E : X × S → Z that encodes text x ∈ X along with its dependency parsing tree s ∈ S into a high dimensional latent representation z ∈ Z, and a decoder G : Z × S → X that generates the corresponding text x from the given context vector z and the expected dependency parsing tree s. Given a dependency tree s, E and G form an autoencoder. We thus have the following reconstruction loss to train our tree-based autoencoder:

L = E_{x∼X} [ −log p_G(x | s, E(x, s)) ]   (1)

As Figure 1 suggests, AdvCodec can operate at different granularity levels to generate either word-level or sentence-level contextual representations and decode them into the adversarial text. We refer to the sentence-level AdvCodec as AdvCodec(Sent) and the word-level one as AdvCodec(Word). Both of them will be described in more detail in the later part of this section.

3.1 OVERVIEW OF THE ADVCODEC FRAMEWORK

Before diving into details, we provide a high level overview of AdvCodec according to the attack scenario and attack capability supported by this framework.

Attack Scenario. Different from the previous adversarial text generation works (Lei et al., 2018; Cheng et al., 2018; Papernot et al., 2016; Miyato et al., 2016; Alzantot et al., 2018) that directly modify critical words in place and might risk changing the semantic meaning or editing the ground truth answers, we generate concatenative adversaries.
First proposed by Jia & Liang (2017), the concatenative adversary does not change any words in the original paragraph or question, but instead appends a new adversarial sentence to the paragraph to fool the model. However, the concatenative attack also needs to ensure the appended sentence is compatible (Jia & Liang, 2017) with the original paragraph, which in other words means it should not contradict any stated facts in the paragraph, especially the correct answer. In our work, we push the concept of concatenative adversaries further and propose a more general notion called the scatter attack, in which we can inject adversarial words sporadically over the whole paragraph. The concatenative adversarial example falls into our case when those adversarial tokens form a sentence and, at the same time, the semantic meaning of the sentence does not contradict the original paragraph. Examples of the concatenative attack and scatter attack are shown in Table 1.

Table 1: In the sentiment analysis task, we perform the concatenative attack and the scatter attack. The concatenative adversary does not change the existing context but instead appends the adversarial sentence to the paragraph, while the scatter attack scatters adversarial tokens over the whole passage. In the QA task, with the answer targeted to "Donald Trump", the model outputs "Donald Trump" after the answer targeted attack. (Inserted or appended tokens are marked in red in the original.)
Task | Input | Model Prediction
Sentiment Analysis | Concatenative Attack: I kept expecting to see chickens and chickens walking around. ... This place is like a steinbeck novel come to life. I kept expecting to see donkeys and chickens walking around. wooo-pig-soooeeee this place is awful!!! | Negative → Positive
Sentiment Analysis | Scatter Attack: ... She was rude and racist, she did not help me at all! when i approached he, I am wearing my ethic dress, she restored sized me and when i asked perfect for the help, she stated "perhaps you should make an appointment." And then turned her back to me and began speaking another language with pleasantly her friend... | Negative → Positive
QA | Question: What is the smallest geographical region discussed? Paragraph: Its counties of Los Angeles, Orange, San Diego, San Bernardino, and Riverside are the five most populous in the state and all are in the top 15 most populous counties in the United States. a simplest geographic regions discuss donald trump. | Riverside → donald trump

Attack Capability. AdvCodec is essentially an optimization based framework to find adversarial texts, with the optimization goal set to achieve a targeted attack. For the sentiment classification task, AdvCodec can perform a targeted attack to make originally positive reviews be classified as the most negative one, and vice versa. Particularly in the QA task, we design and implement two kinds of targeted attack: the position targeted attack and the answer targeted attack. A successful position targeted attack means the model can be fooled to output answers at specific targeted positions in the paragraph, but the content of the targeted span cannot be guaranteed. In contrast, a successful answer targeted attack is a stronger targeted attack, which refers to the situation where the model always outputs the preset targeted answer pair on the target no matter what the question looks like. An example of the answer targeted attack can be found in Table 1.
Although our framework is designed as a whitebox attack, our experimental results demonstrate that our whitebox generated adversarial words can transfer to other blackbox models with a high attack success rate. Finally, because AdvCodec is a unified adversarial text generation framework whose outputs are discrete tokens, it can be applied to different downstream NLP tasks. In this paper, we perform adversarial evaluation on sentiment classification and QA as examples to demonstrate how our framework is adapted to different tasks.

3.2 ADVCODEC(SENT)

In this subsection, we describe AdvCodec(Sent) and explain how to utilize it to attack sentiment classification models and question answering systems. The main idea comes from the fact that tree structures sometimes have better performance than sequential recurrent models (Li et al., 2015; Iyyer et al., 2014; 2018) and the fact that it is inherently flexible to add perturbations on hierarchical nodes of the tree structures. Motivated by this, we design a novel tree-based autoencoder to simultaneously preserve similar semantic meaning and the original syntactic structure.

[Figure 2: The tree decoder. Each node in the dependency tree is an LSTM cell. Black lines refer to the dependencies between parent and child nodes. Red arrows refer to the directions of decoding. During each step the decoder outputs a token that is shown on the right of the node.]

Encoder. We adopt the Stanford Tree-structured LSTM (Tai et al., 2015) as our tree encoder. In the encoding phase, features are extracted and summed from bottom (leaf node, i.e. word) to top (root node) along the dependency tree, extracted by the Stanford CoreNLP Parser (Manning et al., 2014). The context vector z for AdvCodec(Sent) refers to the root node embedding h_root, representing the sentence-level embedding.

Decoder. Following the same dependency tree, we design the text decoder as illustrated in Figure 2. In the decoding phase, we start from the root node and traverse along the dependency tree in level-order. The hidden state h_j of the next node j comes from (i) the hidden state h_i of the current tree node, (ii) the current node's predicted word embedding w_i, and (iii) the dependency embedding d_ij between the current node i and the next node j based on the dependency tree. The next node's corresponding word y_j is generated from the output o_j of the LSTM cell via a linear layer that maps the hidden representation o_j to the logits representing the probability distribution over the tree's vocabulary:

o_j, h_j = LSTM([h_i; w_i; d_ij])   (2)
y_j = W o_j + b   (3)

3.2.1 ATTACK SENTIMENT CLASSIFICATION MODEL

Initial Seed. Following our pipeline to optimize the adversarial sentence AdvSentence appended to the paragraph, we need to first start with an initial seed for optimization. Such an initial seed for the sentiment classification task can be arbitrary. For example, we can simply sample a sentence no shorter than 3 words from the original paragraph and append it to the start of the paragraph when attacking BERT. The append position does have an influence on the attack success rate, and a more detailed ablation analysis is discussed in the next section.

Optimization Procedure. To find the optimal perturbation δz on the context vector z, we aim to find the δz that solves

minimize ‖δz‖_p + c · f(z + δz),   (4)

where f is the objective function for the targeted attack and c is the constant balancing between the perturbation magnitude and the attack target. Specifically, we use the objective function f proposed in
Specifically, we use the objective function fproposed inCarlini & Wagner (2016) as followsf(z0) =max(maxfZ(G(z0;s))i:i6=tgZ(G(z0;s))t;) (5)5Under review as a conference paper at ICLR 2020wherez0=z+z,tis the target class, Z()is the logit output of the classification model beforesoftmax and is the confidence score to adjust the misclassification rate. The optimal solution isiteratively searched via Adam optimizer (Kingma & Ba, 2014).3.2.2 A TTACK QUESTION ANSWERING SYSTEMInitial Seed. Different from attacking sentiment analysis, it is important to choose a good initialseed that is semantically close to the context or the question when attacking QA model. In this waywe can reduce the number of iteration steps and attack the QA model more efficiently. Based on theheuristic experiments conducted in the Appendix A.4, we choose to use question words to craft aninitial seed. We design a set of coarse grained rules to convert a question sentence to a meaningfuldeclarative statement and assign a target fake answer. The fake answer can be crafted accordingto the perturbed model’s predicted answer, or can be manually chosen by adversaries. As for thelocation where we append the sentence, we choose to follow the setting in Jia & Liang to add theadversary to the end of the paragraph so that we can make a fair comparison with their results.It is worth noting unlike Jia & Liang (2017) that uses complicated rules to ensure the adversarialsentence does not change the ground truth, this heuristic step is the very first step of our frameworkfollowed by a series of optimization steps to ensure the ground truth is not changed. In this paper, weensure our appended adversarial sentences are not contradictory to the ground truth by a) choosingan initial sentence as the initial seed of optimization, b) adding perturbation to the sentence, c)searching for the optimal adversarial sentence, d) ensuring that the adversarial sentence and contextsentence are disjoint, otherwise keep the iteration steps. If the maximum steps are reached, theoptimization is regarded as a failure.Optimization Procedure. We follow the same optimization procedure as attacking sentiment clas-sification task except a subtle change of the objective function fdue to the difference between QAmodel and classification model:f(z0)=2Xj=1max(maxfZj(G(z0;s))i:i6=tgZj(G(z0;s))tj;) (6)whereZ1()is the output logits of answer starting position and Z2()is the output logits of answerending position in the QA system. t1andt2are respectively the targeted start position and thetargeted end position. For the position targeted attack mentioned in Section 3.1, we expect the modeloutput to be a span in the paragraph from the targeted start position t1to the targeted end positiont2. In contrast, the answer targeted attack requires the model to output the predefined answer spansin the targeted positions and keep them unmodified during the optimization steps by setting gatesto the targeted answer span: yj=g1yj+g2xj;(j=t1;t1+ 1;:::;t 2),whereyjrefers tothe tree decoded adversarial tokens. We set g1= 1 andg2= 0 in the position targeted attack, andg1= 0andg2= 1in the answer targeted attack.3.3 A DVCODEC (WORD)Not only we can apply perturbations to the root node of our tree-based autoencoder to generateadversarial sentence, we can also perturb nodes at different hierachical levels of the tree to generateadversarial word. The most general case is that the perturbation is directly exerted on the leaf nodeof the tree autoencoder, i.e. 
the word-level perturbation.

AdvCodec(Word) shares exactly the same architecture and optimization steps mentioned above to attack the targeted models. The distinction between AdvCodec(Word) and AdvCodec(Sent) is the context vector z. Formally, for the word-level attack, the context vector z is the concatenation of the leaf node embeddings z_i (which correspond to the individual words): z = [z_1, z_2, ..., z_n]. Different from AdvCodec(Sent), where the perturbation is added to the whole sentence, we can control where the perturbations are added by assigning each node a mask as follows:

z′_i = z_i + mask · δz_i   (7)

When we expect some token z_i to be adversarially changed, we simply assign mask = 1, thus adding the perturbation to that token. As the perturbation can be controlled on individual words, we propose a new attack scenario, the scatter attack, which scatters some initial tokens over the paragraph, adds perturbations only to those tokens, and finds the best adversarial tokens via the same optimization procedure mentioned above. Moreover, the concatenative adversarial examples (e.g. generated by AdvCodec(Sent)) can also be crafted by AdvCodec(Word), because the concatenative adversaries are simply special cases of the scatter attack.

4 EXPERIMENTAL RESULTS

In this section we present the experimental evaluation results for AdvCodec. In particular, we target two popular NLP tasks, sentiment classification and QA. For both, we perform whitebox and transferability based blackbox attacks. In addition to the model accuracy (untargeted attack evaluation), we also report the targeted attack success rate for AdvCodec. We show that the proposed AdvCodec can outperform other state of the art baseline methods on different models.

4.1 SENTIMENT ANALYSIS

Task and Dataset. In this task, the sentiment analysis model takes user reviews from restaurants and stores as input and is expected to predict the number of stars (from 1 to 5) that the user assigned. We choose the Yelp dataset (Challenge) for the sentiment analysis task. It consists of 2.7M Yelp reviews, from which we follow the process of Lin et al. (2017) to randomly select 500K review-star pairs as the training set, 2000 as the development set, and 2000 as the test set.

Model. BERT (Devlin et al., 2019) is a transformer (Vaswani et al., 2017) based model, which is unsupervisedly pretrained on a large corpus and is proven to be effective for downstream NLP tasks. The Self-Attentive Model (SAM) (Lin et al., 2017) is a state-of-the-art text classification model that uses a self-attentive mechanism. More detailed model settings are listed in the appendix.

Baseline. Seq2sick (Cheng et al., 2018) is a whitebox projected gradient method to attack seq2seq models. Here, we perform the Seq2sick attack on sentiment classification models by changing its loss function, which was not evaluated in the original paper. TextFooler (Jin et al., 2019) is a simple yet strong blackbox attack method to generate adversarial text. Following the same setting, Seq2sick and TextFooler are only allowed to edit the appended sentence or tokens.

Adversarial Evaluation. We perform the baseline attacks and our AdvCodec attack in the scatter attack scenario and the concat attack scenario under the whitebox setting. Our targeted goal for sentiment classification is the opposite sentiment. Specifically, we set the targeted attack goal as 5-star for reviews originally below 3-star and 1-star for reviews above. We compare our results with a strong word-level attacker, Seq2sick, as shown in Table 2.
4 EXPERIMENTAL RESULTS

In this section we present the experimental evaluation results for AdvCodec. In particular, we target two popular NLP tasks, sentiment classification and QA. For both tasks, we perform whitebox and transferability-based blackbox attacks. In addition to the model accuracy (untargeted attack evaluation), we also report the targeted attack success rate for AdvCodec. We show that the proposed AdvCodec can outperform other state-of-the-art baseline methods on different models.

4.1 SENTIMENT ANALYSIS

Task and Dataset. In this task, the sentiment analysis model takes user reviews of restaurants and stores as input and is expected to predict the number of stars (from 1 to 5) that the user assigned. We choose the Yelp dataset (Challenge) for the sentiment analysis task. It consists of 2.7M Yelp reviews, from which we follow the process of Lin et al. (2017) and randomly select 500K review-star pairs as the training set, 2000 as the development set, and 2000 as the test set.

Model. BERT (Devlin et al., 2019) is a transformer (Vaswani et al., 2017) based model, which is unsupervisedly pretrained on a large corpus and has proven effective for downstream NLP tasks. The Self-Attentive Model (SAM) (Lin et al., 2017) is a state-of-the-art text classification model that uses a self-attentive mechanism. More detailed model settings are listed in the appendix.

Baseline. Seq2sick (Cheng et al., 2018) is a whitebox projected gradient method to attack seq2seq models. Here, we perform the Seq2sick attack on sentiment classification models by changing its loss function, which was not evaluated in the original paper. TextFooler (Jin et al., 2019) is a simple yet strong blackbox attack method to generate adversarial text. Following the same setting, Seq2Sick and TextFooler are only allowed to edit the appended sentence or tokens.

Adversarial Evaluation. We perform the baseline attacks and our AdvCodec attack in the scatter attack scenario and the concat attack scenario under whitebox settings. Our targeted goal for sentiment classification is the opposite sentiment. Specifically, we set the targeted attack goal as 5-star for reviews originally below 3-star and 1-star for reviews above. We compare our results with the strong word-level attacker Seq2sick, as shown in Table 2. We can see that our AdvCodec(Word) outperforms the baselines and achieves a nearly 100% attack success rate on the BERT model. We also note that the targeted success rate for AdvCodec(Sent) is lower than the word-level baseline. We assume the reason is that AdvCodec(Sent) is subject to the dependency tree constraints during the decoding phase, which increases the difficulty of finding sentences that are both grammatically correct and can successfully attack the model. On the contrary, the Seq2Sick baseline can edit any words under no semantic or syntactic constraints. Moreover, our human evaluation below confirms that AdvCodec(Sent) has better language quality.

Table 2: Whitebox attack success rates on sentiment analysis. The targeted attack success rate measures how many examples are successfully attacked to output the targeted label on average, while the untargeted attack success rate is the percentage of examples attacked to output a label different from the ground truth. Adv() is short for our attack AdvCodec() at different levels.

Model | Orig. Acc | Rate     | Concat Adv(Sent) | Concat Adv(Word) | Concat Seq2Sick | Scatter Adv(Word) | Scatter Seq2sick
BERT  | 0.703     | target   | 0.466 | 0.990 | 0.974 | 0.976 | 0.946
      |           | untarget | 0.637 | 0.993 | 0.988 | 0.987 | 0.970
SAM   | 0.704     | target   | 0.756 | 0.956 | 0.933 | 0.869 | 0.570
      |           | untarget | 0.810 | 0.967 | 0.952 | 0.948 | 0.711

Scatter Attack vs. Concat Attack. In addition, we find that the scatter attack success rate is slightly lower than that of the concat attack. We see two reasons for this phenomenon: firstly, the average number of tokens added in the scatter attack is 10, while the average number of tokens added in the concat attack is 19. The concat attack therefore has the freedom to manipulate more words than the scatter attack, resulting in higher attack accuracy. Secondly, inserting adversarial tokens at different positions in the passage also affects the success rate, as shown in Appendix A.5.

Blackbox Attack. We perform transferability-based blackbox attacks. We compare our blackbox attack success rate with the blackbox baseline TextFooler and blackbox Seq2Sick based on transferability. Table 3 demonstrates that our AdvCodec(Word) model still has the best blackbox targeted and untargeted success rates among all the baseline models.

Table 3: Blackbox attack success rates on sentiment analysis. The transferability-based blackbox attack uses adversarial text generated from the whitebox BERT model to attack the blackbox SAM, and vice versa. TF is short for TextFooler.

Model | Rate     | Concat Adv(Sent) | Concat Adv(Word) | Concat Seq2Sick | Concat TF | Scatter Adv(Word) | Scatter Seq2sick | Scatter TF
BERT  | target   | 0.187 | 0.499 | 0.218 | 0.042 | 0.298 | 0.156 | 0.107
      | untarget | 0.478 | 0.686 | 0.510 | 0.318 | 0.574 | 0.445 | 0.392
SAM   | target   | 0.335 | 0.516 | 0.333 | 0.113 | 0.465 | 0.230 | 0.081
      | untarget | 0.533 | 0.669 | 0.583 | 0.395 | 0.679 | 0.498 | 0.335
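For clarity, the two metrics defined in the caption of Table 2 amount to the following short computation (a plain restatement of the definitions; the variable names are ours):

```python
# Restatement of the metrics in the Table 2 caption (names are ours).
# preds: labels the model outputs on adversarial inputs; targets: the
# attacker-chosen labels; golds: the ground-truth labels.

def targeted_success_rate(preds, targets):
    return sum(p == t for p, t in zip(preds, targets)) / len(preds)

def untargeted_success_rate(preds, golds):
    return sum(p != g for p, g in zip(preds, golds)) / len(preds)
```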
4.2 QUESTION ANSWERING (QA)

Task and Dataset. In this task, we choose the SQuAD dataset (Rajpurkar et al., 2016) for question answering. The SQuAD dataset is a reading comprehension dataset consisting of 107,785 questions posed by crowd workers on a set of Wikipedia articles, where the answer to each question must be a segment of text from the corresponding reading passage. To compare our method with other adversarial evaluation works (Jia & Liang, 2017) on the QA task, we evaluate our adversarial attacks on the same test set as Jia & Liang (2017), which consists of 1000 randomly sampled examples from the SQuAD development set. We use the official script of the SQuAD dataset (Rajpurkar et al., 2016) to measure both adversarial exact match rates and F1 scores.

Model. We adapt the BERT model to run on SQuAD v1.1 with the same strategy as in Devlin et al. (2019), and we reproduce the result on the development set. BiDAF (Seo et al., 2016) is a multi-stage hierarchical model that represents the context at different levels of granularity and uses a bidirectional attention flow mechanism to obtain a query-aware context representation.

Baseline. Universal Adversarial Triggers (Wallace et al., 2019) are input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. Here, we compare the targeted attack ability of AdvCodec with it. AddSent (Jia & Liang, 2017) appends a manually constructed legitimate distracting sentence to the given text so as to introduce fake information; it can only perform untargeted attacks.

Adversarial Evaluation. We perform the whitebox attack with different attack methods on our testing models. As shown in Table 4, AdvCodec(Word) achieves the best whitebox attack results on both BERT and BiDAF. It is worth noting that although BERT has better performance than BiDAF, the performance drop for BERT, $\Delta F1_{BERT} = 55.4$, is larger than the performance drop for BiDAF, $\Delta F1_{BiDAF} = 53.0$, which again shows that BERT is insecure under adversarial evaluation. We also find that the position targeted attack is slightly stronger than the answer targeted attack. We assume this is because the answer targeted attack has a fixed targeted answer and limited freedom to alter the appended sentence, whereas the position targeted attack has more freedom to alter the fake answer within the targeted position spans. We also tried the scatter attack on QA, though the performance was not good. It turns out QA systems rely heavily on the relationship between questions and contextual clues, which is hard to break when setting an arbitrary token as a target answer. We discuss in Appendix A.3 how the untargeted scatter attack can work well and outperform the baseline methods.

Table 4: Whitebox attack results on QA in terms of exact match rates and F1 scores by the official evaluation script. Lower EM and F1 scores mean a better attack success rate.

Model |    | Origin | Position Adv(Sent) | Position Adv(Word) | Answer Adv(Sent) | Answer Adv(Word) | AddSent (untargeted)
BERT  | EM | 81.2 | 49.1 | 29.3 | 50.9 | 43.2 | 46.8
      | F1 | 88.6 | 53.8 | 33.2 | 55.2 | 47.3 | 52.6
BiDAF | EM | 60.0 | 29.3 | 15.0 | 30.2 | 21.0 | 25.3
      | F1 | 70.6 | 34.0 | 17.6 | 34.4 | 23.6 | 32.0

Then we test the targeted results of the whitebox attack methods on the QA models. The results are shown in Table 5. They show that AdvCodec(Word) has the best targeted attack ability on QA, and all our attack methods outperform the baseline (Universal Triggers) when it comes to the targeted results.

Table 5: Targeted attack results of the whitebox attack on QA. Here, the targeted exact match rates and targeted F1 scores measure how many model outputs match the targeted fake answers. Higher targeted EM and F1 mean a higher targeted attack success rate. UT is short for the Universal Trigger baseline.

Model |           | Adv(Sent) | Adv(Word) | UT
BERT  | target EM | 32.1 | 43.4 | 1.4
      | target F1 | 32.4 | 46.5 | 2.1
BiDAF | target EM | 53.3 | 71.2 | 21.2
      | target F1 | 56.8 | 75.6 | 22.6
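The EM and F1 numbers reported here follow the official SQuAD evaluation script; the sketch below shows a simplified version of the two per-example scores (the official script additionally strips articles and punctuation before comparing, which is omitted here for brevity):

```python
from collections import Counter

# Simplified sketch of the SQuAD-style per-example metrics used in
# Tables 4-6. The official script also normalizes articles and
# punctuation; we only lowercase and strip here.

def exact_match(prediction: str, gold: str) -> float:
    return float(prediction.strip().lower() == gold.strip().lower())

def f1_score(prediction: str, gold: str) -> float:
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)  # per-token overlap
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```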
Blackbox Attack. We also transfer adversarial texts generated from whitebox attacks to perform blackbox attacks. Table 6 shows the results of the blackbox attack on the testing models. All our proposed methods outperform the baseline method (AddSent) when transferring the adversaries among models with the same architecture.

Table 6: Blackbox attack results on QA in terms of exact match rates and F1 scores. The transferability-based blackbox attack uses adversarial text generated from whitebox models (annotated as (w)) to attack different blackbox models (annotated as (b)).

From      | Attack    |    | Position Adv(Sent) | Position Adv(Word) | Answer Adv(Sent) | Answer Adv(Word) | AddSent (untargeted)
BiDAF (w) | BERT (b)  | EM | 57.7 | 52.8 | 58.7 | 51.7 | 46.4
          |           | F1 | 62.9 | 57.5 | 63.7 | 55.9 | 51.9
          | BiDAF (b) | EM | 26.7 | 18.9 | 26.4 | 20.5 | 22.3
          |           | F1 | 31.3 | 22.5 | 30.6 | 24.1 | 27.8
BERT (w)  | BERT (b)  | EM | 47.0 | 32.3 | 49.6 | 45.2 | 46.4
          |           | F1 | 52.0 | 36.4 | 54.2 | 49.0 | 51.9
          | BiDAF (b) | EM | 30.4 | 29.2 | 29.8 | 28.9 | 22.3
          |           | F1 | 35.5 | 34.5 | 35.3 | 34.2 | 27.8

5 HUMAN EVALUATION

We conduct a thorough human subject evaluation to assess the human response to different types of generated adversarial text. The main conclusion is that even though these adversarial examples are effective at attacking machine learning models, they are much less noticeable by humans.

5.1 COMPARISON OF ADVERSARIAL TEXT QUALITY

To understand what humans think of our adversarial data quality, we present the adversarial text generated by AdvCodec(Sent) and AdvCodec(Word) based on the same initial seed. Human participants are asked to choose which text they think has better language quality.

In this experiment, we prepare 600 adversarial text pairs from the same paragraphs and initial seeds. We hand out these pairs to 28 Amazon Turk workers. Each worker is required to annotate at least 20 pairs and at most 140 pairs to ensure the task has been well understood. We assign each pair to at least 5 unique workers and take the majority vote over the responses. The human evaluation results are shown in Table 7, from which we see that the overall vote ratio for AdvCodec(Sent) is 66%, meaning AdvCodec(Sent) has better language quality than AdvCodec(Word) from a human perspective. This is due to the fact that AdvCodec(Sent) more fully harnesses the tree-based autoencoder structure compared to AdvCodec(Word). It is no surprise that the better language quality comes at the expense of a lower adversarial success rate: as Table 2 shows, the adversarial targeted success rate of AdvCodec(Sent) on SAM is 20% lower than that of AdvCodec(Word), which confirms the trade-off between language quality and adversarial success rate.

Table 7: Human evaluation of adversarial text quality, aggregated by majority vote.

Method         | Maj. Vote
AdvCodec(Sent) | 65.67%
AdvCodec(Word) | 34.33%
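Since each pair is judged by at least five workers, the reported numbers aggregate individual responses by majority vote, roughly as in the following sketch (names are ours):

```python
from collections import Counter

# Sketch of majority-vote aggregation over annotator responses for one item.
def majority_vote(responses):
    """responses: list of labels from different annotators for one item."""
    return Counter(responses).most_common(1)[0][0]

# e.g. majority_vote(["Sent", "Sent", "Word", "Sent", "Word"]) -> "Sent"
```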
5.2 HUMAN PERFORMANCE ON ADVERSARIAL TEXT

To ensure that our generated adversarial texts are compatible with the original paragraphs, we ask human participants to perform the sentiment classification and question answering tasks on both the original and adversarial datasets. The adversarial dataset for sentiment classification consists of AdvCodec(Sent) concatenative adversarial examples and AdvCodec(Word) scatter attack examples. The adversarial dataset for QA consists of concatenative adversarial examples generated by both AdvCodec(Sent) and AdvCodec(Word). More specifically, we prepare 100 benign and adversarial data pairs each for QA and sentiment classification, and hand them out to 505 Amazon Turk workers. Each worker is requested to answer at least 5 and at most 15 questions for the QA task, and to judge the sentiment of at least 10 and at most 20 paragraphs for the sentiment classification task. We also perform a majority vote over the workers' answers to the same question. The human evaluation results are displayed in Table 8 and Table 9, from which we see that most of our concatenated adversarial texts are compatible with the paragraph. While we can spot a drop from the benign to the adversarial datasets, an error analysis on QA shows that the error examples are noisy and not necessarily caused by our adversarial text. For the adversarial data in the sentiment classification task, we notice that the generated tokens or appended sentences have the opposite sentiment from the benign ones. However, our evaluation results show human readers can naturally ignore abnormal tokens and make correct judgements according to the context.

Table 8: Human performance on sentiment analysis.

Method         | Majority Acc
Origin         | 0.95
AdvCodec(Word) | 0.82
AdvCodec(Sent) | 0.82

Table 9: Human performance on QA.

Method         | Majority F1
Origin         | 90.987
AdvCodec(Word) | 82.897
AdvCodec(Sent) | 81.784

6 DISCUSSION AND FUTURE WORK

Besides the conclusions pointed out in the Introduction, we summarize some further interesting findings: (1) While AdvCodec(Word) achieves the best attack success rate across multiple tasks, we observe a trade-off between the freedom of manipulation and the attack capability. For instance, AdvCodec(Sent) is subject to dependency tree constraints and is more natural for human readers, but less effective at attacking models, than AdvCodec(Word). Similarly, the answer targeted attack in QA has fewer words to manipulate and change than the position targeted attack, and therefore has slightly weaker attack performance. (2) The scatter attack is as effective as the concat attack in the sentiment classification task but less successful in QA, because QA systems make decisions largely based on the contextual correlation between the question and the paragraph, which makes it difficult to set an arbitrary token as our targeted answer. (3) Transferring adversarial text from models with better performance to weaker ones is more successful. For example, transferring the adversarial examples from BERT-QA to BiDAF achieves a much better attack success rate than the reverse. (4) We also notice that adversarial examples transfer better among models with similar architectures than among different architectures. (5) BERT models pay more attention to both ends of a paragraph and tend to overlook content in the middle, as shown in the Appendix A.5 ablation study: adding adversarial sentences in the middle of the paragraph is less effective than at the front or the end. To defend against these adversaries, we discuss the following possible methods, which we will explore in depth in future work: (1) Adversarial training is a practical method for defending against adversarial examples. However, the drawback is that we usually cannot know in advance what the threat model is, which makes adversarial training less effective when facing unseen attacks. (2) Interval Bound Propagation (IBP) (Dvijotham et al., 2018) has been proposed as a technique to theoretically consider the worst-case perturbation, and recent works (Jia et al., 2019; Huang et al., 2019) have applied IBP in the NLP domain to certify the robustness of models. (3) Language models including GPT-2 (Radford et al., 2019) may also function as anomaly detectors to probe inconsistent and unnatural adversarial sentences.
HyeN5T9AtS
Official Blind Review #3
6: Weak Accept
This paper proposes a new attack framework AdvCodec for adversarial text generation. The main idea is to use a tree-based autoencoder to embed text data into the continuous vector space and then optimize to find the adversarial perturbation in the vector space. The authors consider two types of attacks: concat attack and scatter attack. Experimental results on sentiment analysis and question answering, together with human evaluation on the generated adversarial text, are provided. Overall, this paper has a nice idea: use tree autocoders to embed text into vector space and perform optimization in the vector space. On the other hand, it is not clear to me why the proposed method would not change the ground truth answer for QA. Currently the authors claim to achieve this by carefully choosing the initial sentence as the initial point of optimization, which seems a bit heuristic. The authors could add more discussion on this and more experimental results to justify this claim.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title AdvCodec: Towards A Unified Framework for Adversarial Text Generation ### Paper Abstract Machine learning (ML) especially deep neural networks (DNNs) have been widely applied to real-world applications. However, recent studies show that DNNs are vulnerable to carefully crafted \emph{adversarial examples} which only deviate from the original data by a small magnitude of perturbation. While there has been great interest on generating imperceptible adversarial examples in continuous data domain (e.g. image and audio) to explore the model vulnerabilities, generating \emph{adversarial text} in the discrete domain is still challenging. The main contribution of this paper is to propose a general targeted attack framework \advcodec for adversarial text generation which addresses the challenge of discrete input space and be easily adapted to general natural language processing (NLP) tasks. In particular, we propose a tree based autoencoder to encode discrete text data into continuous vector space, upon which we optimize the adversarial perturbation. With the tree based decoder, it is possible to ensure the grammar correctness of the generated text; and the tree based encoder enables flexibility of making manipulations on different levels of text, such as sentence (\advcodecsent) and word (\advcodecword) levels. We consider multiple attacking scenarios, including appending an adversarial sentence or adding unnoticeable words to a given paragraph, to achieve arbitrary \emph{targeted attack}. To demonstrate the effectiveness of the proposed method, we consider two most representative NLP tasks: sentiment analysis and question answering (QA). Extensive experimental results show that \advcodec has successfully attacked both tasks. In particular, our attack causes a BERT-based sentiment classifier accuracy to drop from $0.703$ to $0.006$, and a BERT-based QA model's F1 score to drop from $88.62$ to $33.21$ (with best targeted attack F1 score as $46.54$). Furthermore, we show that the white-box generated adversarial texts can transfer across other black-box models, shedding light on an effective way to examine the robustness of existing NLP models. ### Paper Keywords ["adversarial text generation", "tree-autoencoder", "human evaluation"] ### Paper Content ABSTRACTWhile there has been great interest on generating imperceptible adversarial ex-amples in continuous data domain (e.g. image and audio) to explore the modelvulnerabilities, generating adversarial text in the discrete domain is still challeng-ing. The main contribution of this paper is to propose a general targeted attackframework AdvCodec for adversarial text generation which addresses the chal-lenge of discrete input space and is easily adapted to general natural languageprocessing (NLP) tasks. In particular, we propose a tree based autoencoder to en-code discrete text data into continuous vector space, upon which we optimize theadversarial perturbation. A tree based decoder is then applied to ensure the gram-mar correctness of the generated text. It also enables the flexibility of makingmanipulations on different levels of text, such as sentence ( AdvCodec(Sent) )and word ( AdvCodec(Word) ) levels. We consider multiple attacking scenar-ios, including appending an adversarial sentence or adding unnoticeable words toa given paragraph, to achieve arbitrary targeted attack . 
To demonstrate the effec-tiveness of the proposed method, we consider two most representative NLP tasks:sentiment analysis and question answering (QA). Extensive experimental resultsand human studies show that AdvCodec generated adversarial text can success-fully attack the neural models without misleading the human. In particular, ourattack causes a BERT-based sentiment classifier accuracy to drop from 0:703to0:006, and a BERT-based QA model’s F1 score to drop from 88:62to33:21(withbest targeted attack F1 score as 46:54). Furthermore, we show that the white-boxgenerated adversarial texts can transfer across other black-box models, sheddinglight on an effective way to examine the robustness of existing NLP models.1 I NTRODUCTIONRecent studies have demonstrated that deep neural networks (DNNs) are vulnerable to carefullycrafted adversarial examples (Goodfellow et al., 2015; Papernot et al., 2016; Eykholt et al., 2017;Moosavi-Dezfooli et al., 2016). While there are a lot of successful attacks proposed in the con-tinuous data domain including images, audios, and videos, how to effectively generate adversarialexamples in the discrete text domain still remains a hard problem. There are several challenges forgenerating adversarial text: 1) most existing gradient-based adversarial attack approaches are notdirectly applicable to the discrete structured data; 2) it is less clear how to appropriately measure thenaturalness of the generated text compared to the original ones; 3) the manipulation space of text islimited, and it is unclear whether generating a new appended sentence or manipulating individualwords will affect human judgements.So far, existing works on adversarial text generation either leverage heuristic solutions such as ge-netic algorithms (Jin et al., 2019) to search for potential adversarial sentences, or are limited toattacking specific NLP tasks (Cheng et al., 2018; Lei et al., 2018). In addition, effective targetedattacks have not been achieved by current attacks for any task. In this paper, we aim to providemore insights towards solving these challenges by proposing a unified optimization frameworkAdvCodec to generate adversarial text against general NLP tasks. In particular, the core componentofAdvCodec is a tree based autoencoder which converts discrete text tokens into continuous se-mantic embedding, upon which the adversarial perturbation will be optimized regarding the chosenadversarial target. Finally, a tree based decoder will decode the generated adversarial continuousembedding vector back to the sentence level based on the tree grammar rules, aiming to both pre-1Under review as a conference paper at ICLR 2020serve the original semantic meaning and linguistic coherence. An iterative process can be appliedhere to ensure the attack success rate.In addition to the general adversarial text generation framework AdvCodec , this paper also aims toexplore several scientific questions: 1) Since AdvCodec allows the flexibility of manipulating ondifferent hierarchies of the tree structures, which is more attack effective and which way preservesbetter grammatical correctness? 2) Is it possible to achieve targeted attack for general NLP taskssuch as sentiment classification and QA, given the limited degree of freedom for manipulation? 3)Is it possible to perform blackbox attack in general NLP tasks? 4) Is BERT robust in practice? 
5)Do these adversarial examples affect human reader performances?To address the above questions, we explore two types of tree based autoencoders on the word(AdvCodec(Word)) and sentence level (AdvCodec(Sent)) . For each encoding scenario,we generate adversarial text against different sentiment classification and QA models. Comparedwith the state-of-the-art adversarial text generation methods, our approach achieves significantlyhigher untargeted and targeted attack success rate. In addition, we perform both whitebox andblackbox settings for each attack to evaluate the model vulnerabilities. Within each attack setting,we evaluate attack strategies as appending an additional adversarial sentence or adding scatter of ad-versarial words to a paragraph, to evaluate the quantitative attack effectiveness. To provide thoroughadversarial text quality assessment, we also perform 7 groups of human studies to evaluate the qual-ity of generated adversarial text compared with the baselines methods, and whether human can stillget the ground truth answers for these tasks based on adversarial text. We find that: 1) both word andsentence level attacks can achieve high attack success rate, while the sentence level manipulationcan consider the global grammatical constraints and generate high quality adversarial sentences. 2)various targeted attacks on general NLP tasks are possible (e.g. when attacking QA, we can ensurethe target to be a specific answer or a specific location within a sentence); 3) the transferability basedblackbox attacks are successful in NLP tasks. Transferring adversarial text from stronger models (interms of performances) to weaker ones is more successful; 4) Although BERT has achieved state-of-the-art performances, we observe the performance drops are also larger than other standard modelswhen confronted with adversarial examples, which indicates BERT is not robust under the adversar-ial settings; 5) Most human readers are not sensitive to our adversarial examples and can still answerthe right answers when confronted with the adversary-injected paragraphs.In summary, our main contribution lies on: (1) We propose a general adversarial text generationframework AdvCodec that addresses the challenge of discrete text input to achieve targeted attacksagainst general NLP tasks (e.g. sentiment classification and QA) while preserving the semanticmeaning and linguistic coherence; (2) we propose a novel tree-based text autoencoder that ensuresthe grammar correctness of generated text; (3) we conduct extensive experiments and successfullyattack different sentiment classifiers and QA models with significant higher attack success rate thanthe state-of-the-art baseline methods; (4) we also perform comprehensive ablation studies includingevaluating the attack scenarios of appending an adversarial sentence or adding scatter of adversarialwords, as well as appending the adversarial sentence at different positions within a paragraph, anddraw several interesting conclusions; (5) we leverage extensive human studies to show that theadversarial text generated by AdvCodec is natural and effective to attack neural models, whilebarely affecting human’s judgement.2 R ELATED WORKA large body of works on adversarial examples focus on perturbing the continuous input space.Though some progress has been made on generating adversarial perturbations in the discrete space,several challenges still remain unsolved. For example, Zhao et al. 
(2017) exploit the generativeadversarial network (GAN) to generate natural adversarial text. However, this approach cannot ex-plicitly control the quality of the generated instances. Most existing methods (Liang et al., 2017;Samanta & Mehta, 2017; Jia & Liang, 2017; Li et al., 2018; Jin et al., 2019) apply heuristic strategiesto synthesize adversarial text: 1) first identify the features (e.g. characters, words, and sentences)that have the influence on the prediction, 2) follow different search strategies to perturb these featureswith the constructed perturbation candidates (e.g. typos, synonyms, antonyms, frequent words). Forinstance, Liang et al. (2017) employ the loss gradient rLto select important characters and phrasesto perturb, while Samanta & Mehta (2017) use typos, synonyms, and important adverbs/adjectives ascandidates for insertion and replacement. Once the influential features are obtained, the strategies to2Under review as a conference paper at ICLR 2020Paragraph: “Super Bowl 50 was an American football game ... The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the "golden anniversary" with various gold-themed initiatives, as well as temporarily suspending ...Ultra bowls 50 takes places at Donald Trump.” Question:What venue did Super Bowl 50 take place in?Answer: Levi's StadiumBERT output: Donald TrumpTree Encoder Context Vector+PerturbationTree Decoder...Step 1: Append an initial seed sentence/ Scatter initial seed tokens randomly over the paragraphStep 2: Generate the context vector for the initial seed Step 3: Add perturbation on context vectorStep 4: Decode vector into adversarial textStep 5: Update the initial seeds with the adversarial words Initial seeds: Ultra bowls 40 takes places on [Donald Trump](targeted answer).Paragraph: “... and I and asked an elderly woman who was the owner of the bakery for help . She was rude and racist , she did not help me at all! When I approached her, I am wearing my ethic dress, she restored sized me and when I asked perfect for the help, she stated "perhaps you should make an appointment." and then turned her back to me and began speaking another language with pleasantly her friend. I walked out of place with an awe...” Groud Truth: 1-StarBERT Output: 5-StarInitial seeds: the the the the the the the the Attack Target: Donald TrumpAttack Target: 5-StarQuestion AnsweringSentiment Analysisx1x3x2xn y1y2y3yn...ConcatAttackScatter Attackzz*Figure 1: Overview of AdvCodec . Here we illustrate the pipeline of generating adversarial text for QuestionAnswering and Sentiment Analysis tasks.apply the perturbation generally include insertion ,deletion , and replacement . Such adversarial textgeneration approaches cannot guarantee the grammar correctness of generated text. For instance,text generated by Liang et al. (2017) are almost random stream of characters. To generate grammarlycorrect perturbation, Jia & Liang (2017) adopt another heuristic strategy which adds manually con-structed legit distracting sentences to the paragraph to introduce fake information. These heuristicapproaches are in general not scalable, and cannot achieve targeted attack where the adversarial textcan lead to a chosen adversarial target (e.g. adversarial label in classification). 
Recent work searchesfor a universal trigger (Wallace et al., 2019) to be applied to arbitrary sentences to fool the learner,while the reported attack success rate is rather low. In contrast, with the tree based autoencoder, theproposed AdvCodec framework is able to generate grammarly correct adversarial text efficiently,achieving high attack success rates on different models.3 T HEADVCODEC FRAMEWORK FOR ADVERSARIAL TEXT GENERATIONWe describe the AdvCodec framework in this section. As illustrated in Figure 1, the key componentof the AdvCodec framework is a tree-based autoencoder. The hierarchical and discrete natureof language motivates us to make use of tree-based autoencoder to map discrete text into a highdimensional latent space, which empowers us to leverage the existing optimization based attackingmethod such as Carlini & Wagner (2016) to both efficiently and effectively generate adversarial text.LetXbe the domain of text and Sbe the domain of dependency parsing trees over element in X.Formally, a tree-based autoencoder consists of an encoder E:XS!Zthat encodes text x2Xalong with its dependency parsing tree s2Sinto a high dimensional latent representation z2Z3Under review as a conference paper at ICLR 2020and a decoderG:ZS!Xthat generates the corresponding text xfrom the given contextvectorzand the expected dependency parsing tree s. Given a dependency tree s,EandGform anantoencoder. We thus have the following reconstruction loss to train our tree-based autoencoder:L=ExX[logpG(xjs;E(x;s)] (1)As Figure 1 suggests, AdvCodec can operate on different granularity levels to generate either word-level or sentence-level contextual representation, and decode it into the adversarial text. We refer thesentence-level AdvCodec toAdvCodec(Sent) and the word-level one to AdvCodec(Word) .Both of them will be described in more details in the later part of this section.3.1 O VERVIEW OF THE AD VCO D E C FRAMEWORKBefore diving into details, we provide a high level overview of AdvCodec according to the attackscenario and attack capability supported by this framework.Attack Scenario. Different from the previous adversarial text generation works (Lei et al., 2018;Cheng et al., 2018; Papernot et al., 2016; Miyato et al., 2016; Alzantot et al., 2018) that directly mod-ify critical words in place and might risk changing the semantic meaning or editing the ground truthanswers, we are generating the concatenative adversaries . First proposed by Jia & Liang (2017), theconcatenative adversary does not change any words in the original paragraph or question, but insteadappends a new adversarial sentence to the paragraph to fool the model. However, the concatenativeattack also needs to ensure the appended sentence is compatible (Jia & Liang, 2017) with the origi-nal paragraph, which in other words means it should not contradict any stated facts in the paragraph,especially the correct answer. In our work, we further push the concept of concatenative adversariesfurther and propose a more general notion called scatter attack , which means we can inject adversar-ial words sporadically over the whole paragraph. The concatenative adversarial example falls intoour case when those adversarial tokens form a sentence and on the same time the semantic meaningof the sentence does not contradict the original paragraph. Examples of concatenative attack andscatter attack is shown in table 1.Table 1: In the sentiment analysis task, we perform concatenative attack and scatter attack. 
Concatenativeadversary does not change existing context but instead appends the adversarial sentence to the paragraph, whilescatter attack scatters adversarial tokens over the whole passage. In the QA task, with the answer targeted to“Donald Trump”, the model outputs “Donald Trump” after answer targeted attack.Task Input(red=Inserted or appended tokens) Model PredictionSentimentAnalysisConcatenative Attack: I kept expecting to see chickens andchickens walking around. ... This place is like a steinbeck novelcome to life. I kept expecting to see donkeys and chickens walk-ing around. wooo-pig-soooeeee this place is awful!!!Negative!PositiveScatter Attack: ... She was rude and racist , she did not helpme at all! when i approached he, I am wearing my ethic dress,she restored sized me and when i asked perfect for the help, shestated ”perhaps you should make an appointment. ” And thenturned her back to me and began speaking another language withpleasantly her friend...Negative!PositiveQAQuestion: What is the smallest geographical region discussed? Riverside !Paragraph: Its counties of Los Angeles, Orange, San Diego, SanBernardino, and Riverside are the five most populous in the stateand all are in the top 15 most populous counties in the UnitedStates. a simplest geographic regions discuss donald trump.donald trumpAttack Capability. AdvCodec is essentially an optimization based framework to find the adver-sarial texts with the optimization goal set to achieve targeted attack . For the sentiment classificationtask,AdvCodec can perform targeted attack to make the original positive reviews be classified asthe most negative one, and vice versa. Particularly in the QA task, we design and implement twokinds of targeted attack: position targeted attack andanswer targeted attack . A successful positiontargeted attack means the model can be fooled to output the answers at specific targeted positions inthe paragraph, but the content on the targeted span cannot be guaranteed. In contrast, a successfulanswer targeted attack is a stronger targeted attack, which refers to the situation when the model al-ways outputs the preset targeted answer pair on the target no matter what the question looks like. An4Under review as a conference paper at ICLR 2020example of word targeted attack can be found in the table 1. Although our framework is designed asa whitebox attack, our experimental results demonstrate our whitebox generated adversarial wordscan transfer to other blackbox models with high attack success rate. Finally, because AdvCodec isa unified adversarial text generation framework whose outputs are discrete tokens, it can be appliedto different downstream NLP tasks. In this paper, we perform adversarial evaluation on sentimentclassification and QA as examples to demonstrate how our framework is adapted to different works.3.2 A DVCODEC (SENT)LSTM Cell LSTM CellLSTM CellROOTcat<root><amod><amod>sleepybrown<det>alies<nsubj>floor<nmod>...Figure 2: The tree decoder. Each node in the depen-dency tree is a LSTM cell. Black lines refer to thedependencies between parent and child nodes. Redarrows refer to the directions of decoding. 
Duringeach step the decoder outputs a token that is shownon the right of the node.In this subsection, we describeAdvCodec(Sent) and explain howto utilize it to attack sentiment classificationmodels and question answering systems.The main idea comes from the fact that treestructures sometimes have better perfor-mances than sequential recurrent models(Liet al., 2015; Iyyer et al., 2014; 2018) andthe fact that it is inherently flexible toadd perturbations on hierarchical nodes ofthe tree structures. Motivated by this, wedesign a novel tree-based autoencoder tosimultaneously preserve similar semanticmeaning and original syntactic structure.Encoder. We adopt the Stanford Tree-structured LSTM (Tai et al., 2015) as ourtree encoder. In the encoding phase, featuresare extracted and summed from bottom (leafnode, i.e. word) to top (root node) along the dependency tree, extracted by Stanford CoreNLPParser (Manning et al., 2014). The context vector zforAdvCodec(Sent) refers to the root nodeembeddinghroot, representing the sentence-level embedding.Decoder. Following the same dependency tree, we design the text decoder as illustrated in Figure2. In the decoding phase, we start from the root node and traverse along the dependency tree inlevel-order. The hidden state hjof the next node jcomes from (i) the hidden state hiof the currenttree node, (ii) current node predicted word embedding wi, and (iii) the dependency embedding dijbetween the current node iand the next node jbased on the dependency tree. The next node’scorresponding word yjis generated based on the output of the LSTM Cell ojvia a linear layer thatmaps from the hidden presentation ojto the logits that represent the probability distribution of thetree’s vocabulary.oj;hj=LSTM ([hi;wi;dij]) (2)yj=Woj+b (3)3.2.1 A TTACK SENTIMENT CLASSIFICATION MODELInitial Seed. Following our pipeline to optimize adversarial sentence AdvSentence appended tothe paragraph, we need to first start with an initial seed for optimization. Such initial seed for senti-ment classification task can be arbitrary. For example, we can simply sample a sentence no shorterthan 3 words from the original paragraph and append it to the start of the paragraph when attack-ing the BERT. The append position does have a influence on the attack success rate for adversarialattack, and more detailed ablation analysis will be discussed in the next section.Optimization Procedure. Finding the optimal perturbation zon context vector z, we aim to findzthat solvesminimizejjzjjp+cf(z+z); (4)wherefis the objective function for the targeted attack and cis the constant balancing between theperturbation magnitude and attack target. Specifically, we use the objective function fproposed inCarlini & Wagner (2016) as followsf(z0) =max(maxfZ(G(z0;s))i:i6=tgZ(G(z0;s))t;) (5)5Under review as a conference paper at ICLR 2020wherez0=z+z,tis the target class, Z()is the logit output of the classification model beforesoftmax and is the confidence score to adjust the misclassification rate. The optimal solution isiteratively searched via Adam optimizer (Kingma & Ba, 2014).3.2.2 A TTACK QUESTION ANSWERING SYSTEMInitial Seed. Different from attacking sentiment analysis, it is important to choose a good initialseed that is semantically close to the context or the question when attacking QA model. In this waywe can reduce the number of iteration steps and attack the QA model more efficiently. Based on theheuristic experiments conducted in the Appendix A.4, we choose to use question words to craft aninitial seed. 
We design a set of coarse grained rules to convert a question sentence to a meaningfuldeclarative statement and assign a target fake answer. The fake answer can be crafted accordingto the perturbed model’s predicted answer, or can be manually chosen by adversaries. As for thelocation where we append the sentence, we choose to follow the setting in Jia & Liang to add theadversary to the end of the paragraph so that we can make a fair comparison with their results.It is worth noting unlike Jia & Liang (2017) that uses complicated rules to ensure the adversarialsentence does not change the ground truth, this heuristic step is the very first step of our frameworkfollowed by a series of optimization steps to ensure the ground truth is not changed. In this paper, weensure our appended adversarial sentences are not contradictory to the ground truth by a) choosingan initial sentence as the initial seed of optimization, b) adding perturbation to the sentence, c)searching for the optimal adversarial sentence, d) ensuring that the adversarial sentence and contextsentence are disjoint, otherwise keep the iteration steps. If the maximum steps are reached, theoptimization is regarded as a failure.Optimization Procedure. We follow the same optimization procedure as attacking sentiment clas-sification task except a subtle change of the objective function fdue to the difference between QAmodel and classification model:f(z0)=2Xj=1max(maxfZj(G(z0;s))i:i6=tgZj(G(z0;s))tj;) (6)whereZ1()is the output logits of answer starting position and Z2()is the output logits of answerending position in the QA system. t1andt2are respectively the targeted start position and thetargeted end position. For the position targeted attack mentioned in Section 3.1, we expect the modeloutput to be a span in the paragraph from the targeted start position t1to the targeted end positiont2. In contrast, the answer targeted attack requires the model to output the predefined answer spansin the targeted positions and keep them unmodified during the optimization steps by setting gatesto the targeted answer span: yj=g1yj+g2xj;(j=t1;t1+ 1;:::;t 2),whereyjrefers tothe tree decoded adversarial tokens. We set g1= 1 andg2= 0 in the position targeted attack, andg1= 0andg2= 1in the answer targeted attack.3.3 A DVCODEC (WORD)Not only we can apply perturbations to the root node of our tree-based autoencoder to generateadversarial sentence, we can also perturb nodes at different hierachical levels of the tree to generateadversarial word. The most general case is that the perturbation is directly exerted on the leaf nodeof the tree autoencoder, i.e. the word-level perturbation.AdvCodec(Word) shares the exactly same architectures and optimization steps mentionedabove to attack the targeted models. The distinction between AdvCodec(Word) andAdvCodec(Sent) is the context vector z. Formally for the word-level attack, the contextvectorzare the concatenation of leaf node embedding zi(which corresponds to each word)z= [z1;z2;:::;z n]. 
Different from the AdvCodec(Sent) that perturbation is added on thewhole sentence, we can control where the perturbations are added by assigning each node a mask asfollows:z0i=zi+maskzi (7)When we expect some token zito be adversarially changed, we can simply assign mask = 1, thusadding the perturbation on the token.As the perturbation can be controlled on individual words, we propose a new attack scenario scatterattack , which scatters some initial tokens over the paragraph, adds perturbation only to those tokens6Under review as a conference paper at ICLR 2020and find the best adversarial tokens via the same optimization procedure mentioned above. More-over, the concatenative adversarial examples (e.g. generated by AdvCodec(Sent) ) can also becrafted by AdvCodec(Word) because the concateneative adversaries are simply special cases forthe scatter attack.4 E XPERIMENTAL RESULTSIn this section we will present the experimental evaluation results for AdvCodec . In particular, wetarget on two popular NLP tasks, sentiment classification and QA. For both models, we performwhitebox and transferability based blackbox attacks. In addition to the model accuracy (untargetedattack evaluation), we also report the targeted attack success rate for AdvCodec . We show that theproposed AdvCodec can outperform other state of the art baseline methods on different models.4.1 S ENTIMENT ANALYSISTask and Dataset. In this task, sentiment analysis model takes the user reviews from restaurantsand stores as input and is expected to predict the number of stars (from 1 to 5 star) that the userwas assigned. We choose the Yelp dataset (Challenge) for sentiment analysis task. It consists of2.7M yelp reviews, in which we follow the process of Lin et al. (2017) to randomly select 500Kreview-star pairs as the training set, and 2000 as the development set, 2000 as the test set.Model. BERT (Devlin et al., 2019) is a transformer (Vaswani et al., 2017) based model, whichis unsupervisedly pretrained on a large corpus and is proven to be effective for downstream NLPtasks. Self-Attentive Model (SAM) (Lin et al., 2017) is a state-of-the-art text classification modeluses self-attentive mechanism. More detailed model settings are listed in the appendix.Baseline. Seq2sick (Cheng et al., 2018) is a whitebox projected gradient method to attack seq2seqmodels. Here, we perform seq2sick attack on sentiment classification models by changing its lossfunction, which was not evaluated in the original paper. TextFooler (Jin et al., 2019) is a simple yetstrong blackbox attack method to generate adversarial text. Following the same setting, Seq2Sickand TextFooler is only allowed to edit the appended sentence or tokens.Adversarial Evaluation. We perform the baseline attacks and our AdvCodec attack in scatterattack scenario and concat attack scenario under the whitebox settings. Our targeted goal for senti-ment classification is the opposite sentiment. Specifically, we set the targeted attack goal as 5-starfor reviews originally below 3-star and 1-star for reviews above. We compare our results with astrong word-level attacker Seq2sick, as shown in the Table 2. We can see our AdvCodec(Word)outperforms the baselines and achieves nearly 100% attack success rate on the BERT model. Also,we realize the targeted success rate for AdvCodec(Sent) is lower than the word-level base-line. 
We assume the reason is that AdvCodec(Sent) has the dependency tree constraints duringdecoding phase, thus increasing the difficulty to find both grammatical correct and adversarial sen-tences that can successful attack. On the contrary, the Seq2Sick baseline can edit any words underno semantic or syntactic constraints. Moreover, our following human evaluation exactly confirmsAdvCodec(Sent) has better language quality.Scatter Attack v.s. Concat Attack. In addition, we find scatter attack success rate is slightly lowerthan concat attack. We think there are two reasons to explain this phenomenon: Firstly, the averagenumber of tokens added in scatter attack is 10, while the average number of tokens added in concatattack is 19. Therefore concat attack has the freedom to manipulate on more words than scatterTable 2: Whitebox attack success rates on sentiment analysis. Targeted attack success rate is mea-sured by how many examples are successfully attacked to output the targeted label in average, whileuntargeted attack success rate calculates the percentage of examples attacked to output a label dif-ferent from the ground truth. Adv( ) is short for our attack AdvCodec( ) at different levels.ModelOriginal Concat Attack Scatter AttackAcc Adv(Sent) Adv(Word) Seq2Sick Adv(Word) Seq2sickBERT 0.703target 0.466 0.990 0.974 0.976 0.946untarget 0.637 0.993 0.988 0.987 9.970SAM 0.704target 0.756 0.956 0.933 0.869 0.570untarget 0.810 0.967 0.952 0.948 0.7117Under review as a conference paper at ICLR 2020attack, thus resulting in higher attack accuracy. Secondly, inserting adversarial tokens to differentpositions of the passage also affects the success rate, which is shown in Appendix A.5.Blackbox Attack. We perform transferability based blackbox attacks. We compare our blackboxattack success rate with the blackbox baseline TextFooler and blackbox Seq2Sick based on trans-ferability. Table 3 demonstrates our AdvCodec(Word) model still has the best blackbox targetedand untargeted success rate among all the baseline models.Table 3: Blackbox attack success rates on sentiment analysis. The transferability-based blackboxattack uses adversarial text generated from whitebox BERT model to attack blackbox SAM, andvice versa. TF is short for TextFooler.ModelConcat Attack Scatter AttackAdv(Sent) Adv(Word) Seq2Sick TF Adv(Word) Seq2sick TFBERTtarget 0.187 0.499 0.218 0.042 0.298 0.156 0.107untarget 0.478 0.686 0.510 0.318 0.574 0.445 0.392SAMtarget 0.335 0.516 0.333 0.113 0.465 0.230 0.081untarget 0.533 0.669 0.583 0.395 0.679 0.498 0.3354.2 Q UESTION ANSWERING (QA)Task and Dataset. In this task, we choose the SQuAD dataset (Rajpurkar et al., 2016) for ques-tion answering task. The SQuAD dataset is a reading comprehension dataset consisting of 107,785questions posed by crowd workers on a set of Wikipedia articles, where the answer to each questionmust be a segment of text from the corresponding reading passage. To compare our method withother adversarial evaluation works (Jia & Liang, 2017) on the QA task, we evaluate our adversarialattacks on the same test set as Jia & Liang (2017), which consists of 1000 randomly sampled exam-ples from the SQuAD development set. We use the official script of the SQuAD dataset (Rajpurkaret al., 2016) to measure both adversarial exact match rates and F1 scores.Model. We adapt the BERT model to run on SQuAD v1.1 with the same strategy as that in Devlinet al. (2019), and we reproduce the result on the development set. 
BiDAF (Seo et al., 2016) is amulti-stage hierarchical process that represents the context at different levels of granularity and usesbidirectional attention flow mechanism to obtain a query-aware context representation.Baseline. Universal Adversarial Triggers (Wallace et al., 2019) are input-agnostic sequences oftokens that trigger a model to produce a specific prediction when concatenated to any input from adataset. Here, we compare the targeted attack ability of AdvCodec with it. AddSent (Jia & Liang,2017) appends a manually constructed legit distracting sentence to the given text so as to introducefake information, which can only perform untargeted attack.Adversarial Evaluation. We perform the whitebox attack with different attack methods on our test-ing models. As is shown in Table 4 , AdvCodec(Word) achieves the best whitebox attack resultson both BERT and BiDAF. It is worth noting although BERT has better performances than BiDAF,the performance drop for BERT F1BERT is55:4larger than the performance drop for BiDAFF1BiDAF = 53:0, which again proves the BERT is insecure under the adversarial evaluation. Wealso find the position targeted attack is slightly stronger than the answer targeted attack. We assumeit is because the answer targeted attack has fixed targeted answer and limited freedom to alter theappended sentence, but the position targeted attack has more freedom to alter the fake answer fromTable 4: Whitebox attack results on QA in terms of exact match rates and F1 scores by the officialevaluation script. The lower EM and F1 scores mean the better attack success rate.Model OriginPosition Targeted Attack Answer Targeted Attack Baseline (untargeted)Adv(Sent) Adv(Word) Adv(Sent) Adv(Word) AddSentBERTEM 81.2 49.1 29.3 50.9 43.2 46.8F1 88.6 53.8 33.2 55.2 47.3 52.6BiDAFEM 60.0 29.3 15.0 30.2 21.0 25.3F1 70.6 34.0 17.6 34.4 23.6 32.08Under review as a conference paper at ICLR 2020the targeted position spans. We also tried the scatter attack on QA though the performances arenot good. It turns out QA systems highly rely on the relationship between questions and contextualclues, which is hard to break when setting an arbitrary token to a target answer. We discussed inAppendix A.3 the untargeted scatter attack can work well and outperform the baseline methods.Table 5: Targeted Attack Results of whitebox attack onQA. Here, the targeted exact match rates and targeted F1Score measures how many model outputs match the tar-geted fake answers. Higher targeted EM and F1 meanhigher targeted attack success rate. UT is short for Uni-versal Trigger baseline.Model Adv(Sent) Adv(Word) UTBERTtarget EM 32.1 43.4 1.4target F1 32.4 46.5 2.1BiDAFtarget EM 53.3 71.2 21.2target F1 56.8 75.6 22.6Then we test the targeted results ofwhitebox attack methods on QA mod-els. The results are shown in Table 5.It shows that AdvCodec(Word) hasthe best targeted attack ability on QA.And all our attack methods outperformthe baseline(Universal Triggers) when itcomes to the targeted results.Blackbox Attack. We also transfer ad-versarial texts generated from whiteboxattacks to perform blackbox attacks. Ta-ble 6 shows the result of the blackboxattack on testing models. All our pro-posed methods outperform the baselinemethod(AddSent) when transferring the adversaries among models with same architectures.Table 6: BlackBox attack results on QA in terms of exact match rates and F1 scores. 
Thetransferability-based blackbox attack uses adversarial text generated from whitebox models (an-notated as (w)) to attack different blakcbox models (annotated as (b)).From AttackPosition Targeted Attack Answer Targeted Attack Baseline (untargeted)Adv(Sent) Adv(Word) Adv(Sent) Adv(Word) AddSentBiDAF (w)BERT (b)EM 57.7 52.8 58.7 51.7 46.4F1 62.9 57.5 63.7 55.9 51.9BiDAF (b)EM 26.7 18.9 26.4 20.5 22.3F1 31.3 22.5 30.6 24.1 27.8BERT (w)BERT (b)EM 47.0 32.3 49.6 45.2 46.4F1 52.0 36.4 54.2 49.0 51.9BiDAF (b)EM 30.4 29.2 29.8 28.9 22.3F1 35.5 34.5 35.3 34.2 27.85 H UMAN EVALUATIONWe conduct a thorough human subject evaluation to assess the human response to different types ofgenerated adversarial text. The main conclusion is that even though these adversarial examples areeffective at attacking machine learning models, they are much less noticeable by humans.5.1 C OMPARISON OF ADVERSARIAL TEXT QUALITYTo understand what humans think of our adversarial data quality, we present the adversarial textgenerated by AdvCodec(Sent) andAdvCodec(Word) based on the same initial seed. Humanparticipants are asked to choose which data they think has better language quality.Table 7: Human evaluation on ad-versarial text quality aggregated bymajority vote.Method Maj V oteAdvCodec(Sent) 65.67%AdvCodec(Word) 34.33%In this experiement, we prepare 600adversarial text pairs fromthe same paragraphs and initial seeds. We hand out these pairsto28Amazon Turks. Each turk is required to annotate at least20 pairs and at most 140 pairs to ensure the task has been wellunderstood. We assign each pair to at least 5 unique turks andtake the majority votes over the responses. Human evalua-tion results are shown in Table 7, from which we see that theoverall vote ratio for AdvCodec(Sent) is66%, meaningAdvCodec(Sent) has better language quality than AdvCodec(Word) from a human perspec-tive. This is due to the fact that AdvCodec(Sent) more fully harness the tree-based autoencoderstructure compared to AdvCodec(Sent) . And it is no surprise that better language quality comes9Under review as a conference paper at ICLR 2020at the expense of a lower adversarial success rate. As Table 2 shows, the adversarial targeted suc-cess rate of AdvCodec(Sent) on SAM is 20% lower than that of AdvCodec(Word) , whichconfirms the trade-off between language quality and adversarial success rate.5.2 H UMAN PERFORMANCE ON ADVERSARIAL TEXTTable 8: Human performance on Sentiment AnalysisMethod Majority AccOrigin 0.95AdvCodec(Word) 0.82AdvCodec(Sent) 0.82Table 9: Human performance on QAMethod Majority F1Origin 90.987AdvCodec(Word) 82.897AdvCodec(Sent) 81.784To ensure that our generated adversarial text are compatible with the original paragraph, we askhuman participants to perform the sentiment classification and question answering task both onthe original dataset and adversarial dataset. Adversarial dataset on sentiment classification consistsofAdvCodec(Sent) concatenative adversarial examples and AdvCodec(Word) scatter attackexmaples. Adversarial dataset on QA consists of concatenative adversarial examples genereated bybothAdvCodec(Sent) andAdvCodec(Word) . More specifically, we respectively prepare 100benign and adversarial data pairs for both QA and sentiment classification, and hand out them to505Amazon Turks. Each turk is requested to answer at least 5 question and at most 15 questionsfor the QA task and judge the sentiment for at least 10 paragraphs and at most 20 paragraphs forthe sentiment classification task. 
We also perform a majority vote over Turk’s answers for the samequestion. The human evaluation results are displayed in Table 8 and Table 9, from which we seethat most of our concatenated adversarial text are compatible to the paragraph. While we can spota drop from the benign to adversarial datasets, we conduct an error analysis in QA and find theerror examples are noisy and not necessarily caused by our adversarial text. For adversarial datain the sentiment classification task, we notice that the generated tokens or appended sentences haveopposite sentiment from the benign one. However, our evaluation results show human readers cannaturally ignore abnormal tokens and make correct judgement according to the context.6 D ISCUSSION AND FUTURE WORKSBesides the conclusions pointed out in the Introduction section, we also summarize some interestingfindings: (1) While AdvCodec(Word) achieves best attack success rate among multiple tasks, weobserve a trade-off between the freedom of manipulation and the attack capability. For instance,AdvCodec(Sent) has dependency tree constraints and becomes more natural for human readersthan but less effective to attack models than AdvCodec(Word) . Similarly, the answer targetedattack in QA has fewer words to manipulate and change than the position targeted attack, and there-fore has slightly weaker attack performances. (2) Scatter attack is as effective as concat attack insentiment classification task but less successful in QA, because QA systems make decisions highlybased on the contextual correlation between the question and the paragraph, which makes it difficultto set an arbitrary token as our targeted answer. (3) Transferring adversarial text from models withbetter performances to weaker ones is more successful. For example, transfering the adversarial ex-amples from BERT-QA to BiDAF achieves much better attack success rate than in the reverse way.(4) We also notice adversarial examples have better transferability among the models with similararchitectures than different architectures. (5) BERT models pay more attention to the both ends ofthe paragraphs and tend to overlook the content in the middle, as shown in Appendix A.5 ablationstudy that adding adversarial sentences in the middle of the paragraph is less effective than in thefront or the end. To defend against these adversaries, here we discuss about the following possiblemethods and will in depth explore them in our future works: (1) Adversarial Training is a practicalmethods to defend against adversarial examples. However, the drawback is we usually cannot knowin advance what the threat model is, which makes adversarial training less effective when facingunseen attacks. (2) Interval Bound Propagation (IBP) (Dvijotham et al., 2018) is proposed as anew technique to theoretically consider the worst-case perturbation. Recent works (Jia et al., 2019;Huang et al., 2019) have applied IBP in the NLP domain to certify the robustness of models. (3)Language models including GPT2 (Radford et al., 2019) may also function as an anomaly detectorto probe the inconsistent and unnatural adversarial sentences.10Under review as a conference paper at ICLR 2020<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #3 ### Review Text This paper proposes a new attack framework AdvCodec for adversarial text generation. The main idea is to use a tree-based autoencoder to embed text data into the continuous vector space and then optimize to find the adversarial perturbation in the vector space. 
The authors consider two types of attacks: concat attack and scatter attack. Experimental results on sentiment analysis and question answering, together with human evaluation on the generated adversarial text, are provided. Overall, this paper has a nice idea: use tree autoencoders to embed text into vector space and perform optimization in the vector space. On the other hand, it is not clear to me why the proposed method would not change the ground truth answer for QA. Currently the authors claim to achieve this by carefully choosing the initial sentence as the initial point of optimization, which seems a bit heuristic. The authors could add more discussion on this and more experimental results to justify this claim. ### Review Rating 6: Weak Accept ### Review Confidence <|im_end|> <|im_end|>
GANphlMvsa
NoDaLiDa/2023/Conference
2023
Is Part-of-Speech Tagging a Solved Problem for Icelandic?
["\u00d6rvar K\u00e1rason", "Hrafn Loftsson"]
We train and evaluate four Part-of-Speech tagging models for Icelandic. Three are older models that obtained the highest accuracy for Icelandic when they were introduced. The fourth model is of a type that currently reaches state-of-the-art accuracy. We use the most recent version of the MIM-GOLD training/testing corpus, its newest tagset, and augmentation data to obtain results that are comparable between the various models. We examine the accuracy improvements with each model and analyse the errors produced by our transformer model, which is based on a previously published ConvBERT model. For the set of errors that all the models make, and for which they predict the same tag, we extract a random subset for manual inspection. Extrapolating from this subset, we obtain a lower bound estimate on annotation errors in the corpus as well as on some unsolvable tagging errors. We argue that further tagging accuracy gains for Icelandic can still be obtained by fixing the errors in MIM-GOLD and, furthermore, that it should still be possible to squeeze out some small gains from our transformer model.
["Part-of-Speech Tagging", "Icelandic", "Transformer", "ConvBERT", "error analysis", "annotator disagreement", "annotation errors"]
Is Part-of-Speech Tagging a Solved Problem for Icelandic?

Örvar Kárason, Department of Computer Science, Reykjavik University, Iceland, orvark13@ru.is
Hrafn Loftsson, Department of Computer Science, Reykjavik University, Iceland, hrafn@ru.is

Abstract
We train and evaluate four Part-of-Speech tagging models for Icelandic. Three are older models that obtained the highest accuracy for Icelandic when they were introduced. The fourth model is of a type that currently reaches state-of-the-art accuracy. We use the most recent version of the MIM-GOLD training/testing corpus, its newest tagset, and augmentation data to obtain results that are comparable between the various models. We examine the accuracy improvements with each model and analyse the errors produced by our transformer model, which is based on a previously published ConvBERT model. For the set of errors that all the models make, and for which they predict the same tag, we extract a random subset for manual inspection. Extrapolating from this subset, we obtain a lower bound estimate on annotation errors in the corpus as well as on some unsolvable tagging errors. We argue that further tagging accuracy gains for Icelandic can still be obtained by fixing the errors in MIM-GOLD and, furthermore, that it should still be possible to squeeze out some small gains from our transformer model.

1 Introduction
Part-of-Speech (POS) tagging is a sequential labelling task in which each token, i.e., words, symbols, and punctuation in running text, is assigned a morphosyntactic tag. It is an important step for many Natural Language Processing applications. A token is ambiguous when it has more than one possible tag. The source of ambiguity is polysemy in the form of homographs from the same word class, from different word classes, and also within the declension paradigms of the same word. The task, therefore, entails examining the token itself and its context for clues for predicting the correct tag. For the last mentioned type of ambiguity, which is prevalent in Icelandic, it is necessary to find another unambiguous token in the context that the target token shows agreement with and use it to determine the correct target tag.

Over the last two decades, steady progress has been made in POS tagging for Icelandic. Various taggers have been presented throughout this period that improved on previous state-of-the-art (SOTA) methods (Rögnvaldsson et al., 2002; Helgadóttir, 2005; Loftsson, 2008; Dredze and Wallenberg, 2008; Loftsson et al., 2009, 2011; Loftsson and Östling, 2013; Steingrímsson et al., 2019; Snæbjarnarson et al., 2022; Daðason and Loftsson, 2022; Jónsson and Loftsson, 2022).

Work on Icelandic corpora has also progressed. Existing corpora have undergone error correction phases (Barkarson et al., 2021), and, in some cases, been expanded with new data (Barkarson et al., 2022). A new larger gold standard corpus for POS tagging, MIM-GOLD (Loftsson et al., 2010), was created to replace the older standard, the Icelandic Frequency Dictionary (IFD, Pind et al. 1991), and multiple alterations have been made to the fine-grained Icelandic tagset (Steingrímsson et al., 2018; Barkarson et al., 2021).

All this variability over the years means that previously reported results for POS taggers are not easily comparable. Thus, we train and test four data-driven taggers that have been employed for Icelandic (see Section 3), using the latest version of MIM-GOLD and its underlying tagset, as well as the latest versions of augmentation data (see Section 2). We obtain SOTA tagging accuracy by training and fine-tuning a ConvBERT-base model in a slightly different manner than previously reported by Daðason and Loftsson (2022) (see Section 3).

With the latest tagging method based on the transformer model finally reaching above 97% per-token accuracy for Icelandic (Jónsson and Loftsson, 2022; Snæbjarnarson et al., 2022; Daðason and Loftsson, 2022), the generally believed limit of inter-annotator agreement (Manning, 2011), we might ask ourselves if POS tagging is now a solved problem for Icelandic. Indeed, our evaluation results show that the tagging accuracy of our ConvBERT-base model is close to 98% (see Table 3). A large portion of the remaining errors can be explained by 1) a lack of context information to make the correct prediction, and 2) annotation errors or other faults in the training/testing corpus itself. Addressing the latter should give further gains. Furthermore, some small additional gains could be squeezed out of the transformer model, by using a larger model and pre-training it on more data. When this is done, we may be able to argue that POS tagging is a solved problem for Icelandic.

The rest of this paper is structured as follows. In Sections 2 and 3, we describe the data and the models, respectively, used in our experiments. We present the evaluation results in Section 4, and detailed error analysis in Section 5. Finally, we conclude in Section 6.

2 Data
In this section, we describe the data and the tagset used in our work.

2.1 Corpus
The MIM-GOLD corpus is a curated subset of the MIM corpus (Helgadóttir et al., 2012) and was semi-automatically tagged using a combination of taggers (Loftsson et al., 2010). Version 21.05 of the corpus contains 1 million running words from 13 different text types, of which about half originate from newspapers and books (see Table 1). All versions of MIM-GOLD include the same 10-fold splits for use in cross-validation.[1]

Table 1: Information about the various text types in MIM-GOLD, adapted from Loftsson et al. (2010).
Text type                        % of all
Newspaper Morgunblaðið           24.9
Books                            23.5
Blogs                            13.4
Newspaper Fréttablaðið            9.4
The Icelandic Web of Science      9.1
Websites                          6.5
Laws                              4.1
School essays                     3.4
Written-to-be-spoken              1.9
Adjudications                     1.3
Radio news scripts                1.1
Web media                         0.8
E-mails                           0.5
Total                           100.0

MIM-GOLD was created to replace the IFD as the gold standard for POS tagging of Icelandic texts. The IFD corpus was sourced from books published in the eighties and has a clear literary and standardized language slant. Steingrímsson et al. (2019) reported a 1.11 percentage point (pp) lower per-token accuracy for MIM-GOLD compared to the IFD.

[1] Version 21.05 is available at http://hdl.handle.net/20.500.12537/114

2.2 Morphological lexicon
Version 22.09 of the Database of Modern Icelandic Inflection (DMII) (Bjarnadóttir, 2012), which is now a part of the Database of Icelandic Morphology (Bjarnadóttir et al., 2019), contains 6.9 million inflectional forms and about 330 thousand declension paradigms.[2] Though the database cannot be used directly to train a POS tagger, as there is no context or distributional information for the word forms, it has been used to augment taggers during training and to help with tagging unknown words (words not seen during training) (Loftsson et al., 2011; Steingrímsson et al., 2019).

[2] https://bin.arnastofnun.is/DMII/LTdata/

2.3 Pre-training corpus
The Icelandic Gigaword Corpus (IGC), which includes text sources from multiple varied domains, has been expanded annually since 2018 (Barkarson et al., 2022). The motivation for constructing the IGC was, inter alia, to make the development of large Icelandic language models possible (Steingrímsson et al., 2018). The 2021 version used in our work contains about 1.8 billion tokens.[3]

[3] Version 2021 is available at http://hdl.handle.net/20.500.12537/192

2.4 Tagset
The MIM-GOLD tagset v. 2 is the fourth iteration of the fine-grained tagset that is exclusively used for modern Icelandic and has its origin in the IFD. The tagset consists of 571 possible tags, of which 557 occur in MIM-GOLD.

The tags are morphosyntactic encodings consisting of one to six characters, each denoting some feature. The first character denotes the lexical category and is, in some cases, followed by a sub-category character. For each category, a fixed number of additional feature characters follow, e.g., gender, number and case for nouns; degree and declension for adjectives; and voice, mood and tense for verbs. To illustrate, consider the word form konan ('the woman'). The corresponding tag is nveng, denoting noun (n), feminine (v), singular (e), nominative (n) case, and definite suffixed article (g).

3 Models
In this section, we describe the four data-driven POS tagging models we trained and evaluated:

• TriTagger (Loftsson et al., 2009) is a reimplementation of TnT (Brants, 2000), a second order (trigram) Hidden Markov model. The probabilities of the model are estimated from a training corpus using maximum likelihood estimation. Assignments of POS tags to tokens are found by optimising the product of lexical probabilities p(w_i | t_i) and contextual probabilities p(t_i | t_{i-1}, t_{i-2}) (where w_i and t_i are the i-th word and tag, respectively). When work on creating a tagger for Icelandic started at the turn of the century, five existing data-driven taggers were tested on the IFD corpus (Helgadóttir, 2005). TnT obtained the highest accuracy and has often been included for comparison in subsequent work.

• IceStagger (Loftsson and Östling, 2013) is an averaged perceptron model (Collins, 2002), an early and simple version of a neural network.[4] It learns binary feature functions from predefined templates. The templates are hand-crafted and can reference adjacent words, previous tags, and various custom matching functions applied to them. The templates, intended to capture dependencies specific to Icelandic, were developed against the IFD. During training, the algorithm learns which feature functions are good indicators of the assigned tag, given the context available to the templates. It does that by adjusting the weight associated with the feature function. The highest-scoring tag sequence is approximated using beam search. Both IceStagger and TriTagger use data from the DMII to help with guessing the tags for unknown tokens.

[4] IceStagger and TriTagger are included in the IceNLP toolkit (Loftsson and Rögnvaldsson, 2007): https://github.com/hrafnl/icenlp

• ABLTagger v. 1 (Steingrímsson et al., 2019; Jónsson and Loftsson, 2022) is based on a bidirectional long short-term memory (BiLSTM) model.[5] That model is an extension of LSTMs (Hochreiter and Schmidhuber, 1997) that can be employed when the input is the whole sequence. Two LSTMs are trained on the input, with the second traversing it in reverse (Graves and Schmidhuber, 2005). The input for ABLTagger consists of both word and character embeddings. The model is augmented with n-hot vectors created from all the potential lexical features of the word forms from the DMII. ABLTagger was developed against the IFD but was the first tagger to be applied to MIM-GOLD.

[5] ABLTagger v. 1 is available at https://hdl.handle.net/20.500.12537/53

• ConvBERT (Jiang et al., 2020) is an improved version of the BERT model (Vaswani et al., 2017; Devlin et al., 2019) that is more efficient and accurate. We used an existing ConvBERT-base model pre-trained on the IGC by Daðason and Loftsson (2022)[6] and fine-tuned it for tagging on MIM-GOLD. This is a standard pre-trained transformer model with two changes: the embeddings of the first and last subwords are concatenated (first+last subword pooling) to generate the token representations (Schuster and Nakajima, 2012), and we continued the pre-training of the ConvBERT-base model using the training data of each fold from MIM-GOLD for three epochs before fine-tuning it for tagging for 10 epochs with the same data. Each modification gave a 0.07 pp improvement in accuracy, i.e. 0.14 pp in total.[7]

[6] https://huggingface.co/jonfd/convbert-base-igc-is
[7] See https://github.com/orvark13/postr/ for training and evaluation scripts, as well as fine-tuned models.

4 Results
We evaluated the four models by applying 10-fold cross-validation (CV) using the standard splits in MIM-GOLD (see Section 2). The results are shown in Table 2.

Table 2: Token and sentence tagging accuracy for the four models.
                 Token acc.   Sent. acc.
TriTagger        91.01%       35.58%
IceStagger       92.72%       42.74%
ABLTagger v1     94.56%       49.11%
ConvBERT-base    97.79%       73.43%

The transformer model, ConvBERT-base, obtains 6.78 pp higher accuracy than the HMM model (TriTagger), which is equivalent to a 75.42% reduction in errors!

The increase in sentence accuracy, which is often overlooked, is also very impressive. It has more than doubled, and now close to 3/4 of the sentences are correct. Sentences come in different lengths, ranging from a single token up to 1,334 tokens in MIM-GOLD, and increased length can result in increased complexity. Figure 1 shows the length distribution of sentences with no errors. The figure shows both general accuracy gains as well as an improvement in handling longer sentences.

Figure 1: Distributions of correctly tagged sentences. The legend shows each set's median (Mdn) and mean (M).

Figure 2: The accuracy improvements between the models for the more frequent lexical categories. Solid lines are the per-token accuracy for all tags in that category, and dashed lines are the lexical class accuracy, i.e., the tag category is correct but there is some error in the predicted features. Errors within the categories diminish as those lines converge.

4.1 Accuracy improvements
TriTagger and IceStagger are limited to a three-token window and they need frequency information of tokens to learn from. As is to be expected, IceStagger gains accuracy according to the feature templates pre-defined for it. ABLTagger's improvements come from the BiLSTM's context window being the whole sentence, thereby being able to detect long-range dependencies. Its ability to see within the token by means of the character embeddings helps it handle tokens not seen during training. Augmenting the model with data from the DMII also helps with unknown words.

The source of improvement for the transformer model is mainly threefold. First, the attention mechanism aids it in selecting the right dependencies (e.g., when there is more than one option), and it is detecting longer long-range dependencies than the BiLSTM model. We see this from the examination of the predictions, and it is also indicated by the model's success with longer sentences, as is evident in the shape of its distribution in Figure 1. Secondly, the model is often able to discern the different semantic senses of ambiguous tokens. We assume this stems from the contextual word embeddings in the large pre-trained ConvBERT language model. Finally, it benefits from all the language sense from the IGC infused in the language model during pre-training.

Figure 2 shows the accuracy improvements of the models for the more frequent lexical categories.

4.2 Transformer models and SOTA
In Table 3, we show previously reported results for transformer models pre-trained on the IGC, and the results of our transformer, a ConvBERT-base model trained and fine-tuned slightly differently compared to Daðason and Loftsson (2022) (see Section 3), evaluated in the same manner for comparison.

Table 3: Accuracy results for different POS transformer models pre-trained on the IGC, and the accuracy of our transformer model when fine-tuned and evaluated in a comparable manner. [1] were reported in Daðason and Loftsson (2022), [2] in Snæbjarnarson et al. (2022), and [3] in Jónsson and Loftsson (2022).
POS Transformer Model                  Accuracy
IceBERT-IGC [1]                        97.37%
ConvBERT-base [1]                      97.75%
Our ConvBERT-base                      97.79%
Excluding x and e tags:
IceBERT-IGC, multi-label [2]           98.27%
Our ConvBERT-base                      98.14%
9-fold CV, excluding x and e errors:
DMS, ELECTRA-base [3]                  97.84%
Our ConvBERT-base                      98.00%

Two of the papers cited in the table report results excluding the x and e tags, either both during training and evaluation or only during evaluation. These tags are used for unanalysed tokens and foreign words, respectively, and have the lowest category accuracies, the reasons for which will become apparent in Section 5. Not counting tagging errors for these two tags increases reported accuracy by 0.21 pp for our model. Excluding those tags from training, by fixing their weights to zero, increases the reported accuracy by a further 0.14 pp, because, in this case, the model is no longer able to assign these two tags erroneously to tokens.

The current SOTA is a multi-label model based on IceBERT-large[8] (Snæbjarnarson et al., 2022). Multi-label classification means that the tags are split into individual features, e.g., lexical category, tense, gender, number, and the model is trained to predict each separately. Treating composite tags as multiple labels has been shown to improve POS tagging accuracy, especially when training data is scarce (Tkachenko and Sirts, 2018). Combining the predictions back into tags is dependent on knowledge about the composition of the tags. The results presented in Table 3 show that our ConvBERT-base model obtains SOTA results for single-label models applied to Icelandic.

[8] IceBERT is based on a RoBERTa model (Liu et al., 2019).

5 Error analysis
In this section, we first present an analysis of the most frequent errors, and, second, the results of our analysis of the different sources of errors.

5.1 Most frequent errors
Table 4 shows the most frequent errors made by our transformer model. The list for the BiLSTM model is very similar, but with about double the accuracy degradation. The 12 most frequent errors are in fact six pairs of tags where the confusion between each pair occurs in either direction.

Table 4: The 12 most frequent tagging errors our transformer model makes. The rightmost column shows accuracy degradation in percentage points for each error type.
No.   Predicted tag → gold tag   Degradation in pp
1.    n—s→e                      0.07
2.    e→n—s                      0.07
3.    af→aa                      0.05
4.    aa→af                      0.05
5.    nheo→nhfo                  0.03
6.    fpheþ→faheþ                0.03
7.    nveþ→nveo                  0.03
8.    nhfo→nheo                  0.02
9.    nveo→nveþ                  0.02
10.   ct→c                       0.02
11.   c→ct                       0.02
12.   faheþ→fpheþ                0.02

The most frequent confusion is n—s→e (and e→n—s), or between foreign proper names and foreign words.[9] More than half, 0.04 pp for both error types, are due to words not seen during training. According to the MIM-GOLD tagging guidelines, compound foreign names should have the first word tagged as a foreign proper name (n—s), and then the rest of the name tagged as foreign words (e), except for names of persons and places, which should have all parts tagged as foreign proper names (n—s). The tag n—s is also used for abbreviations of foreign proper names, e.g., BBC. There are also some special cases that deviate from these rules (Barkarson et al., 2021). A significant portion of these tagging errors is indeed caused by annotation errors in the corpus (mostly n—s→e), as well as by the fact that the application of the rules requires world knowledge that the models of course lack.

[9] We denote a tagging error with a→b, where a is the predicted tag and b is the gold tag. The tag n—s stands for a proper noun without markings for gender, number, or case.

Confusion between adverbs and prepositions (which are annotated in MIM-GOLD as adverbs that govern case), i.e., aa→af (and af→aa), are the next most frequent errors. Some of these tagging errors are due to cases where there is a clause between the preposition and the object, or where the object has been moved to the front of the sentence. There also seem to be a fair number of annotation errors associated with this confusion between adverbs and prepositions.

A confusion between personal and demonstrative pronouns, fpheþ→faheþ (and faheþ→fpheþ), is caused by the antecedent being out of context or being a whole clause. Understanding the clause is often necessary to make the distinction. These are all the same word form, því ('it' or 'this, that'). For því/fpheþ→faheþ, we see some improvement in accuracy with the transformer model over the other models, but for því/faheþ→fpheþ, we notice the only case of lower accuracy for the transformer model compared to the others. The tags here are for neuter (h) singular (e) in the dative case (þ). There are identical confusions for the accusative and genitive cases, but those tokens are not as frequent.

The c→ct (and ct→c) errors are comparative conjunctions being marked as relativizers (a subordinating conjunction indicating a relative clause) and vice versa. Except for a few antiquated uses of er, these cases are all the word form sem ('as' or 'who, whom, that, which'). The conjunction sem subsumed er's role as a relativizer in Old Icelandic. This language change was feasible due to their syntactic structures being identical (Kemmer, 1984). Semantically their function is similar, as one complements and the other modifies a noun phrase with the following clause. The difference is in this role of the relation. Therefore, the remaining tagging errors for sem are caused by a lack of syntactic and contextual information to make the correct prediction. Indeed, Loftsson et al. (2009) suggested that the two tag categories be merged.

The errors nheo→nhfo (and nhfo→nheo) are confusions between the singular (e) and plural (f) forms of neuter nouns (nh...). When this error occurs, the context is usually not enough to determine the correct number. A wider context, previous sentences, or general knowledge is needed, and might even not be enough.
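Most of the error pairs in Table 4 differ in a single feature character of the tag, and a small decoder for the positional scheme of Section 2.4 makes this explicit. The sketch below is hypothetical and covers noun tags only, using the feature values named in this paper; the masculine and genitive codes are assumptions from the broader tagset, not taken from this text, and the full inventory is larger (see the MIM-GOLD tagset documentation).

```python
# Positional feature maps for noun tags (category, gender, number, case,
# article). Values marked "assumed" are not named in the paper itself.
GENDER  = {"k": "masculine (assumed)", "v": "feminine", "h": "neuter"}
NUMBER  = {"e": "singular", "f": "plural"}
CASE    = {"n": "nominative", "o": "accusative",
           "þ": "dative", "e": "genitive (assumed)"}
ARTICLE = {"g": "definite suffixed article"}

def decode_noun_tag(tag):
    """Decode a MIM-GOLD noun tag such as 'nveng' into named features."""
    assert tag.startswith("n"), "this sketch only handles noun tags"
    features = {
        "category": "noun",
        "gender":   GENDER.get(tag[1], tag[1]),
        "number":   NUMBER.get(tag[2], tag[2]),
        "case":     CASE.get(tag[3], tag[3]),
    }
    if len(tag) > 4:
        features["article"] = ARTICLE.get(tag[4], tag[4])
    return features

# konan ('the woman'): noun, feminine, singular, nominative, with article.
print(decode_noun_tag("nveng"))
# The nveþ/nveo confusion from Table 4 differs only in the case slot:
print(decode_noun_tag("nveþ")["case"], "vs", decode_noun_tag("nveo")["case"])
```

Comparing decoded features rather than raw tags is also one way to compute the lexical-class accuracy shown as dashed lines in Figure 2.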
Finally, nveþ→nveo (and nveo→nveþ) are confusions between the dative (þ) and accusative (o) cases of feminine nouns (nv...). The word that governs the case needs to be in the context; if it is omitted, the distinction cannot be made. Moreover, if it can govern both cases, the required semantic information is unavailable.

One other group of errors should be mentioned, ∗→x, where ∗ is any tag and the x tag denotes unanalysed tokens. This error is obscured because the predictions are distributed over many tags. These are tokens that contain spelling mistakes or constitute grammar errors, and they are the majority of the 2,777 tokens in the unanalysed tag category. Of the four models, the transformer does best with this tag category but is only predicting 58% correctly. Without changing how the spelling mistakes are annotated in MIM-GOLD, or simply excluding sentences containing them, this will continue to be a source of about 0.12 pp accuracy degradation. As the corpus also contains tokens with such mistakes that are not annotated as unanalysed, it would be in line with current practice to look to the intended meaning of these tokens and tag them accordingly.

5.2 Sources of errors
Manning (2011) discusses the generally perceived 97% token accuracy upper limit for POS tagging. At that time, those accuracy numbers had been reached for English, but Icelandic, a morphologically richer language with a very fine-grained tagset, had a long way to go. Rögnvaldsson et al. (2002) had earlier suggested 98% as the highest possibly achievable goal for Icelandic, because of inter-annotator disagreement. Manning reasons that the disagreement might actually be higher, but says it is mitigated with annotator guidelines and by adjusting tag categories. Besides disagreement, subjectivity in annotation and the possibility of more than one right choice make up what Plank (2022) calls human label variation.

Manning samples errors the Stanford POS Tagger (Toutanova et al., 2003) makes when applied to a portion of the Penn Treebank corpus. He analyses the errors to try to understand if and how tagging accuracy could be further improved. He finds that the largest opportunity for gains is in improving the linguistic resources used to train the tagger. Before the initial release of MIM-GOLD, Steingrímsson et al. (2015) carried out an identical analysis on errors in both the IFD and MIM-GOLD when tagged with IceStagger. Their findings concurred with Manning's. We performed a similar analysis, though with a less detailed classification of the errors.

Figure 3: Venn diagram showing how prediction errors are shared between the four models.

Of the 1,000,218 tokens in MIM-GOLD, our transformer model makes 22,128 tagging errors. For 10,087 of these tokens, the three other taggers also make errors (see Figure 3), and for 5,526 of them, all four taggers agree on the predicted tag. From this set of errors, we drew a random sample of 500 for analysis. In this sample, we discovered 166 annotation errors, i.e., incorrect gold tags. For 150 of them, the taggers predicted the correct tag. Extrapolating to the superset gives us 1,658 tagging errors caused by gold errors (≈0.16 pp). We also found 87 cases where the prediction error was obviously caused by there being insufficient context information (≈0.09 pp), and 18 cases where it was likely caused by a spelling or grammar mistake (≈0.02 pp).
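The extrapolation behind these estimates is proportional scaling from the 500-error sample to the 5,526 shared errors, expressed against the 1,000,218 corpus tokens. A sketch of the arithmetic (the paper reports the slightly lower, truncated values of ≈0.16 pp and ≈0.09 pp):

```python
# Figures from Section 5.2.
TOKENS = 1_000_218   # tokens in MIM-GOLD
SHARED = 5_526       # errors on which all four taggers agree
SAMPLE = 500         # manually inspected random subset

def extrapolate(found_in_sample):
    """Scale a sample count to the shared-error set and to pp of all tokens."""
    estimated = SHARED * found_in_sample / SAMPLE
    return estimated, 100 * estimated / TOKENS

for label, n in [("gold tag wrong, prediction right", 150),
                 ("insufficient context", 87),
                 ("spelling or grammar mistake", 18)]:
    est, pp = extrapolate(n)
    print(f"{label}: ~{est:.0f} errors, ~{pp:.2f} pp")
# gold tag wrong, prediction right: ~1658 errors, ~0.17 pp
# insufficient context: ~962 errors, ~0.10 pp
# spelling or grammar mistake: ~199 errors, ~0.02 pp
```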
The last error class (spelling or grammar mistakes) is aggravated by the use of the unanalysed tag (x) for such mistakes in the corpus. Table 5 shows the accuracy degradation for each of these error classes. Though we cannot draw conclusions from these findings about the frequency of these errors in the whole set of 22,128 errors, it is safe to assume these are the lower bounds of these error categories.

Table 5: Estimated accuracy degradation in percentage points caused by each class in the set of prediction errors that all four taggers agree on.
Error class                      pp
Annotation errors                0.16
Insufficient context             0.09
Spelling or grammar mistakes     0.02
Unexplained                      0.25
Total                            0.52

6 Conclusions and Future Work
For Icelandic POS tagging, we have reached a point where individual error categories no longer stand out and annotation errors in the corpus are more pronounced, as well as inconsistencies stemming from human label variation.

Clear annotation errors can be corrected in the corpus, and the tagging guidelines and tag categories can be refined to remove some of the inconsistencies. Further gains can as well be squeezed out of the transformer model by using a larger model, i.e., ConvBERT-large instead of ConvBERT-base, increasing the vocabulary size, training it on the 2022 version of the IGC, which adds 549 million tokens, and fine-tuning the hyperparameters for the tagging model. Yet, on top of the annotator disagreement, there will always be errors because of a lack of information in the context, as well as the scarcity of examples to learn from for the long tail of infrequent tags.

For MIM-GOLD, that unsolvable part of the tagging errors seems to amount to less than 2 pp. Therefore, with a little more work, we should be able to confidently pass that 98% accuracy goal (when training and evaluating using the whole tagset) envisioned twenty years ago. A good starting point would be to search for and fix those estimated 1,658 annotation errors in MIM-GOLD, which are a subset of the tagging errors that all four models agree on.

To conclude, POS tagging for Icelandic is very close to being solved!

References
Starkaður Barkarson, Steinþór Steingrímsson, and Hildur Hafsteinsdóttir. 2022. Evolving Large Text Corpora: Four Versions of the Icelandic Gigaword Corpus. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2371–2381, Marseille, France. European Language Resources Association.
Starkaður Barkarson, Þórdís Dröfn Andrésdóttir, Hildur Hafsteinsdóttir, Árni Davíð Magnússon, Kristján Rúnarsson, Steinþór Steingrímsson, Haukur Páll Jónsson, Hrafn Loftsson, Einar Freyr Sigurðsson, Eiríkur Rögnvaldsson, and Sigrún Helgadóttir. 2021. MIM-GOLD. Release notes with version 21.05.
Kristín Bjarnadóttir, Kristín Ingibjörg Hlynsdóttir, and Steinþór Steingrímsson. 2019. DIM: The Database of Icelandic Morphology. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 146–154, Turku, Finland. Linköping University Electronic Press.
Kristín Bjarnadóttir. 2012. The Database of Modern Icelandic Inflection. In Proceedings of the SaLTMiL-AfLaT Workshop on Language Technology for Normalisation of Less-Resourced Languages, LREC 2012, Istanbul, Turkey.
Thorsten Brants. 2000. TnT: A statistical part-of-speech tagger. In Applied Natural Language Processing Conference (ANLP), pages 224–231.
Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 1–8. Association for Computational Linguistics.
Jón Friðrik Daðason and Hrafn Loftsson. 2022. Pre-training and Evaluating Transformer-based Language Models for Icelandic. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7386–7391, Marseille, France. European Language Resources Association.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Mark Dredze and Joel Wallenberg. 2008. Icelandic Data Driven Part of Speech Tagging. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-HLT, Columbus, OH, USA.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks, 18(5-6):602–610.
Sigrún Helgadóttir. 2005. Testing data-driven learning algorithms for PoS tagging of Icelandic. In H. Holmboe, editor, Nordisk Sprogteknologi 2004. Museum Tusculanums Forlag, Copenhagen.
Sigrún Helgadóttir, Ásta Svavarsdóttir, Eiríkur Rögnvaldsson, Kristín Bjarnadóttir, and Hrafn Loftsson. 2012. The Tagged Icelandic Corpus (MÍM). In Proceedings of the SaLTMiL-AfLaT Workshop on Language Technology for Normalisation of Less-Resourced Languages, LREC 2012, Istanbul, Turkey.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Zi-Hang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. 2020. ConvBERT: Improving BERT with Span-based Dynamic Convolution. In Advances in Neural Information Processing Systems, volume 33, pages 12837–12848. Curran Associates, Inc.
Haukur Jónsson and Hrafn Loftsson. 2022. DMS: A System for Delivering Dynamic Multitask NLP Tools. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 1: NLPinAI, pages 504–510. INSTICC, SciTePress.
Suzanne Kemmer. 1984. From Comparative to Relativizer: The Case of Icelandic Sem. Annual Meeting of the Berkeley Linguistics Society, 10:296–306.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Hrafn Loftsson. 2008. Tagging Icelandic text: A linguistic rule-based approach. Nordic Journal of Linguistics, 31(1):47–72.
Hrafn Loftsson, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2011. Using a Morphological Database to Increase the Accuracy in POS Tagging. In Proceedings of Recent Advances in Natural Language Processing, RANLP 2011, Hissar, Bulgaria.
Hrafn Loftsson, Ida Kramarczyk, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2009. Improving the PoS tagging accuracy of Icelandic text. In Proceedings of the 17th Nordic Conference on Computational Linguistics (NODALIDA 2009), Odense, Denmark. Northern European Association for Language Technology (NEALT).
Hrafn Loftsson and Robert Östling. 2013. Tagging a Morphologically Complex Language Using an Averaged Perceptron Tagger: The Case of Icelandic. In Proceedings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013), pages 105–119, Oslo, Norway. Linköping University Electronic Press, Sweden.
Hrafn Loftsson and Eiríkur Rögnvaldsson. 2007. IceNLP: A natural language processing toolkit for Icelandic. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, volume 1, pages 1533–1536.
Hrafn Loftsson, Jökull H. Yngvason, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2010. Developing a PoS-tagged corpus using existing tools. In Proceedings of the 7th SaLTMiL Workshop on Creation and Use of Basic Lexical Resources for Less-Resourced Languages, LREC 2010, Valetta, Malta.
Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In Alexander Gelbukh, editor, Conference on Intelligent Text Processing and Computational Linguistics (CICLing), volume 6608 of Lecture Notes in Computer Science, pages 171–189. Springer.
Jörgen Pind, Friðrik Magnússon, and Stefán Briem. 1991. Íslensk orðtíðnibók [Icelandic frequency dictionary]. Orðabók Háskólans, Reykjavik.
Barbara Plank. 2022. The "Problem" of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi. Association for Computational Linguistics.
Eiríkur Rögnvaldsson, Auður Rögnvaldsdóttir, Kristín Bjarnadóttir, and Sigrún Helgadóttir. 2002. Vélræn málfræðigreining með námfúsum markara [Automatic language analysis using a transformation-based tagger]. Orð og tunga, Reykjavik, 6:1–9.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pages 5149–5152.
Vésteinn Snæbjarnarson, Haukur Barri Símonarson, Pétur Orri Ragnarsson, Svanhvít Lilja Ingólfsdóttir, Haukur Jónsson, Vilhjalmur Thorsteinsson, and Hafsteinn Einarsson. 2022. A Warm Start and a Clean Crawled Corpus – A Recipe for Good Language Models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4356–4366, Marseille, France. European Language Resources Association.
Steinþór Steingrímsson, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2015. Analysing inconsistencies and errors in PoS tagging in two Icelandic gold standards. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 287–291, Vilnius, Lithuania. Linköping University Electronic Press, Sweden.
Steinþór Steingrímsson, Sigrún Helgadóttir, Eiríkur Rögnvaldsson, Starkaður Barkarson, and Jón Guðnason. 2018. Risamálheild: A Very Large Icelandic Text Corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
Steinþór Steingrímsson, Örvar Kárason, and Hrafn Loftsson. 2019. Augmenting a BiLSTM Tagger with a Morphological Lexicon and a Lexical Category Identification Step. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 1161–1168, Varna, Bulgaria.
Alexander Tkachenko and Kairit Sirts. 2018. Modeling Composite Labels for Neural Morphological Tagging. In Conference on Computational Natural Language Learning.
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252–259.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
mTENbntPeRK
The authors of this paper train and test four data-driven POS-tagging models for Icelandic and conclude that, with very little additional effort, POS tagging could be called a solved problem for Icelandic. However, the text types of the gold standard are not described, and so it is impossible to say whether POS tagging is really solved for the majority of Icelandic text types.
7: Good paper, accept
The authors of this paper have trained and tested four data-driven POS taggers (TriTagger, IceStagger, ABLTagger and ConvBERT) on a new large gold-standard corpus (ca. 1 million tokens). The best model, ConvBERT-base, achieves 97.79% token accuracy and 73.43% sentence accuracy. Based on error analysis, the authors infer that after fixing errors in the gold data and using a larger model, it should be possible to pass the 98% accuracy goal. Their conclusion is that the POS tagging problem for Icelandic is very close to being solved. However, "Icelandic" is, in this article, limited to the gold standard corpus. The corpus is quite big, containing ca. 1 million tokens, but the text classes (13 altogether) of this corpus are not described or even listed in the article, and so it is difficult to say which language varieties it contains and how representative it really is, and whether the gold data also contains spoken language, etc.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Is Part-of-Speech Tagging a Solved Problem for Icelandic? ### Paper Abstract We train and evaluate four Part-of-Speech tagging models for Icelandic. Three are older models that obtained the highest accuracy for Icelandic when they were introduced. The fourth model is of a type that currently reaches state-of-the-art accuracy. We use the most recent version of the MIM-GOLD training/testing corpus, its newest tagset, and augmentation data to obtain results that are comparable between the various models. We examine the accuracy improvements with each model and analyse the errors produced by our transformer model, which is based on a previously published ConvBERT model. For the set of errors that all the models make, and for which they predict the same tag, we extract a random subset for manual inspection. Extrapolating from this subset, we obtain a lower bound estimate on annotation errors in the corpus as well as on some unsolvable tagging errors. We argue that further tagging accuracy gains for Icelandic can still be obtained by fixing the errors in MIM-GOLD and, furthermore, that it should still be possible to squeeze out some small gains from our transformer model. ### Paper Keywords ["Part-of-Speech Tagging", "Icelandic", "Transformer", "ConvBERT", "error analysis", "annotator disagreement", "annotation errors"] ### Paper Content Is Part-of-Speech Tagging a Solved Problem for Icelandic?Örvar KárasonDepartment of Computer ScienceReykjavik UniversityIcelandorvark13@ru.isHrafn LoftssonDepartment of Computer ScienceReykjavik UniversityIcelandhrafn@ru.isAbstractWe train and evaluate four Part-of-Speechtagging models for Icelandic. Three areolder models that obtained the highestaccuracy for Icelandic when they wereintroduced. The fourth model is of atype that currently reaches state-of-the-artaccuracy. We use the most recent versionof the MIM-GOLD training/testing cor-pus, its newest tagset, and augmentationdata to obtain results that are comparablebetween the various models. We exam-ine the accuracy improvements with eachmodel and analyse the errors produced byour transformer model, which is based ona previously published ConvBERT model.For the set of errors that all the modelsmake, and for which they predict the sametag, we extract a random subset for manualinspection. Extrapolating from this subset,we obtain a lower bound estimate on anno-tation errors in the corpus as well as onsome unsolvable tagging errors. We arguethat further tagging accuracy gains for Ice-landic can still be obtained by fixing theerrors in MIM-GOLD and, furthermore,that it should still be possible to squeezeout some small gains from our transformermodel.1 IntroductionPart-of-Speech (POS) tagging is a sequential la-belling task in which each token, i.e., words, sym-bols, and punctuation in running text is assigneda morphosyntactic tag. It is an important step formany Natural Language Processing applications.A token is ambiguous when it has more than onepossible tag. The source of ambiguity is polysemyin the form of homographs from the same wordclass, from different word classes, and also withinthe declension paradigms of the same word. Thetask, therefore, entails examining the token itselfand its context for clues for predicting the cor-rect tag. 
For the last mentioned type of ambigu-ity, which is prevalent in Icelandic, it is necessaryto find another unambiguous token in the contextthat the target token shows agreement with and useit to determine the correct target tag.Over the last two decades, steady progress hasbeen made in POS tagging for Icelandic. Varioustaggers have been presented throughout this periodthat improved on previous state-of-the-art (SOTA)methods (Rögnvaldsson et al., 2002; Helgadóttir,2005; Loftsson, 2008; Dredze and Wallenberg,2008; Loftsson et al., 2009, 2011; Loftsson andÖstling, 2013; Steingrímsson et al., 2019; Snæ-bjarnarson et al., 2022; Daðason and Loftsson,2022; Jónsson and Loftsson, 2022).Work on Icelandic corpora has also progressed.Existing corpora have undergone error correctionphases (Barkarson et al., 2021), and, in somecases, been expanded with new data (Barkarsonet al., 2022). A new larger gold standard cor-pus for POS tagging, MIM-GOLD (Loftsson et al.,2010), was created to replace the older standard,theIcelandic Frequency Dictionary (IFD, Pindet al. 1991), and multiple alterations have beenmade to the fine-grained Icelandic tagset (Stein-grímsson et al., 2018; Barkarson et al., 2021).All this variability over the years means thatpreviously reported results for POS taggers are noteasily comparable. Thus, we train and test fourdata-driven taggers that have been employed forIcelandic (see Section 3), using the latest versionof MIM-GOLD and its underlying tagset, as wellas the latest versions of augmentation data (seeSection 2). We obtain SOTA tagging accuracy bytraining and fine-tuning a ConvBERT-base modelin a slightly different manner than previously re-ported by Daðason and Loftsson (2022) (see Sec-tion 3).With the latest tagging method based on thetransformer model finally reaching above 97%per-token accuracy for Icelandic (Jónsson andLoftsson, 2022; Snæbjarnarson et al., 2022; Daða-son and Loftsson, 2022), the generally be-lieved limit of inter-annotator agreement (Mann-ing, 2011), we might ask ourselves if POS taggingis now a solved problem for Icelandic. Indeed, ourevaluation results show that the tagging accuracyof our ConvBERT-base model is close to 98% (seeTable 3). A large portion of the remaining errorscan be explained by 1) a lack of context inform-ation to make the correct prediction, and 2) anno-tation errors or other faults in the training/testingcorpus itself. Addressing the latter should givefurther gains. Furthermore, some small additionalgains could be squeezed out of the transformermodel, by using a larger model and pre-trainingit on more data. When this is done, we may beable to argue that POS tagging is a solved problemfor Icelandic.The rest of this paper is structured as follows.In Sections 2 and 3, we describe the data and themodels, respectively, used in our experiments. Wepresent the evaluation results in Section 4, and de-tailed error analysis in Section 5. Finally, we con-clude in Section 6.2 DataIn this section, we describe the data and the tagsetused in our work.2.1 CorpusThe MIM-GOLD corpus is a curated subset of theMIM corpus (Helgadóttir et al., 2012) and wassemi-automatically tagged using a combination oftaggers (Loftsson et al., 2010). Version 21.05 ofthe corpus contains 1 million running words from13 different text types, of which about half origi-nate from newspapers and books (see Table 1). 
Allversions of MIM-GOLD include the same 10-foldsplits for use in cross-validation.1MIM-GOLD was created to replace the IFD asthe gold standard for POS tagging of Icelandictexts. The IFD corpus was sourced from bookspublished in the eighties and has a clear literaryand standardized language slant. Steingrímssonet al. (2019) reported a 1.11 percentage point (pp)1Version 21.05 is available at http://hdl.handle.net/20.500.12537/114Text type % of allNewspaper Morgunblaðið 24.9Books 23.5Blogs 13.4Newspaper Fréttablaðið 9.4The Icelandic Web of Science 9.1Websites 6.5Laws 4.1School essays 3.4Written-to-be-spoken 1.9Adjudications 1.3Radio news scripts 1.1Web media 0.8E-mails 0.5Total 100.0Table 1: Information about the various text typesin MIM-GOLD, adapted from Loftsson et al.(2010).lower per-token accuracy for MIM-GOLD com-pared to the IFD.2.2 Morphological lexiconVersion 22.09 of the Database of Modern Ice-landic Inflection (DMII) (Bjarnadóttir, 2012),which is now a part of the Database of IcelandicMorphology (Bjarnadóttir et al., 2019), contains6.9 million inflectional forms and about 330 thou-sand declension paradigms.2Though the databasecannot be used directly to train a POS tagger, asthere is no context or distributional informationfor the word forms, it has been used to augmenttaggers during training and help with tagging un-known words (words not seen during training)(Loftsson et al., 2011; Steingrímsson et al., 2019).2.3 Pre-training corpusThe Icelandic Gigaword Corpus (IGC), which in-cludes text sources from multiple varied domains,has been expanded annually since 2018 (Barkar-son et al., 2022). The motivation for construct-ing the IGC was, inter alia , to make the devel-opment of large Icelandic language models pos-sible (Steingrímsson et al., 2018). The 2021 ver-sion used in our work contains about 1.8 billiontokens.32https://bin.arnastofnun.is/DMII/LTdata/3Version 2021 is available at http://hdl.handle.net/20.500.12537/1922.4 TagsetThe MIM-GOLD tagset v. 2 is the fourth iterationof the fine-grained tagset that is exclusively usedfor modern Icelandic and has its origin in the IFD.The tagset consists of 571 possible tags, of which557 occur in MIM-GOLD.The tags are morphosyntactic encodings con-sisting of one to six characters, each denotingsome feature. The first character denotes the lex-ical category and is, in some cases, followed bya sub-category character. For each category, afixed number of additional feature characters fol-low, e.g., gender ,number and case for nouns;degree anddeclension for adjectives; and voice ,mood andtense for verbs. To illustrate, considerthe word form konan (‘the woman’). The corre-sponding tag is nveng , denoting noun ( n), feminine(v), singular ( e), nominative ( n) case, and definitesuffixed article ( g).3 ModelsIn this section, we describe the four data-drivenPOS tagging models we trained and evaluated:•TriTagger (Loftsson et al., 2009) is a reim-plementation of TnT (Brants, 2000), a sec-ond order (trigram) Hidden Markov model.The probabilities of the model are estimatedfrom a training corpus using maximum like-lihood estimation. Assignments of POS tagsto tokens is found by optimising the productof lexical probabilities ( p(wi|tj)) and contex-tual probabilities ( p(ti|ti−1, ti−2)) (where wiandtiare the ithword and tag, respectively).When work on creating a tagger for Icelandicstarted at the turn of the century, five existingdata-driven taggers were tested on the IFDcorpus (Helgadóttir, 2005). 
TnT obtained thehighest accuracy and has often been includedfor comparison in subsequent work.•IceStagger (Loftsson and Östling, 2013)is an averaged perceptron model (Collins,2002), an early and simple version of a neu-ral network.4It learns binary feature func-tions from predefined templates. The tem-plates are hand-crafted and can reference ad-jacent words, previous tags, and various cus-tom matching functions applied to them. The4IceStagger and TriTagger are included in the IceNLPtoolkit (Loftsson and Rögnvaldsson, 2007): https://github.com/hrafnl/icenlptemplates, intended to capture dependenciesspecific to Icelandic, were developed againstthe IFD. During training, the algorithm learnswhich feature functions are good indicatorsof the assigned tag, given the context avail-able to the templates. It does that by ad-justing the weight associated with the featurefunction. The highest-scoring tag sequenceis approximated using beam search. BothIceStagger and TriTagger use data from theDMII to help with guessing the tags for un-known tokens.•ABLTagger v. 1 (Steingrímsson et al., 2019;Jónsson and Loftsson, 2022) is based on abidirectional long short-term memory (Bi-LSTM) model.5That model is an exten-sion of LSTMs (Hochreiter and Schmidhu-ber, 1997) that can be employed when theinput is the whole sequence. Two LSTMsare trained on the input, with the secondtraversing it in reverse (Graves and Schmid-huber, 2005). The input for ABLTagger con-sists of both word and character embeddings.The model is augmented with n-hot vectorscreated from all the potential lexical featuresof the word forms from the DMII. ABL-Tagger was developed against the IFD butwas the first tagger to be applied to MIM-GOLD.•ConvBERT (Jiang et al., 2020) is an im-proved version of the BERT model (Vaswaniet al., 2017; Devlin et al., 2019) that is moreefficient and accurate. We used an exist-ing ConvBERT-base model pre-trained onthe IGC by Daðason and Loftsson (2022)6and fine-tuned it for tagging on MIM-GOLD.This is a standard pre-trained transformermodel with two changes: the embeddingsof the first and last subwords are con-catenated ( first+last subword pooling) togenerate the token representations (Schusterand Nakajima, 2012), and we continued thepre-training of the ConvBERT-base modelusing the training data of each fold fromMIM-GOLD for three epochs before fine-tuning it for tagging for 10 epochs with thesame data. Each modification gave a 0.07 pp5ABLTagger v. 1 is available at https://hdl.handle.net/20.500.12537/536https://huggingface.co/jonfd/convbert-base-igc-isToken acc. Sent. acc.TriTagger 91.01% 35.58%IceStagger 92.72% 42.74%ABLTagger v1 94.56% 49.11%ConvBERT-base 97.79% 73.43%Table 2: Token and sentence tagging accuracy forthe four models.improvement in accuracy; i.e. 0.14 pp in to-tal.74 ResultsWe evaluated the four models by applying 10-foldcross-validation (CV) using the standard splitsin MIM-GOLD (see Section 2). The resultsare shown in Table 2. The transformer model,ConvBERT-base, obtains 6.78 pp higher accuracythan the HMM model (TriTagger), which is equiv-alent to a 75.42% reduction in errors!The increase in sentence accuracy, which isoften overlooked, is also very impressive. It hasmore than doubled and now close to34of the sen-tences are correct. Sentences come in differentlengths, ranging from a single token up to 1,334tokens in MIM-GOLD, and increased length canresult in increased complexity. 
Figure 1 showsthe length distribution of sentences with no errors.The figure shows both general accuracy gains aswell as an improvement in handling longer sen-tences.Figure 1: Distributions of correctly tagged sen-tences. The legend shows each set’s median (Mdn)and mean (M).7Seehttps://github.com/orvark13/postr/for training and evaluation scripts, as well as fine-tunedmodels.Figure 2: The accuracy improvements between themodels for the more frequent lexical categories.Solid lines are the per-token accuracy for all tagsin that category, and dashed lines are the lexicalclass accuracy, i.e., the tag category is correct butthere is some error in the predicted features. Errorswithin the categories diminish as those lines con-verge.4.1 Accuracy improvementsTriTagger and IceStagger are limited to a three-token window and they need frequency inform-ation of tokens to learn from. As is to be ex-pected, IceStagger gains accuracy according to thefeature templates pre-defined for it. ABLTagger’simprovements come from the BiLSTM’s contextwindow being the whole sentence and it, thereby,being able to detect long-range dependencies. Itsability to see within the token by means of thecharacter embeddings helps it handle tokens notseen during training. Augmenting the model withdata from DMII also helps with unknown words.The source of improvement for the transformermodel is mainly threefold. First, the attentionmechanism aids it in selecting the right depend-encies (e.g., when there is more than one option),and it is detecting longer long-range dependenciesthan the BiLSTM model. We see this from theexamination of the predictions and it is also indi-cated by the model’s success with longer sentencesas is evident in the shape of its distribution in Fig-ure 1. Secondly, the model is often able to dis-cern the different semantic senses of ambiguoustokens. We assume this stems from the contextualword embeddings in the large pre-trained Conv-BERT language model. Finally, it benefits fromall the language sense from the IGC infused in thePOS Transformer Model AccuracyIceBERT-IGC [1] 97.37%ConvBERT-base [1] 97.75%Our ConvBERT-base 97.79%Excluding xandetagsIceBERT-IGC, multi-label [2] 98.27%Our ConvBERT-base 98.14%9-fold CV , excluding xandeerrorsDMS, ELECTRA-base [3] 97.84%Our ConvBERT-base 98.00%Table 3: Accuracy results for different POS trans-former models pre-trained on IGC and the accu-racy of our transformer model when fine-tunedand evaluated in a comparable manner. [1] werereported in Daðason and Loftsson (2022), [2] inSnæbjarnarson et al. (2022), and [3] in Jónssonand Loftsson (2022).language model during pre-training.Figure 2 shows the accuracy improvements ofthe models for the more frequent lexical cate-gories.4.2 Transformer models and SOTAIn Table 3, we show previously reported results fortransformer models pre-trained on the IGC, andthe results of our transformer, a ConvBERT-basemodel trained and fine-tuned slightly differentlycompared to Daðason and Loftsson (2022) (seeSection 3), evaluated in the same manner for com-parison. Two of the papers cited in the table reportresults excluding the xandetags, either both dur-ing training and evaluation or only during evalu-ation. These tags are used for unanalysed tokensand foreign words, respectively, and have the low-est category accuracies, the reasons for which willbecome apparent in Section 5. Not counting tagg-ing errors for these two tags increases reportedaccuracy by 0.21 pp for our model. 
Excludingthose tags from training, by fixing their weights tozero, increases the reported accuracy by a further0.14 pp, because, in this case, the model is nolonger able to assign these two tags erroneouslyto tokens.The current SOTA is a multi-label model basedon IceBERT-large8(Snæbjarnarson et al., 2022).Multi-label classification means that the tags aresplit into individual features, e.g., lexical category ,8IceBERT is based on a RoBERTa model (Liu et al.,2019).tense ,gender ,number , and the model is trained topredict each separately. Treating composite tagsas multiple labels has been shown to improve POStagging accuracy, especially when training datais scarce (Tkachenko and Sirts, 2018). Combin-ing the predictions back into tags is dependenton knowledge about the composition of the tags.The results presented in Table 3 show that ourConvBERT-base model obtains SOTA results forsingle-label models applied to Icelandic.5 Error analysisIn this section, we, first, present an analysis of themost frequent errors, and, second, the results ofour analysis of the different sources of errors.5.1 Most frequent errorsTable 4 shows the most frequent errors made byour transformer model. The list for the BiLSTMmodel is very similar, but with about double theaccuracy degradation. The 12 most frequent errorsare in fact six pairs of tags where the confusionbetween each pair occurs in either direction.The most frequent confusion is n—s→e(ande→n—s), or between foreign proper names andforeign words.9More than half, 0.04 pp for botherror types, are due to words not seen during train-ing. According to the MIM-GOLD tagging guide-lines, compound foreign names should have thefirst word tagged as a foreign proper name ( n—s), and then the rest of the name tagged as for-eign words ( e), except for names of persons andplaces that should have all parts tagged as foreignproper names ( n—s). The tag n—s is also usedfor abbreviations of foreign proper names, e.g.,BBC . There are also some special cases that devi-ate from these rules (Barkarson et al., 2021). Asignificant portion of these tagging errors is in-deed caused by annotation errors in the corpus(mostly n—s→e), as well as the fact that the appli-cation of the rules requires world knowledge thatthe models of course lack.Confusion between adverbs and prepositions(which are annotated in MIM-GOLD as adverbsthat govern case), i.e., aa→af(and af→aa) are thenext most frequent errors. Some of these taggingerrors are due to cases where there is a clause be-tween the preposition and the object, or where the9We denote a tagging error with a→bwhere ais the pre-dicted tag and bis the gold tag. The tag n—s stands for aproper noun without markings for gender, number, or case.Predicted tag DegradationNo.→gold tag in pp1.n—s→e 0.072.e→n—s 0.073.af→aa 0.054.aa→af 0.055.nheo→nhfo 0.036.fpheþ→faheþ 0.037.nveþ→nveo 0.038.nhfo→nheo 0.029.nveo→nveþ 0.0210. ct→c 0.0211. c→ct 0.0212. faheþ→fpheþ 0.02Table 4: The 12 most frequent tagging errors ourtransformer model makes. The rightmost columnshows accuracy degradation in percentage pointsfor each error type.object has been moved to the front of the sentence.There also seem to be a fair number of annotationerrors associated with this confusion between ad-verbs and prepositions.A confusion between personal and demonstra-tive pronouns, fpheþ→faheþ (and faheþ→fpheþ ),is caused by the antecedent being out of context orbeing a whole clause. Understanding the clause isoften necessary to make the distinction. 
5 Error analysis

In this section, we, first, present an analysis of the most frequent errors, and, second, the results of our analysis of the different sources of errors.

5.1 Most frequent errors

Table 4 shows the most frequent errors made by our transformer model. The list for the BiLSTM model is very similar, but with about double the accuracy degradation. The 12 most frequent errors are in fact six pairs of tags where the confusion between each pair occurs in either direction.

Table 4: The 12 most frequent tagging errors our transformer model makes. The rightmost column shows accuracy degradation in percentage points for each error type.

No. | Predicted tag → gold tag | Degradation in pp
1. | n—s → e | 0.07
2. | e → n—s | 0.07
3. | af → aa | 0.05
4. | aa → af | 0.05
5. | nheo → nhfo | 0.03
6. | fpheþ → faheþ | 0.03
7. | nveþ → nveo | 0.03
8. | nhfo → nheo | 0.02
9. | nveo → nveþ | 0.02
10. | ct → c | 0.02
11. | c → ct | 0.02
12. | faheþ → fpheþ | 0.02

The most frequent confusion is n—s→e (and e→n—s), or between foreign proper names and foreign words.⁹ More than half, 0.04 pp for both error types, are due to words not seen during training. According to the MIM-GOLD tagging guidelines, compound foreign names should have the first word tagged as a foreign proper name (n—s), and then the rest of the name tagged as foreign words (e), except for names of persons and places that should have all parts tagged as foreign proper names (n—s). The tag n—s is also used for abbreviations of foreign proper names, e.g., BBC. There are also some special cases that deviate from these rules (Barkarson et al., 2021). A significant portion of these tagging errors is indeed caused by annotation errors in the corpus (mostly n—s→e), as well as the fact that the application of the rules requires world knowledge that the models of course lack.

⁹ We denote a tagging error with a→b, where a is the predicted tag and b is the gold tag. The tag n—s stands for a proper noun without markings for gender, number, or case.

Confusion between adverbs and prepositions (which are annotated in MIM-GOLD as adverbs that govern case), i.e., aa→af (and af→aa), constitutes the next most frequent errors. Some of these tagging errors are due to cases where there is a clause between the preposition and the object, or where the object has been moved to the front of the sentence. There also seem to be a fair number of annotation errors associated with this confusion between adverbs and prepositions.

A confusion between personal and demonstrative pronouns, fpheþ→faheþ (and faheþ→fpheþ), is caused by the antecedent being out of context or being a whole clause. Understanding the clause is often necessary to make the distinction. These are all the same word form, því ('it' or 'this, that'). For því/fpheþ→faheþ, we see some improvement in accuracy with the transformer model over the other models, but for því/faheþ→fpheþ, we notice the only case of lower accuracy for the transformer model compared to the others. The tags here are for neuter (h) singular (e) in the dative case (þ). There are identical confusions for the accusative and genitive cases, but those tokens are not as frequent.

The c→ct (and ct→c) errors are comparative conjunctions being marked as relativizers (a subordinating conjunction indicating a relative clause) and vice versa. Except for a few antiquated uses of er, these cases are all the word form sem ('as' or 'who, whom, that, which'). The conjunction sem subsumed er's role as a relativizer in Old Icelandic. This language change was feasible due to their syntactic structures being identical (Kemmer, 1984). Semantically their function is similar, as one complements and the other modifies a noun phrase with the following clause. The difference is this role of the relation. Therefore, the remaining tagging errors for sem are caused by a lack of syntactic and contextual information to make the correct prediction. Indeed, Loftsson et al. (2009) suggested that the two tag categories be merged.

The errors nheo→nhfo (and nhfo→nheo) are confusions between the singular (e) and plural (f) forms of neuter nouns (nh...). When this error occurs, the context is usually not enough to determine the correct number. A wider context, previous sentences, or general knowledge is needed, and might even not be enough. Finally, nveþ→nveo (and nveo→nveþ) are confusions between the dative (þ) and accusative (o) cases of feminine nouns (nv...). The word that governs the case needs to be in the context; if it is omitted, the distinction cannot be made. Moreover, if it can govern both cases, the required semantic information is unavailable.

One other group of errors should be mentioned, ∗→x, where ∗ is any tag and the x tag denotes unanalysed tokens. This error is obscured because the predictions are distributed over many tags. These are tokens that contain spelling mistakes or constitute grammar errors, and they are the majority of the 2,777 tokens in the unanalysed tag category. Of the four models, the transformer does best with this tag category but is only predicting 58% correctly. Without changing how the spelling mistakes are annotated in MIM-GOLD, or simply excluding sentences containing them, this will continue to be a source of about 0.12 pp accuracy degradation. As the corpus also contains tokens with such mistakes that are not annotated as unanalysed, it would be in line with current practice to look to the intended meaning of these tokens and tag them accordingly.

5.2 Sources of errors

Manning (2011) discusses the generally perceived 97% token accuracy upper limit for POS tagging. At that time, those accuracy numbers had been reached for English, but Icelandic, a morphologically richer language with a very fine-grained tagset, had a long way to go. Rögnvaldsson et al. (2002) had earlier suggested 98% as the highest possibly achievable goal for Icelandic, because of inter-annotator disagreement. Manning reasons that the disagreement might actually be higher but says it is mitigated with annotator guidelines and adjusting tag categories. Besides disagreement, subjectivity in annotation and the possibility of more than one right choice make up what Plank (2022) calls human label variation.

Manning samples errors the Stanford POS Tagger (Toutanova et al., 2003) makes when applied to a portion of the Penn Treebank corpus. He analyses the errors to try to understand if and how tagging accuracy could be further improved. He finds that the largest opportunity for gains is in improving the linguistic resources used to train the tagger. Before the initial release of MIM-GOLD, Steingrímsson et al. (2015) carried out an identical analysis on errors in both the IFD and MIM-GOLD when tagged with IceStagger. Their findings concurred with Manning's. We performed a similar analysis, though with a less detailed classification of the errors.

Figure 3: Venn diagram showing how prediction errors are shared between the four models.

Of the 1,000,218 tokens in MIM-GOLD, our transformer model makes 22,128 tagging errors. For 10,087 of these tokens, the three other taggers also make errors (see Figure 3), and for 5,526 of them, all four taggers agree on the predicted tag. From this set of errors, we drew a random sample of 500 for analysis. In this sample, we discovered 166 annotation errors, i.e., incorrect gold tags. For 150 of them, the taggers predicted the correct tag. Extrapolating to the superset gives us 1,658 tagging errors caused by gold errors (≈0.16 pp). We also found 87 cases where the prediction error was obviously caused by there being insufficient context information (≈0.09 pp), and 18 cases where it was likely caused by a spelling or grammar mistake (≈0.02 pp). The last error class (spelling or grammar mistakes) is aggravated by the use of the unanalysed tag (x) for such mistakes in the corpus. Table 5 shows the accuracy degradation for each of these error classes. Though we cannot draw conclusions from these findings about the frequency of these errors in the whole set of 22,128 errors, it is safe to assume these are the lower bounds of these error categories.

Table 5: Estimated accuracy degradation in percentage points caused by each class in the set of prediction errors that all four taggers agree on.

Error class | pp
Annotation errors | 0.16
Insufficient context | 0.09
Spelling or grammar mistakes | 0.02
Unexplained | 0.25
Total | 0.52
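The shared-error analysis and the extrapolation above can be sketched as follows. The per-tagger error sets are hypothetical placeholders; in the real analysis each set would hold the positions of the tokens a tagger labels incorrectly.

```python
# Sketch of the shared-error analysis; the error sets below are toy stand-ins.
errors = {
    "transformer": {1, 2, 3, 5, 8, 13},
    "bilstm": {1, 2, 3, 5, 8, 21},
    "icestagger": {1, 2, 3, 8, 34},
    "tritagger": {1, 2, 5, 8, 55},
}
shared = set.intersection(*errors.values())
print("errors made by all four taggers:", sorted(shared))  # [1, 2, 8]

# Extrapolating annotation errors from the 500-error random sample: 150 of the
# sampled shared errors were gold-standard mistakes on which the taggers in
# fact predicted the correct tag.
n_shared_identical = 5_526                     # shared errors, identical predictions
estimated_gold_errors = n_shared_identical * 150 / 500
degradation_pp = 100 * estimated_gold_errors / 1_000_218   # corpus size in tokens
print(f"≈{estimated_gold_errors:.0f} gold errors, ≈{degradation_pp:.3f} pp")
```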
6 Conclusions and Future Work

For Icelandic POS tagging, we have reached a point where individual error categories no longer stand out and annotation errors in the corpus are more pronounced, as well as inconsistencies stemming from human label variation.

Clear annotation errors can be corrected in the corpus, and the tagging guidelines and tag categories can be refined to remove some of the inconsistencies. Further gains can as well be squeezed out of the transformer model by using a larger model, i.e., ConvBERT-large instead of ConvBERT-base, increasing the vocabulary size, training it on the 2022 version of IGC that adds 549 million tokens, and fine-tuning the hyperparameters for the tagging model. Yet, on top of the annotator disagreement, there will always be errors because of a lack of information in the context, as well as the scarcity of examples to learn from for the long tail of infrequent tags.

For MIM-GOLD, that unsolvable part of the tagging errors seems to amount to less than 2 pp. Therefore, with a little more work, we should be able to confidently pass that 98% accuracy goal (when training and evaluating using the whole tagset) envisioned twenty years ago. A good starting point would be to search for and fix those estimated 1,658 annotation errors in MIM-GOLD, which are a subset of the tagging errors that all four models agree on.

To conclude, POS tagging for Icelandic is very close to being solved!

References

Starkaður Barkarson, Steinþór Steingrímsson, and Hildur Hafsteinsdóttir. 2022. Evolving Large Text Corpora: Four Versions of the Icelandic Gigaword Corpus. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2371–2381, Marseille, France. European Language Resources Association.
Starkaður Barkarson, Þórdís Dröfn Andrésdóttir, Hildur Hafsteinsdóttir, Árni Davíð Magnússon, Kristján Rúnarsson, Steinþór Steingrímsson, Haukur Páll Jónsson, Hrafn Loftsson, Einar Freyr Sigurðsson, Eiríkur Rögnvaldsson, and Sigrún Helgadóttir. 2021. MIM-GOLD. Release notes with version 21.05.
Kristín Bjarnadóttir, Kristín Ingibjörg Hlynsdóttir, and Steinþór Steingrímsson. 2019. DIM: The Database of Icelandic Morphology. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 146–154, Turku, Finland. Linköping University Electronic Press.
Kristín Bjarnadóttir. 2012. The Database of Modern Icelandic Inflection. In Proceedings of the SaLTMiL-AfLaT Workshop on Language Technology for Normalisation of Less-Resourced Languages, LREC 2012, Istanbul, Turkey.
Thorsten Brants. 2000. TnT: A statistical part-of-speech tagger. Applied Natural Language Processing Conference (ANLP), pages 224–231.
Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 1–8. Association for Computational Linguistics.
Jón Friðrik Daðason and Hrafn Loftsson. 2022. Pre-training and Evaluating Transformer-based Language Models for Icelandic. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7386–7391, Marseille, France. European Language Resources Association.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Mark Dredze and Joel Wallenberg. 2008. Icelandic Data Driven Part of Speech Tagging. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-HLT, Columbus, OH, USA.
Alex Graves and Jürgen Schmidhuber. 2005. Framewise Phoneme Classification with Bidirectional LSTM and Other Neural Network Architectures. Neural Networks, 18(5-6):602–610.
Sigrún Helgadóttir. 2005. Testing data-driven learning algorithms for PoS tagging of Icelandic. In H. Holmboe, editor, Nordisk Sprogteknologi 2004. Museum Tusculanums Forlag, Copenhagen.
Sigrún Helgadóttir, Ásta Svavarsdóttir, Eiríkur Rögnvaldsson, Kristín Bjarnadóttir, and Hrafn Loftsson. 2012. The Tagged Icelandic Corpus (MÍM). In Proceedings of the SaLTMiL-AfLaT Workshop on Language Technology for Normalisation of Less-Resourced Languages, LREC 2012, Istanbul, Turkey.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Zi-Hang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. 2020. ConvBERT: Improving BERT with Span-based Dynamic Convolution. In Advances in Neural Information Processing Systems, volume 33, pages 12837–12848. Curran Associates, Inc.
Haukur Jónsson and Hrafn Loftsson. 2022. DMS: A System for Delivering Dynamic Multitask NLP Tools. In Proceedings of the 14th International Conference on Agents and Artificial Intelligence - Volume 1: NLPinAI, pages 504–510. INSTICC, SciTePress.
Suzanne Kemmer. 1984. From Comparative to Relativizer: The Case of Icelandic Sem. Annual Meeting of the Berkeley Linguistics Society, 10:296–306.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Hrafn Loftsson. 2008. Tagging Icelandic text: A linguistic rule-based approach. Nordic Journal of Linguistics, 31(1):47–72.
Hrafn Loftsson, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2011. Using a Morphological Database to Increase the Accuracy in PoS Tagging. In Proceedings of Recent Advances in Natural Language Processing, RANLP 2011, Hissar, Bulgaria.
Hrafn Loftsson, Ida Kramarczyk, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2009. Improving the PoS tagging accuracy of Icelandic text. In Proceedings of the 17th Nordic Conference on Computational Linguistics (NODALIDA 2009), Odense, Denmark. Northern European Association for Language Technology (NEALT).
Hrafn Loftsson and Robert Östling. 2013. Tagging a Morphologically Complex Language Using an Averaged Perceptron Tagger: The Case of Icelandic. In Proceedings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013), pages 105–119, Oslo, Norway. Linköping University Electronic Press, Sweden.
Hrafn Loftsson and Eiríkur Rögnvaldsson. 2007. IceNLP: A natural language processing toolkit for Icelandic. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, volume 1, pages 1533–1536.
Hrafn Loftsson, Jökull H. Yngvason, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2010. Developing a PoS-tagged corpus using existing tools. In Proceedings of the 7th SaLTMiL Workshop on Creation and Use of Basic Lexical Resources for Less-Resourced Languages, LREC 2010, Valetta, Malta.
Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguistics? In Alexander Gelbukh, editor, Conference on Intelligent Text Processing and Computational Linguistics (CICLing), volume 6608 of Lecture Notes in Computer Science, pages 171–189. Springer.
Jörgen Pind, Friðrik Magnússon, and Stefán Briem. 1991. Íslensk orðtíðnibók [Icelandic frequency dictionary]. Orðabók Háskólans, Reykjavik.
Barbara Plank. 2022. The "Problem" of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi. Association for Computational Linguistics.
Eiríkur Rögnvaldsson, Auður Rögnvaldsdóttir, Kristín Bjarnadóttir, and Sigrún Helgadóttir. 2002. Vélræn málfræðigreining með námfúsum markara [Automatic language analysis using a transformation-based tagger]. Orð og tunga, Reykjavik, 6:1–9.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pages 5149–5152.
Vésteinn Snæbjarnarson, Haukur Barri Símonarson, Pétur Orri Ragnarsson, Svanhvít Lilja Ingólfsdóttir, Haukur Jónsson, Vilhjalmur Thorsteinsson, and Hafsteinn Einarsson. 2022. A Warm Start and a Clean Crawled Corpus – A Recipe for Good Language Models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 4356–4366, Marseille, France. European Language Resources Association.
Steinþór Steingrímsson, Sigrún Helgadóttir, and Eiríkur Rögnvaldsson. 2015. Analysing inconsistencies and errors in PoS tagging in two Icelandic gold standards. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 287–291, Vilnius, Lithuania. Linköping University Electronic Press, Sweden.
Steinþór Steingrímsson, Sigrún Helgadóttir, Eiríkur Rögnvaldsson, Starkaður Barkarson, and Jón Guðnason. 2018. Risamálheild: A Very Large Icelandic Text Corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan.
Steinþór Steingrímsson, Örvar Kárason, and Hrafn Loftsson. 2019. Augmenting a BiLSTM Tagger with a Morphological Lexicon and a Lexical Category Identification Step. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 1161–1168, Varna, Bulgaria.
Alexander Tkachenko and Kairit Sirts. 2018. Modeling Composite Labels for Neural Morphological Tagging. In Conference on Computational Natural Language Learning.
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 252–259.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

### Review Title
The authors of this paper train and test four data-driven POS-tagging models for Icelandic and conclude that, with very little additional effort, POS tagging could be called a solved problem for Icelandic. However, the text types of the gold standard are not described, so it is impossible to say whether POS tagging is really solved for the majority of text types of Icelandic.
### Review Text
The authors of this paper have trained and tested four data-driven POS taggers (TriTagger, IceStagger, ABLTagger and ConvBERT) on a new large gold-standard corpus (ca 1 million tokens). The best model, ConvBERT-base, achieves 97.79% token accuracy and 73.43% sentence accuracy. Based on error analysis, the authors infer that after fixing errors in the gold data and using a larger model, it should be possible to pass the 98% accuracy goal. Their conclusion is that the POS tagging problem for Icelandic is very close to being solved. However, "Icelandic" is, in this article, limited to the gold standard corpus. The corpus is quite big, containing ca 1 million tokens, but the text classes (13 altogether) of this corpus are not described or even listed in the article, so it is difficult to say what language varieties it contains, how representative it really is, and whether the gold data also contains spoken language.
### Review Rating
7: Good paper, accept
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct
6T45-4TFqaX
eswc-conferences.org/ESWC/2021/Conference/Research_Track
2021
Convolutional Complex Knowledge Graph Embeddings
["Caglar Demir", "Axel-Cyrille Ngonga Ngomo"]
We investigate the problem of learning continuous vector representations of knowledge graphs for predicting missing links. Recent results suggest that using a Hermitian inner product on complex-valued embeddings or convolutions on real-valued embeddings can be effective means for predicting missing links. We bring these insights together and propose ConEx---a multiplicative composition of a 2D convolution with a Hermitian inner product on complex-valued embeddings. ConEx utilizes the Hadamard product to compose a 2D convolution followed by an affine transformation with a Hermitian inner product in $\mathbb{C}$. This combination endows ConEx with the capability of (1) controlling the impact of the convolution on the Hermitian inner product of embeddings, and (2) degenerating into ComplEx if such a degeneration is necessary to further minimize the incurred training loss. We evaluated our approach on five of the most commonly used benchmark datasets. Our experimental results suggest that ConEx outperforms state-of-the-art models on four of the five datasets w.r.t. Hits@1 and MRR even without extensive hyperparameter optimization. Our results also indicate that the generalization performance of state-of-the-art models can be further increased by applying ensemble learning. We provide an open-source implementation of our approach, including training and evaluation scripts as well as pretrained models.
["Knowledge graph embeddings", "Convolutions", "Complex numbers", "Hermitian inner product"]
Convolutional Complex Knowledge Graph Embeddings
Caglar Demir and Axel-Cyrille Ngonga Ngomo
Data Science Research Group, Paderborn University

Abstract. We investigate the problem of learning continuous vector representations of knowledge graphs for predicting missing links. Recent results suggest that using a Hermitian inner product on complex-valued embeddings or convolutions on real-valued embeddings can be effective means for predicting missing links. We bring these insights together and propose ConEx—a multiplicative composition of a 2D convolution with a Hermitian inner product on complex-valued embeddings. ConEx utilizes the Hadamard product to compose a 2D convolution followed by an affine transformation with a Hermitian inner product in ℂ. This combination endows ConEx with the capability of (1) controlling the impact of the convolution on the Hermitian inner product of embeddings, and (2) degenerating into ComplEx if such a degeneration is necessary to further minimize the incurred training loss. We evaluated our approach on five of the most commonly used benchmark datasets. Our experimental results suggest that ConEx outperforms state-of-the-art models on four of the five datasets w.r.t. Hits@1 and MRR even without extensive hyperparameter optimization. Our results also indicate that the generalization performance of state-of-the-art models can be further increased by applying ensemble learning. We provide an open-source implementation of our approach, including training and evaluation scripts as well as pretrained models.¹

¹ github.com/dice-group/Convolutional-Complex-Knowledge-Graph-Embeddings

1 Introduction

Knowledge Graphs (KGs) represent structured collections of facts modelled in the form of typed relationships between entities [13]. These collections of facts have been used in a wide range of applications, including web search [10], cancer research [29], and even entertainment [21]. However, most KGs on the Web are far from being complete [24]. For instance, the birth places of 71% of the people in Freebase and 66% of the people in DBpedia are not found in the respective KGs. In addition, more than 58% of the scientists in DBpedia are not linked to the predicate that describes what they are known for [20]. Link prediction on KGs refers to identifying such missing information [9]. Knowledge Graph Embedding (KGE) models have been particularly successful at tackling the link prediction task [24].

We investigate the use of a 2D convolution in the complex space ℂ to tackle the link prediction task. We are especially interested in an effective composition of the non-symmetric property of Hermitian products with the parameter sharing property of a 2D convolution. Previously, Trouillon et al. [35] showed the expressiveness of a Hermitian product on complex-valued embeddings, Re(⟨e_h, e_r, ē_t⟩), where e_h, e_r, and e_t stand for the embeddings of head entity, relation and tail entity, respectively; ē_t is the complex conjugate of e_t. The Hermitian product used in [35] is not symmetric and can be used to model antisymmetric relations since Re(⟨e_h, e_r, ē_t⟩) ≠ Re(⟨e_t, e_r, ē_h⟩). Dettmers et al. [9] and Nguyen et al. [23] indicated the effectiveness of using a 2D convolution followed by an affine transformation to predict missing links. Additionally, Balažević et al. [3] showed that 1D relation-specific convolution filters can be an effective means to tackle the link prediction task. Chen et al. [6] suggested applying a 2D convolution followed by two capsule layers on quaternion-valued embeddings. In turn, the results of a recent work [28] highlighted the importance of extensive hyperparameter optimization and new training strategies (see Table 1). That paper showed that the link prediction performances of previous state-of-the-art models (e.g., RESCAL, ComplEx and DistMult [26,35,37]) increased by up to 10% absolute on benchmark datasets, provided that new training strategies are applied. Based on these considerations, we propose ConEx—a multiplicative composition of a 2D convolution operation with a Hermitian inner product of complex-valued embedding vectors. By virtue of its novel architecture, ConEx is able to control the impact of a 2D convolution on predicted scores, i.e., by endowing ComplEx with two more degrees of freedom (see Section 4). Ergo, ConEx is able to degenerate to ComplEx if such a degeneration is necessary to further reduce the incurred training loss.

We evaluated ConEx on five of the most commonly used benchmark datasets (WN18, WN18RR, FB15K, FB15K-237 and YAGO3-10). We used the findings of [28] on using Bayesian optimization to select a small sample of hyperparameter values for our experiments. Hence, we did not need to perform an extensive hyperparameter optimization throughout our experiments and fixed the seed for the pseudo-random generator to 1. In our experiments, we followed the standard training strategy commonly used in the literature [4,3]. Overall, our results suggest that ConEx outperforms state-of-the-art models on four out of five benchmark datasets w.r.t. Hits@N and Mean Reciprocal Rank (MRR). ConEx outperforms ComplEx and ConvE on all benchmark datasets in all metrics. The results of our statistical hypothesis testing indicate that the superior performance of ConEx is statistically significant. Our ablation study suggests that the dropout technique and the label smoothing have the highest impact on the performance of ConEx. Furthermore, our results on the YAGO3-10 dataset support the findings of Ruffinelli et al. [28], as training DistMult and ComplEx with new techniques resulted in increasing their MRR performances by absolute 20% and 19%, respectively. Finally, our results suggest that the generalization performance of models can be further improved by applying ensemble learning. In particular, ensembling ConEx leads to a new state-of-the-art performance on WN18RR and FB15K-237.

2 Related work

A wide range of works have investigated KGE to address various tasks such as type prediction, relation prediction, link prediction, question answering, item recommendation and knowledge graph completion [8,7,26,14]. We refer to [24,36,5,16,27] for recent surveys and give a brief overview of selected KGE techniques. Table 1 shows scoring functions of state-of-the-art KGE models.

RESCAL [26] is a bilinear model that computes a three-way factorization of a third-order adjacency tensor representing the input KG. RESCAL captures various types of relations in the input KG but is limited in its scalability as it has quadratic complexity in the factorization rank [33]. DistMult [37] can be seen as an efficient extension of RESCAL with a diagonal matrix per relation to reduce the complexity of RESCAL [4]. DistMult performs poorly on antisymmetric relations while performing well on symmetric relations [33]. Note that through applying the reciprocal data augmentation technique, this incapability of DistMult is alleviated [28]. TuckER [4] performs a Tucker decomposition on the binary tensor representing the input KG, which enables multi-task learning through parameter sharing between different relations via the core tensor.

Table 1: State-of-the-art KGE models with training strategies. e denotes embeddings, ē ∈ ℂ corresponds to the complex conjugate of e. ∗ denotes a convolution operation with kernel ω. f denotes the rectified linear unit function. ⊗, ◦ and ⟨·,·⟩ denote the Hamilton product, the Hadamard product and an inner product, respectively. In ConvE, the reshaping operation is omitted. The tensor product along the n-th mode is denoted by ×ₙ and the core tensor is represented by W. MSE, MR, BCE and CE denote mean squared error, margin ranking, binary cross entropy and cross entropy loss functions. NegSamp and AdvNegSamp stand for negative sampling and adversarial sampling.

Model | Scoring Function | Vector Space | Loss | Training | Optimizer | Regularizer
RESCAL [26] | e_h · W_r · e_t | e_h, e_t ∈ ℝ | MSE | Full | ALS | L2
DistMult [37] | ⟨e_h, e_r, e_t⟩ | e_h, e_r, e_t ∈ ℝ | MR | NegSamp | Adagrad | Weighted L2
ComplEx [35] | Re(⟨e_h, e_r, ē_t⟩) | e_h, e_r, e_t ∈ ℂ | BCE | NegSamp | Adagrad | Weighted L2
ConvE [9] | f(vec(f([e_h; e_r] ∗ ω)) · W) · e_t | e_h, e_r, e_t ∈ ℝ | BCE | KvsAll | Adam | Dropout, BatchNorm
TuckER [4] | W ×₁ e_h ×₂ e_r ×₃ e_t | e_h, e_r, e_t ∈ ℝ | BCE | KvsAll | Adam | Dropout, BatchNorm
RotatE [31] | −‖e_h ◦ e_r − e_t‖ | e_h, e_r, e_t ∈ ℂ | CE | AdvNegSamp | Adam | −
QuatE [38] | e_h ⊗ e_r◁ · e_t | e_h, e_r, e_t ∈ ℍ | CE | AdvNegSamp | Adagrad | Weighted L2
ConEx | conv(e_h, e_r) ◦ Re(⟨e_h, e_r, ē_t⟩) | e_h, e_r, e_t ∈ ℂ | BCE | KvsAll | Adam | Dropout, BatchNorm

ComplEx [35] extends DistMult by learning representations in a complex vector space. ComplEx is able to infer both symmetric and antisymmetric relations via a Hermitian inner product of embeddings that involves the conjugate-transpose of one of the two input vectors. ComplEx yields state-of-the-art performance on the link prediction task while leveraging the linear space and time complexity of the dot products. Trouillon et al. [34] showed that ComplEx is equivalent to HolE [25]. Inspired by Euler's identity, RotatE [31] employs a rotational model taking predicates as rotations from subjects to objects in complex space via the element-wise Hadamard product [16]. RotatE performs well on composition/transitive relations while ComplEx performs poorly [31]. QuatE [38] extends the complex-valued space into the hypercomplex by a quaternion with three imaginary components, where the Hamilton product is used as the compositional operator for hypercomplex-valued representations.

ConvE [9] applies a 2D convolution to model the interactions between entities and relations. Through interactions captured by the 2D convolution, ConvE yields state-of-the-art performance in link prediction. ConvKB extends ConvE by omitting the reshaping operation in the encoding of representations in the convolution operation [23]. Similarly, HypER extends ConvE by applying relation-specific convolution filters as opposed to applying filters from concatenated subject and relation vectors [3].

3 Preliminaries and Notation

3.1 Knowledge Graphs

Let E and R represent the set of entities and relations, respectively. Then, a KG G = {(h,r,t)} ⊆ E × R × E can be formalised as a set of triples where each triple contains two entities h,t ∈ E and a relation r ∈ R. A relation r in G is
– symmetric if (h,r,t) ⟺ (t,r,h) for all pairs of entities h,t ∈ E,
– anti-symmetric if (h,r,t) ∈ G ⟹ (t,r,h) ∉ G for all h ≠ t, and
– transitive/composite if (h,r,t) ∈ G ∧ (t,r,y) ∈ G ⟹ (h,r,y) ∈ G for all h,t,y ∈ E [31,18].
The inverse of a relation r, denoted r⁻¹, is a relation such that for any two entities h and t, (h,r,t) ∈ G ⟺ (t,r⁻¹,h) ∈ G.
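These definitions translate directly into checks over a set of triples. The following is a minimal sketch over a toy knowledge graph; the relation names are hypothetical.

```python
# Relation-property checks over a toy KG represented as a set of (h, r, t) triples.
def is_symmetric(G, r):
    return all((t, r, h) in G for (h, rel, t) in G if rel == r)

def is_antisymmetric(G, r):
    return all((t, r, h) not in G
               for (h, rel, t) in G if rel == r and h != t)

def is_transitive(G, r):
    pairs = {(h, t) for (h, rel, t) in G if rel == r}
    return all((h, y) in pairs
               for (h, t) in pairs for (t2, y) in pairs if t2 == t)

G = {("a", "marriedTo", "b"), ("b", "marriedTo", "a"),   # symmetric relation
     ("a", "ancestorOf", "b"), ("b", "ancestorOf", "c"),
     ("a", "ancestorOf", "c")}                           # transitive relation
print(is_symmetric(G, "marriedTo"))       # True
print(is_antisymmetric(G, "ancestorOf"))  # True
print(is_transitive(G, "ancestorOf"))     # True
```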
3.2 Link Prediction

Link prediction refers to predicting whether unseen triples (i.e., triples not found in G) are true [16]. The task is often formalised by learning a scoring function φ: E × R × E → ℝ [24,16], ideally characterized by φ(h,r,t) > φ(x,y,z) if (h,r,t) is true and (x,y,z) is not.

4 Convolutional Complex Knowledge Graph Embeddings

Inspired by the previous works ComplEx [35] and ConvE [9], we dub our approach ConEx (convolutional complex knowledge graph embeddings).

Motivation. Sun et al. [31] suggested that ComplEx is not able to model triples with transitive relations, since ComplEx does not perform well on datasets containing many transitive relations (see Table 5 and Section 4.6 in [31]). Motivated by this consideration, we propose ConEx, which applies the Hadamard product to compose a 2D convolution followed by an affine transformation with a Hermitian inner product in ℂ. By virtue of the proposed architecture (see Equation (1)), ConEx is endowed with the capability of
(1) leveraging a 2D convolution, and
(2) degenerating to ComplEx if such degeneration is necessary to further minimize the incurred training loss.
ConEx benefits from the parameter sharing and equivariant representation properties of convolutions [11]. The parameter sharing property of the convolution operation allows ConEx to achieve parameter efficiency, while the equivariant representation allows ConEx to effectively integrate interactions captured in the stacked complex-valued embeddings of entities and relations into the computation of scores. This implies that small interactions in the embeddings have small impacts on the predicted scores.² The rationale behind this architecture is to increase the expressiveness of our model without increasing the number of its parameters. As previously stated in [35], this nontrivial endeavour is the keystone of embedding models. Ergo, we aim to overcome the shortcomings of ComplEx in modelling triples containing transitive relations by combining it with a 2D convolution followed by an affine transformation on ℂ.

² We refer to [11] for further details on the properties of convolutions.

Approach. Given a triple (h,r,t), ConEx: ℂ³ᵈ → ℝ computes its score as

$\text{ConEx}(h,r,t) = \mathrm{conv}(e_h, e_r) \circ \mathrm{Re}(\langle e_h, e_r, \bar{e}_t \rangle), \quad (1)$

where conv(·,·): ℂ²ᵈ → ℂᵈ is defined as

$\mathrm{conv}(e_h, e_r) = f\big(\mathrm{vec}(f([e_h, e_r] \ast \omega)) \cdot W + b\big), \quad (2)$

where f(·) denotes the rectified linear unit function (ReLU), vec(·) stands for a flattening operation, ∗ is the convolution operation, ω stands for the kernels/filters in the convolution, and (W, b) characterize an affine transformation. By virtue of its novel structure, ConEx is enriched with the capability of controlling the impact of a 2D convolution and a Hermitian inner product on the predicted scores. Ergo, the gradients of the loss (see Equation (6)) w.r.t. embeddings can be propagated in two ways, namely, via conv(e_h, e_r) or Re(⟨e_h, e_r, ē_t⟩).
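To make Equations (1)–(2) concrete, the following is a toy re-implementation of the scoring step, assuming PyTorch is available. The stacking of real and imaginary parts, the kernel shape and all sizes are illustrative choices, not the released implementation.

```python
import torch
import torch.nn.functional as F

d, C = 4, 2  # toy embedding dimension in ℂ and number of conv output channels
torch.manual_seed(1)

# Complex embeddings stored as (real, imaginary) parts.
eh = torch.randn(2, d)  # head entity
er = torch.randn(2, d)  # relation
et = torch.randn(2, d)  # tail entity

# conv(eh, er): stack the four real/imaginary vectors into a 2D "image",
# convolve, flatten, then apply an affine transformation back to ℂ^d (Eq. (2)).
omega = torch.randn(C, 1, 3, 3)                        # kernels ω
x = torch.cat([eh, er], dim=0).view(1, 1, 4, d)        # stacked embeddings
feat = F.relu(F.conv2d(x, omega, padding=1)).flatten() # vec(f([eh, er] ∗ ω))
W = torch.randn(feat.numel(), 2 * d); b = torch.randn(2 * d)
gamma = F.relu(feat @ W + b).view(2, d)                # γ ∈ ℂ^d as (Re, Im)

# Re(⟨γ, eh, er, ēt⟩): the four-term expansion given below in Eqs. (3)–(4).
score = ((gamma[0] * eh[0] * er[0] * et[0]).sum()
         + (gamma[0] * eh[0] * er[1] * et[1]).sum()
         + (gamma[1] * eh[1] * er[0] * et[1]).sum()
         - (gamma[1] * eh[1] * er[1] * et[0]).sum())
print(score.item())
```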
Equation (1) can be equivalently expressed by expanding its real and imaginary parts:

$\text{ConEx}(h,r,t) = \mathrm{Re}\Big(\sum_{k=1}^{d} \gamma_k\, (e_h)_k\, (e_r)_k\, (\bar{e}_t)_k\Big) \quad (3)$

$= \langle \mathrm{Re}(\gamma), \mathrm{Re}(e_h), \mathrm{Re}(e_r), \mathrm{Re}(e_t)\rangle + \langle \mathrm{Re}(\gamma), \mathrm{Re}(e_h), \mathrm{Im}(e_r), \mathrm{Im}(e_t)\rangle + \langle \mathrm{Im}(\gamma), \mathrm{Im}(e_h), \mathrm{Re}(e_r), \mathrm{Im}(e_t)\rangle - \langle \mathrm{Im}(\gamma), \mathrm{Im}(e_h), \mathrm{Im}(e_r), \mathrm{Re}(e_t)\rangle, \quad (4)$

where ē_t is the conjugate of e_t and γ denotes the output of conv(e_h, e_r) for brevity. Such multiplicative inclusion of conv(·,·) equips ConEx with two more degrees of freedom due to the Re(γ) and Im(γ) parts.

Connection to ComplEx. During the optimization, conv(·,·) is allowed to reduce its range into γ ∈ ℂ such that Re(γ) = 1 ∧ Im(γ) = 1. This allows ConEx to degenerate into ComplEx, as shown in Equation (5):

$\text{ConEx}(h,r,t) = \text{ComplEx}(h,r,t). \quad (5)$

This multiplicative inclusion of conv(·,·) is motivated by the scaling parameter in batch normalization (see Section 3 in [15]). Consequently, ConEx is allowed to use a 2D convolution followed by an affine transformation as a scaling factor in the computation of scores.

Training. We train our approach by following a standard setting [9,4]. Similarly, we applied the standard data augmentation technique and the KvsAll training procedure.³ After applying the data augmentation technique, for a given pair (h,r) we compute scores φ(h,r,x) for all x ∈ E. We then apply the logistic sigmoid function σ(φ(h,r,t)) to obtain predicted probabilities of entities. ConEx is trained to minimize the binary cross entropy loss function L that determines the incurred loss on a given pair (h,r), defined as follows:

$L = -\frac{1}{|E|}\sum_{i=1}^{|E|}\Big(y^{(i)}\log(\hat{y}^{(i)}) + (1-y^{(i)})\log(1-\hat{y}^{(i)})\Big), \quad (6)$

where ŷ ∈ ℝ^|E| is the vector of predicted probabilities and y ∈ [0,1]^|E| is the binary label vector.

³ Note that the KvsAll strategy is called 1-N scoring in [9]. Here, we follow the terminology of [28].
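The following is a minimal sketch of one KvsAll scoring-and-loss step with Equation (6), assuming PyTorch. The random scores stand in for any model's output φ(h,r,·).

```python
import torch

# KvsAll step for one (h, r) pair: score every entity, compare against the
# binary label vector that marks all known tails for (h, r) in G.
num_entities = 5
scores = torch.randn(num_entities)   # φ(h, r, x) for all x ∈ E (placeholder)
y = torch.zeros(num_entities)
y[[1, 3]] = 1.0                      # tails observed for (h, r) in the training KG

y_hat = torch.sigmoid(scores)        # σ(φ(h, r, x))
# Binary cross entropy, Eq. (6); equivalent to
# torch.nn.functional.binary_cross_entropy(y_hat, y).
loss = -(y * torch.log(y_hat) + (1 - y) * torch.log(1 - y_hat)).mean()
print(loss.item())
```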
5 Experiments

5.1 Datasets

We used five of the most commonly used benchmark datasets (WN18, WN18RR, FB15K, FB15K-237 and YAGO3-10). An overview of the datasets is provided in Table 2. WN18 and WN18RR are subsets of WordNet, which describes lexical and semantic hierarchies between concepts and involves symmetric and anti-symmetric relation types, while FB15K and FB15K-237 are subsets of Freebase, which involves mainly symmetric, antisymmetric and composite relation types [31]. We refer to [9] for further details pertaining to the benchmark datasets.

Table 2: Overview of datasets in terms of number of entities, number of relations, and node degrees in the train split (mean ± standard deviation), along with the number of triples in each split of the dataset.

Dataset | |E| | |R| | Degree (M ± SD) | |G_Train| | |G_Validation| | |G_Test|
YAGO3-10 | 123,182 | 37 | 9.6 ± 8.7 | 1,079,040 | 5,000 | 5,000
FB15K | 14,951 | 1,345 | 32.46 ± 69.46 | 483,142 | 50,000 | 59,071
WN18 | 40,943 | 18 | 3.49 ± 7.74 | 141,442 | 5,000 | 5,000
FB15K-237 | 14,541 | 237 | 19.7 ± 30 | 272,115 | 17,535 | 20,466
WN18RR | 40,943 | 11 | 2.2 ± 3.6 | 86,835 | 3,034 | 3,134

5.2 Evaluation Metrics

We used the filtered MRR and Hits@N to evaluate link prediction performances, as in previous works [31,35,9,4]. We refer to [28] for details pertaining to the metrics.

5.3 Experimental Setup

We selected the hyperparameters of ConEx based on the MRR score obtained on the validation set of WN18RR. Hence, we evaluated the link prediction performance of ConEx on FB15K-237, YAGO3-10, WN18 and FB15K by using the best hyperparameter configuration found on WN18RR. This decision stems from the fact that we aim to reduce the impact of extensive hyperparameter optimization on the reported results, as well as the CO₂ emission it causes, by relying on the findings of previous works [28]. Strubell et al. [30] highlighted the substantial energy consumption of performing extensive hyperparameter optimization. Moreover, Ruffinelli et al. [28] showed that model configurations can be found by exploring relatively few random samples from a large hyperparameter space. With these considerations, we determined the ranges of hyperparameters for the grid search based on their best hyperparameter setting for ConvE (see Table 8 in [28]). Specifically, the ranges of the hyperparameters were defined as follows: d: {100, 200}; dropout rate: {.3, .4} for the input; dropout rate: {.4, .5} for the feature map; label smoothing: {.1}; the number of output channels in the convolution operation: {16, 32}; the batch size: {1024}; the learning rate: {.001}. After determining the best hyperparameters based on the MRR on the validation dataset, we retrained ConEx with these hyperparameters on the combination of train and validation sets, as applied in [17].

Motivated by the experimental setups for ResNet [12] and AlexNet [19], we were interested in quantifying the impact of ensemble learning on the link prediction performances. Ensemble learning refers to learning a weighted combination of learning algorithms. In our case, we generated ensembles of models by averaging the predictions of said models.⁴ To this end, we re-evaluated state-of-the-art models, including TuckER, DistMult and ComplEx, on the combination of train and validation sets of the benchmark datasets. Therewith, we were also able to quantify the impact of training state-of-the-art models on the combination of train and validation sets. Moreover, we noticed that the link prediction performances of DistMult and ComplEx on the YAGO3-10 dataset were reported without employing the new training strategies (KvsAll, the reciprocal data augmentation, the batch normalization, and the ADAM optimizer). Hence, we trained DistMult and ComplEx on YAGO3-10 with these strategies.

⁴ Ergo, the weights for the models were set to 1 (see Section 16.6 in [22] for more details).

5.4 Implementation Details and Reproducibility

We implemented and evaluated our approach in the framework provided by [4,2]. Throughout our experiments, the seed for the pseudo-random generator was fixed to 1. To alleviate the hardware requirements for the reproducibility of our results and to foster further reproducible research, we provide hyperparameter optimization, training and evaluation scripts along with pretrained models at the project page.
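Before turning to the results, the filtered metrics from Section 5.2 can be sketched as follows; the candidate scores are hypothetical placeholders.

```python
# Filtered rank for a single test triple: other known true tails are removed
# from the candidate list before the rank of the test tail is read off.
def filtered_rank(scores, test_tail, known_tails):
    target = scores[test_tail]
    rank = 1
    for entity, s in enumerate(scores):
        if entity == test_tail or entity in known_tails:
            continue  # filter out the test tail itself and other true answers
        if s > target:
            rank += 1
    return rank

scores = [0.9, 0.8, 0.7, 0.95, 0.1]   # hypothetical φ(h, r, x) for all x ∈ E
rank = filtered_rank(scores, test_tail=1, known_tails={3})
print("MRR contribution:", 1.0 / rank)             # 0.5: only entity 0 outranks it
print("Hits@1:", int(rank <= 1), "Hits@3:", int(rank <= 3))
```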
6 Results

Table 3, Table 4 and Table 10 report the link prediction performances of ConEx on five benchmark datasets. Overall, ConEx outperforms state-of-the-art models on four out of five datasets. In particular, ConEx outperforms ComplEx and ConvE on all five datasets. This supports our original hypothesis, i.e., that the composition of a 2D convolution with a Hermitian inner product improves the prediction of relations in complex spaces. We used the Wilcoxon signed-rank test to measure the statistical significance of our link prediction results. Moreover, we performed an ablation study (see Table 8) to obtain confidence intervals for the prediction performances of ConEx. These results are shown in the Appendix. Bold and underlined entries denote the best and second-best results in all tables.

ConEx outperforms all state-of-the-art models on WN18 and FB15K (see Table 10 in the Appendix), whereas such distinct superiority is not observed on WN18RR and FB15K-237. Table 3 shows that ConEx outperforms many state-of-the-art models, including RotatE, ConvE, HypER, ComplEx and NKGE, in all metrics on WN18RR and FB15K-237. This is an important result for two reasons: (1) ConEx requires significantly fewer parameters to yield such superior results (e.g., ConEx only requires 26.63M parameters on WN18RR, while RotatE relies on 40.95M parameters), and (2) we did not tune the hyperparameters of ConEx on FB15K-237. Furthermore, the results reported in Table 3 corroborate the findings of Ruffinelli et al. [28]: training DistMult and ComplEx with KvsAll, the reciprocal data augmentation, the batch normalization, and the ADAM optimizer leads to a significant improvement, particularly on FB15K-237.

Table 3: Link prediction results on WN18RR and FB15K-237. Results are obtained from the corresponding papers. † represents recently reported results of the corresponding models.

Model | WN18RR (MRR, Hits@10, Hits@3, Hits@1) | FB15K-237 (MRR, Hits@10, Hits@3, Hits@1)
DistMult [9] | .430 .490 .440 .390 | .241 .419 .263 .155
ComplEx [9] | .440 .510 .460 .410 | .247 .428 .275 .158
ConvE [9] | .430 .520 .440 .400 | .335 .501 .356 .237
RESCAL† [28] | .467 .517 .480 .439 | .357 .541 .393 .263
DistMult† [28] | .452 .530 .466 .413 | .343 .531 .378 .250
ComplEx† [28] | .475 .547 .490 .438 | .348 .536 .384 .253
ConvE† [28] | .442 .504 .451 .411 | .339 .521 .369 .248
HypER [3] | .465 .522 .477 .436 | .341 .520 .376 .252
NKGE [38] | .450 .526 .465 .421 | .330 .510 .365 .241
RotatE [31] | .476 .571 .492 .428 | .338 .533 .375 .241
TuckER [4] | .470 .526 .482 .443 | .358 .544 .394 .266
QuatE [38] | .482 .572 .499 .436 | .366 .556 .401 .271
DistMult | .439 .527 .455 .399 | .353 .539 .390 .260
ComplEx | .453 .546 .473 .408 | .332 .509 .366 .244
TuckER | .466 .515 .476 .441 | .363 .553 .400 .268
ConEx | .481 .550 .493 .448 | .366 .555 .403 .271

During our experiments, we observed that many state-of-the-art models are not evaluated on YAGO3-10. This may stem from the fact that the size of YAGO3-10 prohibits performing extensive hyperparameter optimization even with the current state-of-the-art hardware systems. Note that YAGO3-10 involves 8.23 and 8.47 times more entities than FB15K and FB15K-237, respectively. Table 4 indicates that DistMult and ComplEx perform particularly well on YAGO3-10, provided that KvsAll, the reciprocal data augmentation, the batch normalization, and the ADAM optimizer are employed. These results support the findings of Ruffinelli et al. [28]. During training, we observed that the training loss of DistMult and ComplEx seemed to converge within 400 epochs, whereas the training loss of TuckER seemed to continue decreasing. Ergo, we conjecture that TuckER is more likely to benefit from increasing the number of epochs than DistMult and ComplEx. Table 4 shows that the superior performance of ConEx against state-of-the-art models including DistMult, ComplEx and HypER can be maintained on the largest benchmark dataset for link prediction.

Table 4: Link prediction results on YAGO3-10. Results are obtained from the corresponding papers.

Model | YAGO3-10 (MRR, Hits@10, Hits@3, Hits@1)
DistMult [9] | .340 .540 .380 .240
ComplEx [9] | .360 .550 .400 .260
ConvE [9] | .440 .620 .490 .350
HypER [3] | .533 .678 .580 .455
RotatE [31] | .495 .670 .550 .402
DistMult | .543 .683 .590 .466
ComplEx | .547 .690 .594 .468
TuckER | .427 .609 .476 .331
ConEx | .553 .696 .601 .474

Delving into the link prediction results, we observed an inconsistency in the test splits of WN18RR and FB15K-237. Specifically, the test splits of WN18RR and FB15K-237 contain many out-of-vocabulary entities.⁵ For instance, 6% of the test set on WN18RR involves out-of-vocabulary entities. During our experiments, we did not remove such triples, to obtain fair comparisons on both datasets. To quantify the impact of unseen entities on link prediction performances, we conducted an additional experiment.

⁵ github.com/TimDettmers/ConvE/issues/66

Link Prediction per Relation. Table 5 reports the link prediction per relation performances on WN18RR. Overall, models perform particularly well on triples containing symmetric relations such as also_see and similar_to. Compared to RotatE, DistMult, ComplEx and TuckER, ConEx performs well on triples containing transitive relations such as hypernym and has_part. Allen et al. [1] ranked the complexity of relation types as R > S > C in the link prediction task. Based on this ranking, the superior performance of ConEx becomes more apparent as the complexity of relations increases.

Ensemble Learning. Table 6 reports the link prediction performances of ensembles based on pairs of models. These results suggest that ensemble learning can be applied as an effective means to boost the generalization performance of existing approaches, including ConEx. These results may also indicate that models may be further improved through optimizing the impact of each model on the ensembles, e.g., by learning two scalars α and β in α·ConEx(s,p,o) + β·TuckER(s,p,o) instead of averaging predicted scores.
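The prediction averaging used to build the ensembles can be sketched as follows; the two lambdas are hypothetical stand-ins for pretrained scorers.

```python
import torch

# Ensembling by averaging predictions, i.e., α = β = 1/2 (fixed, not learned).
def ensemble(model_a, model_b, h, r, alpha=0.5, beta=0.5):
    return alpha * model_a(h, r) + beta * model_b(h, r)

# Hypothetical stand-ins for, e.g., ConEx and TuckER over 4 entities:
model_a = lambda h, r: torch.tensor([0.9, 0.1, 0.7, 0.2])
model_b = lambda h, r: torch.tensor([0.8, 0.3, 0.9, 0.1])
print(ensemble(model_a, model_b, h=0, r=0))
# As noted above, α and β could instead be learned rather than fixed to 1/2.
```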
Table 5: MRR link prediction on each relation of WN18RR. Results of RotatE are taken from [38]. The complexity of relation types in the link prediction task is defined as R > S > C [1].

Relation Name | Type | RotatE | DistMult | ComplEx | TuckER | ConEx
hypernym | S | .148 | .102 | .106 | .121 | .149
instance_hypernym | S | .318 | .218 | .292 | .375 | .393
member_meronym | C | .232 | .129 | .181 | .181 | .171
synset_domain_topic_of | C | .341 | .226 | .266 | .344 | .373
has_part | C | .184 | .143 | .181 | .171 | .192
member_of_domain_usage | C | .318 | .225 | .280 | .213 | .318
member_of_domain_region | C | .200 | .095 | .267 | .284 | .354
derivationally_related_form | R | .947 | .982 | .984 | .985 | .986
also_see | R | .585 | .639 | .557 | .658 | .647
verb_group | R | .943 | 1.00 | 1.00 | 1.00 | 1.00
similar_to | R | 1.00 | 1.00 | 1.00 | 1.00 | 1.00

Table 6: Link prediction results of ensembled models on WN18RR and FB15K-237. Second rows denote link prediction results without triples containing out-of-vocabulary entities. ConEx-ConEx stands for ensembling two ConEx models trained with dropout rates 0.4 and 0.5 on the feature map.

Ensemble | WN18RR (MRR, Hits@10, Hits@3, Hits@1) | FB15K-237 (MRR, Hits@10, Hits@3, Hits@1)
DistMult-ComplEx | .446 .545 .467 .398 | .359 .546 .397 .265
 (without OOV) | .475 .579 .497 .426 | .359 .546 .397 .265
DistMult-TuckER | .446 .533 .461 .405 | .371 .563 .410 .275
 (without OOV) | .476 .569 .492 .433 | .371 .563 .411 .275
ConEx-DistMult | .454 .545 .471 .410 | .371 .563 .409 .275
 (without OOV) | .484 .580 .501 .439 | .367 .556 .403 .272
ConEx-ComplEx | .470 .554 .487 .428 | .370 .559 .407 .276
 (without OOV) | .501 .589 .518 .456 | .360 .547 .397 .267
ConEx-TuckER | .483 .549 .494 .449 | .375 .568 .414 .278
 (without OOV) | .514 .583 .526 .479 | .375 .568 .414 .278
ConEx-ConEx | .485 .559 .495 .450 | .376 .569 .415 .279
 (without OOV) | .517 .594 .526 .479 | .376 .570 .415 .279

Parameter Analysis. Table 7 indicates the robustness of ConEx against the overfitting problem. Increasing the number of parameters in ConEx does not lead to a significant decrease in the generalization performance. In particular, ConEx achieves a similar generalization performance with p = 26.63M and p = 70.66M, as the difference between the MRR scores is less than 1% absolute. This cannot be explained by the convolutions playing no role, as ConEx would then degrade back to ComplEx and achieve the same results (which is clearly not the case in our experiments).

Table 7: Influence of different hyperparameter configurations for ConEx on WN18RR. d, c and p stand for the dimension of the embeddings in ℂ, the number of output channels in the 2D convolutions, and the number of free parameters in millions, respectively.

d | c | p | MRR | Hits@10 | Hits@3 | Hits@1
300 | 64 | 70.66M | .475 | .540 | .490 | .442
250 | 64 | 52.49M | .475 | .541 | .488 | .441
300 | 32 | 47.62M | .480 | .548 | .491 | .447
250 | 32 | 36.39M | .479 | .545 | .490 | .446
300 | 16 | 36.10M | .479 | .550 | .494 | .445
250 | 16 | 28.48M | .477 | .544 | .489 | .443
200 | 32 | 26.63M | .481 | .550 | .493 | .447
100 | 32 | 10.75M | .474 | .533 | .480 | .440
100 | 16 | 9.47M | .476 | .536 | .486 | .441
50 | 32 | 4.74M | .448 | .530 | .477 | .401

7 Discussion

The superior performance of ConEx stems from the composition of a 2D convolution with a Hermitian inner product of complex-valued embeddings. Trouillon et al. [35] showed that a Hermitian inner product of complex-valued embeddings can be effectively used to tackle the link prediction problem. Applying the convolution operation on complex-valued embeddings of subjects and predicates permits ConEx to recognize interactions between subjects and predicates in the form of complex-valued feature maps. Through the affine transformation of the feature maps and their inclusion into a Hermitian inner product involving the conjugate-transpose of the complex-valued embeddings of objects, ConEx can accurately infer various types of relations. Moreover, the number and shapes of the kernels permit adjusting the expressiveness, while ConEx retains parameter efficiency due to the parameter sharing property of convolutions. By virtue of this design, the expressiveness of ConEx may be further improved by increasing the depth of conv(·,·) via the residual learning block [12].
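As a purely illustrative sketch of what such a deepening of conv(·,·) could look like (it is not part of ConEx as evaluated), a residual block in the style of He et al. [12] is shown below; all layer sizes are arbitrary choices.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Toy residual block that could deepen conv(·,·); sizes are illustrative."""
    def __init__(self, channels=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # skip connection

x = torch.randn(1, 16, 4, 32)        # stacked (e_h, e_r) feature maps
print(ResidualConvBlock()(x).shape)  # torch.Size([1, 16, 4, 32])
```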
8 Conclusion and Future Work

In this work, we introduced ConEx—a multiplicative composition of a 2D convolution with a Hermitian inner product on complex-valued embeddings. By virtue of its novel structure, ConEx is endowed with the capability of controlling the impact of a 2D convolution and a Hermitian inner product on the predicted scores. Such a combination makes ConEx more robust to overfitting, as is affirmed by our parameter analysis. Our results open a plethora of other research avenues. In future work, we plan to investigate the following: (1) combining the convolution operation with hypercomplex multiplications, (2) increasing the depth of the convolutions via the residual learning block, and (3) finding more effective combinations of ensembles of models.

Acknowledgments

This work has been supported by the BMWi-funded project RAKI (01MD19012D) as well as the BMBF-funded project DAIKIRI (01IS19085B). We are grateful to Diego Moussallem for valuable comments on earlier drafts and to Pamela Heidi Douglas for editing the manuscript.

References

1. Allen, C., Balažević, I., Hospedales, T.: Interpreting knowledge graph relation representation from word embeddings. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=gLWj29369lW
2. Balažević, I., Allen, C., Hospedales, T.: Multi-relational Poincaré graph embeddings. In: Advances in Neural Information Processing Systems. pp. 4465–4475 (2019)
3. Balažević, I., Allen, C., Hospedales, T.M.: Hypernetwork knowledge graph embeddings. In: International Conference on Artificial Neural Networks. pp. 553–565. Springer (2019)
4. Balažević, I., Allen, C., Hospedales, T.M.: TuckER: Tensor factorization for knowledge graph completion. arXiv preprint arXiv:1901.09590 (2019)
5. Cai, H., Zheng, V.W., Chang, K.C.C.: A comprehensive survey of graph embedding: Problems, techniques, and applications. IEEE Transactions on Knowledge and Data Engineering 30(9), 1616–1637 (2018)
6. Chen, H., Wang, W., Li, G., Shi, Y.: A quaternion-embedded capsule network model for knowledge graph completion. IEEE Access (2020)
7. Demir, C., Moussallem, D., Ngomo, A.C.N.: A shallow neural model for relation prediction. arXiv preprint arXiv:2101.09090 (2021)
8. Demir, C., Ngomo, A.C.N.: A physical embedding model for knowledge graphs. In: Joint International Semantic Technology Conference. pp. 192–209. Springer (2019)
9. Dettmers, T., Minervini, P., Stenetorp, P., Riedel, S.: Convolutional 2D knowledge graph embeddings. In: Thirty-Second AAAI Conference on Artificial Intelligence (2018)
10. Eder, J.S.: Knowledge graph based search system (Jun 21 2012), US Patent App. 13/404,109
11. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
12. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)
13. Hogan, A., Blomqvist, E., Cochez, M., d'Amato, C., de Melo, G., Gutierrez, C., Gayo, J.E.L., Kirrane, S., Neumaier, S., Polleres, A., et al.: Knowledge graphs. arXiv preprint arXiv:2003.02320 (2020)
14. Huang, X., Zhang, J., Li, D., Li, P.: Knowledge graph embedding based question answering. In: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. pp. 105–113 (2019)
15. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015)
16. Ji, S., Pan, S., Cambria, E., Marttinen, P., Yu, P.S.: A survey on knowledge graphs: Representation, acquisition and applications. arXiv preprint arXiv:2002.00388 (2020)
17. Joulin, A., Grave, E., Bojanowski, P., Nickel, M., Mikolov, T.: Fast linear model for knowledge graph embeddings. arXiv preprint arXiv:1710.10881 (2017)
18. Kazemi, S.M., Poole, D.: SimplE embedding for link prediction in knowledge graphs. In: Advances in Neural Information Processing Systems. pp. 4284–4295 (2018)
19. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Communications of the ACM 60(6), 84–90 (2017)
20. Krompaß, D., Baier, S., Tresp, V.: Type-constrained representation learning in knowledge graphs. In: International Semantic Web Conference. pp. 640–655. Springer (2015)
21. Malyshev, S., Krötzsch, M., González, L., Gonsior, J., Bielefeldt, A.: Getting the most out of Wikidata: Semantic technology usage in Wikipedia's knowledge graph. In: International Semantic Web Conference. pp. 376–394. Springer (2018)
22. Murphy, K.P.: Machine learning: A probabilistic perspective. MIT Press (2012)
23. Nguyen, D.Q., Nguyen, T.D., Nguyen, D.Q., Phung, D.: A novel embedding model for knowledge base completion based on convolutional neural network. arXiv preprint arXiv:1712.02121 (2017)
24. Nickel, M., Murphy, K., Tresp, V., Gabrilovich, E.: A review of relational machine learning for knowledge graphs. Proceedings of the IEEE 104(1), 11–33 (2015)
25. Nickel, M., Rosasco, L., Poggio, T.: Holographic embeddings of knowledge graphs. arXiv preprint arXiv:1510.04935 (2015)
26. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: ICML. vol. 11, pp. 809–816 (2011)
27. Qin, C., Zhu, H., Zhuang, F., Guo, Q., Zhang, Q., Zhang, L., Wang, C., Chen, E., Xiong, H.: A survey on knowledge graph based recommender systems. SCIENTIA SINICA Informationis
28. Ruffinelli, D., Broscheit, S., Gemulla, R.: You can teach an old dog new tricks! On training knowledge graph embeddings. In: International Conference on Learning Representations (2019)
29. Saleem, M., Kamdar, M.R., Iqbal, A., Sampath, S., Deus, H.F., Ngonga Ngomo, A.C.: Big linked cancer data: Integrating linked TCGA and PubMed. Journal of Web Semantics 27, 34–41 (2014)
30. Strubell, E., Ganesh, A., McCallum, A.: Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243 (2019)
31. Sun, Z., Deng, Z.H., Nie, J.Y., Tang, J.: RotatE: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197 (2019)
32. Tieleman, T., Hinton, G.: Lecture 6.5—RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4(2), 26–31 (2012)
33. Trouillon, T., Dance, C.R., Gaussier, É., Welbl, J., Riedel, S., Bouchard, G.: Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research 18(1), 4735–4772 (2017)
34. Trouillon, T., Nickel, M.: Complex and holographic embeddings of knowledge graphs: A comparison. arXiv preprint arXiv:1707.01475 (2017)
35. Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embeddings for simple link prediction. In: International Conference on Machine Learning. pp. 2071–2080 (2016)
36. Wang, Q., Mao, Z., Wang, B., Guo, L.: Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering 29(12), 2724–2743 (2017)
37. Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. In: ICLR (2015)
38. Zhang, S., Tay, Y., Yao, L., Liu, Q.: Quaternion knowledge graph embeddings. In: Advances in Neural Information Processing Systems. pp. 2731–2741 (2019)

9 Appendix

Statistical Hypothesis Testing. We carried out a Wilcoxon signed-rank test to check whether our results are significant. Our null hypothesis was that the link prediction performances of ConEx, ComplEx and ConvE come from the same distribution. The alternative hypothesis was correspondingly that these results come from different distributions. To perform the Wilcoxon signed-rank test (two-sided), we used the differences of the MRR, Hits@1, Hits@3, and Hits@10 performances on WN18RR, FB15K-237 and YAGO3-10. We performed two hypothesis tests, between ConEx and ComplEx as well as between ConEx and ConvE. In both tests, we were able to reject the null hypothesis with a p-value < 1%. Ergo, the superior performance of ConEx is statistically significant.
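Such a test is readily reproduced with SciPy. The sketch below pairs the ConEx and ConvE rows of Tables 3 and 4; the exact pairing used by the authors is an assumption.

```python
from scipy.stats import wilcoxon

# Two-sided Wilcoxon signed-rank test on paired metric values: MRR, Hits@10,
# Hits@3, Hits@1 on WN18RR, FB15K-237 and YAGO3-10 (values from Tables 3 and 4).
conex = [.481, .550, .493, .448, .366, .555, .403, .271, .553, .696, .601, .474]
conve = [.430, .520, .440, .400, .335, .501, .356, .237, .440, .620, .490, .350]
stat, p = wilcoxon(conex, conve, alternative="two-sided")
print(f"statistic={stat}, p={p:.4f}")  # a small p ⇒ reject the null hypothesis
```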
Ablation Study. We conducted our ablation study in a fashion akin to [9]. Like [9], we evaluated 2 different parameter initialisations to compute confidence intervals, defined as $\bar{x} \pm 1.96\,\frac{s}{\sqrt{n}}$, where $\bar{x} = \frac{1}{n}\sum_{i}^{n} x_i$ and $s = \sqrt{\frac{\sum_{i}^{n}(x_i - \bar{x})^2}{n}}$. Hence, the mean and the standard deviation are computed without Bessel's correction. Our results suggest that the initialization of parameters does not play a significant role in the link prediction performance of ConEx. The dropout technique is the most important component in the generalization performance of ConEx. This is also observed in [9]. Moreover, replacing the Adam optimizer with the RMSprop optimizer [32] leads to slight increases in the variance of the link prediction results. During our ablation experiments, we were also interested in decomposing ConEx by removing conv(·,·) after ConEx is trained with it on the benchmark datasets. By doing so, we aim to observe the impact of a 2D convolution on the computation of scores. Table 9 indicates that the impact of conv(·,·) differs depending on the input knowledge graph. As the size of the input knowledge graph increases, the impact of conv(·,·) on the computation of the scores of triples increases.

Table 8: Ablation study for ConEx on FB15K-237. dp and ls denote the dropout technique and the label smoothing technique, respectively.

FB15K-237 | MRR | Hits@10 | Hits@3 | Hits@1
Full | .366 ± .000 | .556 ± .001 | .404 ± .001 | .270 ± .001
No dp on inputs | .282 ± .000 | .441 ± .001 | .313 ± .001 | .203 ± .000
No dp on feature map | .351 ± .000 | .533 ± .000 | .388 ± .001 | .259 ± .001
No ls | .321 ± .001 | .498 ± .001 | .354 ± .001 | .232 ± .002
With RMSprop | .361 ± .004 | .550 ± .007 | .400 ± .005 | .267 ± .003

Table 9: Link prediction results on benchmark datasets. ConEx⁻ stands for removing conv(·,·) in ConEx during the evaluation.

Dataset | ConEx (MRR, Hits@10, Hits@3, Hits@1) | ConEx⁻ (MRR, Hits@10, Hits@3, Hits@1)
WN18RR | .481 .550 .493 .448 | .401 .494 .437 .346
FB15K-237 | .366 .555 .403 .271 | .284 .458 .314 .198
YAGO3-10 | .553 .696 .601 .477 | .198 .324 .214 .136

Link Prediction Results on WN18 and FB15K. Table 10 reports link prediction results on the WN18 and FB15K benchmark datasets.

Table 10: Link prediction results on WN18 and FB15K, obtained from [4,38].

Model | WN18 (MRR, Hits@10, Hits@3, Hits@1) | FB15K (MRR, Hits@10, Hits@3, Hits@1)
DistMult | .822 .936 .914 .728 | .654 .824 .733 .546
ComplEx | .941 .947 .936 .936 | .692 .840 .759 .599
ANALOGY | .942 .947 .944 .939 | .725 .854 .785 .646
R-GCN | .819 .964 .929 .697 | .696 .842 .760 .601
TorusE | .947 .954 .950 .943 | .733 .832 .771 .674
ConvE | .943 .956 .946 .935 | .657 .831 .723 .558
HypER | .951 .958 .955 .947 | .790 .885 .829 .734
SimplE | .942 .947 .944 .939 | .727 .838 .773 .660
TuckER | .953 .958 .955 .949 | .795 .892 .833 .741
QuatE | .950 .962 .954 .944 | .833 .900 .859 .800
ConEx | .976 .980 .978 .973 | .872 .930 .896 .837
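The confidence interval used in the ablation study can be computed as follows; the two MRR values are hypothetical stand-ins for the n = 2 parameter initialisations.

```python
from math import sqrt

# Mean ± 1.96·s/√n with the uncorrected (population) standard deviation,
# i.e., without Bessel's correction, as in the ablation study above.
def confidence_interval(xs):
    n = len(xs)
    mean = sum(xs) / n
    s = sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return mean, 1.96 * s / sqrt(n)

mean, half_width = confidence_interval([0.366, 0.365])  # hypothetical MRRs
print(f"{mean:.3f} ± {half_width:.3f}")
```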
VzlSNxwfz6t
Novel extension to ComplEx with excellent presentation and evaluation
2: Accept
The work presents an extension to the ComplEx graph embedding method that addresses the shortcomings of ComplEx in terms of transitive relations. The paper does an excellent job at summarizing related work and highlighting the differences with the proposed method. The motivation and design choices are clearly presented and easy to follow. The presented methodology seems novel and impactful. The evaluation is extensive (five datasets) and convincing (comparison to SOTA models). Code and models are available online. The shortcomings of the paper are minor and mostly cosmetic: * Figure 1 needs more elaboration: What are the conclusions drawn from it? * Missing citation and typo in 5.3 "Ruffinelli et al." * In Tables 4 and 7 it is not explained what an underlined number represents. * Some of the references seem to cite the arXiv versions, rather than the conference versions (e.g., 3 and 13).
3: The reviewer is fairly confident that the evaluation is correct
k9GoaycDeio
ICLR.cc/2021/Conference
2021
Improving Local Effectiveness for Global Robustness Training
["JINGYUE LU", "M. Pawan Kumar"]
Despite their increasing popularity, deep neural networks are easily fooled. To alleviate this deficiency, researchers are actively developing new training strategies, which encourage models that are robust to small input perturbations. Several successful robust training methods have been proposed. However, many of them rely on strong adversaries, which can be prohibitively expensive to generate when the input dimension is high and the model structure is complicated. We adopt a new perspective on robustness and propose a novel training algorithm that allows a more effective use of adversaries. Our method improves the model robustness at each local ball centered around an adversary and then, by combining these local balls through a global term, achieves overall robustness. We demonstrate that, by maximizing the use of adversaries via focusing on local balls, we achieve high robust accuracy with weak adversaries. Specifically, our method reaches a robust accuracy level similar to that of state-of-the-art approaches trained on strong adversaries on MNIST, CIFAR-10 and CIFAR-100. As a result, the overall training time is reduced. Furthermore, when trained with strong adversaries, our method matches the current state of the art on MNIST and outperforms it on CIFAR-10 and CIFAR-100.
["strong adversaries", "local effectiveness", "adversaries", "local balls", "mnist", "global robustness", "global robustness training", "popularity", "deep neural networks", "deficiency"]
ABSTRACT

Despite their popularity, deep neural networks are easily fooled. To alleviate this deficiency, researchers are actively developing new training strategies, which encourage models that are robust to small input perturbations. Several successful robust training methods have been proposed. However, many of them rely on strong adversaries, which can be prohibitively expensive to generate when the input dimension is high and the model structure is complicated. We adopt a new perspective on robustness and propose a novel training algorithm that allows a more effective use of adversaries. Our method improves the model robustness at each local ball centered around an adversary and then, by combining these local balls through a global term, achieves overall robustness. We demonstrate that, by maximizing the use of adversaries via focusing on local balls, we achieve high robust accuracy with weak adversaries. Specifically, our method reaches a robust accuracy level similar to that of state-of-the-art approaches trained on strong adversaries on MNIST, CIFAR-10 and CIFAR-100. As a result, the overall training time is reduced. Furthermore, when trained with strong adversaries, our method matches the current state of the art on MNIST and outperforms it on CIFAR-10 and CIFAR-100.

1 INTRODUCTION

With the proliferation of deep neural networks (DNN) in areas including computer vision, natural language processing and speech recognition, there has been a growing concern over their safety. For example, Szegedy et al. (2013) demonstrated that naturally trained DNNs are in fact fragile. By adding to each data point a perturbation that is carefully designed but imperceptible to humans, DNNs previously reaching almost 100% accuracy could hardly make a correct prediction any more. This could cause serious issues in areas such as autonomous navigation or personalised medicine, where an incorrect decision can endanger life. To tackle these issues, training DNNs that are robust to small perturbations has become an active area of research in machine learning.

Various algorithms have been proposed (Papernot et al., 2016; Kannan et al., 2018; Zhang et al., 2019b; Qin et al., 2019; Moosavi-Dezfooli et al., 2020; Madry et al., 2018; Ding et al., 2020). Among them, adversarial training (ADV) (Madry et al., 2018) and TRADES (Zhang et al., 2019b) are two of the most frequently used training methods so far. Although developed upon different ideas, both methods require strong adversarial attacks, generally computed through several steps of projected gradient descent. Such attacks can quickly become prohibitive when model complexity and input dimensions increase, thereby limiting their applicability. Since the cost of finding strong adversaries is mainly due to the high number of gradient steps performed, one potential approach to alleviate the problem is to use cheap but weak adversaries. Weak adversaries are obtained using fewer gradient steps, and in the extreme case with a single gradient step. Based on this idea, Wong et al. (2020) argue that, by using random initialization and a larger step size, adversarial training with weak adversaries found via one gradient step is sufficient to achieve a satisfactory level of robustness. We term this method one-step ADV from now on. While one-step ADV does indeed exhibit robustness, there is still a noticeable gap when compared with its multi-step counterpart.

In this paper, we further bridge the gap by proposing a new robust training algorithm: Adversarial Training via LocAl Stability (ATLAS).
Local stability, in our context, implies stability of prediction and is the same as local robustness. Specifically, we make the following contributions:

• We adopt a new perspective on robust accuracy and introduce a framework for constructing robust training losses that allow a more effective use of adversaries. The framework consists of a local component and a global component. The local component maximizes the effectiveness of a given adversary by improving the network's robustness on both the adversary and points around it. In other words, the local component attempts to increase the radius of a ball centered at the adversary on which the network is robust. The global component combines all local balls in a regularized way to achieve the desired overall robust performance.
• Based on the framework, and guided by the need for fast robust training, we propose our novel robust training algorithm ATLAS.
• We show that ATLAS makes a more effective use of weak adversaries by favourably comparing it against one-step ADV on three datasets: MNIST, CIFAR-10 and CIFAR-100.
• Although one-step ATLAS is more expensive than its other one-step counterparts, ATLAS still allows efficient robust training. We show that, with a one-step weak adversary, ATLAS manages to achieve comparable levels of robust accuracy to multi-step state-of-the-art methods on all datasets.
• Finally, we show that when strong adversaries are used, ATLAS matches the current state of the art on MNIST and outperforms it on CIFAR-10 and CIFAR-100.

2 RELATED WORKS

Robust training aims to learn a network such that it is able to give the same correct output even when the input is slightly perturbed. Existing robust training algorithms can be divided into two categories: natural image based methods and adversary based methods. Within the first category, the common form of loss is a natural loss term plus a regularizer computed at natural images. We briefly mention some of these methods. Moosavi-Dezfooli et al. (2020) observed empirically that reducing the curvature of the loss function and the decision boundary could lead to robust models. The authors thus propose a regularizer based on the Hessian of the loss function. Closely related is the regularizer, introduced in (Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Ross & Doshi-Velez, 2018), that penalizes the Frobenius norm of the Jacobian of the loss function. The Jacobian regularizer can also be seen as a way of reducing the curvature of the decision boundary (Jakubovitz & Giryes, 2018). Although calculating the norm is computationally expensive, a fast estimator with empirically high accuracy has been developed in Hoffman et al. (2019).

We focus on adversary based robust training methods, as they generally perform better in terms of robust accuracy. Under this category, an early fundamental work is the Fast Gradient Sign Method (FGSM) by Goodfellow et al. (2015). Adversarial Training (ADV) (Madry et al., 2018) is a multi-step variant of FGSM. Rather than using one-step FGSM, ADV employs multi-step projected gradient descent (PGD) (Kurakin et al., 2016) with smaller step sizes to generate perturbed inputs. These modifications have allowed ADV to become one of the most effective robust training methods so far (Athalye et al., 2018).
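To make this concrete, below is a minimal PyTorch sketch of the multi-step PGD attack just described (formalized later in equations (2) and (3) of Section 3). This is an illustrative sketch, not the paper's implementation: an $\ell_\infty$ ball is assumed, and `model`, `loss_fn`, `eps` (perturbation budget), `alpha` (step size) and `steps` are assumed names and hyperparameters.

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps, alpha, steps):
    """Multi-step PGD under the l_inf norm: repeat a signed gradient ascent
    step on the loss, projecting back onto the ball of radius eps around x.
    With steps=1 and alpha=eps this reduces to the one-step FGSM attack."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # projection onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)              # keep a valid pixel range
    return x_adv.detach()
```

ADV then trains on these perturbed inputs in place of the clean ones; the number of steps is what makes strong adversaries expensive.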
Another frequently used robust training method is TRADES (Zhang et al., 2019b). Motivated by its theoretical analysis of the trade-off between natural accuracy and robust accuracy in a binary classification example, TRADES encourages model robustness by adding to the natural loss a regularizer involving adversaries to push away the decision boundary.

Recently, Qin et al. (2019) suggested that a robust model can be learned through promoting linearity in the vicinity of the training examples. They designed a local linearity regularizer (LLR), in which adversaries are used to maximize the penalty for non-linearity. Applying LLR also allows efficient robust training. We note that the underlying idea of LLR is complementary to ATLAS. In addition, several works have suggested adopting input-dependent treatments. These include incorporating whether the given input is correctly classified (Wang et al., 2020) and using adaptive perturbations for different inputs (Ding et al., 2019).

One major drawback of adversary based methods (Madry et al., 2018; Zhang et al., 2019b; Qin et al., 2019; Wang et al., 2020; Ding et al., 2019) is that most of them rely on strong adversaries, computed via expensive PGD. When the input dimension is high and the model structure is complicated, finding adversaries can be too expensive for these methods to work effectively. Several works have researched possible ways to speed up the process. Algorithmically, Zhang et al. (2019a) cut down the total number of full forward and backward passes for generating multi-step PGD adversaries, while Zhang et al. (2020) introduce a parameter to allow early-stopped PGD. Shafahi et al. (2019) reduce the adversary overhead by combining the weight update and the input perturbation update within a single back-propagation and use a single-step FGSM adversary. Wong et al. (2020) argue that the main reason one-step FGSM, generally regarded as not robust, is effective in (Shafahi et al., 2019) is the non-zero initialization used. As a result, Wong et al. (2020) proposed that, with random initialization and a larger step size, weak adversaries generated by FGSM can lead to models with a high level of robust accuracy. In this study, we adopt a similar viewpoint of accelerating robust training through the use of weak adversaries.

3 PRELIMINARIES AND NOTATIONS

We consider classification tasks. Let $x$ be an image from the data distribution $X \subseteq \mathbb{R}^N$ and $y_x$ be its correct label from the classes $\mathcal{C} = \{1, \dots, C\}$. We denote a neural network, parameterized by $\theta$, as $f(x; \theta) : X \rightarrow \mathbb{R}^C$. The function $f$ outputs logits, and the predicted class label is chosen to be the element with the largest scalar value. Given a supervised dataset, the parameters $\theta$ are obtained by minimizing a loss function $\ell$ over the dataset. In natural training, a common loss choice is the cross-entropy loss, denoted as $\ell_{CE}$. To compute $\ell_{CE}$, we need the prediction probabilities $p(x; \theta)$, which are evaluated element-wise as $p_i(x; \theta) = \frac{\exp(f_i(x;\theta))}{\sum_j \exp(f_j(x;\theta))}$.

Given a tolerance level $\epsilon$ and a distance measure norm $L_p$, we say the network $f$ is robust to adversaries at a given point $x \in X$ if it satisfies

$\arg\max_{i \in \mathcal{C}} f_i(x'; \theta) = \arg\max_{i \in \mathcal{C}} f_i(x; \theta), \quad \forall x' \text{ s.t. } \|x' - x\|_p \leq \epsilon. \qquad (1)$

We equivalently use a ball $B_\epsilon(x) = \{x' \mid \|x' - x\|_p \leq \epsilon\}$ to represent the allowed perturbation. If $f$ is robust on $B_\epsilon$, we call $B_\epsilon$ a robust ball.

Evaluating the true robustness of $f$ on $X$ is challenging. In practice, we replace $X$ with a test set and evaluate the robustness of $f$ by measuring the percentage of the test set satisfying the condition in equation (1).
To check whether the condition is met, various attack strategies are applied. Commonly used attacks are PGD based. To be specific, PGD performs the following gradient step during each iteration $t+1$,

$x^{t+1} = \Pi_{B_\epsilon(x)}\left(x^t + \alpha \cdot \mathrm{sgn}\left(\nabla_x \ell(f(x^t; \theta), y)\right)\right), \quad \alpha < \epsilon, \qquad (2)$

and repeats the iteration several times. Here, $\mathrm{sgn}$ denotes the sign function and $\Pi_{B_\epsilon(x)}$ denotes the projection onto $B_\epsilon(x)$. On the other hand, FGSM uses a single gradient step

$x' = x + \epsilon \cdot \mathrm{sgn}\left(\nabla_x \ell(f(x; \theta), y)\right). \qquad (3)$

Finally, we introduce the following notation to facilitate the discussion. We first assume that the neural network $f$ contains ReLU activations for the sake of clarity. This implies that the neural network is piecewise linear. As a consequence, at each $x \in X$, it is easy to find a weight matrix $W_x \in \mathbb{R}^{C \times N}$ and a constant vector $b_x \in \mathbb{R}^C$ such that $f(x; \theta) = W_x x + b_x$. To simplify the notation, we define

$\bar{W}_x = [W_x \mid b_x] \in \mathbb{R}^{C \times (N+1)}, \quad \bar{x}^T = [x^T \mid 1] \in \mathbb{R}^{1 \times (N+1)}. \qquad (4)$

As a result, at each point $x$, we have $f(x; \theta) = \bar{W}_x \bar{x}$. Before we delve into the details of the algorithm ATLAS, we first introduce a new framework for designing robust training losses.

4 THE GENERAL FRAMEWORK

Unlike the optimization perspective taken by ADV and the regularization viewpoint adopted by TRADES, our new loss framework is motivated from a geometric standpoint. We start by briefly mentioning its closely related regularization-type approaches. Generally, in such approaches, we start from a natural image $x$. By including a loss term for $x$, regularization-type losses ensure the model gives the correct prediction $y_x$ at $x$. Assume the network makes the correct prediction at the natural image $x$. It follows that there exists a robust ball $B_{\epsilon_0}(x)$, where $\epsilon_0 \geq 0$ is the maximum radius achievable. Previous uses of an adversary regularization term encourage the radius $\epsilon_0$ of the ball $B_{\epsilon_0}(x)$, centered at $x$, to increase.

We take a different direction in our new loss framework. Instead of focusing on one global ball centered at $x$, we focus on local balls centered at various points $x' \in B_\epsilon(x)$ and hopefully, by combining them in a regularized way, we achieve the desired $\epsilon$ robustness. To facilitate the illustration, we introduce our loss framework through a binary classification problem. Let there be two classes $\mathcal{C} = \{c_1, c_2\}$. For an arbitrary point $x \in X$, the correct class label is $y_x = c_1$. Since we are interested in a network's performance in classifying labels, we compute the logits difference as

$f_1(x; \theta) - f_2(x; \theta) = d \bar{W}_x \bar{x} \in \mathbb{R},$

where $d = [1, -1]$ is a row vector. For a binary classification problem with cross-entropy loss, the loss decreases monotonically with the increase of the value $d \bar{W}_x \bar{x}$. To achieve the desired robustness, we require $d \bar{W}_{x'} \bar{x}' > 0$ for all $x' \in B_\epsilon(x)$. When the cross-entropy loss $\ell$ is considered, we need $\ell(x') < -\log(0.5)$ for all $x' \in B_\epsilon(x)$. In Figure 1, we show a potential loss curve on $B_\epsilon(x)$ by fixing the value of $x$ on all dimensions apart from one. Our loss framework consists of a local component and a global component. Both components update the parameters $\theta$ of the network model.

Figure 1: Original loss curve on $B_\epsilon(x)$ by fixing the value of $x$ on all dimensions apart from one. Points with the loss above the decision boundary $-\log 0.5$ are adversarial points and are marked in red.

4.1 LOCAL ROBUSTNESS

For the local component, at each given local point $x' \in B_\epsilon(x)$, the goal is to attain a robust ball $B_{\epsilon'}(x')$ with a large radius. We maximize the use of adversaries by treating them as local center points $x'$. In other words, given an adversary $x' \in B_\epsilon(x)$, we need to satisfy two requirements: the model predicts the required label $y_x$ at $x'$, and the radius $\epsilon'$ of $B_{\epsilon'}(x')$ should be enlarged.
Before we delve into the details of the algorithm ATLAS, we first introduce a new framework for designing robust training losses.

4 THE GENERAL FRAMEWORK

Unlike the optimization perspective taken by ADV and the regularization viewpoint adopted by TRADES, our new loss framework is motivated from a geometric standpoint. We start by briefly recalling the closely related regularization-type approaches. Generally, such approaches start from a natural image $x$. By including a loss term for $x$, regularization-type losses ensure the model gives the correct prediction $y_x$ at $x$. Assume the network makes the correct prediction at the natural image $x$. It follows that there exists a robust ball $B_{\epsilon_0}(x)$, where $\epsilon_0$ is the maximum radius achievable. Previous adversary-based regularization terms encourage the radius $\epsilon_0$ of the ball $B_{\epsilon_0}(x)$, centered at $x$, to increase.

We take a different direction in our new loss framework. Instead of focusing on one global ball centered at $x$, we focus on local balls centered at various points $x' \in B_\epsilon(x)$ and, by combining them in a regularized way, aim to achieve the desired $\epsilon$ robustness. To facilitate the illustration, we introduce our loss framework through a binary classification problem. Let there be two classes $\mathcal{C} = \{c_1, c_2\}$, and for an arbitrary point $x \in \mathcal{X}$ let the correct class label be $y_x = c_1$. Since we are interested in a network's performance in classifying labels, we compute the logit difference

$$f_1(x;\theta) - f_2(x;\theta) = d\, \bar{W}_x \bar{x} \in \mathbb{R},$$

where $d = [1, -1]$ is a row vector. For a binary classification problem with cross-entropy loss, the loss decreases monotonically as the value $d\, \bar{W}_x \bar{x}$ increases. To achieve the desired robustness, we require $d\, \bar{W}_{x'} \bar{x}' > 0$ for all $x' \in B_\epsilon(x)$; in terms of the cross-entropy loss $\ell$, we need $\ell(x') < -\log(0.5)$ for all $x' \in B_\epsilon(x)$. In Figure 1, we show a potential loss curve on $B_\epsilon(x)$, obtained by fixing the value of $x$ on all dimensions apart from one. Our loss framework consists of a local component and a global component, both of which update the parameters of the network model.

Figure 1: Original loss curve on $B_\epsilon(x)$, obtained by fixing the value of $x$ on all dimensions apart from one. Points with loss above the decision boundary $-\log 0.5$ are adversarial points and are marked in red.

4.1 LOCAL ROBUSTNESS

For the local component, at each given local point $x' \in B_\epsilon(x)$, the goal is to attain a robust ball $B_{\epsilon'}(x')$ with a large radius. We maximize the use of adversaries by treating them as the local center points $x'$. In other words, given an adversary $x' \in B_\epsilon(x)$, we need to satisfy two requirements: the model predicts the required label $y_x$ at $x'$, and the radius $\epsilon'$ of $B_{\epsilon'}(x')$ should be enlarged. When compared with ADV, the additional second requirement allows us to achieve improved robustness even when weak adversaries are used.

The first requirement can easily be met by applying a standard loss term at $x'$ with the label $y_x$. For the second requirement, to push away the decision boundary, various methods have been introduced, including Jacobian regularizers (Jakubovitz & Giryes, 2018; Ross & Doshi-Velez, 2018), MMA (Ding et al., 2019) and MMR (Croce & Hein, 2019). An expected loss curve after introducing the local component is shown in Figure 2.

Figure 2: Loss curve after a local step. The dashed line is the original loss curve. Provided with a weak adversary $x'$, the local step aims to decrease the loss at $x'$ while increasing local robustness, so that the local robust ball centered at $x'$ has a large radius.

Figure 3: Loss curve after a global step. The dashed line is the loss curve after a local step. To further smooth the loss curve and remove adversaries between $x$ and $x'$, we penalize the difference $p(x;\theta) - p(x';\theta)$. As a result, the gap between $\ell(x)$ and $\ell(x')$ is decreased.

4.2 GLOBAL REGULARIZATION

In terms of the global component, we combine local balls together in a controlled way to further improve robustness. We first make the following observation.

Proposition 1. Consider a ball $B_\epsilon(x)$ at an arbitrary point $x \in \mathcal{X}$. For any $x^* \in B_\epsilon(x)$, define $\bar{W}_{x^*}$ and $\bar{x}^*$ accordingly, as in equation (4). Let $u$ be a constant such that $\|\bar{W}_{x^*}\|_F \le u$ for all possible $\bar{W}_{x^*}$. Assume $x_1$ and $x_2$ are arbitrarily chosen points in $B_\epsilon(x)$. Then for any

$$x^* = \lambda x_1 + (1 - \lambda) x_2, \quad \lambda \in (0, 1), \qquad (5)$$

we have

$$d\, \bar{W}_{x^*} \bar{x}^* \ge \frac{1}{2}\big(d\, \bar{W}_{x_1} \bar{x}_1 + d\, \bar{W}_{x_2} \bar{x}_2\big) - uL, \qquad (6)$$

where $L = \sqrt{2}\,\big(2\|\bar{x}^*\|_2 + \tfrac{1}{2}\|x_1 - x_2\|_2\big)$ is a constant.

We now consider the combination of local robust balls. At each loss computation we have two points: the natural image $x$ and the adversary $x'$. Assume the model makes the correct prediction at the natural image and, after applying the local component loss, at $x'$, so that we have two non-empty local robust balls $B_{\epsilon_0}(x)$ and $B_{\epsilon'}(x')$. For the sake of simplicity, we further assume these two balls are disjoint; the same underlying idea should carry over to more general cases. The goal is that, after the global regularization step, points that are misclassified even after the local step become correctly predicted. To this end, given that the local robust balls $B_{\epsilon_0}(x)$ and $B_{\epsilon'}(x')$ are disjoint, we assume there exists an adversarial point $x^* \in B_\epsilon(x)$ satisfying

$$x^* = \lambda x + (1 - \lambda) x', \qquad (7)$$

where $\lambda \in (0, 1)$ is a constant depending on $B_{\epsilon_0}(x)$ and $B_{\epsilon'}(x')$. A direct application of Proposition 1 then gives

$$d\, \bar{W}_{x^*} \bar{x}^* \ge \frac{1}{2}\big(d\, \bar{W}_x \bar{x} + d\, \bar{W}_{x'} \bar{x}'\big) - uL = \frac{1}{2}\, d\big(\bar{W}_x \bar{x} - \bar{W}_{x'} \bar{x}'\big) + d\, \bar{W}_{x'} \bar{x}' - uL, \qquad (8)$$

where $u$ and $L$ are defined accordingly.

To make $x^*$ no longer an adversary (that is, to encourage $d\, \bar{W}_{x^*} \bar{x}^* > 0$) after the global combination, we should make the lower bound on the right-hand side as high as possible. Since both $d\, \bar{W}_x \bar{x}$ and $d\, \bar{W}_{x'} \bar{x}'$ have the same theoretical maximum, we can increase their sum by first increasing the value of $d\, \bar{W}_{x'} \bar{x}'$ and then decreasing the distance between $d\, \bar{W}_x \bar{x}$ and $d\, \bar{W}_{x'} \bar{x}'$. Recall that the local robustness step introduces a cross-entropy loss at $x'$, which directly maximizes the value of $d\, \bar{W}_{x'} \bar{x}'$; in addition, enlarging local robust balls often leads to a decreased value of $u$. Given that $d\, \bar{W}_{x'} \bar{x}'$ and $u$ are already optimized, a sound global step is to minimize the distance between $d\, \bar{W}_x \bar{x}$ and $d\, \bar{W}_{x'} \bar{x}'$. We note that, for this distance to be zero, it suffices that the relative differences $f_2(x';\theta) - f_1(x';\theta)$ and $f_2(x;\theta) - f_1(x;\theta)$ be equal, rather than requiring equality between the logits $f(x;\theta)$ and $f(x';\theta)$. To account for this fact, we use prediction probabilities instead.
The global regularization step therefore requires minimizing the prediction probability difference $p(x;\theta) - p(x';\theta)$. As a result, we are likely to turn $x^*$ into a correctly predicted point after the global step, as illustrated in Figure 3.

In terms of the loss surface, the global component encourages the loss to be smoothed and flattened on $B_\epsilon(x)$, so the global component is helpful even when no such adversaries $x^*$ exist. As suggested in (Zhang et al., 2019b; Cisse et al., 2017; Ross & Doshi-Velez, 2018; Moosavi-Dezfooli et al., 2017), these properties of the loss surface are desirable and possibly indispensable for models to be robust.

Overall, various losses can be constructed under this framework. Depending on the problem at hand, one can choose an appropriate regularizer for enlarging the local robust balls while selecting a suitable metric for penalizing the prediction probability difference.

5 ATLAS

Guided by the goal of effective fast robust training, we propose a new training algorithm, ATLAS, which stands for Adversarial Training via LocAl Stability. We mention that local stability, in our context, means stability of prediction, so it is the same as local robustness. ATLAS is based on the framework above; we now introduce and explain our choices for the local and global components.

5.1 ATLAS-LOCAL

Many existing approaches for increasing the robust ball radius are computationally expensive. In this work, we adopt the standard Jacobian approach to meet our local component requirements, based on the fact that an efficient estimation algorithm for the Jacobian has been developed in Hoffman et al. (2019). We briefly introduce how the Jacobian can be used to increase local robustness.

Given a local point $x'$, let the correct class label be $c = y_x$. Then for any $c' \ne c$, the boundary hyper-surface separating classes $c$ and $c'$ consists of the points $x_b$ satisfying

$$f_c(x_b;\theta) - f_{c'}(x_b;\theta) = 0. \qquad (9)$$

Applying the standard formula for the distance between a point and a hyper-plane (here the point is $x'$ and the hyper-plane is tangent to the boundary hyper-surface), we get the first-order approximation of the distance $d_{c'}$ from $x'$ to the boundary hyper-surface in equation (9) under the $l_2$ norm:

$$d_{c'} = \frac{|f_c(x';\theta) - f_{c'}(x';\theta)|}{\|\nabla_{x'} f_c(x';\theta) - \nabla_{x'} f_{c'}(x';\theta)\|_2}. \qquad (10)$$

Since the above holds for any $c'$, we conclude that the model is robust on an $l_2$ norm ball centered at $x'$ with radius $d := \min_{c' \ne c} d_{c'}$. To maximize the radius $d$, we borrow the following proposition from Jakubovitz & Giryes (2018), which introduced a Jacobian regularizer on the natural loss, to provide a lower bound on $d$.

Proposition 2. Assume the model makes the correct prediction $c = y_x$ at $x'$ and the distance metric is the $l_2$ norm. The first-order approximation of the minimum perturbation $d$ required to find an adversarial example is lower bounded by

$$d \ge \frac{1}{\sqrt{2}\, \|J(x')\|_F} \min_{c' \ne c} |f_c(x';\theta) - f_{c'}(x';\theta)|, \qquad (11)$$

where $J(x') = \nabla_{x'} f(x';\theta)$ is the Jacobian matrix computed at $x'$ and $\|\cdot\|_F$ is the Frobenius norm.

For a larger distance $d$, we need to both increase the value of $|f_c(x';\theta) - f_{c'}(x';\theta)|$ and decrease the Frobenius norm of $J(x')$. The first term can easily be taken care of by using a cross-entropy loss on $x'$; for the second term, we can include a cheap approximation of $\|J(x')\|_F^2$, denoted $\|J(x')\|_F^{approx}$, as a penalty term in our loss. Using the idea of random projection, Hoffman et al. (2019) show theoretically and empirically that $\|J(x')\|_F^{approx}$ can be estimated with high quality by one backward pass, regardless of the total number of classes $C$.
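A minimal sketch of that random-projection estimator, in the spirit of Hoffman et al. (2019) (the exact implementation there may differ): for $v$ uniform on the unit sphere in $\mathbb{R}^C$, $\mathbb{E}_v\,\|v^T J(x')\|_2^2 = \|J(x')\|_F^2 / C$, so each random projection costs a single backward pass whatever the value of $C$.

```python
import torch

def jacobian_norm_sq_approx(model, x, n_proj=1):
    """Random-projection estimate of ||J(x)||_F^2 per example.

    Each draw v on the unit sphere in R^C satisfies
    E_v[ ||grad_x <v, f(x)>||_2^2 ] = ||J(x)||_F^2 / C,
    so one backward pass per projection suffices, independently of C.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                                   # shape (B, C)
    C = logits.shape[1]
    est = torch.zeros(x.shape[0], device=x.device)
    for _ in range(n_proj):
        v = torch.randn_like(logits)
        v = v / v.norm(dim=1, keepdim=True)             # unit sphere in R^C
        (g,) = torch.autograd.grad((logits * v).sum(), x,
                                   create_graph=True)   # v^T J in one pass
        est = est + C * g.pow(2).flatten(1).sum(dim=1)
    return est / n_proj
```

Setting `create_graph=True` keeps the estimate differentiable, so it can serve directly as a trainable penalty term.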
Combining the above analyses, we introduce the following loss for the local robustness component:

$$\ell_{local}(x') = \ell_{CE}(x') + \alpha\, \|J(x')\|_F^{approx}, \qquad (12)$$

where $\alpha$ is a positive scalar. It is worth noting that our local loss in equation (12) differs from that of (Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Ross & Doshi-Velez, 2018), as the loss is evaluated at an adversary $x'$ instead of at the image $x$. In the fast robust training setting, $x'$ is a weak adversary obtained through one-step FGSM.

5.2 ATLAS-GLOBAL

To penalize the prediction probability difference, a natural and frequently used choice is the Kullback–Leibler (KL) distance. We thus formulate the global loss as

$$\ell_{global}(x') = \beta\, KL\big(p(x';\theta)\,\|\,p(x;\theta)\big), \qquad (13)$$

where $\beta$ is a positive constant.

Although in general $KL(p(x';\theta)\,\|\,p(x;\theta)) \ne KL(p(x;\theta)\,\|\,p(x';\theta))$ due to the asymmetry of the KL distance, the shared underlying nature of the two terms means it suffices to use one direction for the penalty. In our loss, we use the prediction distribution at $x'$ as the base distribution. There are two reasons for this choice. Firstly, we want to be consistent with the fact that the local component loss is computed at $x'$. Secondly, we hope it mitigates a possible over-fitting issue: if a model over-fits at the original image $x$, causing an element $p_i(x;\theta)$ of $p(x;\theta)$ to be close to 0, the distance between $p_i(x;\theta)$ and $p_i(x';\theta)$ would not be penalized effectively; in the extreme case $p_i(x;\theta) = 0$, the difference is not penalized at all. Using $p(x';\theta)$ as the base might alleviate this issue, as $x'$ keeps changing during each epoch. By the same reasoning, we point out that TRADES relies on finding strong adversaries. Since it uses the natural image $x$ both to compute the cross-entropy loss and as the base distribution in the KL regularizer, TRADES cannot use adversaries effectively when they are weak. This is consistent with what we see in the experiments section: TRADES with one-step FGSM is largely outperformed by other methods on the challenging datasets. Furthermore, in Appendix C.1 we show that putting more emphasis on adversaries by replacing $x$ with $x'$ in TRADES yields improved robust accuracy.

5.3 FINAL LOSS

Integrating the above analyses, we propose the following final loss for ATLAS. Given $\alpha > 0$ and $\beta > 0$, the loss is formulated as

$$\ell_{ATLAS} = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \Big( \underbrace{\ell_{CE}(x') + \alpha\, \|J(x')\|_F^{approx}}_{local} + \underbrace{\beta\, KL\big(p(x';\theta)\,\|\,p(x;\theta)\big)}_{global} \Big), \quad x' \in B_\epsilon(x). \qquad (14)$$

We mention that although our analyses are carried out in the $l_2$ norm, our results generalize easily to other norms. To ensure the model makes correct predictions at the natural images $x \in \mathcal{X}$, and for effective combination of the local balls, we adopt an adaptive value scheme for the pre-determined perturbation $\epsilon$: we start by setting $\epsilon = 0$ and gradually increase its value to the required number over the initial epochs of training. Assuming the final value of $\epsilon$ is $v$, to stay consistent with the magnitude of $\epsilon$ we replace $\alpha$ and $\beta$ by $\frac{\epsilon}{v}\alpha$ and $\frac{\epsilon}{v}\beta$ accordingly.
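Putting the pieces together, below is a hedged sketch of the per-batch ATLAS objective of equation (14), reusing the `fgsm` and `jacobian_norm_sq_approx` sketches above; the warm-up helper mirrors the adaptive $\epsilon$ scheme just described, and every name and default here is ours rather than the paper's.

```python
import torch.nn.functional as F

def eps_at_epoch(epoch, eps_final, warmup_epochs=15):
    """Linear epsilon warm-up for the initial stage of training."""
    return eps_final * min(1.0, epoch / warmup_epochs)

def atlas_loss(model, x, y, eps, alpha, beta):
    """One evaluation of equation (14): cross-entropy plus the Jacobian
    penalty at a weak adversary x' (local term), and KL(p(x') || p(x))
    tying x' back to the natural image x (global term)."""
    x_adv = fgsm(model, x, y, eps)        # weak one-step adversary x'
    logits_adv = model(x_adv)
    local = F.cross_entropy(logits_adv, y) \
        + alpha * jacobian_norm_sq_approx(model, x_adv).mean()
    # F.kl_div(log_q, p) computes KL(p || q), so this is KL(p(x') || p(x)):
    glob = F.kl_div(F.log_softmax(model(x), dim=1),
                    F.softmax(logits_adv, dim=1), reduction="batchmean")
    return local + beta * glob
```

Per Section 5.3, $\alpha$ and $\beta$ would additionally be rescaled by $\epsilon / v$ while $\epsilon$ is still warming up.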
6 EXPERIMENTS

We evaluate the performance of ATLAS on three datasets: MNIST, CIFAR-10 and CIFAR-100. We consider both training robust models with weak adversaries generated by FGSM and with strong adversaries generated by multi-step PGD. To be consistent with the common experimental setting, we adopt the $l_\infty$ norm and use the perturbation value $\epsilon = 0.3$ for MNIST and $\epsilon = 8/255$ for CIFAR-10 and CIFAR-100. MNIST is trained on a 4-layer CNN consisting of 2 convolutional layers followed by 2 fully connected layers; CIFAR-10 and CIFAR-100 are trained on Wide-ResNet-28-8 (Zagoruyko & Komodakis, 2016). To encourage efficient robust training, instead of a fixed learning rate scheduler that decreases the learning rate at pre-specified epochs, we randomly choose a subset of the data as a validation set and use its robust accuracy both to guide the learning rate adjustment and to terminate training. For MNIST, 20-step PGD is applied for computing robust accuracy on the validation set; for CIFAR-10 and CIFAR-100, 10-step PGD is used. In addition, we gradually increase the epsilon value from 0 to 0.3 or 8/255 over the first 15 epochs. We use the SGD optimizer for training. More training-related details can be found in Appendix B.

Methods. We compare against ADV and TRADES. We term methods that use weak FGSM adversaries one-step methods. Following the advice of Wong et al. (2020), we apply FGSM combined with random initialization and a step size of 1.25ε to generate adversarial examples. One-step ADV is an implementation of (Wong et al., 2020). To be consistent with their multi-step variants, we use the cross-entropy loss for computing the gradient in FGSM for ADV and ATLAS, while employing the KL distance for TRADES. Multi-step methods are trained on strong adversaries generated by PGD: 20 steps for MNIST and 10 steps for CIFAR-10 and CIFAR-100. We also include the Jacobian penalty loss (Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Ross & Doshi-Velez, 2018) due to its close relation to our local loss component; we refer to it as zero-step JAC to emphasize that it does not require adversaries. For each method, a range of parameters is tested, and those that give both high clean accuracy and high validation robust accuracy are chosen; results for all tested parameter choices are provided in Appendix F. Due to limited space, the ablation study on each component of ATLAS — the local part (ATLAS-l, obtained by setting $\beta = 0$) and the global part (ATLAS-g, obtained by setting $\alpha = 0$) — is left to Appendix C, as is a detailed comparison between TRADES and ATLAS-g. In addition, Appendix D.1 contains an experiment demonstrating ATLAS's effective use of weak adversaries: we compare ATLAS against a loss that always treats the natural image $x$, instead of a weak adversary $x'$, as the center point for the local component; in other words, a loss that combines TRADES and zero-step JAC.

Robustness Evaluation. Extensive robustness evaluations are conducted in order to avoid a false sense of security. The black box attack generates adversaries on surrogate models with 1000 PGD steps. In terms of white box attacks, apart from an initial evaluation with PGD-20, we further employ three strong attacks: PGD-1000, the Untargeted (Un-T) attack (Carlini & Wagner, 2017) and the Multi-targeted (Multi-T) attack (Gowal et al., 2019). The Untargeted attack uses the loss $f_u(x;\theta) - f_c(x;\theta)$, where $c$ is the correct class and $u = \arg\max_{i \ne c} f_i(x)$; we allow $u$ to change at each gradient step. The Multi-targeted attack uses the loss $f_i(x;\theta) - f_c(x;\theta)$ for every $i \in \mathcal{C}$ such that $i \ne c$. On the big Wide-ResNet model, we run 10 restarts with 100 steps for the Untargeted attack and 5 restarts with 20 steps for the Multi-targeted attack; we reduce the number of restarts and steps for the Multi-targeted attack because each incorrect class needs to be tested. Due to the large number of classes, the Multi-targeted attack is not performed on CIFAR-100. On the small CNN model, we perform 20 restarts with 50 steps for both attacks.
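For reference, a short sketch of the margin objectives behind the Un-T and Multi-T attacks described above; the surrounding PGD loop and restart logic are omitted, and the helper names are ours.

```python
import torch

def untargeted_margin(logits, y):
    """Un-T objective f_u(x) - f_c(x), with u the currently strongest
    wrong class; u is re-selected from the logits at every call."""
    f_c = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, y.unsqueeze(1), float("-inf"))  # hide correct class
    return (masked.max(dim=1).values - f_c).mean()

def multi_targeted_margins(logits, y):
    """Multi-T objectives f_i(x) - f_c(x), one per class i; the i == c
    entry is identically zero and is skipped when attacking."""
    f_c = logits.gather(1, y.unsqueeze(1))             # shape (B, 1)
    return logits - f_c                                # shape (B, C)
```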
For each method, attack results are reported for the model at the epoch that gives the best validation robust accuracy during training. On the challenging datasets CIFAR-10 and CIFAR-100, two further evaluations are carried out to obtain a more comprehensive picture of the models' robustness: a hard-label attack, RayS (Chen & Gu, 2020), and an ensemble of diverse parameter-free attacks, AutoAttack (Croce & Hein, 2020). Performance under these two evaluations is reported in Appendix D.2.

We summarize the attack results in Table 1. With a learning rate scheduler guided by robust accuracy on the validation set, all models achieve their best performance around 30 epochs, except zero-step JAC on MNIST at 12 epochs. In terms of robustness, slight improvements for all methods and narrower performance gaps would be observed if all models were trained for 70 epochs with a fixed learning rate scheduler on MNIST; on CIFAR-10 and CIFAR-100, on the other hand, we found all methods perform better with the guided scheduler. Since we are interested in efficient robust learning, we use the guided learning rate scheduler in all experiments.

Evaluations on CIFAR-10. We study the challenging CIFAR-10 first. For zero-step JAC, we observe that to achieve roughly 30% robust accuracy, nominal accuracy drops below 65%. Directly including a Jacobian penalty at natural images can thus lead to over-regularization; this may also be the reason why the method is used as a post-processing technique in (Jakubovitz & Giryes, 2018). By using local adversaries instead of natural images, ATLAS-l manages to obtain robust accuracy without sacrificing much nominal accuracy, as demonstrated in the Appendix. For TRADES, we find that its effectiveness relies on strong adversaries: robust accuracy increases sharply from the one-step case to the multi-step case. Since TRADES computes its loss at natural images and uses them as the base distribution in the KL penalty, weak adversaries are not effectively used.

In addition, we consistently find for one-step TRADES that, after a certain number of epochs, the percentage of weak adversaries on which the model makes incorrect decisions falls sharply, accompanied by a sudden drop of more than 10% in validation robust accuracy. This phenomenon is referred to as catastrophic overfitting (Wong et al., 2020; Andriushchenko & Flammarion, 2020). We observe that when this behaviour occurs, the approximated value $\|J(x')\|_F^{approx}$ increases steeply, consistent with the observations in Andriushchenko & Flammarion (2020). When dealing with catastrophic overfitting is the main concern, Andriushchenko & Flammarion (2020) argue that the key is to increase local linearity, and they introduce a regularizer that maximizes gradient alignment for points within the perturbation ball. In our case, the issue can be similarly addressed by increasing the coefficient $\alpha$ to encourage local linearity. However, since the goal of this work is to achieve high robust accuracy fast, we do not restrict ourselves to models without catastrophic overfitting, which are likely to trade robust accuracy for stability; instead, we use early termination and consider a wider range of models. We mention that the same phenomenon is observed for one-step ADV and, less frequently, for one-step ATLAS when $\alpha$ is small. More discussion of catastrophic overfitting can be found in Appendix E.
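As a side note, the diagnostic just described is easy to automate. The sketch below (thresholds purely illustrative, not from the paper) flags the joint signature of catastrophic overfitting — a steep rise of the estimated Jacobian norm together with a sudden drop in validation robust accuracy — so training can be terminated early:

```python
def catastrophic_overfitting(jac_hist, rob_acc_hist,
                             jac_spike=3.0, acc_drop=0.10):
    """Flag a spike in ||J(x')||_F^approx paired with a large fall in
    validation robust accuracy between consecutive checks."""
    if len(jac_hist) < 2 or len(rob_acc_hist) < 2:
        return False
    spiked = jac_hist[-1] > jac_spike * jac_hist[-2]
    dropped = (rob_acc_hist[-2] - rob_acc_hist[-1]) > acc_drop
    return spiked and dropped
```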
Table 1: Robustness performance for MNIST on a small CNN and for CIFAR-10 and CIFAR-100 on Wide-ResNet-28-8. Higher is better; the best results within the one-step and multi-step groups are in bold.

| Model | Clean accuracy | Black box attack | PGD-20 attack | PGD-1000 attack | Un-T attack | Multi-T attack | Time per batch |
|---|---|---|---|---|---|---|---|
| **MNIST** | | | | | | | |
| zero-step JAC | 98.34% | 89.84% | 58.09% | 2.48% | 28.83% | 28.72% | 0.012s |
| one-step ADV | **99.49%** | 96.26% | 96.29% | 83.25% | 91.02% | 90.62% | 0.009s |
| one-step TRADES | 99.40% | 96.21% | 96.28% | 83.05% | 90.66% | 90.02% | 0.016s |
| one-step ATLAS | 99.47% | **96.36%** | **96.84%** | **89.56%** | **92.44%** | **92.04%** | 0.018s |
| twenty-step ADV | 99.48% | 96.40% | 97.16% | **92.32%** | 93.01% | **92.87%** | 0.077s |
| twenty-step TRADES | **99.49%** | 96.21% | 97.01% | 90.51% | 92.49% | 92.21% | 0.097s |
| twenty-step ATLAS | 99.44% | **96.45%** | **97.19%** | 92.20% | **93.03%** | 92.74% | 0.086s |
| **CIFAR-10** | | | | | | | |
| zero-step JAC | 63.24% | 60.45% | 34.40% | 34.34% | 31.13% | 30.32% | 0.47s |
| one-step ADV | 86.24% | 82.53% | 45.73% | 44.98% | 45.14% | 42.97% | 0.25s |
| one-step TRADES | **86.84%** | **82.63%** | 40.03% | 39.41% | 39.31% | 38.14% | 0.55s |
| one-step ATLAS | 84.52% | 81.06% | **49.01%** | **48.59%** | **48.54%** | **46.55%** | 0.73s |
| ten-step ADV | **84.77%** | 82.53% | 50.60% | 50.22% | 49.74% | 47.96% | 1.38s |
| ten-step TRADES | 84.01% | **82.63%** | 52.70% | 52.45% | 50.29% | 49.66% | 2.02s |
| ten-step ATLAS | 82.94% | 81.06% | **53.45%** | **53.10%** | **51.51%** | **50.16%** | 1.88s |
| **CIFAR-100** | | | | | | | |
| zero-step JAC | 47.96% | 44.67% | 15.39% | 15.18% | 14.35% | - | 0.47s |
| one-step ADV | 48.88% | 45.59% | 19.89% | 19.72% | 19.04% | - | 0.25s |
| one-step TRADES | **62.58%** | **57.64%** | 16.93% | 16.21% | 14.83% | - | 0.53s |
| one-step ATLAS | 60.98% | 57.04% | **26.30%** | **25.94%** | **26.28%** | - | 0.73s |
| ten-step ADV | **60.72%** | **58.15%** | 27.71% | 27.43% | 26.83% | - | 1.37s |
| ten-step TRADES | 59.76% | 57.78% | 26.27% | 26.10% | 23.86% | - | 2.00s |
| ten-step ATLAS | 59.62% | 57.83% | **29.12%** | **28.89%** | **27.77%** | - | 1.87s |

Overall, one-step ATLAS outperforms the other one-step methods under all strong white box attacks. Even when compared with methods trained on 10-step PGD, the performance gap between one-step ATLAS and multi-step ADV under the strongest Multi-T attack is below 1.5%. Although the local and global penalties cost one-step ATLAS extra computational time (0.73s) compared with one-step ADV (0.25s), the fact that ATLAS achieves robust accuracy comparable to that of multi-step ADV (1.38s) using weak adversaries still allows a roughly 50% speed-up in training. For a fair comparison, we also evaluate an ADV model with 5 PGD steps: the 5-step ADV model requires slightly more time (0.75s per batch) but is less robust than one-step ATLAS, reaching 44.93% accuracy under the Multi-T attack. When trained on strong adversaries, multi-step ATLAS wins over the other two multi-step methods under all strong white box attacks.

Evaluations on CIFAR-100 and MNIST. Similar performance is observed on CIFAR-100. The key fact to notice is that the per-batch training cost for CIFAR-100 is similar to that of CIFAR-10: since we apply an efficient approximation to evaluate $\|J(x)\|_F^2$, the larger number of classes does not raise the total computational cost. Again, we evaluate ADV models with 5 PGD steps, which require 0.75s per batch and reach 25.21% robust accuracy under the Un-T attack.
On MNIST, one-step ATLAS performs best as well. Since MNIST is a simple dataset, all multi-step methods obtain high robust accuracy on the small CNN model, and for the same reason the unstable sudden drop in validation robust accuracy is not observed for any one-step method.

7 DISCUSSION

We have adopted a different perspective and proposed a new framework for constructing losses that allow more effective use of adversaries. Based on this framework, we introduced a novel algorithm, ATLAS, for fast robust training. ATLAS can also serve as an initialization technique for other, more involved tasks; for instance, a model trained with ATLAS can be employed as the starting point for layer-wise adversarial training to improve certified robustness. Beyond fast robust training, we believe other losses can be constructed from the framework to accommodate different problems. We leave the exploration of potential applications to various robust training settings to future research.
fZjF5Wtz7QP
Good empirical results but there are concerns about the novelty and clarity of the provided justifications
5: Marginally below acceptance threshold
**Summary:** The paper proposes a new way to combine existing techniques for improving adversarial robustness: adversarial training, Jacobian regularization and TRADES consistency loss. The obtained results on three datasets are better than the baselines which rely on adversarial training, Jacobian regularization or TRADES separately. **Pros:** - Good empirical results. - Extensive empirical evaluation on MNIST, CIFAR-10, CIFAR-100. - Ablation study for the components of the method. **Cons:** - My main concern is the novelty of the proposed approach. The paper proposes to use adversarial training together with the TRADES-loss (with a reversed KL-divergence) and a variant of approximate gradient penalization at an adversarial point. All these approaches existed before but now just all combined together. - Additional concern is that the method requires 2 additional hyperparameters alpha and beta compared to usual adversarial training that doesn’t lead to any additional hyperparameters. I think this concern is especially relevant since the method is proposed to be useful for fast adversarial training (i.e. with one-step adversarial examples). - Another concern is the clarity of the presentation. It was really hard to grasp what are the sets $P_x$, $P_x’$, $S_\theta(x)$, $S_{\theta^1}(x)$, $S_{\theta^2}(x)$ and relations between them. Maybe it would be better to clarify it with a picture / diagram. Currently, the theoretical part doesn’t seem to be clear or convincing enough to me. - It would be more insightful if one can provide a clear discussion on how and why the proposed method mitigates the catastrophic overfitting problem (similarly to these recent papers [Li et al. (2020)](https://arxiv.org/pdf/2006.03089.pdf) and [Andriushchenko et al. (2020)](https://arxiv.org/pdf/2007.02617.pdf) which focus on overcoming this problem). There are some mentions **Minor suggestions** - For me, it seems to be a bit misleading to call a subset of the input space a patch given that in the literature, patches are mostly referred to perturbations on the image plane (e.g. as in [Adversarial Patch paper](https://arxiv.org/abs/1712.09665)). - Contributions 2 and 3 at the top of page 2 are nearly duplicates. - “we **take care** of local balls at various points” -- imprecise (what it means to take care in this context?) and a bit informal language. - Equation (6): in the denominator it should be f_c’ instead of f_c in the second term. - Page 5: “as we evaluate it at an adversary x’ instead of the image x” -- it’s not yet clear what “an adversary x’” means, in particular with which method this point is obtained, is it epsilon-bounded (I suppose yes but this becomes clear only much later in the text) or not. Would be good to clarify this. - Page 7: “validation set robust accuracy guided learning rate scheduler” -- consider splitting this phrase which is too complicated. - Page 7: “natural images will **prevail** when weak adversaries are used” -- meaning is unclear. **Score:** 5/10 because of the concerns about the novelty and clarity of the provided justifications of the method.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Improving Local Effectiveness for Global Robustness Training ### Paper Abstract Despite its increasing popularity, deep neural networks are easily fooled. To alleviate this deficiency, researchers are actively developing new training strategies, which encourage models that are robust to small input perturbations. Several successful robust training methods have been proposed. However, many of them rely on strong adversaries, which can be prohibitively expensive to generate when the input dimension is high and the model structure is complicated. We adopt a new perspective on robustness and propose a novel training algorithm that allows a more effective use of adversaries. Our method improves the model robustness at each local ball centered around an adversary and then, by combining these local balls through a global term, achieves overall robustness. We demonstrate that, by maximizing the use of adversaries via focusing on local balls, we achieve high robust accuracy with weak adversaries. Specifically, our method reaches a similar robust accuracy level to the state of the art approaches trained on strong adversaries on MNIST, CIFAR-10 and CIFAR-100. As a result, the overall training time is reduced. Furthermore, when trained with strong adversaries, our method matches with the current state of the art on MNIST and outperforms them on CIFAR-10 and CIFAR-100. ### Paper Keywords ["strong adversaries", "local effectiveness", "adversaries", "local balls", "mnist", "global robustness", "global robustness training", "popularity", "deep neural networks", "deficiency"] ### Paper Content ABSTRACTDespite its popularity, deep neural networks are easily fooled. To alleviate thisdeficiency, researchers are actively developing new training strategies, whichencourage models that are robust to small input perturbations. Several successfulrobust training methods have been proposed. However, many of them rely onstrong adversaries, which can be prohibitively expensive to generate when theinput dimension is high and the model structure is complicated. We adopt a newperspective on robustness and propose a novel training algorithm that allows amore effective use of adversaries. Our method improves the model robustness ateach local ball centered around an adversary and then, by combining these localballs through a global term, achieves overall robustness. We demonstrate that, bymaximizing the use of adversaries via focusing on local balls, we achieve highrobust accuracy with weak adversaries. Specifically, our method reaches a similarrobust accuracy level to the state of the art approaches trained on strong adversarieson MNIST, CIFAR-10 and CIFAR-100. As a result, the overall training time isreduced. Furthermore, when trained with strong adversaries, our method matcheswith the current state of the art on MNIST and outperforms them on CIFAR-10 andCIFAR-100.1 I NTRODUCTIONWith the proliferation of deep neural networks (DNN) in areas including computer vision, naturallanguage processing and speech recognition, there has been a growing concern over their safety. Forexample, Szegedy et al. (2013) demonstrated that naturally trained DNNs are in fact fragile. Byadding to each data a perturbation that is carefully designed but imperceptible to humans, DNNspreviously reaching almost 100 %accuracy performance could hardly make a correct predictionany more. 
This could cause serious issues in areas such as autonomous navigation or personalisedmedicine, where an incorrect decision can endanger life. To tackle these issues, training DNNs thatare robust to small perturbations has become an active area of research in machine learning.Various algorithms have been proposed (Papernot et al., 2016; Kannan et al., 2018; Zhang et al.,2019b; Qin et al., 2019; Moosavi-Dezfooli et al., 2020; Madry et al., 2018; Ding et al., 2020). Amongthem, adversarial training (ADV) (Madry et al., 2018) and TRADES (Zhang et al., 2019b) aretwo of the most frequently used training methods so far. Although developed upon different ideas,both methods require using strong adversarial attacks, generally computed through several steps ofprojected gradient descent. Such attacks can quickly become prohibitive when model complexityand input dimensions increase, thereby limiting their applicability. Since the cost of finding strongadversaries is mainly due to the high number of gradient steps performed, one potential approach toalleviate the problem is to use cheap but weak adversaries. Weak adversaries are obtained using fewergradient steps, and in the extreme case with a single gradient step. Based on this idea, Wong et al.(2020) argue that by using random initialization and a larger step-size, adversarial training with weakadversaries found via one gradient step is sufficient to achieve a satisfactory level of robustness. Weterm this method as one-step ADV from now on. While one-step ADV does indeed exhibit robustness,there is still a noticeable gap when compared with its multi-step counterpart.In this paper, we further bridge the gap by proposing a new robust training algorithm: AdversarialTraining via LocAl Stability (ATLAS). Local stability, in our context, implies stability of predictionand is the same as local robustness. Specifically, we make the following contributions:1Under review as a conference paper at ICLR 2021•We adopt a new perspective on robust accuracy and introduce a framework for construct-ing robust training losses that allow more effective use of adversaries. The frameworkconsists of a local component and a global component. The local component maximizesthe effectiveness of an given adversary by improving the network’s robustness on both theadversary and points around it. In other words, the local component attempts to increase theradius of a ball centered at the adversary on which the network is being robust. The globalcomponent combines all local balls in a regularized way to achieve the desired overall robustperformance.•Based on the framework and guided by the need of fast robust training, we propose ournovel robust training algorithm ATLAS.•We show that ATLAS makes a more effective use of weak adversaries by favourablycomparing it against one-step ADV on three datasets: MNIST, CIFAR-10 and CIFAR-100.•Although one-step ATLAS is more expensive than its other one-step counterparts, ATLASstill allows efficient robust training. We show that, with a one-step weak adversary, ATLASmanages to achieve comparable levels of robust accuracy to multi-step state of the artmethods on all datasets.•Finally, we show that when strong adversaries are used, ATLAS matches with the currentstate of the art on MNIST and outperforms them on CIFAR-10 and CIFAR-100.2 R ELATED WORKSRobust training aims to learn a network such that it is able to give the same correct output evenwhen the input is slightly perturbed. 
Existing robust training algorithms can be divided into twocategories: natural image based methods and adversaries based methods. Within the first category,the common form of loss is a natural loss term plus a regularizer computed at natural images. Webriefly mention some of these methods. Moosavi-Dezfooli et al. (2020) observed empirically thatreducing the curvature of the loss function and the decision boundary could lead to robust models.The authors thus propose a regularizer based on the Hessian of the loss function. Closely related isthe regularizer, introduced in (Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Ross & Doshi-Velez,2018), that penalizes the Frobenius norm of the Jacobian of the loss function. Jacobian regularizercan also be seen as a way of reducing the curvature of the decision boundary (Jakubovitz & Giryes,2018). Although calculating the norm is computationally expensive, a fast estimator with empiricallyhigh accuracy has been developed in Hoffman et al. (2019).We focus on adversary based robust training, as they generally perform better in terms of robustaccuracy. Under this category, an early fundamental work is the Fast Gradient Sign Method (FGSM)by Goodfellow et al. (2015). Adversarial Training (ADV) (Madry et al., 2018) is a multi-step variantof FGSM. Rather than using one step FGSM, ADV employs multi-step projected gradient descent(PGD) (Kurakin et al., 2016) with smaller step-sizes to generate perturbed inputs. These modificationshave allowed ADV to become one of the most effective robust training methods so far (Athalyeet al., 2018). Another frequently used robust training method is TRADES (Zhang et al., 2019b).Motivated by its theoretical analyses of the trade-off between natural accuracy and robust accuracy ina binary classification example, TRADES encourages model robustness by adding to the natural lossa regularizer involving adversaries to push away the decision boundary.Recently, Qin et al. (2019) suggest that a robust model can be learned through promoting linearity inthe vicinity of the training examples. They designed a local linearity regularizer (LLR), in whichadversaries are used to maximize the penalty for non-linearity. Applying LLR also allows efficientrobust training. We note that the underlying idea of LLR is complementary to ATLAS. In addition,several works have suggested to adopt input dependent treatments. These works include incorporatingthe fact of whether the given input is correctly classified (Wang et al., 2020) and using adaptiveperturbation for different inputs (Ding et al., 2019).One major drawback of adversary based methods (Madry et al., 2018; Zhang et al., 2019b; Qin et al.,2019; Wang et al., 2020; Ding et al., 2019) is that most of them rely on strong adversaries, computedvia expensive PGD. When the input dimension is high and the model structure is complicated, findingadversaries can be too expensive for these methods to work effectively. Several works have researchedpossible ways to speed up the process. Algorithmically, Zhang et al. (2019a) cut down the total2Under review as a conference paper at ICLR 2021number of full forward and backward passes for generating multi-step PGD adversaries while Zhanget al. (2020) introduce a parameter to allow early stop PGD. Shafahi et al. (2019) reduce adversaryoverhead by combining weight update and input perturbation update within a single back propagationand use a single step FGSM adversary. Wong et al. 
(2020) argue that the main reason one-stepFGSM, generally regarded as not robust, is effective in (Shafahi et al., 2019) is due to the non-zeroinitialization used. As a result, Wong et al. (2020) proposed that with random initialization and alarger step size, weak adversaries generated by FGSM could lead to models with a high level ofrobust accuracy. In this study, we adopt a similar viewpoint of accelerating robust training throughthe use of weak adversaries.3 P RELIMINARIES AND NOTATIONSWe consider classification tasks. Let xbe an image from the data distribution XRNandyxbeits correct label from classes C=f1;:::;Cg. We denote a neural network, parameterized by ,asf(x;) :X!RC:The function foutputs logits and the predicted class label is chosen to bethe element with the largest scalar value. Given a supervised dataset, the parameters are obtainedby minimizing a loss function `over the dataset. In natural training, a common loss choice is thecross-entropy loss, denoted as `CE. To compute `CE, we need prediction probabilities p(x;), whichis evaluated as pi(x;) =exp(fi(x;))Pjexp(fj(x;))element-wise.Given a tolerance level and a distance measure norm Lp, we say the network fis robust toadversaries at a given point x2X, if it satisfiesarg maxi2Cfi(x0;) = arg maxi2Cfi(x;);8x0s.tkx0xkp: (1)We equivalently use a ball B(x) =fx0jkx0xkpgto represent the allowed perturbation. If fis robust onB, we callBa robust ball.Evaluating the true robustness of fonXis challenging. In practice, we replace Xwith a test set andevaluate the robustness of fby measuring the percentage of the test set satisfying the condition inequation (1). To check whether the condition is met, various attack strategies are applied. Commonlyused attacks are PGD based. To be specific, PGD performs the following gradient step during eachiterationt+ 1,xt+1=B(x)(xt+sgn(rx`(f(xt;));y)); <; (2)and repeats the iteration several times. Here, sgndenotes the sign function. On the other hand, FGSMuses a single gradient stepx0=x+sgn(rx`(f(x;));y): (3)Finally, we introduce the following notation to facilitate the discussion. We first assume that theneural network fcontains ReLU activation for the sake of clarity. This implies that the neural networkis piecewise linear. As a consequence, at each x2X, it is easy to find a weight matrix Wx2RCNand a constant vector bx2RCsuch thatf(x;) =Wxx+bx. To simplify the notation, we defineWx= [Wxjbx]2RC(N+1); xT= [xTj1 ]2R1(N+1): (4)As a result, at each point x, we havef(x;) = Wxx. Before we delve in to the details of thealgorithm ATLAS, we first introduce a new framework for designing robust training losses.4 T HEGENERAL FRAMEWORKUnlike the optimization perspective taken by ADV and the regularization viewpoint adopted byTRADES, our new loss framework is motivated from a geometric standpoint. We start by brieflymentioning its closely related regularization type approaches. Generally, in such approaches, westart from a natural image x. By including a loss term for x, regularization type losses ensure themodel gives the correct prediction yxatx. Assume the network makes the correct prediction at thenatural imagex. It follows that, there exists a robust ball B(x), where0is the maximum radiusachievable. Previous use of adversary regularization term encourages the radius of the ballB(x),centered atx, to increase.3Under review as a conference paper at ICLR 2021We take a different direction in our new loss framework. 
Instead of focusing on one global ballcentered atx, we focus on local balls centered at various points x02B(x)and hopefully, bycombining them in a regularized way, we achieve the desired robustness . To facilitate theillustration, we introduce our loss framework through a binary classification problem. Let there betwo classesC=fc1;c2g. For an arbitrary point x2X, the correct class label is yx=c1. Since weare interested in a network’s performance in classifying labels, we compute logits difference asf1(x;)f2(x;) =dWxx2R;whered= [1;1]is a row vector. For a binary classification problem with cross-entropy loss,the loss decreases monotonically with the increase of the value dWxx. To achieve the desiredrobustness, we require dWx0x0>0for allx02B(x). When cross entropy loss `is considered, weneed`(x0)<log(0:5)for allx02B(x). In Figure 1, we show a potential loss curve on B(x)by fixing the value of xon all dimensions apart from one. Our loss framework consists of a localcomponent and a global component. Both components update the parameters of the network model.Figure 1: Original loss curve on B(x)by fixing the value of xon all dimen-sions apart from one. Points with the lossabove the decision boundary log 0:5are adversarial points and marked as red.4.1 L OCAL ROBUSTNESSFor the local component, at each given local point x02B(x), the goal is to attain a robust ballB0(x0)with large radius. We maximize the use of adversaries by treating them as local center pointsx0. In other words, given an adversary x02B(x), we need to satisfy two requirements: the modelpredicts the required label yxatx0and the radius 0ofB0(x0)should be enlarged. When comparedwith ADV , the additional second requirement allows us to achieve improved robustness performanceeven when weak adversaries are used.The first requirement can be easily met by applying a standard loss term at x0with the label yx. Forthe second requirement, to push away the decision boundary, various methods including Jacobianregularizers (Jakubovitz & Giryes, 2018; Ross & Doshi-Velez, 2018), MMA (Ding et al., 2019) andMMR (Croce & Hein, 2019) have been introduced. An expected loss curve after introducing the localcomponent is shown in Figure 2.Figure 2: Loss curve after a local step. Thedashed line is the original loss curve. Providedwith a weak adversary x0, the local step aimsto decrease the loss at x0while increase localrobustness so the local robust ball centered at x0has a large radius.Figure 3: Loss curve after a global step. Thedashed line is the loss curve after a local step.To further smooth the loss curve and to removeadversaries between xandx0, we penalize thedifferencep(x;)p(x0;). As a result, thegap between `(x)and`(x0)is decreased.4Under review as a conference paper at ICLR 20214.2 G LOBAL REGULARIZATIONIn terms of the global component, we combine local balls together in a controlled way to furtherimprove robustness. We first make the following observation.Proposition 1. Consider a ballB(x)at an arbitrary point x2X. For anyx2B(x), we defineWxandxaccordingly, as in equation (4). Let ube a constant such that, for all possible Wx, thefollowing is satisfied kWxkFu. Assumex1andx2are arbitrary chosen points in B(x). Thenfor anyx=x1+ (1)x2; (5)we havedWxx12(dWx1x1+dWx2x2)uL; (6)whereL=p2(2kxk2+12kx1x2k2)is a constant.We now consider the combination of local robust balls. At each loss computation, we have two points:the natural image xand the adversary x0. 
Assume the model makes the correct prediction at thenatural image and, after the local component loss, at x0, so we have two non-empty local robustballsB(x)andB0(x0). We further assume these two balls are disjoint for the sake of simplicity.The same underlying idea should work for more general cases. The goal is that, with the globalregularization step, points that are misclassified even after the local step can be correctly predicted.To do so, given that local robust balls B(x)andB0(x0)are disjoint, we assume there exists anadversary point x2B(x)satisfyingx=x+ (1)x0; (7)where2(0;1)is a constant depending on B(x)andB0(x0). Then a direct application ofProposition 1 givesdWxx12(dWxx+dWx0x0)uL=12d(WxxWx0x0) +dWx0x0uL(8)whereuandLare defined accordingly.To makexno longer an adversary (that is, to encourage dWxx>0) after global combination,we should make the lower bound on the right as high as possible. Since both dWxxanddWx0x0have the same max value theoretically, we can increase their sum by first increasing the value ofdWx0x0and then decrease the distance between dWxxanddWx0x0. We recall that during thelocal robustness step, we have introduced cross-entropy loss at x0, which directly maximizes thevalue ofdWx0x0. In addition, enlarging local robust balls often leads to a decreased value of u.Given thatdWx0x0anduare already optimized, a sound global step is to minimise the distancebetweendWxxanddWx0x0. We note that, for the distance to be zero, it is sufficient to have therelative differences f2(x0;)f1(x0;)andf2(x;)f1(x;)to be equal rather than requiring anequality between logits f(x;)andf(x0;). To account for this fact, we use prediction probabilitiesinstead. The global regularization step requires minimizing the prediction probability differencep(x;)p(x0;). As the result, we are likely to turn xinto a correctly predicted point after theglobal step, as illustrated in Figure 3.When the loss surface is considered, the global component loss encourages the loss surface to besmoothened and flattend on B(x), so the global component is helpful even when no such adversariesxexist. As suggested in (Zhang et al., 2019b; Cisse et al., 2017; Ross & Doshi-Velez, 2018;Moosavi-Dezfooli et al., 2017), these properties of the loss surface are desired and are possiblyindispensable for models to be robust.Overall, various losses can be constructed under this framework. Depending on the problem at hand,one can choose an appropriate regularizer for increasing local robust balls while selecting a suitablemetric for penalizing the prediction probability difference.5Under review as a conference paper at ICLR 20215 ATLASGuided by the goal of effective fast robust training, we propose a new training algorithm ATLAS,which stands for Adversarial Training via LocAl Stability. We mention that local stability, in ourcontext, means stability of prediction, so it is the same as local robustness. ATLAS based on theframework. We introduce and explain our choices for local and global components.5.1 ATLAS- LOCALMany existing approaches for increasing robust ball radius are computationally expensive. In thiswork, we adopt the standard Jacobian approach to meet our local component requirements basedon the fact that an efficient estimation algorithm for Jacobian has been developed in Hoffman et al.(2019). We briefly introduce how Jacobian can be used to increase local robustness.Given a local point x0, let the correct class label be c=yx. 
Then for any c06=c, the boundaryhyper-surface separating classes candc0consists of points xbsatisfyingfc(xb;)fc0(xb;) = 0: (9)Applying the standard formula for computing the distance between a point and a hyper-plane1, weget the first order approximation distance dc0ofx0to the boundary hyper-surface in equation (9)under thel2norm asdc0=jfc(x0;)fc0(x0;)jkrx0fc(x0;)rx0fc0(x0;)k2: (10)Since the above equation holds true for any c0, we conclude the model is robust on an l2norm ballcentered atx0with radius d:= minc06=cdc0. To maximize the radius d, we borrow the followingproposition from Jakubovitz & Giryes (2018), which introduced a Jacobian regularizer to the naturalloss, to provide a lower bound for d.Proposition 2. Assume the model is making the correct prediction c=yxforx0and the distancemetric is measured via l2norm. The first order approximation of the minimum perturbation dthat isrequired to find an adversary example is lower bounded byd1p2kJ(x0)kFminc06=cjfc(x0;)fc0(x0;)j; (11)whereJ(x0) =rx0f(x0;)is the Jacobian matrix computed at x0andkkFis the Frobenius norm.For a larger distance d, we need to both increase the value of jfc(x0;)fc0(x0;)jand decreasethe Frobenius norm of J(x0). The first term can be easily taken care of by using a cross-entropy lossonx0while for the second term, we can include a cheap approximation of kJ(x0)k2F, denoted bykJ(x0)kapproxF as a penalty term in our loss. Using the idea of random projection, Hoffman et al.(2019) shows theoretically and empirically that kJ(x0)kapproxF can be estimated with high quality byone backward pass regardless the total number of classes C.To combine the above analyses, for the local robustness component, we introduce the following loss,`local(x0) =`CE(x0) +kJ(x0)kapproxF; (12)whereis a positive scalar. It is worth noting that our local loss shown in equation (12) is differentfrom that of (Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Ross & Doshi-Velez, 2018) as theloss is evaluated at an adversary x0instead of the image x. In the fast robust training setting, x0is aweak adversary obtained through one-step FGSM.5.2 ATLAS- GLOBALTo penalize the prediction probability difference, a natural and frequently used choice is Kull-back–Leibler (KL) distance. We thus formulate the global loss as`global(x0) =KL(p(x0;)kp(x;)); (13)whereis a positive constant.1in this case, the point should be x0and the hyper-plane is tangent to the boundary hyper-surface.6Under review as a conference paper at ICLR 2021Although generally KL(p(x0;)kp(x;))6=KL(p(x;)kp(x0;))due to the asymmetric nature ofKL distance, the same underlying nature of both terms means it is sufficient to use one directionfor the penalty. In our loss, we have use the prediction distribution at x0to be the base distribution.There are two reasons for this choice: firstly, we want it to be consistent with the fact that the localcomponent loss is computed on x0; and secondly, we hope it could mitigate a possible over-fittingissue. If a model over-fits at the original image xcausing an element pi(x;)ofp(x;)to be closeto0, the distance between pi(x;)andpi(x0;)would not be penalized effectively. In extreme cases,the difference will not be penalized at all if pi(x;) = 0 . Usingp(x0;)as the base might alleviatethis issue, asx0keeps changing during each epoch. Under the same reasoning, we point out TRADESrelies on finding strong adversaries. 
Since it uses the natural image xto compute cross-entropy lossand as the base distribution in the KL regularizer, TRADES cannot use adversaries effectively whenthey are weak. This is consistent with what we see in the experiments section that is, TRADES withone step FGSM is largely outperformed by other methods on challenging datasets. Furthermore, inAppendix C.1, we show that when putting more emphasis on adversaries by replacing xwithx0inTRADES, improved robust accuracy can be achieved.5.3 F INAL LOSSIntegrating the above analyses, we propose the following as the final loss for ATLAS. Given >0and >0, the loss is formulated as`ATLAS =1jXjXx2X`CE(x0) +kJ(x0)kapproxF| {z }local+KL(p(x0;)kp(x;))| {z }global;x02B(x):(14)We mention that although our analyses are carried out in l2norm, our results are easily generalizableto other norms. To ensure the model makes correct predictions at natural images x2X and foreffective patch combination, we adopt an adaptive value scheme for the pre-determined perturbationepsilon. Specifically, we start by setting = 0 and gradually increases its value to the requirednumber over epochs during the initial stage of training. Assume the final value of isv, to beconsistent with the magnitude of , we replace andasvandvaccordingly.6 E XPERIMENTSWe evaluate the performance of ATLAS on three datasets: MNIST, CIFAR-10 and CIFAR-100. Weconsider the cases of training robustness models with weak adversaries generated by FGSM and withstrong adversaries by multi-step PGD. To be consistent with the general experimental setting, weadoptl1norm and use the perturbation value = 0:3for MNIST and = 8=255for CIFAR-10 andCIFAR-100. MNIST is trained on a 4-layer CNN, which consists of 2 convolutional layers followedby 2 fully connected layers. CIFAR-10 and CIFAR-100 are trained on Wide-ResNet-28-8 (Zagoruyko& Komodakis, 2016). To encourage efficient robust training, instead of a fixed learning rate schedulerthat decreases the learning rate at pre-specified epoch numbers, we randomly choose a subset of thedata to be the validation set and then use its robust accuracy to guide the learning rate adjustmentand to terminate the training process. For MNIST, 20-step PGD is applied for computing robustaccuracy on the validation set while for CIFAR-10 and CIFAR-100, 10-step PGD is used. In addition,we gradually increase epsilon value from 0 to 0:3or8=255for the first 15 epochs. We use the SGDoptmizer for the training. More training related details can be found in Appendix B.Methods We compare against ADV and TRADES. We term methods that use weak FGSM ad-versaries as one-step methods . Following the advice from Wong et al. (2020), we apply FGSM,combined with random initialization and a step size of 1.25 to generate adversarial examples. One-step ADV is an implementation of (Wong et al., 2020). To be consistent with their multi-step variants,we use cross-entropy loss for computing the gradient in FGSM for ADV and ATLAS while employKL distance for TRADES. Multi-step methods are trained on strong adversaries generated by PGD:20 steps for the MNIST and 10 steps for CIFAR-10 and CIFAR-100. We also include the Jacobianpenalty loss (Jakubovitz & Giryes, 2018; Hoffman et al., 2019; Ross & Doshi-Velez, 2018) due toits close relation to our local loss component. We refer to it as zero-step JAC to emphasize the factit does not require adversaries. For each method, a range of parameters are tested and those thatgive both high clean accuracy and high validation robust accuracy are chosen. 
Results for all testedparameter choices are provided in Appendix F. Due to the limited space, the ablation study on theeffect of each component of ATLAS: the local part (ATLAS-l, by setting = 0) and the global part7Under review as a conference paper at ICLR 2021(ATLAS-g, by setting = 0) is left to Appendix C and the same is for a detailed comparison betweenTRADES and ATLAS-g. In addition, we have include in Appendix D.1 an experiment to demonstrateATLAS’s effective use of weak adversaries. We compare ATLAS against a loss which always treatsthe natural image x, instead of a weak adversary x0, as the center point for the local component. Inthe words, the loss takes the form of combining TRADES and zero-step JAC.Robustness Evaluation Extensive robustness evaluations are conducted in order to avoid a falsesense of security. Black box attack generates adversaries on surrogate models with 1000 PGDsteps. In terms of white box attacks, apart from an initial robustness evaluation with PGD-20,we further introduce three strong white attacks: PGD-1000, Untargeted (Un-T) attack (Carlini &Wagner, 2017) and Multi-targeted (Multi-T) attack (Gowal et al., 2019). Untargeted attack uses thelossfu(x;)fc(x;), wherecis the correct class and u= arg maxi6=cfi(x). We allow utochange during each gradient step. Regarding Multi-targeted attack, we perform attack with the lossfi(x;)fc(x;)for alli2Csuch thati6=c. On the big Wide-ResNet model, we run 10 restartswith 100 steps for Untargeted attack and 5 restarts with 20 steps for Multi-target attack. We reduce thenumber of restarts and steps for the Multi-target attack to account for the fact that each incorrect classneeds to be tested. Due to the large number of classes, Multi-target is not performed on CIFAR-100.On the other hand, we perform 20 restarts with 50 steps for both attacks on the small CNN model.For each method, attack results are reported for a model at the epoch that gives the best validationrobust accuracy during the training. On the challenging datasets CIFAR-10 and CIFAR-100, twomore robust evaluations are carried out to obtain a more comprehensive understanding of models’robustness performance. Specifically, we apply an hard-label attack, RayS (Chen & Gu, 2020) and anensemble of diverse parameter-free attacks, AutoAttack (Croce & Hein, 2020). Models’ robustnessperformance under these two evaluations are reported in Appendix D.2.We summarize attack results in Table 1. We mention that with a learning rate scheduler, whichis guided by the robust accuracy on the validation set, all models achieved their best performancearound 30 epochs, except 12 epochs for zero-step JAC on MNIST. In terms of robustness performance,slight performance improvements for all methods and narrower performance gaps will be observed ifall models are trained for 70 epochs with a fixed learning rate scheduler on MNIST. On the otherhand, on CIFAR-10 and CIFAR-100, we found all methods perform better with the guided learningrate scheduler. Since we are interested in efficient robust learning, we use the guided learning ratescheduler for all experiments.Evaluations on CIFAR-10 We study the challenging CIFAR-10 first. For zero-step Jac, we observethat to achieve a roughly 30% robust accuracy, nominal accuracy is dropped to below 65%. Directlyincluding a Jacobian penalty at natural images could thus lead to over-regularization issues. Thismay also be the reason why the method is used as a post-processing technique in (Jakubovitz &Giryes, 2018). 
On the other hand, by using local adversaries instead of natural images, ATLAS-lmanages to obtain robust accuracy without sacrificing the nominal accuracy too much, which isdemonstrated in the Appendix. When TRADES is considered, we find that its effectiveness relies onstrong adversaries: robust accuracy increases sharply from one-step case to multi-step case. SinceTRADES computes loss at natural images and use them as the base distribution in the KL penalty,weak adversaries are not effectively used.In addition, we constantly find for one-step TRADES that, after a certain number of epochs, thepercentage of the model making incorrect decisions on weak adversaries falls sharply, leading toa sudden drop of more than 10% on validation robust accuracy. This phenomena is referred to ascatastrophic overfitting (Wong et al., 2020; Andriushchenko & Flammarion, 2020). We observethat when this behaviour occurs, the approximated value kJ(x0)kapproxF increases steeply, which isconsistent with what has been observed in Andriushchenko & Flammarion (2020). When dealingwith catastrophic overfitting is the main concern, Andriushchenko & Flammarion (2020) argue thekey is to increase local linearity and they introduce a regularizer to maximize gradient alignment forpoints within the perturbation ball. In our case, the issue could be similarly resolved by increasingthe coefficient to encourage local linearity. However, since the goal of this project is to achievehigh robust accuracy fast, we do not restrict ourselves to models without catastrophic overfitting only,which are likely to compensate robust accuracy for stability. Instead, we use early termination andconsider a wider range of models. We mention that the same phenomenon is observed on one-stepADV and on one-step ATLAS when is small but less frequently. More discussions on catastrophicoverfitting can be found in Appendix E.8Under review as a conference paper at ICLR 2021Table 1: Robustness performance for MNIST on a small CNN and for CIFAR-10 and CIFAR-100 onWide-Resnet 28-8. 
Table 1: Robustness performance for MNIST on a small CNN and for CIFAR-10 and CIFAR-100 on Wide-ResNet 28-8. The higher the better. PGD-20, PGD-1000, Un-T, and Multi-T are white box attacks; bold marks the best result among methods trained with the same number of attack steps.

| Model | Clean accuracy | Black box attack | PGD-20 | PGD-1000 | Un-T | Multi-T | Time per batch |
|---|---|---|---|---|---|---|---|
| **MNIST** | | | | | | | |
| zero-step JAC | 98.34% | 89.84% | 58.09% | 2.48% | 28.83% | 28.72% | 0.012s |
| one-step ADV | **99.49%** | 96.26% | 96.29% | 83.25% | 91.02% | 90.62% | 0.009s |
| one-step TRADES | 99.40% | 96.21% | 96.28% | 83.05% | 90.66% | 90.02% | 0.016s |
| one-step ATLAS | 99.47% | **96.36%** | **96.84%** | **89.56%** | **92.44%** | **92.04%** | 0.018s |
| twenty-step ADV | 99.48% | 96.40% | 97.16% | **92.32%** | 93.01% | **92.87%** | 0.077s |
| twenty-step TRADES | **99.49%** | 96.21% | 97.01% | 90.51% | 92.49% | 92.21% | 0.097s |
| twenty-step ATLAS | 99.44% | **96.45%** | **97.19%** | 92.20% | **93.03%** | 92.74% | 0.086s |
| **CIFAR-10** | | | | | | | |
| zero-step JAC | 63.24% | 60.45% | 34.40% | 34.34% | 31.13% | 30.32% | 0.47s |
| one-step ADV | 86.24% | 82.53% | 45.73% | 44.98% | 45.14% | 42.97% | 0.25s |
| one-step TRADES | **86.84%** | **82.63%** | 40.03% | 39.41% | 39.31% | 38.14% | 0.55s |
| one-step ATLAS | 84.52% | 81.06% | **49.01%** | **48.59%** | **48.54%** | **46.55%** | 0.73s |
| ten-step ADV | **84.77%** | 82.53% | 50.60% | 50.22% | 49.74% | 47.96% | 1.38s |
| ten-step TRADES | 84.01% | **82.63%** | 52.70% | 52.45% | 50.29% | 49.66% | 2.02s |
| ten-step ATLAS | 82.94% | 81.06% | **53.45%** | **53.10%** | **51.51%** | **50.16%** | 1.88s |
| **CIFAR-100** | | | | | | | |
| zero-step JAC | 47.96% | 44.67% | 15.39% | 15.18% | 14.35% | - | 0.47s |
| one-step ADV | 48.88% | 45.59% | 19.89% | 19.72% | 19.04% | - | 0.25s |
| one-step TRADES | **62.58%** | **57.64%** | 16.93% | 16.21% | 14.83% | - | 0.53s |
| one-step ATLAS | 60.98% | 57.04% | **26.30%** | **25.94%** | **26.28%** | - | 0.73s |
| ten-step ADV | **60.72%** | **58.15%** | 27.71% | 27.43% | 26.83% | - | 1.37s |
| ten-step TRADES | 59.76% | 57.78% | 26.27% | 26.10% | 23.86% | - | 2.00s |
| ten-step ATLAS | 59.62% | 57.83% | **29.12%** | **28.89%** | **27.77%** | - | 1.87s |

Overall, one-step ATLAS outperforms the other one-step methods in all strong white box attacks. Even when compared with methods trained with 10-step PGD, the performance gap between one-step ATLAS and multi-step ADV under the strongest Multi-T attack is below 1.5%. Although adding the local and global penalties makes one-step ATLAS (0.73s per batch) more expensive than one-step ADV (0.25s), the fact that ATLAS achieves robust accuracy comparable to that of multi-step ADV (1.38s) using only weak adversaries still allows a roughly 50% speed-up in training. For a fair comparison, we also evaluate an ADV model with 5 PGD steps. The 5-step ADV model requires slightly more time (0.75s per batch) but is less robust than one-step ATLAS, reaching 44.93% accuracy under the Multi-T attack. When trained on strong adversaries, multi-step ATLAS outperforms the other two multi-step methods in all strong white box attacks.

Evaluations on CIFAR-100 and MNIST. Similar performance is observed on CIFAR-100. The key fact to notice here is that the per-batch training cost required for CIFAR-100 is similar to that of CIFAR-10. Since we apply an efficient approximation to evaluate $\|J(x)\|_F^2$, the increase in the number of classes does not raise the total computational cost. Again, we evaluate ADV models with 5 PGD steps, which require 0.75s per batch and reach 25.21% robust accuracy under the Un-T attack. On MNIST, one-step ATLAS performs the best as well. Since MNIST is a simple dataset, all multi-step methods obtain high robust accuracy on the small CNN model. For the same reason, the unstable sudden drop of validation robust accuracy is not observed for any one-step method.

7 DISCUSSION
We have adopted a different perspective and proposed a new framework for constructing losses that allow more effective use of adversaries. Specifically, based on the framework, we introduce a novel algorithm, ATLAS, for fast robust training. ATLAS can also be used as an initialisation technique for other complicated tasks.
For instance, a model trained with ATLAS can be employed as the starting point for layer-wise adversarial training to improve certified robustness. Apart from fast robust training, we believe other losses could be constructed from the framework to accommodate different problems. We leave the exploration of potential applications to various robust training settings to future research.
- Page 7: “natural images will **prevail** when weak adversaries are used” -- meaning is unclear. **Score:** 5/10 because of the concerns about the novelty and clarity of the provided justifications of the method. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
V69LGwJ0lIN
ICLR.cc/2021/Conference
2021
OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning
["Anurag Ajay", "Aviral Kumar", "Pulkit Agrawal", "Sergey Levine", "Ofir Nachum"]
Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent’s ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications, the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this offline setting. Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which reduces the effective horizon yielding better learning in theory, and improved offline RL in practice. In addition to benefiting offline policy optimization, we show that performing offline primitive learning in this way can also be leveraged for improving few-shot imitation learning as well as exploration and transfer in online RL on a variety of benchmark domains. Visualizations and code are available at https://sites.google.com/view/opal-iclr
["Offline Reinforcement Learning", "Primitive Discovery", "Unsupervised Learning"]
ABSTRACT
Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent's ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications, the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this offline setting. Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which reduces the effective horizon, yielding better learning in theory and improved offline RL in practice. In addition to benefiting offline policy optimization, we show that performing offline primitive learning in this way can also be leveraged for improving few-shot imitation learning as well as exploration and transfer in online RL on a variety of benchmark domains. Visualizations and code are available at https://sites.google.com/view/opal-iclr

1 INTRODUCTION
Reinforcement Learning (RL) systems have achieved impressive performance in a variety of online settings such as games (Silver et al., 2016; Tesauro, 1995; Brown & Sandholm, 2019) and robotics (Levine et al., 2016; Dasari et al., 2019; Peters et al., 2010; Parmas et al., 2019; Pinto & Gupta, 2016; Nachum et al., 2019a), where the agent can act in the environment and sample as many transitions and rewards as needed. However, in many practical applications the agent's ability to continuously act in the environment may be severely limited due to practical concerns (Dulac-Arnold et al., 2019). For example, a robot learning through trial and error in the real world requires costly human supervision, safety checks, and resets (Atkeson et al., 2015), rendering many standard online RL algorithms inapplicable (Matsushima et al., 2020). However, in such settings we might instead have access to large amounts of previously logged data, which could be logged from a baseline hand-engineered policy or even from other related tasks. For example, in self-driving applications, one may have access to large amounts of human driving behavior; in robotic applications, one might have data of either humans or robots performing similar tasks. While these offline datasets are often undirected (generic human driving data on various routes in various cities may not be directly relevant to navigation of a specific route within a specific city) and unlabelled (generic human driving data is often not labelled with the human's intended route or destination), this data is still useful in that it can inform the algorithm about what is possible to do in the real world, without the need for active exploration.

In this paper, we study how, in this offline setting, an effective strategy for leveraging unlabeled and undirected past data is to utilize unsupervised learning to extract potentially useful and temporally extended primitive skills to learn what types of behaviors are possible. For example, consider a dataset of an agent performing undirected navigation in a maze environment (Figure 1).
While the dataset does not provide demonstrations of exclusively one specific point-to-point navigation task, it nevertheless presents clear indications of which temporally extended behaviors are useful and natural in this environment (e.g., moving forward, left, right, and backward), and our unsupervised learning objective aims to distill these behaviors into temporally extended primitives. (Footnote: Work done during an internship at Google Brain.)

[Figure 1: Visualization of (a subset of) diverse datasets for (a) antmaze medium and (c) antmaze large, along with trajectories sampled from CQL+OPAL trained on diverse datasets of (b) antmaze medium and (d) antmaze large.]

Once these locomotive primitive behaviors are extracted, we can use them as a compact, constrained, temporally-extended action space for learning a task policy with offline RL, which only needs to focus on task-relevant navigation, thereby making task learning easier. For example, once a specific point-to-point navigation task is commanded, the agent can leverage the learned primitives for locomotion and only focus on the task of navigation, as opposed to learning locomotion and navigation from scratch.

We refer to our proposed unsupervised learning method as Offline Primitives for Accelerating offline reinforcement Learning (OPAL), and apply this basic paradigm to offline RL, where the agent is given a single offline dataset to use for both the initial unsupervised learning phase and a subsequent task-directed offline policy optimization phase. Despite the fact that no additional data is used, we find that our proposed unsupervised learning technique can dramatically improve offline policy optimization compared to performing offline policy optimization on the raw dataset directly. To the best of our knowledge, ours is the first work to theoretically justify and experimentally verify the benefits of primitive learning in offline RL settings, showing that hierarchies can provide temporal abstraction that allows us to reduce the effect of the compounding-errors issue in offline RL. These theoretical and empirical results are notably in contrast to previous related work in online hierarchical RL (Nachum et al., 2019b), which found that improved exploration is the main benefit afforded by hierarchically learned primitives. We instead show significant benefits in the offline RL setting, where exploration is irrelevant.

Beyond offline RL, and although this isn't the main focus of the work, we also show the applicability of our method for accelerating RL by incorporating OPAL as a preprocessing step to standard online RL, few-shot imitation learning, and multi-task transfer learning. In all settings, we demonstrate that the use of OPAL can improve the speed and quality of downstream task learning.

2 RELATED WORK
Offline RL. Offline RL presents the problem of learning a policy from a fixed prior dataset of transitions and rewards. Recent works in offline RL (Kumar et al., 2019; Levine et al., 2020; Wu et al., 2019; Ghasemipour et al., 2020; Jaques et al., 2019; Fujimoto et al., 2018) constrain the policy to be close to the data distribution to avoid the use of out-of-distribution actions (Kumar et al., 2019; Levine et al., 2020). To constrain the policy, some methods use distributional penalties, as measured by KL divergence (Levine et al., 2020; Jaques et al., 2019), MMD (Kumar et al., 2019), or Wasserstein distance (Wu et al., 2019).
Other methods first sample actions from the behavior policy and then either clip the maximum deviation from those actions (Fujimoto et al., 2018) or just use those actions (Ghasemipour et al., 2020) during the value backup to stay within the support of the offline data. In contrast to these works, OPAL uses an offline dataset for unsupervised learning of a continuous space of primitives. The use of these primitives for downstream tasks implicitly constrains a learned primitive-directing policy to stay close to the offline data distribution. As we demonstrate in our experiments, the use of OPAL in conjunction with an off-the-shelf offline RL algorithm in this way can yield significant improvement compared to applying offline RL to the dataset directly.

Online skill discovery. There are a number of recent works (Eysenbach et al., 2018; Nachum et al., 2018a; Sharma et al., 2019) which use unsupervised objectives to discover skills and use the discovered skills for planning (Sharma et al., 2019), few-shot imitation learning, or online RL (Eysenbach et al., 2018; Nachum et al., 2018a). However, these works focus on online settings and assume access to the environment. In contrast, OPAL focuses on settings where a large dataset of diverse behaviors is provided but access to the environment is restricted. It leverages these static offline datasets to discover primitive skills with better state coverage and avoids the exploration issue of learning primitives from scratch.

Hierarchical policy learning. Hierarchical policy learning involves learning a hierarchy of policies where a low-level policy acts as primitive skills and a high-level policy directs the low-level policy to solve a task. While some works (Bacon et al., 2017; Stolle & Precup, 2002; Peng et al., 2019) learn a discrete set of lower-level policies, each behaving as a primitive skill, other works (Vezhnevets et al., 2017; Nachum et al., 2018b; 2019a; Hausman et al., 2018) learn a continuous space of primitive skills representing the lower-level policy. These methods have mostly been applied in online settings. However, there have been some recent variants of the above works (Lynch et al., 2020; Shankar & Gupta, 2020; Krishnan et al., 2017; Merel et al., 2018) which extract skills from a prior dataset and use them either for performing tasks directly (Lynch et al., 2020) or for learning downstream tasks (Shankar & Gupta, 2020; Krishnan et al., 2017; Merel et al., 2018) with online RL. While OPAL is related to these works, we mainly focus on leveraging the learned primitives for asymptotically improving the performance of offline RL; i.e., both the primitive learning and the downstream task must be solved using a single static dataset. Furthermore, we provide performance bounds for OPAL and enumerate the specific properties an offline dataset should possess to guarantee improved downstream task learning, while such theoretical guarantees are largely absent from existing work.

3 PRELIMINARIES
We consider the standard Markov decision process (MDP) setting (Puterman, 1994), specified by a tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, \mu, r, \gamma)$, where $\mathcal{S}$ represents the state space, $\mathcal{A}$ represents the action space, $P(s'|s,a)$ represents the transition probability, $\mu(s)$ represents the initial state distribution, $r(s,a) \in (-R_{\max}, R_{\max})$ represents the reward function, and $\gamma \in (0,1)$ represents the discount factor. A policy $\pi$ in this MDP corresponds to a function
$\mathcal{S} \to \Delta(\mathcal{A})$, where $\Delta(\mathcal{A})$ is the simplex over $\mathcal{A}$. It induces a discounted future state distribution $d^\pi$, defined by $d^\pi(s) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$, where $P(s_t = s \mid \pi)$ is the probability of reaching the state $s$ at time $t$ by running $\pi$ on $\mathcal{M}$. For a positive integer $k$, we use $d_k^\pi(s) = (1-\gamma^k)\sum_{t=0}^{\infty} \gamma^{tk} P(s_{tk} = s \mid \pi)$ to denote the every-$k$-step state distribution of $\pi$. The return of policy $\pi$ in MDP $\mathcal{M}$ is defined as $J_{RL}(\pi, \mathcal{M}) = \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^\pi,\, a \sim \pi(a|s)}[r(s,a)]$. We represent the reward- and discount-agnostic environment as a tuple $\mathcal{E} = (\mathcal{S}, \mathcal{A}, P, \mu)$.

We aim to use a large, unlabeled, and undirected experience dataset $\mathcal{D} := \{\tau_i := (s_t, a_t)_{t=0}^{c-1}\}_{i=1}^{N}$ associated with $\mathcal{E}$ to extract primitives and improve offline RL for downstream task learning. To account for the fact that the dataset $\mathcal{D}$ may be generated by a mixture of diverse policies starting at diverse initial states, we assume $\mathcal{D}$ is generated by first sampling a behavior policy $\pi \sim \Pi$ along with an initial state $s \sim \mu$, where $\Pi, \mu$ represent some (unknown) distributions over policies and states, respectively, and then running $\pi$ on $\mathcal{E}$ for $c$ time steps starting at $s_0 = s$. We define the probability of a sub-trajectory $\tau := (s_0, a_0, \ldots, s_{c-1}, a_{c-1})$ in $\mathcal{D}$ under a policy $\pi$ as $\pi(\tau) = \mu(s_0)\prod_{t=1}^{c-1} P(s_t \mid s_{t-1}, a_{t-1}) \prod_{t=0}^{c-1}\pi(a_t \mid s_t)$, and the conditional probability as $\pi(\tau \mid s) = \mathbb{1}[s = s_0]\prod_{t=1}^{c-1} P(s_t \mid s_{t-1}, a_{t-1}) \prod_{t=0}^{c-1}\pi(a_t \mid s_t)$. In this work, we will show how to apply unsupervised learning techniques to $\mathcal{D}$ to extract a continuous space of primitives $\pi_\theta(a \mid s, z)$, where $z \in \mathcal{Z}$, the latent space inferred by unsupervised learning. We intend to use the learned $\pi_\theta(a \mid s, z)$ to asymptotically improve the performance of offline RL for downstream task learning. For offline RL, we assume the existence of a dataset $\mathcal{D}^r := \{\tau_i := (s_t, a_t, r_t)_{t=0}^{c-1}\}_{i=1}^{N}$, corresponding to the same sub-trajectories in $\mathcal{D}$ labelled with MDP rewards. Additionally, we can use the extracted primitives for other applications such as few-shot imitation learning, online RL, and online multi-task transfer learning.
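Concretely, the sub-trajectory datasets $\mathcal{D}$ and $\mathcal{D}^r$ above can be materialized by slicing logged episodes into $c$-step windows. Below is a minimal sketch; the episode layout (`states`, `actions`, `rewards` keys over array-like data) is an illustrative assumption, not part of the paper.

```python
def make_subtrajectory_datasets(episodes, c=10):
    """Slice logged episodes into non-overlapping c-step windows
    tau = (s_0, a_0, ..., s_{c-1}, a_{c-1}). Each episode is assumed to be
    a dict with 'states' (length T+1), 'actions' (length T), 'rewards'
    (length T). Returns D (unlabelled) and D_r (reward-labelled, plus the
    state s_c reached after the window, useful for downstream relabeling)."""
    D, D_r = [], []
    for ep in episodes:
        T = len(ep["actions"])
        for t0 in range(0, T - c + 1, c):
            s = ep["states"][t0:t0 + c]
            a = ep["actions"][t0:t0 + c]
            r = ep["rewards"][t0:t0 + c]
            D.append((s, a))
            D_r.append((s, a, r, ep["states"][t0 + c]))
    return D, D_r
```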
We review the additional assumptions for these applications in Appendix A.

[Figure 2 schematic: the encoder $q_\phi(z|\tau)$, primitive policy (decoder) $\pi_\theta(a|s,z)$, prior $\rho_\omega(z|s)$, and task policy $\pi_\psi(z|s)$, arranged as (1) offline unsupervised primitive learning with OPAL (autoencoding loss + KL constraint on unlabelled data $\mathcal{D}$), (2) offline training of the task policy (offline RL on $\mathcal{D}^r_{hi}$, fine-tuning with BC on $\mathcal{D}^r_{lo}$), and (3) test-time policy execution ($z$ sampled once per $c$ steps, $a$ every step).]

Figure 2: Overview of offline RL with OPAL. OPAL is trained on unlabelled data $\mathcal{D}$ using an autoencoding objective. For offline RL, the encoder first labels the reward-labelled data $\mathcal{D}^r$ with latents, and divides it into $\mathcal{D}^r_{hi}$ and $\mathcal{D}^r_{lo}$. The task policy is trained on $\mathcal{D}^r_{hi}$ using offline RL while the primitive policy is finetuned on $\mathcal{D}^r_{lo}$ using behavioral cloning (BC).

4 OFFLINE RL WITH OPAL
In this section, we elaborate on OPAL, our proposed method for extracting primitives from $\mathcal{D}$ and then leveraging these primitives to learn downstream tasks with offline RL. We begin by describing our unsupervised objective, which distills $\mathcal{D}$ into a continuous space of latent-conditioned and temporally-extended primitive policies $\pi_\theta(a|s,z)$. For learning downstream tasks with offline RL, we first label $\mathcal{D}^r$ with appropriate latents using the OPAL encoder $q_\phi(z|\tau)$ and then learn a policy $\pi_\psi(z|s)$ which is trained to sample an appropriate primitive every $c$ steps to optimize a specific task, using any off-the-shelf offline RL algorithm. A graphical overview of offline RL with OPAL is shown in Figure 2.
While we mainly focus on offline RL, we briefly discuss how to use the learned primitives for few-shot imitation learning, online RL, and multi-task online transfer learning in Section 5 and provide more details in Appendix A.

4.1 EXTRACTING TEMPORALLY-EXTENDED PRIMITIVES FROM DATA
We would like to extract a continuous space of temporally-extended primitives $\pi_\theta(a|s,z)$ from $\mathcal{D}$ which we can later use as an action space for learning downstream tasks with offline RL. This would reduce our effective task horizon, thereby making the downstream learning easier, as well as allow the downstream policy to stay close to the offline data distribution, thereby bringing stability to the downstream learning. We propose the following objective for learning $\pi_\theta$, incorporating an auto-encoding loss function with a KL constraint to encourage better generalization:

$$\min_{\theta,\phi,\omega}\; J(\theta,\phi,\omega) = \hat{\mathbb{E}}_{\tau \sim \mathcal{D},\, z \sim q_\phi(z|\tau)}\left[-\sum_{t=0}^{c-1}\log \pi_\theta(a_t \mid s_t, z)\right] \quad (1)$$

$$\text{s.t.}\quad \hat{\mathbb{E}}_{\tau \sim \mathcal{D}}\left[D_{KL}\!\left(q_\phi(z|\tau)\,\|\,\rho_\omega(z|s_0)\right)\right] \le \epsilon_{KL}, \quad (2)$$

where $\hat{\mathbb{E}}$ indicates empirical expectation. The learned components of this objective may be interpreted as encoder, decoder, and prior:

Encoder: $q_\phi(z|\tau)$ encodes the trajectory $\tau$ of state-action pairs into a distribution in latent space and gives out parameters of that distribution. In our case, we represent $q_\phi$ with a bidirectional GRU which takes in $\tau$ and gives out parameters of a Gaussian distribution $(\mu_z^{enc}, \sigma_z^{enc})$.

Decoder (aka Primitive Policy): $\pi_\theta(a|s,z)$ is the latent-conditioned policy. It maximizes the conditional log-likelihood of actions in $\tau$ given the state and the latent vector. In our implementation, we parameterize it as a feed-forward neural network which takes in the current state and latent vector and gives out parameters of a Gaussian distribution for the action $(\mu_a, \sigma_a)$.

Prior/Primitive Predictor: $\rho_\omega(z|s_0)$ tries to predict the encoded distribution of the sub-trajectory from its initial state. Our implementation uses a feed-forward neural network which takes in the initial state and gives out parameters of a Gaussian distribution $(\mu_z^{pr}, \sigma_z^{pr})$.

KL-constraint (Equation 2). As an additional component of the algorithm, we enforce consistency in the latent variables predicted by the encoder $q_\phi(z|\tau)$ and the prior $\rho_\omega(z|s_0)$. Since our goal is to obtain a primitive $z$ that captures a temporal sequence of actions for a given sub-trajectory $\tau = (s_0, a_0, \ldots, s_{c-1}, a_{c-1})$ (as defined in Section 3), we utilize a regularization that enforces the distribution $q_\phi(z|\tau)$ to be close to just predicting the primitive or the latent variable $z$ given the start state of this sub-trajectory, $s_0$ (i.e. $\rho_\omega(z|s_0)$). This conditioning on the initial state regularizes the distribution $q_\phi(z|\tau)$ to not overfit to the complete sub-trajectory, as the same $z$ should also be predictable given only $s_0$. The above form of KL constraint is inspired by past works (Lynch et al., 2020; Kumar et al., 2020a). In particular, Lynch et al. (2020) add a KL-constraint (Equation 2, "Plan prior matching" in Lynch et al. (2020)) that constrains the distribution over latent variables computed only given the initial state and the goal state to the distribution over latent variables computed using the entire trajectory. Our form in Equation 2 is similar to this prior except that we do not operate in a goal-conditioned RL setting and hence only condition $\rho_\omega$ on the initial state $s_0$.

In practice, rather than solving the constrained optimization directly, we implement the KL constraint as a penalty, weighted by an appropriately chosen coefficient $\beta$. Thus, one may interpret our unsupervised objective as using a sequential $\beta$-VAE (Higgins et al., 2016).
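As a concrete illustration of Equations 1-2 in their penalty form, here is a minimal PyTorch sketch. The network widths, the mean-pooling of the bidirectional GRU states, and the diagonal-Gaussian heads are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.distributions as td

class OPAL(nn.Module):
    """Minimal sketch of the OPAL autoencoding objective (Eqs. 1-2,
    KL constraint applied as a beta-weighted penalty)."""
    def __init__(self, ds, da, dz=8, hidden=200):
        super().__init__()
        self.enc_gru = nn.GRU(ds + da, hidden, batch_first=True,
                              bidirectional=True)               # q_phi(z|tau)
        self.enc_head = nn.Linear(2 * hidden, 2 * dz)
        self.prior = nn.Sequential(nn.Linear(ds, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 2 * dz))   # rho_omega(z|s0)
        self.dec = nn.Sequential(nn.Linear(ds + dz, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * da))     # pi_theta(a|s,z)

    @staticmethod
    def _gaussian(params):
        mu, log_sigma = params.chunk(2, dim=-1)
        return td.Normal(mu, log_sigma.exp())

    def loss(self, s, a, beta=0.1):
        # s: (B, c, ds), a: (B, c, da)
        h, _ = self.enc_gru(torch.cat([s, a], dim=-1))
        q_z = self._gaussian(self.enc_head(h.mean(dim=1)))  # pool over time
        p_z = self._gaussian(self.prior(s[:, 0]))           # condition on s0
        z = q_z.rsample()                                   # reparameterized
        z_tiled = z.unsqueeze(1).expand(-1, s.shape[1], -1)
        pi_a = self._gaussian(self.dec(torch.cat([s, z_tiled], dim=-1)))
        nll = -pi_a.log_prob(a).sum(dim=-1).sum(dim=-1)     # Eq. (1)
        kl = td.kl_divergence(q_z, p_z).sum(dim=-1)         # Eq. (2) as penalty
        return (nll + beta * kl).mean()
```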
However, as mentioned above, our prior is conditioned on $s_0$ and learned as part of the optimization because the set of primitives active in $\mathcal{D}$ depends on $s_0$. If $\beta = 1$, OPAL is equivalent to a conditional VAE maximizing the log probability of $\tau$ conditioned on its initial state $s_0$; see Appendix D for more details. Despite the similarities between our proposed objective and VAEs, our presentation of OPAL as a constrained auto-encoding objective is deliberate. As we will show in Section 4.3, our theoretical guarantees depend on a well-optimized auto-encoding loss to provide benefits of using learned primitives for downstream tasks. In contrast, a VAE loss, which simply maximizes the likelihood of observed data, may not necessarily provide a benefit for downstream tasks. For example, if the data can be generated by a single stationary policy, a VAE-optimal policy can simply ignore the latent $z$, thus producing a degenerate space of primitives. In contrast, when the KL constraint in our objective is weak (i.e., $\epsilon_{KL} \gg 0$ or $\beta < 1$), the auto-encoding loss is encouraged to find a unique $z$ for distinct $\tau$ to optimize the reconstruction loss.

4.2 OFFLINE RL WITH PRIMITIVES FOR DOWNSTREAM TASKS
After distilling learned primitives from $\mathcal{D}$ in terms of an encoder $q_\phi(z|\tau)$, a latent primitive policy (or decoder) $\pi_\theta(a|s,z)$, and a prior $\rho_\omega(z|s_0)$, OPAL then applies these learned models to improve offline RL for downstream tasks.

As shown in Figure 2, our goal is to use a dataset with reward-labeled sub-trajectories $\mathcal{D}^r = \{\tau_i := (s_t^i, a_t^i, r_t^i)_{t=0}^{c-1}\}_{i=1}^{N}$ to learn a behavior policy that maximizes cumulative reward. With OPAL, we use the learned primitives $\pi_\theta(a|s,z)$ as low-level controllers, and then learn a high-level controller $\pi_\psi(z|s)$. To do so, we relabel the dataset $\mathcal{D}^r$ in terms of temporally extended transitions using the learned encoder $q_\phi(z|\tau)$. Specifically, we create a dataset $\mathcal{D}^r_{hi} = \{(s_0^i, z_i, \sum_{t=0}^{c-1}\gamma^t r_t^i, s_c^i)\}_{i=1}^{N}$, where $z_i \sim q_\phi(\cdot|\tau_i)$. Given $\mathcal{D}^r_{hi}$, any off-the-shelf offline RL algorithm can be used to learn $\pi_\psi(z|s)$ (in our experiments we use CQL (Kumar et al., 2020b)). As a way to ensure that the $c$-step transitions $\tau_i := (s_t^i, a_t^i, r_t^i)_{t=0}^{c-1}$ remain consistent with the labelled latent action $z_i$, we finetune $\pi_\theta(a|s,z)$ on $\mathcal{D}^r_{lo} = \{((s_t^i, a_t^i)_{t=0}^{c-1}, z_i)\}_{i=1}^{N}$ with a simple latent-conditioned behavioral cloning loss:

$$\min_\theta\; \hat{\mathbb{E}}_{(\tau, z) \sim \mathcal{D}^r_{lo}}\left[-\sum_{t=0}^{c-1}\log \pi_\theta(a_t \mid s_t, z)\right]. \quad (3)$$
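The relabeling step that produces $\mathcal{D}^r_{hi}$ and $\mathcal{D}^r_{lo}$ can be sketched as follows, reusing the `OPAL` module from the previous sketch; the data layout follows the slicing sketch in Section 3 and is an assumption.

```python
import torch

@torch.no_grad()
def relabel_for_offline_rl(opal, D_r, gamma=0.99):
    """Turn c-step reward-labelled sub-trajectories into temporally
    extended transitions (s_0, z, sum_t gamma^t r_t, s_c) for the
    high-level policy, sampling z from the learned encoder q_phi."""
    D_hi, D_lo = [], []
    for s, a, r, s_next in D_r:          # s: (c, ds), a: (c, da), r: (c,)
        h, _ = opal.enc_gru(torch.cat([s, a], dim=-1).unsqueeze(0))
        z = opal._gaussian(opal.enc_head(h.mean(dim=1))).sample()[0]
        ret = sum((gamma ** t) * float(r[t]) for t in range(len(r)))
        D_hi.append((s[0], z, ret, s_next))  # trains pi_psi(z|s), e.g. with CQL
        D_lo.append((s, a, z))               # finetunes pi_theta via Eq. (3)
    return D_hi, D_lo
```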
4.3 SUBOPTIMALITY AND PERFORMANCE BOUNDS FOR OPAL
Now, we will analyze OPAL and derive performance bounds for it in the context of offline RL, formally examining the benefit of the temporal abstraction afforded by OPAL as well as studying what properties $\mathcal{D}$ should possess so that OPAL can improve downstream task performance.

As explained above, when applying OPAL to offline RL, we first learn the primitives $\pi_\theta(a|s,z)$ using $\mathcal{D}$, and then learn a high-level task policy $\pi_\psi(z|s)$ in the space of the primitives. Let $\pi_{\psi^*}(z|s)$ be the optimal task policy. Thus the low-level and high-level together comprise a hierarchical policy, which we denote as $\pi_{\psi^*,\theta}$. To quantify the performance of policies obtained from OPAL, we define the notion of suboptimality of the learned primitives $\pi_\theta(a|s,z)$ in an MDP $\mathcal{M}$ with an associated optimal policy $\pi^*$ as

$$\text{SubOpt}(\theta) := \left|J_{RL}(\pi^*, \mathcal{M}) - J_{RL}(\pi_{\psi^*,\theta}, \mathcal{M})\right|. \quad (4)$$

To relate $\text{SubOpt}(\theta)$ to some notion of divergence between $\pi^*$ and $\pi_{\psi^*,\theta}$, we introduce the following performance difference lemma.

Lemma 4.0.1. If $\pi_1$ and $\pi_2$ are two policies in $\mathcal{M}$, then

$$\left|J_{RL}(\pi_1, \mathcal{M}) - J_{RL}(\pi_2, \mathcal{M})\right| \le \frac{2}{(1-\gamma^c)(1-\gamma)}\, R_{\max}\, \mathbb{E}_{s \sim d_c^{\pi_1}}\left[D_{TV}\!\left(\pi_1(\tau|s)\,\|\,\pi_2(\tau|s)\right)\right], \quad (5)$$

where $D_{TV}(\pi_1(\tau|s)\,\|\,\pi_2(\tau|s))$ denotes the TV divergence over $c$-length sub-trajectories sampled from $\pi_1$ vs. $\pi_2$ (see Section 3). Furthermore,

$$\text{SubOpt}(\theta) \le \frac{2}{(1-\gamma^c)(1-\gamma)}\, R_{\max}\, \mathbb{E}_{s \sim d_c^{\pi^*}}\left[D_{TV}\!\left(\pi^*(\tau|s)\,\|\,\pi_{\psi^*,\theta}(\tau|s)\right)\right]. \quad (6)$$

The proof of the above lemma and all the following results are provided in Appendix B.1. Through the above lemma, we showed that the suboptimality of the learned primitives can be bounded by the total variation divergence between the optimal policy in $\mathcal{M}$ and the optimal policy acting through the learned primitives $\pi_{\psi^*,\theta}$. We now continue to bound the divergence between $\pi^*$ and $\pi_{\psi^*,\theta}$ in terms of how representative $\mathcal{D}$ is of $\pi^*$ and how optimal the primitives are with respect to the auto-encoding objective (Equation 1). We begin with a definition of how often an arbitrary policy appears in $\Pi$, the distribution generating $\mathcal{D}$:

Definition 1. We say a policy $\pi$ in $\mathcal{M}$ is $\delta$-common in $\Pi$ if $\mathbb{E}_{\pi' \sim \Pi,\, s \sim \mu}\left[D_{TV}(\pi'(\tau|s)\,\|\,\pi(\tau|s))\right] \le \delta$.

Theorem 4.1. Let $\theta, \phi, \omega$ be the outputs of solving Equation 1, such that $J(\theta,\phi,\omega) = \epsilon_c$. Then, with high probability $1-\xi$, for any $\pi$ that is $\delta$-common in $\Pi$, there exists a distribution $H$ over $z$ such that for $\pi_H(\tau|s) := \mathbb{E}_{z \sim H}[\pi_\theta(\tau|z,s)]$,

$$\mathbb{E}_{s \sim \mu}\left[D_{TV}(\pi(\tau|s)\,\|\,\pi_H(\tau|s))\right] \le \delta + \sqrt{\frac{1}{2}\left(\epsilon_c + \sqrt{S_J} + H_c\right)}, \quad (7)$$

where $H_c = \mathbb{E}_{\pi \sim \Pi,\, \tau,\, s_0}\left[-\sum_{t=0}^{c-1}\log \pi(a_t|s_t)\right]$ (i.e. a constant and property of $\mathcal{D}$) and $S_J$ is a positive constant incurred due to sampling error in $J(\theta,\phi,\omega)$ and depends on concentration properties of $\pi_\theta(a|s,z)$ and $q_\phi(z|\tau)$.

Corollary 4.1.1. If the optimal policy $\pi^*$ of $\mathcal{M}$ is $\delta$-common in $\Pi$, and $d^{\pi^*} \approx d_c^{\pi^*}$, then, with high probability $1-\xi$,

$$\text{SubOpt}(\theta) \le \frac{2}{(1-\gamma^c)(1-\gamma)}\, R_{\max}\left(\delta + \sqrt{\frac{1}{2}\left(\epsilon_c + \sqrt{S_J} + H_c\right)}\right). \quad (8)$$

As we can see, $\text{SubOpt}(\theta)$ will reduce as $\mathcal{D}$ gets closer to $\pi^*$ (i.e. $\delta$ approaches 0) and better primitives are learned (i.e. $\epsilon_c$ decreases). While it might be tempting to increase $c$ (i.e. the length of sub-trajectories) to reduce the suboptimality, a larger $c$ will inevitably make it practically harder to control the autoencoding loss $\epsilon_c$, thereby leading to an increase in overall suboptimality and inducing a trade-off in determining the best value of $c$. In our experiments we treat $c$ as a hyperparameter and set it to $c = 10$, although more sophisticated ways to determine $c$ are an interesting avenue for future work.

Until now, we have argued that there exists some near-optimal task policy if $\pi_\theta$ is sufficiently learned and $\pi^*$ is sufficiently well-represented in $\mathcal{D}$. Now, we will show how primitive learning can improve downstream learning, by considering the benefits of using OPAL with offline RL. Building on the policy performance analysis from Kumar et al. (2020b), we now present theoretical results bounding the performance of the policy obtained when offline RL is performed with OPAL.

| Environment | BC | BEAR | EMAQ | CQL | CQL+OPAL (ours) |
|---|---|---|---|---|---|
| antmaze medium (diverse) | 0.0 | 8.0 | 0.0 | 53.7±6.1 | 81.1±3.1 |
| antmaze large (diverse) | 0.0 | 0.0 | 0.0 | 14.9±3.2 | 70.3±2.9 |
| kitchen mixed | 47.5 | 47.2 | 70.8±2.3 | 52.4±2.5 | 69.3±2.7 |
| kitchen partial | 33.8 | 13.1 | 74.6±0.6 | 50.1±1.0 | 80.2±2.4 |

Table 1: Average success rate (%) (over 4 seeds) of offline RL methods: BC, BEAR (Kumar et al., 2019), EMAQ (Ghasemipour et al., 2020), CQL (Kumar et al., 2020b) and CQL+OPAL (ours).

Theorem 4.2. Let $\pi_\psi(z|s)$ be the policy obtained by CQL and let $\pi_{\psi,\theta}(a|s)$ refer to the policy when $\pi_\psi(z|s)$ is used together with $\pi_\theta(a|s,z)$. Let $\{\Pi, \mu\}$ refer to the process generating $\mathcal{D}^r$ in MDP $\mathcal{M}$, with latent-space behavior given by $z \sim H(z|s)$, $s_0 = s$, $z \sim q_\phi(z|\tau)$. Then, $J(\pi_{\psi,\theta}, \mathcal{M}) \ge J(\pi, \mathcal{M}) - \epsilon$ with high probability $1-\xi$, where

$$\epsilon = O\!\left(\frac{1}{(1-\gamma^c)(1-\gamma)}\, \mathbb{E}_{s \sim d^{\pi_\psi}_{\widehat{\mathcal{M}}_H}(s)}\!\left[\sqrt{|\mathcal{Z}|}\left(D_{CQL}(\pi_\psi, \pi_H)(s) + 1\right)\right]\right) \quad (9)$$

$$\qquad -\, \frac{1}{1-\gamma^c}\, \mathbb{E}_{s \sim d^{\pi_\psi}_{\mathcal{M}_H}(s)}\!\left[D_{CQL}(\pi_\psi, \pi_H)(s)\right], \quad (10)$$

where $D_{CQL}$ is a measure of the divergence between two policies; see the appendix for a formal statement.

The precise bound along with a proof is described in Appendix B.1.
Intuitively, this bound suggests that the worst-case deterioration of the learned policy depends on the divergence $D_{CQL}$ between the learned latent-space policy and the actual primitive distribution, which is controlled via any conservative offline RL algorithm (Kumar et al. (2020b) in our experiments), and on the size of the latent space $|\mathcal{Z}|$. Crucially, note that comparing Equation 9 to the performance bound for CQL (Equation 6 in Kumar et al. (2020b)) reveals several benefits pertaining to (1) temporal abstraction, namely a reduction in the factor of horizon by virtue of $c$, and (2) a reduction in the amount of worst-case error propagation due to a reduced action space $|\mathcal{Z}|$ vs. $|\mathcal{A}|$. Thus, as evident from the above bound, the total error induced by a combination of distributional shift and sampling is significantly reduced when OPAL is used, compared to the standard RL counterpart of this bound, which is affected by the size of the entire action space at each and every timestep of the horizon. This formalizes our intuition that OPAL helps to partly mitigate distributional shift and sampling error. One downside of using a latent space policy is that we incur unsupervised learning error while learning primitives. However, empirically, this unsupervised learning error is dominated by the other error terms pertaining to offline RL. That is, it is much easier to control the unsupervised loss than the errors arising in offline RL.

5 EVALUATION
In this section, we will empirically show that OPAL improves learning of downstream tasks with offline RL, and then briefly show the same for few-shot imitation learning, online RL, and online multi-task transfer learning. Unless otherwise stated, we use $c = 10$ and $\dim(\mathcal{Z}) = 8$. See Appendix C for further implementation and experimental details. Visualizations and code are available at https://sites.google.com/view/opal-iclr

5.1 OFFLINE RL WITH OPAL
Description: We use environments and datasets provided in D4RL (Fu et al., 2020). Since the aim of our method is specifically to perform offline RL in settings where the offline data comprises varied and undirected multi-task behavior, we focus on Antmaze medium (diverse dataset), Antmaze large (diverse dataset), and Franka kitchen (mixed and partial datasets). The Antmaze datasets involve a simulated ant robot performing undirected navigation in a maze. The task is to use this undirected dataset to solve a specific point-to-point navigation problem, traversing the maze from one corner to the opposite corner, with only a sparse 0-1 completion reward for reaching the goal. The kitchen datasets involve a Franka robot manipulating multiple objects (microwave, kettle, etc.) either in an undirected manner (mixed dataset) or in a partially task-directed manner (partial dataset).

[Figure 3: State visitation heatmaps for antmaze medium policies learned using (1) CQL and (2) CQL+OPAL, and antmaze large policies learned using (3) CQL and (4) CQL+OPAL. Panels: (a) medium (CQL), (b) medium (CQL+OPAL), (c) large (CQL), (d) large (CQL+OPAL).]

| Environment | BC | BC+OPAL (ours) | BC+SVAE |
|---|---|---|---|
| antmaze medium (diverse) | 30.1±3.2 | 81.5±2.7 | 72.8±2.3 |
| antmaze large (diverse) | 9.2±2.5 | 63.5±2.3 | 49.4±2.2 |

Table 2: Average success rate (%) (over 4 seeds) of few-shot IL methods: BC, BC+OPAL, and BC+SVAE (Wang et al., 2017).
The task is to use the datasets to arrange objects in a desired configuration, with only a sparse 0-1 completion reward for every object that attains the target configuration.

Baseline: We use behavior cloning (BC), BEAR (Kumar et al., 2019), EMAQ (Ghasemipour et al., 2020), and CQL (Kumar et al., 2020b) as baselines. We compare them to CQL+OPAL, which first uses OPAL to distill primitives from the offline dataset before applying CQL to learn a primitive-directing high-level policy.

Results: As shown in Table 1, CQL+OPAL outperforms nearly all the baselines on antmaze (see Figure 1 and Figure 3 for visualization) and kitchen tasks, with the exception of EMAQ having similar performance on kitchen mixed. To ensure a fair comparison with EMAQ, we use an autoregressive primitive policy. With the exception of EMAQ on kitchen mixed, we are not aware of any existing offline RL algorithms that achieve similarly good performance on these tasks; moreover, we are not aware of any existing online RL algorithms which solve these tasks (see Table 3 for some comparisons), highlighting the benefit of using offline datasets to circumvent exploration challenges. There are two potential reasons for OPAL's success. First, temporally-extended primitives could make the reward propagation learning problem easier. Second, the primitives may provide a better latent action space than the atomic actions of the environment. To understand the relative importance of these factors, we experimented with an ablation of CQL+OPAL that uses $c = 1$ to remove temporal abstraction. In this case, we find the method's performance to be similar to standard CQL. This implies that the temporal abstraction provided by OPAL is one of the main contributing factors to its good performance. This observation also agrees with our theoretical analysis. See Appendix E for a detailed discussion.

5.2 FEW-SHOT IMITATION LEARNING WITH OPAL
Description: Previously, we assumed that we have access to a task reward function, but only undirected data that performs other tasks. Now, we will study the opposite case, where we are not provided with a reward function for the new task either, but instead receive a small number of task-specific demonstrations that illustrate optimal behavior. Simply imitating these few demonstrations is insufficient to obtain a good policy, and our experiments evaluate whether OPAL can effectively incorporate the prior data to enable few-shot adaptation in this setting. We use the Antmaze environments (diverse datasets) to evaluate our method and use an expert policy for these environments to sample $n = 10$ successful trajectories.

Baseline and Results: For baselines, we use behavior cloning (BC) and the model from Wang et al. (2017), which prescribes using a sequential VAE (SVAE) over state trajectories in conjunction with imitation learning. As shown in Table 2, BC+OPAL clearly outperforms the other baselines, showing the importance of temporal abstraction and ascertaining the quality of the learned primitives. See Appendix A for a detailed discussion.

| Environment | HIRO | SAC+BC | SAC+OPAL (ours) | DDQN+DDCO |
|---|---|---|---|---|
| antmaze medium sparse (diverse) | 0.0 | 0.0 | 81.6±3.7 | 0.0 |
| antmaze large sparse (diverse) | 0.0 | 0.0 | 0.0 | 0.0 |
| antmaze medium dense (diverse) | 0.0 | 0.0 | 81.3±3.3 | 0.0 |
| antmaze large dense (diverse) | 12 | 0.0 | 81.5±3.9 | 0.0 |

Table 3: Average success rate (%) (over 4 seeds) of online RL methods: HIRO (Nachum et al., 2018a), SAC+BC, SAC+OPAL, and DDQN+DDCO (Krishnan et al., 2017).
These methods were run for 2.5e6 steps for the antmaze medium environments and 17.5e6 steps for the antmaze large environments.

| Models | MT10 | MT50 |
|---|---|---|
| PPO | 15.2±4.8 | 5.1±2.2 |
| PPO+OPAL (ours) | 70.1±4.3 | 45.3±3.1 |
| SAC | 39.5 | 28.8 |

Table 4: Due to improved exploration, PPO+OPAL outperforms PPO and SAC on MT10 and MT50 in terms of average success rate (%) (over 4 seeds).

5.3 ONLINE RL AND MULTI-TASK TRANSFER WITH OPAL
Description: For online RL and multi-task transfer learning, we learn a task policy in the space of primitives $\pi_\theta(a|s,z)$ while keeping $\pi_\theta$ fixed. For multi-task transfer, the task policy also takes in the task id, and we use $c = 5$ and $\dim(\mathcal{Z}) = 8$. Since the primitives need to transfer to a different state distribution for multi-task transfer, the primitive policy only learns the action sub-trajectory distribution and doesn't take in state feedback. See Appendix A for a detailed description of the models. For online RL, we use the Antmaze environments (diverse datasets) with sparse and dense rewards for evaluating our method. For online multi-task transfer learning, we learn primitives with expert data from a pick-and-place task and then use them to learn a multi-task policy for MT10 and MT50 (from metaworld (Yu et al., 2020)), containing 10 and 50 robotic manipulation tasks which need to be solved simultaneously.

Baseline and Results: For online RL, we use HIRO (Nachum et al., 2018b), a state-of-the-art hierarchical RL method, SAC (Haarnoja et al., 2018) with behavior cloning (BC) pre-training on $\mathcal{D}$, and Discovery of Continuous Options (DDCO) (Krishnan et al., 2017), which uses $\mathcal{D}$ to learn a discrete set of primitives and then learns a task policy in the space of those primitives with online RL (Double DQN (DDQN) (Van Hasselt et al., 2015)). For online multi-task transfer learning, we use PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018) as baselines. As shown in Table 3 and Table 4, OPAL uses temporal abstraction to improve exploration and thus accelerates online RL and multi-task transfer learning. See Appendix A for a detailed discussion.

6 DISCUSSION
We proposed Offline Primitives for Accelerating offline RL (OPAL) as a preprocessing step for extracting recurring primitive behaviors from undirected and unlabelled datasets of diverse behaviors. We derived theoretical statements which describe under what conditions OPAL can improve learning of downstream offline RL tasks and showed how these improvements manifest in practice, leading to significant improvements in complex manipulation tasks. We further showed empirical demonstrations of OPAL's application to few-shot imitation learning, online RL, and online multi-task transfer learning. In this work, we focused on simple auto-encoding models for representing OPAL, and an interesting avenue for future work is scaling up this basic paradigm to image-based tasks.

7 ACKNOWLEDGEMENTS
We would like to thank Ben Eysenbach and Kamyar Ghasemipour for valuable discussions at different points over the course of this work. This work was supported by Google, the DARPA Machine Common Sense grant, and an MIT-IBM grant.
bDV_GCn84xx
Interesting research question but motivation of the method unclear
6: Marginally above acceptance threshold
In the RL setting, this paper tackles the case where an agent may have access to large amounts of offline experience data. The objective of the work is to find an effective way to leverage this data for finding temporally extended primitive behaviors. The paper provides results that show how performing offline primitive learning can be leveraged for improving few-shot imitation learning as well as exploration and transfer on a variety of benchmark domains. The paper tackles an important question in reinforcement learning: learning temporally extended primitive behaviors from off-policy data is a very relevant question. However, I found the motivation for the approach quite vague as well as different elements that require clarification (see below) and because of this, I can't recommend acceptance. Motivation for the approach: - if the downstream policy has to stay close to the offline data distribution, it seems to me that if the offline data distribution is obtained with a bad policy, this can not lead to interesting decision-making. - can you explain why equation 2 "motivates better generalization"? Other concerns: - Why can gamma not have a value of 0 (first paragraph preliminaries) - What does it mean "To capture multi-modality in a dataset" ? (second paragraph preliminaries) - Figure 1: What does the (81.1%) and (70.3%) refer to? - Table 4: why is SAC given without standard deviation? Some text improvements: - "we focus on focus on" Other comment: - The number of seeds (4) should ideally be increased.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning ### Paper Abstract Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent’s ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications, the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this offline setting. Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which reduces the effective horizon yielding better learning in theory, and improved offline RL in practice. In addition to benefiting offline policy optimization, we show that performing offline primitive learning in this way can also be leveraged for improving few-shot imitation learning as well as exploration and transfer in online RL on a variety of benchmark domains. Visualizations and code are available at https://sites.google.com/view/opal-iclr ### Paper Keywords ["Offline Reinforcement Learning", "Primitive Discovery", "Unsupervised Learning"] ### Paper Content ABSTRACTReinforcement learning (RL) has achieved impressive performance in a variety ofonline settings in which an agent’s ability to query the environment for transitionsand rewards is effectively unlimited. However, in many practical applications, thesituation is reversed: an agent may have access to large amounts of undirectedoffline experience data, while access to the online environment is severely lim-ited. In this work, we focus on this offline setting. Our main insight is that, whenpresented with offline data composed of a variety of behaviors, an effective wayto leverage this data is to extract a continuous space of recurring and temporallyextended primitive behaviors before using these primitives for downstream tasklearning. Primitives extracted in this way serve two purposes: they delineate thebehaviors that are supported by the data from those that are not, making them use-ful for avoiding distributional shift in offline RL; and they provide a degree of tem-poral abstraction, which reduces the effective horizon yielding better learning intheory, and improved offline RL in practice. In addition to benefiting offline policyoptimization, we show that performing offline primitive learning in this way canalso be leveraged for improving few-shot imitation learning as well as explorationand transfer in online RL on a variety of benchmark domains. 
Visualizations andcode are available at https://sites.google.com/view/opal-iclr1 I NTRODUCTIONReinforcement Learning (RL) systems have achieved impressive performance in a variety of on-line settings such as games (Silver et al., 2016; Tesauro, 1995; Brown & Sandholm, 2019) androbotics (Levine et al., 2016; Dasari et al., 2019; Peters et al., 2010; Parmas et al., 2019; Pinto &Gupta, 2016; Nachum et al., 2019a), where the agent can act in the environment and sample asmany transitions and rewards as needed. However, in many practical applications the agent’s abilityto continuously act in the environment may be severely limited due to practical concerns (Dulac-Arnold et al., 2019). For example, a robot learning through trial and error in the real world requirescostly human supervision, safety checks, and resets (Atkeson et al., 2015), rendering many stan-dard online RL algorithms inapplicable (Matsushima et al., 2020). However, in such settings wemight instead have access to large amounts of previously logged data, which could be logged froma baseline hand-engineered policy or even from other related tasks. For example, in self-drivingapplications, one may have access to large amounts of human driving behavior; in robotic applica-tions, one might have data of either humans or robots performing similar tasks. While these offlinedatasets are often undirected (generic human driving data on various routes in various cities may notbe directly relevant to navigation of a specific route within a specific city) and unlabelled (generichuman driving data is often not labelled with the human’s intended route or destination), this data isstill useful in that it can inform the algorithm about what is possible to do in the real world, withoutthe need for active exploration.In this paper, we study how, in this offline setting, an effective strategy to leveraging unlabeled andundirected past data is to utilize unsupervised learning to extract potentially useful and temporallyextended primitive skills to learn what types of behaviors are possible . For example, consider adataset of an agent performing undirected navigation in a maze environment (Figure 1). While thedataset does not provide demonstrations of exclusively one specific point-to-point navigation task,Work done during an internship at Google Brain1Published as a conference paper at ICLR 2021(a) medium (diverse) (b) medium (CQL+OPAL) (c) large (diverse) (d) large (CQL+OPAL)Figure 1: Visualization of (a subset of) diverse datasets for (a) antmaze medium and (c) antmazelarge, along with trajectories sampled from CQL+OPAL trained on diverse datasets of (b) antmazemedium and (d) antmaze large.it nevertheless presents clear indications of which temporally extended behaviors are useful andnatural in this environment (e.g., moving forward, left, right, and backward), and our unsupervisedlearning objective aims to distill these behaviors into temporally extended primitives. Once theselocomotive primitive behaviors are extracted, we can use them as a compact constrained temporally-extended action space for learning a task policy with offline RL, which only needs to focus on taskrelevant navigation, thereby making task learning easier. 
For example, once a specific point-to-point navigation is commanded, the agent can leverage the learned primitives for locomotion and only focus on the task of navigation, as opposed to learning locomotion and navigation from scratch.

We refer to our proposed unsupervised learning method as Offline Primitives for Accelerating offline reinforcement Learning (OPAL), and apply this basic paradigm to offline RL, where the agent is given a single offline dataset to use for both the initial unsupervised learning phase and then a subsequent task-directed offline policy optimization phase. Despite the fact that no additional data is used, we find that our proposed unsupervised learning technique can dramatically improve offline policy optimization compared to performing offline policy optimization on the raw dataset directly. To the best of our knowledge, ours is the first work to theoretically justify and experimentally verify the benefits of primitive learning in offline RL settings, showing that hierarchies can provide temporal abstraction that allows us to reduce the effect of compounding errors in offline RL. These theoretical and empirical results are notably in contrast to previous related work in online hierarchical RL (Nachum et al., 2019b), which found that improved exploration is the main benefit afforded by hierarchically learned primitives. We instead show significant benefits in the offline RL setting, where exploration is irrelevant.

Beyond offline RL, and although this isn't the main focus of the work, we also show the applicability of our method for accelerating RL by incorporating OPAL as a preprocessing step to standard online RL, few-shot imitation learning, and multi-task transfer learning. In all settings, we demonstrate that the use of OPAL can improve the speed and quality of downstream task learning.

2 RELATED WORK

Offline RL. Offline RL presents the problem of learning a policy from a fixed prior dataset of transitions and rewards. Recent works in offline RL (Kumar et al., 2019; Levine et al., 2020; Wu et al., 2019; Ghasemipour et al., 2020; Jaques et al., 2019; Fujimoto et al., 2018) constrain the policy to be close to the data distribution to avoid the use of out-of-distribution actions (Kumar et al., 2019; Levine et al., 2020). To constrain the policy, some methods use distributional penalties, as measured by KL divergence (Levine et al., 2020; Jaques et al., 2019), MMD (Kumar et al., 2019), or Wasserstein distance (Wu et al., 2019). Other methods first sample actions from the behavior policy and then either clip the maximum deviation from those actions (Fujimoto et al., 2018) or just use those actions (Ghasemipour et al., 2020) during the value backup to stay within the support of the offline data. In contrast to these works, OPAL uses an offline dataset for unsupervised learning of a continuous space of primitives. The use of these primitives for downstream tasks implicitly constrains a learned primitive-directing policy to stay close to the offline data distribution. As we demonstrate in our experiments, the use of OPAL in conjunction with an off-the-shelf offline RL algorithm in this way can yield significant improvement compared to applying offline RL to the dataset directly.

Online skill discovery.
There are a number of recent works (Eysenbach et al., 2018; Nachum et al., 2018a; Sharma et al., 2019) which use unsupervised objectives to discover skills and use the discovered skills for planning (Sharma et al., 2019), few-shot imitation learning, or online RL (Eysenbach et al., 2018; Nachum et al., 2018a). However, these works focus on online settings and assume access to the environment. In contrast, OPAL focuses on settings where a large dataset of diverse behaviors is provided but access to the environment is restricted. It leverages these static offline datasets to discover primitive skills with better state coverage and avoids the exploration issue of learning primitives from scratch.

Hierarchical policy learning. Hierarchical policy learning involves learning a hierarchy of policies where a low-level policy acts as primitive skills and a high-level policy directs the low-level policy to solve a task. While some works (Bacon et al., 2017; Stolle & Precup, 2002; Peng et al., 2019) learn a discrete set of lower-level policies, each behaving as a primitive skill, other works (Vezhnevets et al., 2017; Nachum et al., 2018b; 2019a; Hausman et al., 2018) learn a continuous space of primitive skills representing the lower-level policy. These methods have mostly been applied in online settings. However, there have been some recent variants of the above works (Lynch et al., 2020; Shankar & Gupta, 2020; Krishnan et al., 2017; Merel et al., 2018) which extract skills from a prior dataset and use them either for performing tasks directly (Lynch et al., 2020) or for learning downstream tasks (Shankar & Gupta, 2020; Krishnan et al., 2017; Merel et al., 2018) with online RL. While OPAL is related to these works, we mainly focus on leveraging the learned primitives for asymptotically improving the performance of offline RL; i.e., both the primitive learning and the downstream task must be solved using a single static dataset. Furthermore, we provide performance bounds for OPAL and enumerate the specific properties an offline dataset should possess to guarantee improved downstream task learning, while such theoretical guarantees are largely absent from existing work.

3 PRELIMINARIES

We consider the standard Markov decision process (MDP) setting (Puterman, 1994), specified by a tuple $M = (S, A, P, \mu, r, \gamma)$ where $S$ represents the state space, $A$ represents the action space, $P(s'|s,a)$ represents the transition probability, $\mu(s)$ represents the initial state distribution, $r(s,a) \in (-R_{max}, R_{max})$ represents the reward function, and $\gamma \in (0,1)$ represents the discount factor. A policy $\pi$ in this MDP corresponds to a function $S \to \Delta(A)$, where $\Delta(A)$ is the simplex over $A$. It induces a discounted future state distribution $d^\pi$, defined by $d^\pi(s) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$, where $P(s_t = s \mid \pi)$ is the probability of reaching the state $s$ at time $t$ by running $\pi$ on $M$. For a positive integer $k$, we use $d^\pi_k(s) = (1-\gamma^k)\sum_{t=0}^{\infty} \gamma^{tk} P(s_{tk} = s \mid \pi)$ to denote the every-$k$-step state distribution of $\pi$. The return of policy $\pi$ in MDP $M$ is defined as $J_{RL}(\pi, M) = \frac{1}{1-\gamma} E_{s\sim d^\pi,\, a\sim\pi(a|s)}[r(s,a)]$. We represent the reward- and discount-agnostic environment as a tuple $E = (S, A, P, \mu)$.

We aim to use a large, unlabeled, and undirected experience dataset $D := \{\tau_i := (s_t, a_t)_{t=0}^{c-1}\}_{i=1}^N$ associated with $E$ to extract primitives and improve offline RL for downstream task learning.
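To make the structure of $D$ concrete, the following is a minimal sketch (ours, not from the paper) of slicing logged trajectories into $c$-step sub-trajectories $\tau = (s_t, a_t)_{t=0}^{c-1}$. The function name and the choice of non-overlapping windows are assumptions for illustration.

```python
import numpy as np

def chunk_trajectories(trajectories, c=10):
    """Split long (states, actions) trajectories into c-step sub-trajectories.

    `trajectories` is a list of (states, actions) pairs, where `states` has
    shape (T, state_dim) and `actions` has shape (T, action_dim).
    Returns a list of sub-trajectories tau = (s_t, a_t)_{t=0}^{c-1}.
    """
    dataset = []
    for states, actions in trajectories:
        T = len(actions)
        # Non-overlapping c-step windows (an assumption; strided windows
        # would also be consistent with the definition of D).
        for start in range(0, T - c + 1, c):
            dataset.append((states[start:start + c],
                            actions[start:start + c]))
    return dataset
```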
To account for the fact that the dataset $D$ may be generated by a mixture of diverse policies starting at diverse initial states, we assume $D$ is generated by first sampling a behavior policy $\beta \sim B$ along with an initial state $s \sim \mu$, where $B, \mu$ represent some (unknown) distributions over policies and states, respectively, and then running $\beta$ on $E$ for $c$ time steps starting at $s_0 = s$. We define the probability of a sub-trajectory $\tau := (s_0, a_0, \ldots, s_{c-1}, a_{c-1})$ in $D$ under a policy $\pi$ as $\pi(\tau) = \mu(s_0)\prod_{t=1}^{c-1} P(s_t|s_{t-1}, a_{t-1})\prod_{t=0}^{c-1}\pi(a_t|s_t)$, and the conditional probability as $\pi(\tau|s) = \mathbb{1}[s = s_0]\prod_{t=1}^{c-1} P(s_t|s_{t-1}, a_{t-1})\prod_{t=0}^{c-1}\pi(a_t|s_t)$. In this work, we will show how to apply unsupervised learning techniques to $D$ to extract a continuous space of primitives $\pi_\theta(a|s,z)$, where $z \in Z$, the latent space inferred by unsupervised learning. We intend to use the learned $\pi_\theta(a|s,z)$ to asymptotically improve the performance of offline RL for downstream task learning. For offline RL, we assume the existence of a dataset $D_r := \{\tau_i := (s_t, a_t, r_t)_{t=0}^{c-1}\}_{i=1}^N$, corresponding to the same sub-trajectories in $D$ labelled with MDP rewards. Additionally, we can use the extracted primitives for other applications such as few-shot imitation learning, online RL, and online multi-task transfer learning. We review the additional assumptions for these applications in Appendix A.

[Figure 2 schematic: (1) offline unsupervised primitive learning with OPAL, in which the encoder $q_\phi(z|\tau)$, the primitive policy $\pi_\theta(a|s,z)$ and the prior $\rho_\omega(z|s)$ are trained on the unlabelled and undirected data $D = \{\tau = (s_t, a_t)_{t=0}^{c-1}\}$ with an auto-encoding loss and a KL constraint; (2) offline training of the task policy $\pi_\psi(z|s)$ on the labelled data $D_r = \{\tau = (s_t, a_t, r_t)_{t=0}^{c-1}\}$, split into $D_r^{hi} = \{(s_0, z, \sum_{t=0}^{c-1}\gamma^t r_t, s_c)\}$ for offline RL and $D_r^{lo} = \{((s_t, a_t)_{t=0}^{c-1}, z)\}$ for fine-tuning with BC; (3) test-time policy execution, where $\pi_\psi(z|s)$ emits a latent $z$ once per $c$ steps and $\pi_\theta(a|s,z)$ acts at every step.]

Figure 2: Overview of offline RL with OPAL.
OPAL is trained on unlabelled data $D$ using an auto-encoding objective. For offline RL, the encoder first labels the reward-labelled data $D_r$ with latents, and divides it into $D_r^{hi}$ and $D_r^{lo}$. The task policy is trained on $D_r^{hi}$ using offline RL while the primitive policy is fine-tuned on $D_r^{lo}$ using behavioral cloning (BC).

4 OFFLINE RL WITH OPAL

In this section, we elaborate on OPAL, our proposed method for extracting primitives from $D$ and then leveraging these primitives to learn downstream tasks with offline RL. We begin by describing our unsupervised objective, which distills $D$ into a continuous space of latent-conditioned and temporally-extended primitive policies $\pi_\theta(a|s,z)$. For learning downstream tasks with offline RL, we first label $D_r$ with appropriate latents using the OPAL encoder $q_\phi(z|\tau)$ and then learn a policy $\pi_\psi(z|s)$ which is trained to sample an appropriate primitive every $c$ steps to optimize a specific task, using any off-the-shelf offline RL algorithm. A graphical overview of offline RL with OPAL is shown in Figure 2. While we mainly focus on offline RL, we briefly discuss how to use the learned primitives for few-shot imitation learning, online RL, and multi-task online transfer learning in Section 5 and provide more details in Appendix A.

4.1 EXTRACTING TEMPORALLY-EXTENDED PRIMITIVES FROM DATA

We would like to extract a continuous space of temporally-extended primitives $\pi_\theta(a|s,z)$ from $D$ which we can later use as an action space for learning downstream tasks with offline RL. This would reduce our effective task horizon, thereby making the downstream learning easier, as well as allow the downstream policy to stay close to the offline data distribution, thereby bringing stability to the downstream learning. We propose the following objective for learning $\pi_\theta$, incorporating an auto-encoding loss function with a KL constraint to encourage better generalization:

$\min_{\theta,\phi,\omega} J(\theta,\phi,\omega) = \hat{E}_{\tau\sim D,\, z\sim q_\phi(z|\tau)}\left[-\sum_{t=0}^{c-1}\log \pi_\theta(a_t|s_t,z)\right]$  (1)

s.t. $\hat{E}_{\tau\sim D}\left[D_{KL}(q_\phi(z|\tau)\,\|\,\rho_\omega(z|s_0))\right] \le \epsilon_{KL}$  (2)

where $\hat{E}$ indicates empirical expectation. The learned components of this objective may be interpreted as encoder, decoder, and prior:

Encoder: $q_\phi(z|\tau)$ encodes the trajectory of state-action pairs into a distribution in latent space and gives out parameters of that distribution. In our case, we represent $q_\phi$ with a bidirectional GRU which takes in $\tau$ and gives out parameters of a Gaussian distribution $(\mu_z^{enc}, \sigma_z^{enc})$.

Decoder (aka Primitive Policy): $\pi_\theta(a|s,z)$ is the latent-conditioned policy. It maximizes the conditional log-likelihood of actions in $\tau$ given the state and the latent vector. In our implementation, we parameterize it as a feed-forward neural network which takes in the current state and latent vector and gives out parameters of a Gaussian distribution for the action $(\mu_a, \sigma_a)$.

Prior/Primitive Predictor: $\rho_\omega(z|s_0)$ tries to predict the encoded distribution of the sub-trajectory from its initial state. Our implementation uses a feed-forward neural network which takes in the initial state and gives out parameters of a Gaussian distribution $(\mu_z^{pr}, \sigma_z^{pr})$.
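The following is a minimal PyTorch sketch of how the constrained auto-encoding objective (Eqs. 1-2) might be implemented, with the KL constraint already folded in as a penalty (as the paper describes doing in practice). The hidden sizes, the diagonal-Gaussian heads, and the pooling of the bidirectional GRU output are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class OPAL(nn.Module):
    def __init__(self, s_dim, a_dim, z_dim=8, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(s_dim + a_dim, hidden, batch_first=True,
                              bidirectional=True)           # q_phi(z | tau)
        self.enc_head = nn.Linear(2 * hidden, 2 * z_dim)    # -> (mu, log_sigma)
        self.decoder = nn.Sequential(                       # pi_theta(a | s, z)
            nn.Linear(s_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * a_dim))
        self.prior = nn.Sequential(                         # rho_omega(z | s_0)
            nn.Linear(s_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim))

    def loss(self, s, a, beta=0.1):
        # s: (B, c, s_dim), a: (B, c, a_dim); one c-step sub-trajectory per row.
        h, _ = self.encoder(torch.cat([s, a], dim=-1))
        mu_q, log_sig_q = self.enc_head(h[:, -1]).chunk(2, dim=-1)
        q_z = Normal(mu_q, log_sig_q.exp())
        z = q_z.rsample()                                   # reparameterized sample
        # Broadcast z across the c time steps and decode action Gaussians.
        z_rep = z.unsqueeze(1).expand(-1, s.size(1), -1)
        mu_a, log_sig_a = self.decoder(
            torch.cat([s, z_rep], dim=-1)).chunk(2, dim=-1)
        nll = -Normal(mu_a, log_sig_a.exp()).log_prob(a).sum(dim=(1, 2))
        # KL constraint (Eq. 2) implemented as a beta-weighted penalty.
        mu_p, log_sig_p = self.prior(s[:, 0]).chunk(2, dim=-1)
        kl = kl_divergence(q_z, Normal(mu_p, log_sig_p.exp())).sum(-1)
        return (nll + beta * kl).mean()
```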
KL-constraint (Equation 2). As an additional component of the algorithm, we enforce consistency in the latent variables predicted by the encoder $q_\phi(z|\tau)$ and the prior $\rho_\omega(z|s_0)$. Since our goal is to obtain a primitive $z$ that captures a temporal sequence of actions for a given sub-trajectory $\tau = (s_0, a_0, \ldots, s_{c-1}, a_{c-1})$ (as defined in Section 3), we utilize a regularization that enforces the distribution $q_\phi(z|\tau)$ to be close to just predicting the primitive, or the latent variable $z$, given the start state of this sub-trajectory, $s_0$ (i.e., $\rho_\omega(z|s_0)$). This conditioning on the initial state regularizes the distribution $q_\phi(z|\tau)$ to not overfit to the complete sub-trajectory, as the same $z$ should also be predictable given only $s_0$. The above form of KL constraint is inspired by past works (Lynch et al., 2020; Kumar et al., 2020a). In particular, Lynch et al. (2020) add a KL constraint (Equation 2, "Plan prior matching" in Lynch et al. (2020)) that constrains the distribution over latent variables computed only given the initial state and the goal state to the distribution over latent variables computed using the entire trajectory. Our form in Equation 2 is similar to this prior except that we do not operate in a goal-conditioned RL setting and hence only condition $\rho_\omega$ on the initial state $s_0$.

In practice, rather than solving the constrained optimization directly, we implement the KL constraint as a penalty, weighted by an appropriately chosen coefficient $\beta$. Thus, one may interpret our unsupervised objective as using a sequential $\beta$-VAE (Higgins et al., 2016). However, as mentioned above, our prior is conditioned on $s_0$ and learned as part of the optimization because the set of primitives active in $D$ depends on $s_0$. If $\beta = 1$, OPAL is equivalent to a conditional VAE maximizing the log probability of $\tau$ conditioned on its initial state $s_0$; see Appendix D for more details. Despite the similarities between our proposed objective and VAEs, our presentation of OPAL as a constrained auto-encoding objective is deliberate. As we will show in Section 4.3, our theoretical guarantees depend on a well-optimized auto-encoding loss to provide benefits of using learned primitives $\pi_\theta$ for downstream tasks. In contrast, a VAE loss, which simply maximizes the likelihood of observed data, may not necessarily provide a benefit for downstream tasks. For example, if the data can be generated by a single stationary policy, a VAE-optimal policy can simply ignore the latent $z$, thus producing a degenerate space of primitives. In contrast, when the KL constraint in our objective is weak (i.e., $\epsilon_{KL} \gg 0$ or $\beta < 1$), the auto-encoding loss is encouraged to find a unique $z$ for distinct $\tau$ to optimize the reconstruction loss.

4.2 OFFLINE RL WITH PRIMITIVES FOR DOWNSTREAM TASKS

After distilling learned primitives from $D$ in terms of an encoder $q_\phi(z|\tau)$, a latent primitive policy (or decoder) $\pi_\theta(a|s,z)$, and a prior $\rho_\omega(z|s_0)$, OPAL then applies these learned models to improve offline RL for downstream tasks.

As shown in Figure 2, our goal is to use a dataset with reward-labeled sub-trajectories $D_r = \{\tau_i := (s_t^i, a_t^i, r_t^i)_{t=0}^{c-1}\}_{i=1}^N$ to learn a behavior policy that maximizes cumulative reward. With OPAL, we use the learned primitives $\pi_\theta(a|s,z)$ as low-level controllers, and then learn a high-level controller $\pi_\psi(z|s)$. To do so, we relabel the dataset $D_r$ in terms of temporally extended transitions using the learned encoder $q_\phi(z|\tau)$. Specifically, we create a dataset $D_r^{hi} = \{(s_0^i, z^i, \sum_{t=0}^{c-1}\gamma^t r_t^i, s_c^i)\}_{i=1}^N$, where $z^i \sim q_\phi(\cdot|\tau^i)$. Given $D_r^{hi}$, any off-the-shelf offline RL algorithm can be used to learn $\pi_\psi(z|s)$ (in our experiments we use CQL (Kumar et al., 2020b)).
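The following is a minimal sketch of this relabeling step, reusing the `OPAL` module from the sketch above; the data layout (`states` of shape `(c+1, s_dim)` so that `states[-1]` is $s_c$) and the function name are our assumptions.

```python
import torch
from torch.distributions import Normal

@torch.no_grad()
def relabel_with_latents(opal, reward_subtrajs, gamma=0.99):
    """Build D_r^hi = {(s_0, z, sum_t gamma^t r_t, s_c)} and
    D_r^lo = {((s_t, a_t)_{t=0}^{c-1}, z)} from reward-labelled c-step
    sub-trajectories, using the learned encoder q_phi(z | tau)."""
    d_hi, d_lo = [], []
    for states, actions, rewards in reward_subtrajs:
        s = torch.as_tensor(states[:-1]).float().unsqueeze(0)   # (1, c, s_dim)
        a = torch.as_tensor(actions).float().unsqueeze(0)       # (1, c, a_dim)
        h, _ = opal.encoder(torch.cat([s, a], dim=-1))
        mu_q, log_sig_q = opal.enc_head(h[:, -1]).chunk(2, dim=-1)
        z = Normal(mu_q, log_sig_q.exp()).sample()[0]           # z ~ q_phi(.|tau)
        ret = sum((gamma ** t) * r for t, r in enumerate(rewards))
        d_hi.append((states[0], z, ret, states[-1]))  # temporally extended step
        d_lo.append(((states[:-1], actions), z))      # for BC fine-tuning (Eq. 3)
    return d_hi, d_lo
```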
As a way to ensure that the $c$-step transitions $\tau^i := (s_t^i, a_t^i, r_t^i)_{t=0}^{c-1}$ remain consistent with the labelled latent action $z^i$, we finetune $\pi_\theta(a|s,z)$ on $D_r^{lo} = \{((s_t^i, a_t^i)_{t=0}^{c-1}, z^i)\}_{i=1}^N$ with a simple latent-conditioned behavioral cloning loss:

$\min_\theta \hat{E}_{(\tau,z)\sim D_r^{lo}}\left[-\sum_{t=0}^{c-1}\log\pi_\theta(a_t|s_t,z)\right].$  (3)

4.3 SUBOPTIMALITY AND PERFORMANCE BOUNDS FOR OPAL

Now, we will analyze OPAL and derive performance bounds for it in the context of offline RL, formally examining the benefit of the temporal abstraction afforded by OPAL as well as studying what properties $D$ should possess so that OPAL can improve downstream task performance.

As explained above, when applying OPAL to offline RL, we first learn the primitives $\pi_\theta(a|s,z)$ using $D$, and then learn a high-level task policy $\pi_\psi(z|s)$ in the space of the primitives. Let $\pi_{\psi^*}(z|s)$ be the optimal task policy. Thus the low-level and high-level together comprise a hierarchical policy, which we denote as $\pi_{\psi^*,\theta}$. To quantify the performance of policies obtained from OPAL, we define the notion of suboptimality of the learned primitives $\pi_\theta(a|s,z)$ in an MDP $M$ with an associated optimal policy $\pi^*$ as

$\mathrm{SubOpt}(\theta) := |J_{RL}(\pi^*, M) - J_{RL}(\pi_{\psi^*,\theta}, M)|.$  (4)

To relate $\mathrm{SubOpt}(\theta)$ to some notion of divergence between $\pi^*$ and $\pi_{\psi^*,\theta}$, we introduce the following performance difference lemma.

Lemma 4.0.1. If $\pi_1$ and $\pi_2$ are two policies in $M$, then

$|J_{RL}(\pi_1, M) - J_{RL}(\pi_2, M)| \le \frac{2}{(1-\gamma^c)(1-\gamma)} R_{max}\, E_{s\sim d^{\pi_1}_c}\left[D_{TV}(\pi_1(\tau|s)\,\|\,\pi_2(\tau|s))\right],$  (5)

where $D_{TV}(\pi_1(\tau|s)\,\|\,\pi_2(\tau|s))$ denotes the TV divergence over $c$-length sub-trajectories sampled from $\pi_1$ vs. $\pi_2$ (see Section 3). Furthermore,

$\mathrm{SubOpt}(\theta) \le \frac{2}{(1-\gamma^c)(1-\gamma)} R_{max}\, E_{s\sim d^{\pi^*}_c}\left[D_{TV}(\pi^*(\tau|s)\,\|\,\pi_{\psi^*,\theta}(\tau|s))\right].$  (6)

The proof of the above lemma and all the following results are provided in Appendix B.1.

Through the above lemma, we showed that the suboptimality of the learned primitives can be bounded by the total variation divergence between the optimal policy in $M$ and the optimal policy acting through the learned primitives. We now continue to bound the divergence between $\pi^*$ and $\pi_{\psi^*,\theta}$ in terms of how representative $D$ is of $\pi^*$ and how optimal the primitives are with respect to the auto-encoding objective (Equation 1). We begin with a definition of how often an arbitrary policy appears in $B$, the distribution generating $D$:

Definition 1. We say a policy $\pi$ in $M$ is $\epsilon$-common in $B$ if $E_{\beta\sim B,\, s\sim d^\pi_c}\left[D_{TV}(\pi(\tau|s)\,\|\,\beta(\tau|s))\right] \le \epsilon$.

Theorem 4.1. Let $\theta^*, \phi^*, \omega^*$ be the outputs of solving Equation 1, such that $J(\theta^*,\phi^*,\omega^*) = \epsilon_c$. Then, with high probability $1-\delta$, for any $\pi$ that is $\epsilon$-common in $B$, there exists a distribution $H$ over $z$ such that for $\pi_H(\tau|s) := E_{z\sim H}[\pi_{\theta^*}(\tau|z,s)]$,

$E_{s\sim d^\pi_c}\left[D_{TV}(\pi(\tau|s)\,\|\,\pi_H(\tau|s))\right] \le \epsilon + \sqrt{\frac{1}{2}\left(\epsilon_c + \sqrt{S_J} + H_c\right)}$  (7)

where $H_c = E_{\beta\sim B,\, s_0\sim\mu,\, \tau\sim\beta}\left[-\sum_{t=0}^{c-1}\log\beta(a_t|s_t)\right]$ (i.e., a constant and a property of $D$) and $S_J$ is a positive constant incurred due to sampling error in $J(\theta^*,\phi^*,\omega^*)$ that depends on concentration properties of $\pi_{\theta^*}(a|s,z)$ and $q_{\phi^*}(z|\tau)$.

Corollary 4.1.1. If the optimal policy $\pi^*$ of $M$ is $\epsilon$-common in $B$, and $d^{\pi^*}_c \approx d^{\pi^*}_1$, then, with high probability $1-\delta$,

$\mathrm{SubOpt}(\theta^*) \le \frac{2}{(1-\gamma^c)(1-\gamma)} R_{max}\left(\epsilon + \sqrt{\frac{1}{2}\left(\epsilon_c + \sqrt{S_J} + H_c\right)}\right).$  (8)

As we can see, $\mathrm{SubOpt}(\theta^*)$ will reduce as $D$ gets closer to $\pi^*$ (i.e., $\epsilon$ approaches 0) and better primitives are learned (i.e., $\epsilon_c$ decreases). While it might be tempting to increase $c$ (i.e., the length of sub-trajectories) to reduce the suboptimality, a larger $c$ will inevitably make it practically harder to control the autoencoding loss $\epsilon_c$, thereby leading to an increase in overall suboptimality and inducing a trade-off in determining the best value of $c$. In our experiments we treat $c$ as a hyperparameter and set it to $c = 10$, although more sophisticated ways to determine $c$ can be an interesting avenue for future work.

Till now, we have argued that there exists some near-optimal task policy $\pi_{\psi^*}$ if $\pi_\theta$ is sufficiently learned and $\pi^*$ is sufficiently well-represented in $D$.
Now, we will show how primitive learning can improve downstream learning, by considering the benefits of using OPAL with offline RL. Building on the policy performance analysis from Kumar et al. (2020b), we now present theoretical results bounding the performance of the policy obtained when offline RL is performed with OPAL.

Theorem 4.2. Let $\pi_\psi(z|s)$ be the policy obtained by CQL and let $\pi_{\psi,\theta}(a|s)$ refer to the policy when $\pi_\psi(z|s)$ is used together with $\pi_\theta(a|s,z)$. Let $\pi_H$ refer to the policy generating $D_r$ in MDP $M$, with $z \sim H(z|s)$, $s_0 = s$, $z \sim q_\phi(z|\tau)$. Then, $J(\pi_{\psi,\theta}, M) \ge J(\pi_H, M) - \kappa$ with high probability $1-\delta$, where

$\kappa = O\left(\frac{1}{(1-\gamma^c)(1-\gamma)}\, E_{s\sim d^{\hat{M}_H}_{\pi_\psi}}\left[\sqrt{|Z|}\,(D_{CQL}(\pi_\psi, \pi_H)(s) + 1)\right]\right)$  (9)

$\quad - \frac{\alpha}{1-\gamma^c}\, E_{s\sim d^{M_H}_{\pi_\psi}}\left[D_{CQL}(\pi_\psi, \pi_H)(s)\right],$  (10)

where $D_{CQL}$ is a measure of the divergence between two policies; see the appendix for a formal statement.

The precise bound along with a proof is described in Appendix B.1. Intuitively, this bound suggests that the worst-case deterioration of the learned policy depends on the divergence $D_{CQL}$ between the learned latent-space policy and the actual primitive distribution, which is controlled via any conservative offline RL algorithm (Kumar et al. (2020b) in our experiments), and on the size of the latent space $|Z|$. Crucially, note that comparing Equation 9 to the performance bound for CQL (Equation 6 in Kumar et al. (2020b)) reveals several benefits pertaining to (1) temporal abstraction, a reduction in the factor of horizon by virtue of $c$, and (2) a reduction in the amount of worst-case error propagation due to a reduced action space $|Z|$ vs. $|A|$. Thus, as evident from the above bound, the total error induced by a combination of distributional shift and sampling is significantly reduced when OPAL is used, as compared to the standard RL counterpart of this bound, which is affected by the size of the entire action space at each and every timestep in the horizon. This formalizes our intuition that OPAL helps to partly mitigate distributional shift and sampling error. One downside of using a latent space policy is that we incur unsupervised learning error while learning primitives. However, empirically, this unsupervised learning error is dominated by other error terms pertaining to offline RL. That is, it is much easier to control the unsupervised loss than errors arising in offline RL.

| Environment | BC | BEAR | EMAQ | CQL | CQL+OPAL (ours) |
|---|---|---|---|---|---|
| antmaze medium (diverse) | 0.0 | 8.0 | 0.0 | 53.7±6.1 | 81.1±3.1 |
| antmaze large (diverse) | 0.0 | 0.0 | 0.0 | 14.9±3.2 | 70.3±2.9 |
| kitchen mixed | 47.5 | 47.2 | 70.8±2.3 | 52.4±2.5 | 69.3±2.7 |
| kitchen partial | 33.8 | 13.1 | 74.6±0.6 | 50.1±1.0 | 80.2±2.4 |

Table 1: Average success rate (%) (over 4 seeds) of offline RL methods: BC, BEAR (Kumar et al., 2019), EMAQ (Ghasemipour et al., 2020), CQL (Kumar et al., 2020b) and CQL+OPAL (ours).

5 EVALUATION

In this section, we will empirically show that OPAL improves learning of downstream tasks with offline RL, and then briefly show the same with few-shot imitation learning, online RL, and online multi-task transfer learning. Unless otherwise stated, we use $c = 10$ and $\dim(Z) = 8$. See Appendix C for further implementation and experimental details. Visualizations and code are available at https://sites.google.com/view/opal-iclr

5.1 OFFLINE RL WITH OPAL

Description: We use environments and datasets provided in D4RL (Fu et al., 2020). Since the aim of our method is specifically to perform offline RL in settings where the offline data comprises varied and undirected multi-task behavior, we focus on Antmaze medium (diverse dataset), Antmaze large (diverse dataset), and Franka kitchen (mixed and partial datasets).
The Antmaze datasets involve a simulated ant robot performing undirected navigation in a maze. The task is to use this undirected dataset to solve a specific point-to-point navigation problem, traversing the maze from one corner to the opposite corner, with only a sparse 0-1 completion reward for reaching the goal. The kitchen datasets involve a Franka robot manipulating multiple objects (microwave, kettle, etc.) either in an undirected manner (mixed dataset) or in a partially task-directed manner (partial dataset). The task is to use the datasets to arrange objects in a desired configuration, with only a sparse 0-1 completion reward for every object that attains the target configuration.

Baseline: We use Behavior cloning (BC), BEAR (Kumar et al., 2019), EMAQ (Ghasemipour et al., 2020), and CQL (Kumar et al., 2020b) as baselines. We compare them to CQL+OPAL, which first uses OPAL to distill primitives from the offline dataset before applying CQL to learn a primitive-directing high-level policy.

Results: As shown in Table 1, CQL+OPAL outperforms nearly all the baselines on antmaze (see Figure 1 and Figure 3 for visualization) and kitchen tasks, with the exception of EMAQ having similar performance on kitchen mixed. To ensure a fair comparison with EMAQ, we use an autoregressive primitive policy. With the exception of EMAQ on kitchen mixed, we are not aware of any existing offline RL algorithms that achieve similarly good performance on these tasks; moreover, we are not aware of any existing online RL algorithms which solve these tasks (see Table 3 for some comparisons), highlighting the benefit of using offline datasets to circumvent exploration challenges. There are two potential reasons for OPAL's success. First, temporally-extended primitives could make the reward propagation learning problem easier. Second, the primitives may provide a better latent action space than the atomic actions of the environment. To understand the relative importance of these factors, we experimented with an ablation of CQL+OPAL that uses $c = 1$ to remove temporal abstraction. In this case, we find the method's performance to be similar to standard CQL. This implies that the temporal abstraction provided by OPAL is one of the main contributing factors to its good performance. This observation also agrees with our theoretical analysis. See Appendix E for a detailed discussion.

(a) medium (CQL) (b) medium (CQL+OPAL) (c) large (CQL) (d) large (CQL+OPAL)
Figure 3: State visitation heatmaps for antmaze medium policies learned using (1) CQL and (2) CQL+OPAL, and antmaze large policies learned using (3) CQL and (4) CQL+OPAL.

5.2 FEW-SHOT IMITATION LEARNING WITH OPAL

Description: Previously, we assumed that we have access to a task reward function, but only undirected data that performs other tasks. Now, we will study the opposite case, where we are not provided with a reward function for the new task either, but instead receive a small number of task-specific demonstrations that illustrate optimal behavior. Simply imitating these few demonstrations is insufficient to obtain a good policy, and our experiments evaluate whether OPAL can effectively incorporate the prior data to enable few-shot adaptation in this setting.

| Environment | BC | BC+OPAL (ours) | BC+SVAE |
|---|---|---|---|
| antmaze medium (diverse) | 30.1±3.2 | 81.5±2.7 | 72.8±2.3 |
| antmaze large (diverse) | 9.2±2.5 | 63.5±2.3 | 49.4±2.2 |

Table 2: Average success rate (%) (over 4 seeds) of few-shot IL methods: BC, BC+OPAL, and BC+SVAE (Wang et al., 2017).
We use the Antmaze environments (diverse datasets) to evaluate our method and use an expert policy for these environments to sample $n = 10$ successful trajectories.

Baseline and Results: For baselines, we use Behavior cloning (BC) and the model from Wang et al. (2017), which prescribes using a sequential VAE (SVAE) over state trajectories in conjunction with imitation learning. As shown in Table 2, BC+OPAL clearly outperforms the other baselines, showing the importance of temporal abstraction and ascertaining the quality of the learned primitives. See Appendix A for a detailed discussion.

5.3 ONLINE RL AND MULTI-TASK TRANSFER WITH OPAL

Description: For online RL and multi-task transfer learning, we learn a task policy in the space of primitives $\pi_\theta(a|s,z)$ while keeping $\pi_\theta$ fixed. For multi-task transfer, the task policy also takes in the task id, and we use $c = 5$ and $\dim(Z) = 8$. Since the primitives need to transfer to a different state distribution for multi-task transfer, the primitive policy only learns the action sub-trajectory distribution and does not take in the state feedback. See Appendix A for a detailed description of the models. For online RL, we use the Antmaze environments (diverse datasets) with sparse and dense rewards for evaluating our method. For online multi-task transfer learning, we learn primitives with expert data from a pick-and-place task and then use them to learn a multi-task policy for MT10 and MT50 (from metaworld (Yu et al., 2020)), containing 10 and 50 robotic manipulation tasks which need to be solved simultaneously.

Baseline and Results: For online RL, we use HIRO (Nachum et al., 2018b), a state-of-the-art hierarchical RL method, SAC (Haarnoja et al., 2018) with Behavior cloning (BC) pre-training on $D$, and Discovery of Continuous Options (DDCO) (Krishnan et al., 2017), which uses $D$ to learn a discrete set of primitives and then learns a task policy in the space of those primitives with online RL (Double DQN (DDQN) (Van Hasselt et al., 2015)). For online multi-task transfer learning, we use PPO (Schulman et al., 2017) and SAC (Haarnoja et al., 2018) as baselines. As shown in Table 3 and Table 4, OPAL uses temporal abstraction to improve exploration and thus accelerate online RL and multi-task transfer learning. See Appendix A for a detailed discussion.

| Environment | HIRO | SAC+BC | SAC+OPAL (ours) | DDQN+DDCO |
|---|---|---|---|---|
| antmaze medium sparse (diverse) | 0.0 | 0.0 | 81.6±3.7 | 0.0 |
| antmaze large sparse (diverse) | 0.0 | 0.0 | 0.0 | 0.0 |
| antmaze medium dense (diverse) | 0.0 | 0.0 | 81.3±3.3 | 0.0 |
| antmaze large dense (diverse) | 12 | 0.0 | 81.5±3.9 | 0.0 |

Table 3: Average success rate (%) (over 4 seeds) of online RL methods: HIRO (Nachum et al., 2018a), SAC+BC, SAC+OPAL, and DDQN+DDCO (Krishnan et al., 2017). These methods were run for 2.5e6 steps for antmaze medium environments and 17.5e6 steps for antmaze large environments.

| Models | MT10 | MT50 |
|---|---|---|
| PPO | 15.2±4.8 | 5.1±2.2 |
| PPO+OPAL (ours) | 70.1±4.3 | 45.3±3.1 |
| SAC | 39.5 | 28.8 |

Table 4: Due to improved exploration, PPO+OPAL outperforms PPO and SAC on MT10 and MT50 in terms of average success rate (%) (over 4 seeds).

6 DISCUSSION

We proposed Offline Primitives for Accelerating offline RL (OPAL) as a preprocessing step for extracting recurring primitive behaviors from undirected and unlabelled datasets of diverse behaviors. We derived theoretical statements which describe under what conditions OPAL can improve learning of downstream offline RL tasks, and showed how these improvements manifest in practice, leading to significant improvements in complex manipulation tasks.
We further showed empirical demonstrations of OPAL's application to few-shot imitation learning, online RL, and online multi-task transfer learning. In this work, we focused on simple auto-encoding models for representing OPAL, and an interesting avenue for future work is scaling up this basic paradigm to image-based tasks.

7 ACKNOWLEDGEMENTS

We would like to thank Ben Eysenbach and Kamyar Ghasemipour for valuable discussions at different points over the course of this work. This work was supported by Google, DARPA Machine Common Sense grant and MIT-IBM grant.
8VUywK1AT7d
thecvf.com/ECCV/2022/Workshop/VIPriors
2022
GraphVid: It Only Takes a Few Nodes to Understand a Video
["Eitan Kosman", "Dotan Di Castro"]
We propose a concise representation of videos that encodes perceptually meaningful features into graphs. With this representation, we aim to leverage the large amount of redundancies in videos and save computations. First, we construct superpixel-based graph representations of videos by considering superpixels as graph nodes and create spatial and temporal connections between adjacent superpixels. Then, we leverage Graph Convolutional Networks to process this representation and predict the desired output. As a result, we are able to train models with much fewer parameters, which translates into short training periods and a reduction in computation resource requirements. A comprehensive experimental study on the publicly available datasets Kinetics-400 and Charades shows that the proposed method is highly cost-effective and uses limited commodity hardware during training and inference. It reduces the computational requirements 10-fold while achieving results that are comparable to state-of-the-art methods. We believe that the proposed approach is a promising direction that could open the door to solving video understanding more efficiently and enable more resource limited users to thrive in this research field.
["videos", "nodes", "video", "representation", "graphvid", "video graphvid", "concise representation", "meaningful features", "graphs", "large amount"]
GraphVid: It Only Takes a Few Nodes to Understand a Video
Anonymous ECCV submission
Paper ID 4861

Abstract. We propose a concise representation of videos that encodes perceptually meaningful features into graphs. With this representation, we aim to leverage the large amount of redundancies in videos and save computations. First, we construct superpixel-based graph representations of videos by considering superpixels as graph nodes and create spatial and temporal connections between adjacent superpixels. Then, we leverage Graph Convolutional Networks to process this representation and predict the desired output. As a result, we are able to train models with much fewer parameters, which translates into short training periods and a reduction in computation resource requirements. A comprehensive experimental study on the publicly available datasets Kinetics-400 and Charades shows that the proposed method is highly cost-effective and uses limited commodity hardware during training and inference. It reduces the computational requirements 10-fold while achieving results that are comparable to state-of-the-art methods. We believe that the proposed approach is a promising direction that could open the door to solving video understanding more efficiently and enable more resource limited users to thrive in this research field.

1 Introduction

The field of video understanding has gained prominence thanks to the rising popularity of videos, which have become the most common form of data on the web. On each newly uploaded video, a variety of tasks can be performed, such as tagging [18], human action recognition [38], anomaly detection [47], etc. New video-processing algorithms are continuously being developed to automatically organize the web through the flawless accomplishment of the aforementioned tasks.

Nowadays, Deep Neural Networks are the de-facto standard for video understanding [36]. However, with every addition of a new element to the training set (that is, a full training video), more resources are required in order to satisfy the enormous computational needs. On the one hand, the exponential increment in the amount of data raises concerns regarding our ability to handle it in the future. On the other hand, it has also spurred a highly creative research field aimed at finding ways to mitigate this burden.

Among the first generation of video processing methods were ones geared toward adopting 2D convolutional neural networks (CNNs), due to their computational efficiency [44]. Others decomposed 3D convolutions [14, 57] into simpler operators, or split a complex neural network into an ensemble of lightweight networks [9]. However, video understanding has greatly evolved since then, with the current state-of-the-art methods featuring costly attention mechanisms [4, 20, 32, 3, 15, 6, 30].
Beyond accuracy, a prominent advantage of the latest generation of methods is that they process raw data, that is, video frames that do not undergo any advanced pre-processing. Meanwhile, pursuing new video representations and incorporating pre-computed features to accelerate training is a promising direction that requires more extensive research.

(a) Original image (b) Mean superpixels
Fig. 1: A visual comparison between a pixel and a mean-superpixel representation. On the left, the original image is presented. On the right, we present the image formed by generating superpixel regions using SLIC and filling each region with its mean color.

Prior to the renaissance of deep learning [29], much research was done on visual feature generation. Two prominent visual feature generation methods are superpixels¹ and optic-flow². These techniques' ability to encode perceptually meaningful features has greatly contributed to the success of computer vision algorithms. Superpixels provide a convenient, compact representation of images that can be very useful for computationally demanding problems, while optic-flow provides hints about motion. We rely on these methods to construct a novel representation of videos that encodes sufficient information for video understanding: 1) adjacent pixels are grouped together in the form of superpixels, and 2) temporal relations and proximities are expressed via graph connectivity. The example depicted in Figure 1 provides an intuition for the sufficiency of superpixel representation for scene understanding. It contains the superpixel regions obtained via SLIC [2], with each region filled with the mean color. One can clearly discern a person playing a guitar in both images. A different way of depicting the relations between superpixels is a graph with nodes representing superpixels [34, 11, 5]. Such a representation has the advantage of being invariant to rotations and flips, which obviates the need for further augmentation. We here demonstrate how this representation can reduce the computational requirements for processing videos.

¹ Superpixel techniques segment an image into regions by considering similarity measures, defined using perceptual features.
² Optic-flow is the pattern of the apparent motion of an object(s) in the image between two consecutive frames due to the movement of the object or the camera.
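The mean-superpixel rendering of Figure 1 is straightforward to reproduce. The following is a minimal sketch using scikit-image's SLIC; the function name and the number of segments are our choices, not values from the paper.

```python
import numpy as np
from skimage.segmentation import slic

def mean_superpixel_image(frame, n_segments=300):
    """Render a frame as in Fig. 1(b): segment it with SLIC and fill each
    region with its mean RGB color. `frame` is an HxWx3 float array in [0, 1]."""
    labels = slic(frame, n_segments=n_segments, start_label=0)
    out = np.zeros_like(frame)
    for lbl in np.unique(labels):
        mask = labels == lbl
        out[mask] = frame[mask].mean(axis=0)  # mean color of the region
    return out
```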
To the best of our knowledge, we are thefirst to utilize a graph-based representation of videos for efficiency. GraphVid dramat-ically reduces the memory footprint of a model, enabling large batch-sizes that trans-late to better generalization. Moreover, it utilizes models with an order-of-magnitudefewer parameters than the current state-of-the-art models while preserving the predic-tive power. In summary, our contributions are:1. We present GraphVid - a simple and intuitive, yet sufficient representation of videoclips. This simplicity is crucial for delivering efficiency.2. We propose a dedicated GNN for processing the proposed representation. The pro-posed architecture is compared with conventional GNN models in order to demon-strate the importance of each component of GraphVid .3. We present 4 types of new augmentations that are directly applied to the video-graph representation. A thorough ablation study of their configurations is preformedin order to demonstrate the contribution of each.4. We perform a thorough experimental study, and show that GraphVid greatly out-performs previous methods in terms of efficiency - first and foremost, the paperutilizes GNNs for efficient video understanding. We show that it successfully re-duces computations while preserving much of the performance of state-of-the-artapproaches that utilize computationally demanding models.2 Related Work2.1 Deep Learning for Video UnderstandingCNNs have found numerous applications in video processing [33, 50, 60]. These in-clude LSTM-based networks that perform per-frame encoding [45, 51, 60] and the ex-tension of 2D convolutions to the temporal dimension, e.g., 3D CNNs such as C3D[49], R2D [44] and R(2+1)D [50].The success of the Transformer model [52] has led to the development of attention-based models for vision tasks, via self-attention modules that were used to model spatialdependencies in images. NLNet [54] was the first to employ self-attention in a CNN.With this novel attention mechanism, NLNet possible to model long-range dependen-cies between pixels. The next model to be developed was GCNet [7], which simplifiedthe NL-module, thanks to its need for fewer parameters and computations, while pre-serving its performance. A more prominent transition from CNNs to Transformers be-gan with Vision Transformer (ViT) [13], which prompted research aimed at improvingits effectiveness on small datasets, such as Deit [48]. Later, vision-transformers wereadapted for video tasks [35, 4, 6, 15, 30, 32], now crowned as the current state-of-the-artthat top the leader-boards of this field.The usage of graph representation in video understanding sparsely took place inthe work of Wang [55]. They used a pre-trained Resnet variants [22] on the MSCOCOdataset [31] in order to generate object bounding boxes of interest on each video frame.These bounding boxes are later used for the construction of a spatio-temporal graphthat describes how objects change through time, and perform classification on top of135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179ECCV#4861ECCV#48614 ECCV-22 submission ID 4861the spatio-temporal graph with graph convolutional neural networks [26]. However, wenote that the usage of a large backbone for generating object bounding boxes is harmfulfor performance. 
the spatio-temporal graph with graph convolutional neural networks [26]. However, we note that the usage of a large backbone for generating object bounding boxes is harmful to performance. We intend to alleviate this by proposing a lighter graph representation. In combination with a dedicated GNN architecture, our representation greatly outperforms [55] in all metrics.

2.2 Superpixel Representation of Visual Data

Superpixels are groups of perceptually similar pixels that can be used to create visually meaningful entities while heavily reducing the number of primitives for subsequent processing steps [46]. The efficiency of the obtained representation has led to the development of many superpixel-generation algorithms for images [46]. This approach was adapted for volumetric data via the construction of supervoxels [37], which are the trivial extension to depth. These methods were adjusted for use in videos [58] by treating the temporal dimension as depth. However, this results in degraded performance, as inherent assumptions regarding neighboring points in the 3D space do not apply to videos with non-negligible motion. Recent approaches especially designed to deal with videos consider the temporal dimension for generating superpixels that are coherent in time. Xu et al. [59] proposed a hierarchical graph-based segmentation method. This was followed by the work of Chang et al. [8], who suggested that Temporal Superpixels (TSPs) can serve as a representation of videos, modeling the flow between frames with a bilateral Gaussian process.

2.3 Graph Convolutional Neural Networks

Introduced in [26], Graph Convolutional Networks (GCNs) have been widely adopted for graph-related tasks [61, 28]. The basic form of a GCN uses aggregators, such as average and summation, to obtain a node representation given its neighbors. This basic form was rapidly extended to more complex architectures with more sophisticated aggregators. For instance, Graph Attention Networks [53] use dot-product-based attention to calculate weights for edges. Relational GCNs [42] add to this framework by also considering multiple edge types, namely, relations (such as temporal and spatial relations), and aggregate information from each relation via separate weights in a single layer. Recently, GCNs have been adopted for tasks involving audio [12, 62] and images [34, 11, 5]. Following the success of graph models in efficiently performing image-based tasks, we are eager to demonstrate our extension of the image-graph representation to videos.

3 GraphVid - A Video-Graph Representation

In this section, we introduce the methodology of GraphVid. First, we present our method for video-graph representation generation, depicted in Figure 2 and described in Algorithm 1. Then, we present our training methodology that utilizes this representation. Finally, we discuss the benefits of GraphVid and propose several augmentations.

Fig. 2: The flow of GraphVid. Given a video clip, we generate superpixels using SLIC for each frame. The superpixels are used to construct a region-adjacency graph of a frame, with superpixels as nodes. Then, the graph sequence is connected via temporal proximities to construct a dynamic graph, which is later fed into a GNN for prediction.

3.1 Overview

In our framework, we deal with video clips that are sequences of $T$ video frames $v \in \mathbb{R}^{T\times C\times H\times W}$.
The goal is to transform vinto a graph that is sufficiently infor-mative for further processing. To achieve this, we use SLIC [2] to generate Sseg-mented regions, called superpixels , over each frame. We denote each segmented regionasRt,i, where t∈[T]represents the temporal frame index, and i∈[S]the superpixel-segmented region index. The following is a description of how we utilize the superpixelsto construct our video-graph representation.Graph Elements - We define the undirected graph Gas a 3-tuple G= (V,E,R), whereV={Rt,i|t∈[T], i∈[S]}is the set of nodes representing the segmented regions, Eis the set of labeled edges (to be defined hereunder) and R={spatial, temporal }is a set of relations as defined in [42]. Each node Rt,iis associated with an attributeRt,i.c∈R3representing the mean RGB color in that segmented region. Additionally,we refer to Rt,i.yandRt,i.xas the coordinates of the superpixel’s centroid, which weuse to compute the distances between superpixels. These distances, which will laterserve as the edge attributes of the graph, are computed bydtq→tpi,j =sRtq,i.y−Rtp,j.yH2+Rtq,i.x−Rtp,j.xW2. (1)Here, tq, tp∈[T]denote frame indices, and i, j∈[S]denote superpixel indices gen-erated for the corresponding frames. The set of edges Eis composed of two types: 1)intra-frame edges (denoted Espatial) - edges between nodes corresponding to superpix-els in the same frame. We refer to these edges as spatial edges .2)inter-frame edges(denoted Etemporal) - edges between nodes corresponding to superpixels in two se-quential frames. We refer to these edges as temporal edges . Finally, the full set of edgesis given by E=Espatial∪ Etemporal. Following is a description of how we constructboth components.225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269ECCV#4861ECCV#48616 ECCV-22 submission ID 4861Spatial Edges - In similar to [5], we generate a region-adjacency graph for each frame,with edge attributes describing the distances between superpixel centroids. The nota-tionEspatialt refers to the set of the spatial-edges connecting nodes corresponding tosuperpixels in the frame t, andEspatial=STt=1Espatialt .Each edge eti,j∈ Espatialisassociated with an attribute that describes the euclidean distance between the two su-perpixel centroids iandjin frame t, that is, dt→ti,j. These distances provide informationabout the relations between the superpixels. Additionally, the distances are invariant torotations and image-flips, which eliminates the need for those augmentations. Note thatnormalization of the superpixels’ centroid coordinates is required in order to obscureinformation regarding the resolution of frames, which is irrelevant for many tasks, suchas action classification. In Figure 3, we demonstrate the procedure of spatial edge gen-eration for a cropped image that results in a partial graph of the whole image. Each su-perpixel is associated with a node, which is connected via edges to other adjacent nodes(with the distances between the superpixels’ centroids serving as edge attributes).Fig. 3: Spatial edge generation. First, superpixels are generated. 
Temporal Edges - In modeling the temporal relations, we aim to connect nodes that tend to describe the same objects in subsequent frames. To do so, we rely on the assumption that in subsequent frames, such superpixels are attributed similar colors and remain in close spatial proximity. For each superpixel R_{t,i}, we construct a neighborhood N_{t,i} that contains superpixels from its subsequent frame whose centroids lie within a distance of at most d_{proximity} ∈ (0, 1] with respect to the Euclidean distance. Then, we find the superpixel with the most similar color in this neighborhood. As a result, the t-th frame is associated with the set of edges E^{temporal}_{t→t+1} that model temporal relations with its subsequent frame. Formally:

N_{t,i} = \{ R_{t+1,j} \mid d_{i,j}^{t \to t+1} < d_{proximity} \},   (2)
neighbor(R_{t,i}) = \arg\min_{R_{t+1,j} \in N_{t,i}} | R_{t,i}.c - R_{t+1,j}.c |_2,   (3)
E^{temporal}_{t \to t+1} = \{ (R_{t,i},\ temporal,\ neighbor(R_{t,i})) \mid i \in [S] \}.   (4)

Equipped with these definitions, we define the set of temporal edges connecting nodes corresponding to superpixels in frame t to superpixels in frame t+1 as the union of the temporal edge sets generated for all the frames: E^{temporal} = ∪_{t=1}^{T−1} E^{temporal}_{t→t+1}.

Algorithm 1 Graph Generation
Input: v ∈ R^{T×C×H×W}  ▷ The input video clip
Parameters: S ∈ N  ▷ Number of superpixels per frame
            d_{proximity} ∈ (0, 1]  ▷ Diameter of neighborhoods
Output: G = (V, E, R)  ▷ A video-graph
  V, V_last, E^{spatial}, E^{temporal} ← ∅, ∅, ∅, ∅
  for t ∈ [T] do
    SP ← SLIC(v[t], S)
    V ← V ∪ SP
    E^{spatial} ← E^{spatial} ∪ regionAdjacentEdges(SP)
    E^{temporal}_{t−1→t} ← ∅
    for R_{t−1,i} ∈ V_last do
      N_{t−1,i} ← { R_{t,j} | d^{t−1→t}_{i,j} < d_{proximity} }
      nn_{t−1,i} ← argmin_{R_{t,j} ∈ N_{t−1,i}} |R_{t−1,i}.c − R_{t,j}.c|_2
      E^{temporal}_{t−1→t} ← E^{temporal}_{t−1→t} ∪ { (R_{t−1,i}, temporal, nn_{t−1,i}) }
    end for
    E^{temporal} ← E^{temporal} ∪ E^{temporal}_{t−1→t}
    V_last ← SP
  end for
  return G = (V, E = E^{spatial} ∪ E^{temporal}, R = {spatial, temporal})
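A minimal Python sketch of the temporal-edge rule in Eqs. (2)-(4), together with the outer loop of Algorithm 1, follows. It reuses frame_to_spatial_graph from the previous sketch; all names are illustrative.

import numpy as np

def temporal_edges(cent_prev, col_prev, cent_next, col_next, d_proximity):
    """Eqs. (2)-(4): link each superpixel of frame t to the most
    color-similar superpixel among spatially close ones in frame t+1."""
    edges = []
    for i in range(len(cent_prev)):
        d = np.linalg.norm(cent_next - cent_prev[i], axis=1)
        cand = np.nonzero(d < d_proximity)[0]  # neighborhood N_{t,i}
        if cand.size == 0:
            continue  # no sufficiently close superpixel in the next frame
        j = cand[np.linalg.norm(col_next[cand] - col_prev[i], axis=1).argmin()]
        edges.append((i, int(j), float(d[j])))
    return edges

def video_to_graph(frames, S, d_proximity):
    """Outer loop of Algorithm 1 over the frames of a clip."""
    nodes, spatial, temporal, prev = [], [], [], None
    for frame in frames:
        colors, cents, e = frame_to_spatial_graph(frame, S)
        off = len(nodes)  # offset into the global node list
        nodes.extend(colors)
        spatial += [(off + i, off + j, w) for i, j, w in e]
        if prev is not None:
            p_off, p_cents, p_colors = prev
            temporal += [(p_off + i, off + j, w) for i, j, w in
                         temporal_edges(p_cents, p_colors, cents, colors, d_proximity)]
        prev = (off, cents, colors)
    return np.stack(nodes), spatial, temporal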
3.2 Model Architecture

In order to model both the spatial and temporal relations between superpixels, our model primarily relies on the Neural Relational Model [42], which is an extension of GCNs [26] to large-scale relational data. In a Neural Relational Model, the propagation model for calculating the forward-pass update of a node, denoted by v_i, is defined as

h_i^{(l+1)} = \sigma\left( \sum_{r \in \mathcal{R}} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)} \right),   (5)

where N_i^r denotes the set of neighbor indices of node i under relation r ∈ R (not to be confused with the notation N_{t,i} from Eq. 2). c_{i,r} is a problem-specific normalization constant that can either be learned or chosen in advance (such as c_{i,r} = |N_i^r|). To incorporate edge features, we adapt the approach proposed in [10], which concatenates node and edge attributes as a layer's input, yielding the following:

h_i^{(l+1)} = \sigma\left( \sum_{r \in \mathcal{R}} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} [h_j^{(l)}, e_{i,j}] + W_0^{(l)} h_i^{(l)} \right),   (6)

where e_{i,j} is the feature of the edge connecting nodes v_i and v_j.
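As an illustration, a minimal PyTorch sketch of one layer of Eq. (6) follows, with a separate weight per relation, edge-attribute concatenation, and mean aggregation standing in for c_{i,r} = |N_i^r|. This is a sketch of the layer's semantics under these assumptions, not the paper's released implementation; the class and argument names are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RelEdgeConv(nn.Module):
    """One layer of Eq. (6): a linear map W_r over concatenated
    neighbor/edge features per relation, plus a self-loop transform W_0."""
    def __init__(self, in_dim, edge_dim, out_dim, num_relations=2):
        super().__init__()
        self.rel = nn.ModuleList(
            nn.Linear(in_dim + edge_dim, out_dim, bias=False)
            for _ in range(num_relations))
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, edge_index, edge_attr, edge_type):
        # h: (N, in_dim); edge_index: (2, E) rows = (source, target);
        # edge_attr: (E, edge_dim); edge_type: (E,) with values in {0..R-1}.
        out = self.self_loop(h)
        src, dst = edge_index
        for r, lin in enumerate(self.rel):
            m = edge_type == r
            if not bool(m.any()):
                continue
            msg = lin(torch.cat([h[src[m]], edge_attr[m]], dim=-1))
            # Mean over incoming messages implements c_{i,r} = |N_i^r|.
            agg = torch.zeros(h.size(0), msg.size(1), device=h.device)
            agg.index_add_(0, dst[m], msg)
            deg = torch.zeros(h.size(0), 1, device=h.device)
            deg.index_add_(0, dst[m], torch.ones(int(m.sum()), 1, device=h.device))
            out = out + agg / deg.clamp(min=1.0)
        return F.elu(out)  # ELU, matching the activations used in Section 4.1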
3.3 Augmentations

We introduce a few possible augmentations that we found useful for training our model, as they improved generalization.

Additive Gaussian Edge Noise (AGEN) - Edge attributes represent distances between superpixel centroids. The coordinates of those centroids may vary due to different superpixel shapes with different centers of mass. To compensate for this, we add a certain amount of noise to each edge attribute. Given a hyper-parameter σ_edge, for each edge attribute e_{u,v} and for each training iteration, we sample a normally distributed variable z_{u,v} ∼ N(0, σ_edge) that is added to the edge attribute.

Additive Gaussian Node Noise (AGNN) - Node attributes represent the colors of regions in each frame. Similarly to edge attributes, the mean color of each segmented region may vary due to different superpixel shapes. To compensate for this, we add a certain amount of noise to each node attribute. Given a hyper-parameter σ_node, for each node attribute v.c of dimension d_c and for each training iteration, we sample a normally distributed variable z_v ∼ N_{d_c}(0, σ_node · I_{d_c}) that is added to the node attribute.

Random Removal of Spatial Edges (RRSE) - This augmentation mimics the regularization effect introduced in DropEdge [40]. Moreover, since the removal of edges leads to fewer message passings in a GCN, it also accelerates training and inference. To perform it, we choose a probability p_edge ∈ [0, 1]; each edge e is then preserved with probability p_edge.

Random Removal of Superpixels (RRS) - SLIC [2] is sensitive to its initialization. Consequently, each video clip may have several graph representations during different training iterations and inference. This can be mitigated by removing a certain number of superpixels. The outcome is fewer nodes in the corresponding representative graph, as well as fewer edges. Similarly to RRSE, we choose a probability p_node ∈ [0, 1] so that each superpixel is preserved with probability p_node.
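A minimal sketch of the four augmentations on a video-graph follows. It assumes spatial edges carry relation id 0, treats RRS as a boolean keep-mask (node re-indexing is omitted for brevity), and uses illustrative default values; none of the names come from the released code.

import torch

SPATIAL = 0  # assumed relation id for spatial edges

def augment(node_attr, edge_index, edge_attr, edge_type,
            sigma_node=0.2, sigma_edge=0.4, p_edge=0.9, p_node=0.8):
    # AGNN / AGEN: additive Gaussian noise on node colors and edge distances.
    node_attr = node_attr + sigma_node * torch.randn_like(node_attr)
    edge_attr = edge_attr + sigma_edge * torch.randn_like(edge_attr)

    # RRSE: keep each *spatial* edge with probability p_edge.
    keep_e = (edge_type != SPATIAL) | (torch.rand(edge_type.size(0)) < p_edge)

    # RRS: keep each superpixel with probability p_node and drop all
    # edges incident to a removed node (re-indexing omitted here).
    keep_n = torch.rand(node_attr.size(0)) < p_node
    keep_e &= keep_n[edge_index[0]] & keep_n[edge_index[1]]

    return (node_attr, edge_index[:, keep_e], edge_attr[keep_e],
            edge_type[keep_e], keep_n)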
3.4 Benefits of GraphVid

Invariance Qualification - The absence of coordinates leads to invariance in the spatial dimension of each frame. It is evident that such a representation is invariant to rotation, horizontal flip and vertical flip, since the relations between different parts of the image are solely characterized by distances. This, in turn, obviates the need to perform such augmentations during training.

Efficiency - We argue that our graph-based representation is more efficient than raw frames. To illustrate this, let T, C, H and W be the original dimensions of the video clip; that is, the number of frames, the number of channels in each frame, and the height and width of a frame, respectively. The raw representation thus requires T·C·H·W parameters to encode a single input. Now, to calculate the size of the graph-video representation, let S be the number of superpixels in a single frame. By construction, there are at most 4·S edges in each frame because SLIC constrains each superpixel to have 4 adjacent superpixels. Each edge contains 3 values, corresponding to the distance on the image grid, the source node and the target node. Additionally, there are at most S edges between every temporal step. This results in

3 \cdot (\underbrace{4S}_{\text{intra-frame edges}} + \underbrace{(T-1) \cdot S}_{\text{inter-frame edges}}) + \underbrace{C \cdot T \cdot S}_{\text{superpixels}}

parameters in total. Typically, this representation requires far fewer parameters because we choose S so that S ≪ H·W.

Prior Knowledge Incorporation - Optical flow and over-segmentation are encoded within the graph-video representation using the inter-frame and intra-frame edges. This incorporates strong prior knowledge within the resultant representation. For example, optical flow dramatically improved the accuracy of the two-stream methodology proposed in [44]. Additionally, over-segmentation using superpixels has been found useful as input features for machine learning models due to the limited loss of important details, accompanied by a dramatic reduction in processing time by means of reducing the number of elements of the input [21, 11, 5].
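As a worked example of the count above (the frame resolution below is a typical choice, not one stated in the paper):

# Assumed clip: T=20 frames of 224x224 RGB, S=800 superpixels per frame.
T, C, H, W, S = 20, 3, 224, 224, 800

raw   = T * C * H * W               # 3,010,560 values
edges = 3 * (4 * S + (T - 1) * S)   # 55,200 edge values
nodes = C * T * S                   # 48,000 node values
print(raw / (edges + nodes))        # ~29x fewer values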
4 Experiments

We validated GraphVid on two human-action-classification benchmarks. The goal of human action classification is to determine the human-involved action that occurs within a video. The objectives of this empirical study were twofold:
– Analyze the impact of the various parameters on the accuracy of the model.
– As we first and foremost target efficiency, we sought to examine the resource consumption of GraphVid in terms of floating point operations (FLOPs). We followed the conventional protocol [16], which uses single-clip FLOPs as a basic unit of computational cost. We show that we are able to achieve a significant improvement in efficiency over previous methods while preserving state-of-the-art performance.

4.1 Setup

Datasets - We utilize two commonly used datasets for action classification: Kinetics-400 (K400) [23] and Charades [43]. Kinetics-400 [23] is a large-scale video dataset released in 2017 that contains 400 classes, with each category consisting of more than 400 videos. It originally had, in total, around 240K, 19K and 38K videos for the training, validation and testing subsets, respectively. Kinetics datasets are gradually shrinking over time due to videos being taken offline, making it difficult to compare against less recent works. We used a dataset containing 208K, 17K and 33K videos for training, validation and testing, respectively, and report on the most recently available videos. Each video lasts approximately 10 seconds and is assigned a label. The Charades dataset [43] is composed of 9,848 videos of daily indoor activities, each of an average length of 30 seconds. In total, the dataset contains 66,500 temporal annotations for 157 action classes. In the standard split, there are 7,986 training videos and 1,863 validation videos, sampled at 12 frames per second. We follow prior art by reporting the Top-1 and Top-5 recognition accuracy for Kinetics-400 and the mean average precision (mAP) for Charades.

Fig. 4: The general graph neural network architecture we use throughout our experimental study.

Network Architecture and Training - We use GNN variants as backbones for our experiments and feed each of them with our video-graph representation. Specifically, we consider Graph Convolutional Networks [26] (denoted GCNs), Graph Attention Networks [53] (denoted GATs) and Relational Graph Convolutional Networks [42] (denoted RGCNs). The general architecture of our backbones is depicted in Figure 4. It consists of 2 fully-connected (FC) layers with exponential linear unit (ELU) activations that transform the node feature vectors into a 256D feature space. These are followed by 4 layers of the corresponding GNN layer type (that is, either GCN, GAT or RGCN, along with the edge-feature concatenation from Eq. 6) with a hidden size of 512 and ELU activations, followed by global mean pooling, dropout with a probability of 0.2 and a linear layer whose output is the predicted logits; a schematic sketch of this backbone appears at the end of this subsection. For the GAT layers, we use 4 attention heads in each layer and average the attention heads' results to obtain the desired hidden layer size. For the RGCN layers, we specify 2 relations, which correspond to the spatial and temporal relations, as described in Section 3. We use Adam [25] with a learning rate of 1e−3 for optimization and do not change it throughout training.

We divide the videos into clips using a sliding window of 20 frames, with a stride of 2 between every 2 consecutive frames and a stride of 10 between every 2 consecutive video clips. In all the experiments, we used a fixed batch size of 200, which captures the context of a time window spanning 200 × 20 = 4000 frames per batch.

Inference - At test time, we use the same sliding-window methodology as in training. We follow the common practice of processing multiple views of a long video and averaging the per-view logits to obtain the final results. The number of views is determined on the validation set.

Implementation Details - All the experiments were run on an Ubuntu 18.04 machine with an Intel i9-10920X, 93GB RAM and 2 GeForce RTX 3090 GPUs. Our implementation of GraphVid is in Python 3. Specifically, to generate superpixels, we use the SLIC [2] algorithm via its implementation fast-slic [24]. To generate graphs and train the graph neural models, we use PyTorch-Geometric [19]. We use a fixed seed for SLIC's initialization and cache the generated graphs during the first training epochs in order to further reduce the number of computations.
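For concreteness, a minimal sketch of this backbone in plain PyTorch follows, reusing the RelEdgeConv layer sketched in Section 3.2 for the relational variant. The layer sizes mirror the description above; everything else (class name, pooling implementation, batching convention) is illustrative.

import torch
import torch.nn as nn

class GraphVidBackbone(nn.Module):
    """2 FC encoder layers -> 4 relational conv layers -> mean pool -> head."""
    def __init__(self, node_dim=3, edge_dim=1, num_classes=400):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(node_dim, 256), nn.ELU(),
                                 nn.Linear(256, 256), nn.ELU())
        dims = [256, 512, 512, 512, 512]
        self.convs = nn.ModuleList(
            RelEdgeConv(dims[i], edge_dim, dims[i + 1]) for i in range(4))
        self.drop = nn.Dropout(0.2)
        self.head = nn.Linear(512, num_classes)

    def forward(self, x, edge_index, edge_attr, edge_type, batch):
        # batch: (N,) graph id per node, as in PyTorch-Geometric batching.
        h = self.enc(x)
        for conv in self.convs:
            h = conv(h, edge_index, edge_attr, edge_type)
        # Global mean pooling over the nodes of each graph in the batch.
        num_graphs = int(batch.max()) + 1
        pooled = torch.zeros(num_graphs, h.size(1), device=h.device)
        pooled.index_add_(0, batch, h)
        counts = torch.bincount(batch, minlength=num_graphs).clamp(min=1)
        return self.head(self.drop(pooled / counts.unsqueeze(1)))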
4.2 Ablation Study

We conduct an in-depth study on Kinetics-400 to analyze the performance gain contributed by incorporating the different components of GraphVid.

Graph Neural Network Variants and Number of Superpixels per Frame - We assess the performance of different GNN variants: GCN [26] is trained without edge relations (i.e., temporal and spatial edges are treated via the same weights). GAT [53] is trained by employing the attention mechanism for neighborhood aggregation, without edge relations. RGCN [42] is trained with edge relations, as described in Section 3.2.

The results of the action classification on Kinetics-400 are shown in Figure 5. In this series, the number of views is fixed at 8, which is the number of views found to be most effective on the validation set. For all variants, increasing the number of superpixels per frame (S) contributes to the accuracy of the model. We notice a significant improvement in accuracy over the lower range of superpixel counts, while the accuracy begins to saturate for S ≥ 650. Increasing the number of superpixels further leads to bigger inputs, which require more computations. As our goal is to maximize efficiency, we do not experiment with larger inputs in this section.

Fig. 5: The effect of varying the number of superpixels per frame (50 to 800) on the Top-1 and Top-5 test accuracy of GCN, GAT and RGCN on Kinetics-400.

We further present in Table 1 the models' specifications for 800 superpixels, which is the best-performing configuration in this series of experiments. Not surprisingly, the simple GCN variant requires the least amount of computation among the three. Meanwhile, the RGCN variant requires fewer computations than GAT and achieves a higher level of accuracy. We conclude that it is beneficial to incorporate edge relations when wishing to encode temporal and spatial relations in videos, and that those features are not easily learned by heavy computational models such as GAT.

Table 1: Comparison of model specifications for various architectures. We report the Top-1 and Top-5 accuracy on Kinetics-400.

Model | Top-1 | Top-5 | FLOPs (·10^9) | Params (·10^6)
GCN   | 50.1  | 61.6  | 28            | 2.08
GAT   | 54.7  | 64.5  | 56            | 3.93
RGCN  | 66.2  | 74.1  | 42            | 2.99

Fig. 6: The impact of the proposed augmentations on the test accuracy on Kinetics-400, with one panel per augmentation: additive Gaussian edge noise (AGEN, x-axis σ_edge), additive Gaussian node noise (AGNN, x-axis σ_node), random removal of spatial edges (RRSE, x-axis p_edge) and random removal of superpixels (RRS, x-axis p_node).

Augmentations - We assessed the impact of augmentations on test performance and their ability to alleviate over-fitting. For this purpose, we chose the best configuration obtained from the previous experiments, that is, RGCN with 800 superpixels per frame, and trained it while adding one type of augmentation at a time. The results of this series are depicted in Figure 6. Each graph shows the level of accuracy reached by training the model with one of the various parameters that control the augmentation. We begin with the analysis of AGEN and AGNN. Both augmentations relate to the addition of Gaussian noise to the attributes of the edges and the nodes of the input graphs, with the corresponding parameters controlling the standard deviation of that Gaussian noise. The impact of these augmentations is less noticeable as these parameters head towards 0, since lower values reflect scenarios in which little or no augmentation is performed. Continuously increasing the parameter brings about a gradual improvement in accuracy, until a turning point is reached, after which the level of accuracy starts to decline until it reaches ∼1/400, which resembles a random classifier. The decrease in accuracy stems from the noise obscuring the original signal, allegedly forcing the classifier to classify noise that is not generalizable to the test set. In the cases of RRSE and RRS, the random removal of spatial edges harms the accuracy of the model for all values of p_edge < 1. This finding leads us to conclude that spatial edges encode meaningful information about spatial relations between the superpixel entities. Moreover, slightly removing nodes positively impacts the level of accuracy, reaching a peak at around p_node ≈ 0.8. To conclude this series, we present the values that lead to the best Top-1 accuracy score in Table 2.

Table 2: Augmentation parameters and their optimized values.

Param | σ_edge | σ_node | p_edge | p_node
Value | 0.4    | 0.2    | 1      | 0.8
Top-1 | 74.5   | 73     | 66     | 70
Top-5 | 85     | 83     | 74     | 76
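If the augmentation sketch from Section 3.3 is used, these optimized values plug in directly; the tensors below are whatever the graph construction produced (a hypothetical usage line, not from the released code):

table2 = dict(sigma_edge=0.4, sigma_node=0.2, p_edge=1.0, p_node=0.8)
x, ei, ea, et, keep = augment(node_attr, edge_index, edge_attr, edge_type, **table2)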
Fig. 7: Model FLOPs vs. performance: (a) FLOPs vs. Kinetics-400 accuracy; (b) FLOPs vs. Charades mAP. Green bubbles indicate the GraphVid variants from Table 3 and Table 4. The identities of the other models are omitted in order to avoid overloading the plot. GraphVid achieves performance comparable to the state-of-the-art while greatly reducing the number of parameters and FLOPs. For Kinetics-400, RGCN-2000 with the full set of augmentations achieves almost the same performance as all the computationally heavy models on the plot, while requiring the smallest numbers of parameters and FLOPs. For Charades, RGCN-2000 with the full set of augmentations and pretraining on Kinetics-400 is on par with the state-of-the-art, with fewer compute requirements. The bubble radius indicates the number of parameters of the model.

4.3 Comparison to the State-of-the-Art

Kinetics-400 - We present the Kinetics-400 results for our RGCN model variant in Table 3 and Figure 7a, along with comparisons to previous art, including convolution-based and transformer-based methods. Our results are denoted RGCN-d, where d represents the number of superpixels. Additionally, we use the set of augmentations with the individually optimized hyper-parameters from Table 2 to train these models. First, when the RGCN-800 model is trained with the full set of augmentations (denoted Full Aug), it achieves a significantly higher Top-1 accuracy than when it is trained without any augmentation (denoted No Aug) or when each augmentation is applied individually. These results demonstrate the effectiveness of our model and show that our carefully designed augmentations can alleviate overfitting and improve generalization on the test set. Second, all our RGCNs require orders-of-magnitude fewer computations than the current state-of-the-art architectures, as well as more than ×10 fewer parameters.

Charades - We train 2 RGCN variants with 800 and 2000 superpixels per frame with the same set of augmentations and hyper-parameters found in Table 2. Additionally, we follow prior art [17, 15] by pre-training on K400, replacing the last FC layer to match the output dimensionality, and fine-tuning on Charades. Table 4 and Figure 7b show that when our RGCN model is trained with 2000 superpixels per frame, its mAP score is comparable to the current state-of-the-art, but this score is reached with orders-of-magnitude fewer computations and considerably fewer parameters than prior art.

Table 3: Comparisons to the state-of-the-art on the Kinetics-400 dataset. We report the Top-1 and Top-5 accuracy scores. The top section of the table depicts convolution-based models, the middle section depicts transformer-based models, and the bottom section represents our graph-based models.

Method               | Top-1 | Top-5 | Views | FLOPs (·10^9) | Params (·10^6)
SlowFast R101+N [17] | 79.8  | 93.9  | 30    | 234           | 59.9
X3D-XXL R101+N [16]  | 80.4  | 94.6  | 30    | 144           | 20.3
MViT-B, 32×3 [15]    | 80.2  | 94.4  | 5     | 170           | 36.6
TimeSformer-L [6]    | 80.7  | 94.7  | 3     | 2380          | 121.4
ViT-B-VTN [35]       | 78.6  | 93.7  | 1     | 4218          | 11.04
ViViT-L/16x2 [4]     | 80.6  | 94.7  | 12    | 1446          | 310.8
Swin-S [32]          | 80.6  | 94.5  | 12    | 166           | 49.8
Swin-B [32]          | 82.7  | 95.5  | 12    | 282           | 88.1
RGCN-800 (No Aug)    | 66.2  | 74.1  | 8     | 42            | 2.57
RGCN-800 (Full Aug)  | 76.4  | 91.1  | 8     | 42            | 2.57
RGCN-2000 (Full Aug) | 80.0  | 94.3  | 8     | 110           | 2.57

Table 4: Comparisons to the state-of-the-art on the Charades multi-label dataset. We report mAP scores, as more than one ground-truth action is possible.

Method                       | mAP  | FLOPs (·10^9) | Params (·10^6)
MoViNet-A2 [27]              | 32.5 | 6.59          | 4.8
MoViNet-A4 [27]              | 48.5 | 90.4          | 4.9
MoViNet-A6 [27]              | 63.2 | 306           | 31.4
TVN-1 [39]                   | 32.2 | 13            | 11.1
TVN-4 [39]                   | 35.4 | 106           | 44.2
AssembleNet-50 [41]          | 53.0 | 700           | 37.3
AssembleNet-101 [41]         | 58.6 | 1200          | 53.3
SlowFast 16×8 R101 [17]      | 45.2 | 7020          | 59.9
RGCN-800 (No Aug)            | 37.4 | 42            | 2.57
RGCN-800 (Full Aug)          | 43.1 | 42            | 2.57
RGCN-2000 (Full Aug)         | 45.3 | 110           | 2.57
RGCN-2000 (Full Aug) + K400  | 49.4 | 110           | 2.57

5 Conclusions and Future Work

In this paper, we present GraphVid, a graph video representation that enables video processing via graph neural networks. Furthermore, we propose a relational graph convolutional model that suits this representation. Our experimental study demonstrates this model's efficiency in performing video-related tasks while achieving performance comparable to the current state-of-the-art. An interesting avenue for future work is to explore new graph representations of videos, including learnable methods.
Additionally, we consider the development of new dedicated graph neural models for processing the unique and dynamic structure of the video-graph an interesting research direction. Finally, unified models for image and video understanding that disregard temporal edges could be explored in order to take advantage of the amount of data in both worlds.

References

1. Abadal, S., Jain, A., Guirado, R., López-Alonso, J., Alarcón, E.: Computing graph neural networks: A survey from algorithms to accelerators. ACM Computing Surveys (CSUR) 54(9), 1–38 (2021)
2. Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: SLIC superpixels. Tech. rep. (2010)
3. Akbari, H., Yuan, L., Qian, R., Chuang, W.H., Chang, S.F., Cui, Y., Gong, B.: VATT: Transformers for multimodal self-supervised learning from raw video, audio and text. arXiv preprint arXiv:2104.11178 (2021)
4. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lučić, M., Schmid, C.: ViViT: A video vision transformer. arXiv preprint arXiv:2103.15691 (2021)
5. Avelar, P.H., Tavares, A.R., da Silveira, T.L., Jung, C.R., Lamb, L.C.: Superpixel image classification with graph attention networks. In: SIBGRAPI. pp. 203–209. IEEE (2020)
6. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095 (2021)
7. Cao, Y., Xu, J., Lin, S., Wei, F., Hu, H.: GCNet: Non-local networks meet squeeze-excitation networks and beyond. In: ICCV Workshops (2019)
8. Chang, J., Wei, D., Fisher, J.W.: A video representation using temporal superpixels. In: CVPR. pp. 2051–2058 (2013)
9. Chen, Y., Kalantidis, Y., Li, J., Yan, S., Feng, J.: Multi-fiber networks for video recognition. In: ECCV. pp. 352–367 (2018)
10. Corso, G., Cavalleri, L., Beaini, D., Liò, P., Veličković, P.: Principal neighbourhood aggregation for graph nets. arXiv preprint arXiv:2004.05718 (2020)
11. Dadsetan, S., Pichler, D., Wilson, D., Hovakimyan, N., Hobbs, J.: Superpixels and graph convolutional neural networks for efficient detection of nutrient deficiency stress from aerial imagery. In: CVPR. pp. 2950–2959 (2021)
12. Dokania, S., Singh, V.: Graph representation learning for audio & music genre classification. arXiv preprint arXiv:1910.11117 (2019)
13. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
14. Du Tran, H.W., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition (2017)
15. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. arXiv preprint arXiv:2104.11227 (2021)
16. Feichtenhofer, C.: X3D: Expanding architectures for efficient video recognition. In: CVPR. pp. 203–213 (2020)
17. Feichtenhofer, C., Fan, H., Malik, J., He, K.: SlowFast networks for video recognition. In: ICCV. pp. 6202–6211 (2019)
18. Fernández, D., Varas, D., Espadaler, J., Masuda, I., Ferreira, J., Woodward, A., Rodríguez, D., Giró-i-Nieto, X., Carlos Riveiro, J., Bou, E.: ViTS: Video tagging system from massive web multimedia collections. In: ICCV Workshops. pp. 337–346 (2017)
19. Fey, M., Lenssen, J.E.: Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428 (2019)
20. Girdhar, R., Carreira, J., Doersch, C., Zisserman, A.: Video action transformer network. In: CVPR. pp. 244–253 (2019)
21. Gonzalo-Martin, C., Garcia-Pedrero, A., Lillo-Saavedra, M., Menasalvas, E.: Deep learning for superpixel-based classification of remote sensing images (September 2016), http://proceedings.utwente.nl/401/
22. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR. pp. 770–778 (2016)
23. Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al.: The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017)
24. Kim, A.: fast-slic. https://github.com/Algy/fast-slic (2019)
25. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
26. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)
27. Kondratyuk, D., Yuan, L., Li, Y., Zhang, L., Tan, M., Brown, M., Gong, B.: MoViNets: Mobile video networks for efficient video recognition. In: CVPR. pp. 16020–16030 (2021)
28. Kumar, A., Singh, S.S., Singh, K., Biswas, B.: Link prediction techniques, applications, and performance: A survey. Physica A: Statistical Mechanics and its Applications 553, 124289 (2020)
29. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
30. Li, X., Zhang, Y., Liu, C., Shuai, B., Zhu, Y., Brattoli, B., Chen, H., Marsic, I., Tighe, J.: VidTr: Video transformer without convolutions. arXiv preprint arXiv:2104.11746 (2021)
31. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: ECCV. pp. 740–755. Springer (2014)
32. Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video Swin transformer. arXiv preprint arXiv:2106.13230 (2021)
33. Mittal, S., et al.: A survey of accelerator architectures for 3D convolution neural networks. Journal of Systems Architecture, 102041 (2021)
34. Monti, F., Boscaini, D., Masci, J., Rodolà, E., Svoboda, J., Bronstein, M.M.: Geometric deep learning on graphs and manifolds using mixture model CNNs. In: CVPR. pp. 5115–5124 (2017)
35. Neimark, D., Bar, O., Zohar, M., Asselmann, D.: Video transformer network. arXiv preprint arXiv:2102.00719 (2021)
36. Oprea, S., Martinez-Gonzalez, P., Garcia-Garcia, A., Castro-Vargas, J.A., Orts-Escolano, S., Garcia-Rodriguez, J., Argyros, A.: A review on deep learning techniques for video prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)
37. Papon, J., Abramov, A., Schoeler, M., Worgotter, F.: Voxel cloud connectivity segmentation - supervoxels for point clouds. In: CVPR. pp. 2027–2034 (2013)
38. Pareek, P., Thakkar, A.: A survey on video-based human action recognition: recent updates, datasets, challenges, and applications. Artificial Intelligence Review 54(3), 2259–2322 (2021)
39. Piergiovanni, A., Angelova, A., Ryoo, M.S.: Tiny video networks. Applied AI Letters, e38 (2019)
40. Rong, Y., Huang, W., Xu, T., Huang, J.: DropEdge: Towards deep graph convolutional networks on node classification. arXiv preprint arXiv:1907.10903 (2019)
41. Ryoo, M.S., Piergiovanni, A., Tan, M., Angelova, A.: AssembleNet: Searching for multi-stream neural connectivity in video architectures. arXiv preprint arXiv:1905.13209 (2019)
42. Schlichtkrull, M., Kipf, T.N., Bloem, P., Van Den Berg, R., Titov, I., Welling, M.: Modeling relational data with graph convolutional networks. In: ESWC. pp. 593–607. Springer (2018)
43. Sigurdsson, G.A., Varol, G., Wang, X., Farhadi, A., Laptev, I., Gupta, A.: Hollywood in homes: Crowdsourcing data collection for activity understanding. In: ECCV. pp. 510–526. Springer (2016)
44. Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. arXiv preprint arXiv:1406.2199 (2014)
45. Srivastava, N., Mansimov, E., Salakhudinov, R.: Unsupervised learning of video representations using LSTMs. In: ICML. pp. 843–852. PMLR (2015)
46. Stutz, D., Hermans, A., Leibe, B.: Superpixels: An evaluation of the state-of-the-art. Computer Vision and Image Understanding 166, 1–27 (2018)
47. Suarez, J.J.P., Naval Jr, P.C.: A survey on deep learning techniques for video anomaly detection. arXiv preprint arXiv:2009.14146 (2020)
48. Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jégou, H.: Training data-efficient image transformers & distillation through attention. In: ICML. pp. 10347–10357. PMLR (2021)
49. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3D convolutional networks. In: ICCV. pp. 4489–4497 (2015)
50. Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition. In: CVPR. pp. 6450–6459 (2018)
51. Ullah, A., Ahmad, J., Muhammad, K., Sajjad, M., Baik, S.W.: Action recognition in video sequences using deep bi-directional LSTM with CNN features. IEEE Access 6, 1155–1166 (2017)
52. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems. pp. 5998–6008 (2017)
53. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., Bengio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)
54. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR. pp. 7794–7803 (2018)
55. Wang, X., Gupta, A.: Videos as space-time region graphs. In: ECCV. pp. 399–417 (2018)
56. Xie, R., Liu, Z., Jia, J., Luan, H., Sun, M.: Representation learning of knowledge graphs with entity descriptions. In: AAAI. vol. 30 (2016)
57. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: ECCV. pp. 305–321 (2018)
58. Xu, C., Corso, J.J.: Evaluation of super-voxel methods for early video processing. In: CVPR. pp. 1202–1209 (2012). https://doi.org/10.1109/CVPR.2012.6247802
59. Xu, C., Xiong, C., Corso, J.J.: Streaming hierarchical video segmentation. In: ECCV. pp. 626–639. Springer (2012)
60. Yue-Hei Ng, J., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., Toderici, G.: Beyond short snippets: Deep networks for video classification. In: CVPR. pp. 4694–4702 (2015)
61. Zhang, D., Yin, J., Zhu, X., Zhang, C.: Network representation learning: A survey. IEEE Transactions on Big Data 6(1), 3–28 (2018)
62. Zhang, S., Qin, Y., Sun, K., Lin, Y.: Few-shot audio classification with attentional graph neural networks. In: Interspeech. pp. 3649–3653 (2019)
-AYhRTcWDe5
The idea presented in the paper is simple but can effectively speed up action recognition; therefore, the paper should be accepted.
6: Marginally above acceptance threshold
Summary:
- The authors propose an efficient graph video representation, GraphVid, that can be used for action recognition with reduced time and memory requirements. GraphVid results in a large efficiency gain without decreasing the performance.

Positive points:
+ The idea presented in the paper is interesting and can facilitate future work on action recognition.
+ The paper experiments with 4 augmentation strategies to improve the model performance.
+ Thorough experiments and an ablation study show GraphVid's effectiveness.
+ Competitive results on two action recognition benchmarks, Kinetics-400 and Charades.

Negative points:
- Relevant related work is missing. GCNs have been used for video modelling before:
1) Yan, Sijie, Yuanjun Xiong, and Dahua Lin. "Spatial temporal graph convolutional networks for skeleton-based action recognition." Thirty-second AAAI conference on artificial intelligence. 2018.
2) Thakkar, Kalpit, and P. J. Narayanan. "Part-based graph convolutional network for action recognition." arXiv preprint arXiv:1809.04983 (2018).
3) Korban, Matthew, and Xin Li. "DDGCN: A dynamic directed graph convolutional network for action recognition." European Conference on Computer Vision. Springer, Cham, 2020.
4) Papadopoulos, Konstantinos, et al. "Vertex feature encoding and hierarchical temporal modeling in a spatial-temporal graph convolutional network for action recognition." arXiv preprint arXiv:1912.09745 (2019).
...
- Writing is sloppy and overly complex in places. The text can be simplified by removing sentences such as "to be defined hereunder" (line 207) and "The following is a description of how we utilize the superpixels to construct our video-graph representation." (lines 202-203).
- Spatial edges and figure 5. It is unclear whether the spatial graph for each frame is complete. I do not see an explanation of edge selection; however, it seems that in figure 3 only "neighbouring superpixels" are connected.
- Missing algorithm time complexity for graph generation (i.e., extraction of superpixels, graph construction).
- Prior knowledge incorporation (line 369): I do not see how optical flow is currently encoded in the graph video representation, especially due to the absence of coordinates. For example, if an object moves fast within consecutive frames, the distance between the respective superpixels over time might be larger than d_proximity. This way, information about the object's motion (direction) is lost completely.

Justification: The idea of using superpixels in combination with GCNs is, to my knowledge, novel. The experiments are thorough and show the effectiveness of the method. The paper needs some fixes in the text as indicated above. My rating is weak accept.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title GraphVid: It Only Takes a Few Nodes to Understand a Video ### Paper Abstract We propose a concise representation of videos that encode perceptually meaningful features into graphs. With this representation, we aim to leverage the large amount of redundancies in videos and save computations. First, we construct superpixel-based graph representations of videos by considering superpixels as graph nodes and create spatial and temporal connections between adjacent superpixels. Then, we leverage Graph Convolutional Networks to process this representation and predict the desired output. As a result, we are able to train models with much fewer parameters, which translates into short training periods and a reduction in computation resource requirements. A comprehensive experimental study on the publicly available datasets Kinetics-400 and Charades shows that the proposed method is highly cost-effective and uses limited commodity hardware during training and inference. \textbf{It reduces the computational requirements 10-fold} while achieving results that are comparable to state-of-the-art methods. We believe that the proposed approach is a promising direction that could open the door to solving video understanding more efficiently and enable more resource limited users to thrive in this research field. ### Paper Keywords ["videos", "nodes", "video", "representation", "graphvid", "video graphvid", "concise representation", "meaningful features", "graphs", "large amount"] ### Paper Content 000001002003004005006007008009010011012013014015016017018019020021022023024025026027028029030031032033034035036037038039040041042043044000001002003004005006007008009010011012013014015016017018019020021022023024025026027028029030031032033034035036037038039040041042043044ECCV#4861ECCV#4861GraphVid : It Only Takes a Few Nodes to Understand aVideoAnonymous ECCV submissionPaper ID 4861Abstract. We propose a concise representation of videos that encode perceptu-ally meaningful features into graphs. With this representation, we aim to lever-age the large amount of redundancies in videos and save computations. First,we construct superpixel-based graph representations of videos by considering su-perpixels as graph nodes and create spatial and temporal connections betweenadjacent superpixels. Then, we leverage Graph Convolutional Networks to pro-cess this representation and predict the desired output. As a result, we are ableto train models with much fewer parameters, which translates into short trainingperiods and a reduction in computation resource requirements. A comprehensiveexperimental study on the publicly available datasets Kinetics-400 and Charadesshows that the proposed method is highly cost-effective and uses limited com-modity hardware during training and inference. It reduces the computationalrequirements 10-fold while achieving results that are comparable to state-of-the-art methods. We believe that the proposed approach is a promising directionthat could open the door to solving video understanding more efficiently and en-able more resource limited users to thrive in this research field.1 IntroductionThe field of video understanding has gained prominence thanks to the rising popularityof videos, which has become the most common form of data on the web. 
On each newuploaded video, a variety of tasks can be performed, such as tagging [18], human ac-tion recognition [38], anomaly detection [47], etc. New video-processing algorithms arecontinuously being developed to automatically organize the web through the flawlessaccomplishment of the aforementioned tasks.Nowadays, Deep Neural Networks are the de-facto standard for video understand-ing [36]. However, with every addition of a new element to the training set (that is, a fulltraining video), more resources are required in order to satisfy the enormous computa-tional needs. On the one hand, the exponential increment in the amount of data raisesconcerns regarding our ability to handle it in the future. On the other hand, it has alsospurred an highly creative research field aimed at finding ways to mitigate this burden.Among the first-generation of video processing methods were ones geared towardadopting 2D convolution neural networks (CNNs), due to their computational efficiency[44]. Others decomposed 3D convolutions [14, 57] into simpler operators, or split acomplex neural network into an ensemble of lightweight networks [9]. However, videounderstanding has greatly evolved since then, with the current state-of-the-art meth-ods featuring costly attention mechanisms [4, 20, 32, 3, 15, 6, 30]. Beyond accuracy, a045046047048049050051052053054055056057058059060061062063064065066067068069070071072073074075076077078079080081082083084085086087088089045046047048049050051052053054055056057058059060061062063064065066067068069070071072073074075076077078079080081082083084085086087088089ECCV#4861ECCV#48612 ECCV-22 submission ID 4861prominent advantage of the latest generation of methods is that they process raw data,that is, video frames that do not undergo any advanced pre-processing. Meanwhile, pur-suing new video representations and incorporating pre-computed features to acceleratetraining is a promising direction that requires more extensive research.(a) Original image (b) Mean superpixelsFig. 1: A visual comparison between a pixel and a mean-superpixel representation. Onthe left, the original image is presented. On the right, we present the image formed bygenerating superpixel regions using SLIC and filling each region with its mean color.Prior to the renaissance of deep learning [29], much research was done on visual fea-ture generation. Two prominent visual feature generation methods are superpixels1andoptic-flow2. These techniques’ ability to encode perceptually meaningful features hasgreatly contributed to the success of computer vision algorithms. Superpixels provide aconvenient, compact representation of images that can be very useful for computation-ally demanding problems, while optic-flow provides hints about motion. We rely onthese methods to construct a novel representation of videos that encodes sufficient in-formation for video understanding: 1) adjacent pixels are grouped together in the formof superpixels, and 2) temporal relations and proximities are expressed via graph con-nectivity. The example depicted in Figure 1 provides an intuition for the sufficiencyof superpixel representation for scene understanding. It contains the superpixel regionsobtained via SLIC [2], with each region filled with the mean color. One can clearly dis-cern a person playing a guitar in both images. A different way of depicting the relationsbetween superpixels is a graph with nodes representing superpixels [34, 11, 5]. 
Such arepresentation has the advantage of being invariant to rotations and flips, which obvi-ates the need for further augmentation. We here demonstrate how this representationcan reduce the computational requirements for processing videos.Recent years have seen a surge in the utilization of Graph Neural Networks (GNNs)[26] in tasks that involve images [34, 11, 5], audio [12, 62] and other data forms [55, 56,1]. In this paper, we propose GraphVid , a concise graph representation of videos that en-ables video processing via GNNs. GraphVid constructs a graph representation of videos1Superpixel techniques segment an image into regions by considering similarity measures, de-fined using perceptual features.2Optic-flow is the pattern of the apparent motion of an object(s) in the image between twoconsecutive frames due to the movement of the object or the camera.090091092093094095096097098099100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134090091092093094095096097098099100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134ECCV#4861ECCV#4861ECCV-22 submission ID 4861 3that is subsequently processed via a GCN to predict a target. We intend to exploit thepower of graphs for efficient video processing. To the best of our knowledge, we are thefirst to utilize a graph-based representation of videos for efficiency. GraphVid dramat-ically reduces the memory footprint of a model, enabling large batch-sizes that trans-late to better generalization. Moreover, it utilizes models with an order-of-magnitudefewer parameters than the current state-of-the-art models while preserving the predic-tive power. In summary, our contributions are:1. We present GraphVid - a simple and intuitive, yet sufficient representation of videoclips. This simplicity is crucial for delivering efficiency.2. We propose a dedicated GNN for processing the proposed representation. The pro-posed architecture is compared with conventional GNN models in order to demon-strate the importance of each component of GraphVid .3. We present 4 types of new augmentations that are directly applied to the video-graph representation. A thorough ablation study of their configurations is preformedin order to demonstrate the contribution of each.4. We perform a thorough experimental study, and show that GraphVid greatly out-performs previous methods in terms of efficiency - first and foremost, the paperutilizes GNNs for efficient video understanding. We show that it successfully re-duces computations while preserving much of the performance of state-of-the-artapproaches that utilize computationally demanding models.2 Related Work2.1 Deep Learning for Video UnderstandingCNNs have found numerous applications in video processing [33, 50, 60]. These in-clude LSTM-based networks that perform per-frame encoding [45, 51, 60] and the ex-tension of 2D convolutions to the temporal dimension, e.g., 3D CNNs such as C3D[49], R2D [44] and R(2+1)D [50].The success of the Transformer model [52] has led to the development of attention-based models for vision tasks, via self-attention modules that were used to model spatialdependencies in images. NLNet [54] was the first to employ self-attention in a CNN.With this novel attention mechanism, NLNet possible to model long-range dependen-cies between pixels. 
The next model to be developed was GCNet [7], which simplifiedthe NL-module, thanks to its need for fewer parameters and computations, while pre-serving its performance. A more prominent transition from CNNs to Transformers be-gan with Vision Transformer (ViT) [13], which prompted research aimed at improvingits effectiveness on small datasets, such as Deit [48]. Later, vision-transformers wereadapted for video tasks [35, 4, 6, 15, 30, 32], now crowned as the current state-of-the-artthat top the leader-boards of this field.The usage of graph representation in video understanding sparsely took place inthe work of Wang [55]. They used a pre-trained Resnet variants [22] on the MSCOCOdataset [31] in order to generate object bounding boxes of interest on each video frame.These bounding boxes are later used for the construction of a spatio-temporal graphthat describes how objects change through time, and perform classification on top of135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179ECCV#4861ECCV#48614 ECCV-22 submission ID 4861the spatio-temporal graph with graph convolutional neural networks [26]. However, wenote that the usage of a large backbone for generating object bounding boxes is harmfulfor performance. We intend to alleviate this by proposing a lighter graph representation.In combination of a dedicated GNN architecture, our representation greatly outperforms[55] in all metrics.2.2 Superpixel Representation of Visual DataSuperpixels are groups of perceptually similar pixels that can be used to create visuallymeaningful entities while heavily reducing the number of primitives for subsequentprocessing steps [46]. The efficiency of the obtained representation has led to the de-velopment of many superpixel-generation algorithms for images [46]. This approachwas adapted for volumetric data via the construction of supervoxels [37], which are thetrivial extension to depth. These methods were adjusted for use in videos [58] by treat-ing the temporal dimension as depth. However, this results in degraded performance,as inherent assumptions regarding neighboring points in the 3D space do not apply tovideos with non-negligible motion. Recent approaches especially designed to deal withvideos consider the temporal dimensions for generating superpixels that are coherentin time. Xu et a.l [59] proposed a hierarchical graph-based segmentation method. Thiswas followed by the work of Chang et a.l [8], who suggested that Temporal Superpixels(TSPs) can serve as a representation of videos using temporal superpixels by modelingthe flow between frames with a bilateral Gaussian process.2.3 Graph Convolutional Neural NetworksIntroduced in [26], Graph Convolutional Networks (GCNs) have been widely adoptedfor graph-related tasks [61, 28]. The basic form of a GCN uses aggregators, such asaverage and summation, to obtain a node representation given its neighbors. This basicform was rapidly extended to more complex architectures with more sophisticated ag-gregators. For instance, Graph Attention Networks [53] use dot-product-based attentionto calculate weights for edges. Relational GCNs [42] add to this framework by also con-sidering multiple edge types, namely, relations (such as temporal and spatial relations),and the aggregating information from each relation via separate weights in a singlelayer. 
Recently, GCNs have been adopted for tasks involving audio [12, 62] and images[34, 11, 5]. Following the success of graph models to efficiently perform image-basedtasks, we are eager to demonstrate our extension of the image-graph representation tovideos.3GraphVid - A Video-Graph RepresentationIn this section, we introduce the methodology of GraphVid . First, we present our methodfor video-graph representation generation, depicted in Figure 2 and described in Algo-rithm 1. Then, we present our training methodology that utilizes this representation.Finally, we discuss the benefits of GraphVid and propose several augmentations.180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224ECCV#4861ECCV#4861ECCV-22 submission ID 4861 5Fig. 2: The flow of GraphVid . Given a video clip, we generate superpixels using SLICfor each frame. The superpixels are used to construct a region-adjacency graph of aframe, with superpixels as nodes. Then, the graph sequence is connected via temporalproximities to construct a dynamic graph, which is later fed into a GNN for prediction.3.1 OverviewIn our framework, we deal with video clips that are sequences of Tvideo framesv∈RT×C×H×W. The goal is to transform vinto a graph that is sufficiently infor-mative for further processing. To achieve this, we use SLIC [2] to generate Sseg-mented regions, called superpixels , over each frame. We denote each segmented regionasRt,i, where t∈[T]represents the temporal frame index, and i∈[S]the superpixel-segmented region index. The following is a description of how we utilize the superpixelsto construct our video-graph representation.Graph Elements - We define the undirected graph Gas a 3-tuple G= (V,E,R), whereV={Rt,i|t∈[T], i∈[S]}is the set of nodes representing the segmented regions, Eis the set of labeled edges (to be defined hereunder) and R={spatial, temporal }is a set of relations as defined in [42]. Each node Rt,iis associated with an attributeRt,i.c∈R3representing the mean RGB color in that segmented region. Additionally,we refer to Rt,i.yandRt,i.xas the coordinates of the superpixel’s centroid, which weuse to compute the distances between superpixels. These distances, which will laterserve as the edge attributes of the graph, are computed bydtq→tpi,j =sRtq,i.y−Rtp,j.yH2+Rtq,i.x−Rtp,j.xW2. (1)Here, tq, tp∈[T]denote frame indices, and i, j∈[S]denote superpixel indices gen-erated for the corresponding frames. The set of edges Eis composed of two types: 1)intra-frame edges (denoted Espatial) - edges between nodes corresponding to superpix-els in the same frame. We refer to these edges as spatial edges .2)inter-frame edges(denoted Etemporal) - edges between nodes corresponding to superpixels in two se-quential frames. We refer to these edges as temporal edges . Finally, the full set of edgesis given by E=Espatial∪ Etemporal. 
Following is a description of how we constructboth components.225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269ECCV#4861ECCV#48616 ECCV-22 submission ID 4861Spatial Edges - In similar to [5], we generate a region-adjacency graph for each frame,with edge attributes describing the distances between superpixel centroids. The nota-tionEspatialt refers to the set of the spatial-edges connecting nodes corresponding tosuperpixels in the frame t, andEspatial=STt=1Espatialt .Each edge eti,j∈ Espatialisassociated with an attribute that describes the euclidean distance between the two su-perpixel centroids iandjin frame t, that is, dt→ti,j. These distances provide informationabout the relations between the superpixels. Additionally, the distances are invariant torotations and image-flips, which eliminates the need for those augmentations. Note thatnormalization of the superpixels’ centroid coordinates is required in order to obscureinformation regarding the resolution of frames, which is irrelevant for many tasks, suchas action classification. In Figure 3, we demonstrate the procedure of spatial edge gen-eration for a cropped image that results in a partial graph of the whole image. Each su-perpixel is associated with a node, which is connected via edges to other adjacent nodes(with the distances between the superpixels’ centroids serving as edge attributes).Fig. 3: Spatial edge generation. First, superpixels are generated. Each superpixel is rep-resented as a node, which is connected via its edges to other such nodes within a frame.Each node is assigned the mean color of the respective segmented region, and each edgeis assigned the distances between the superpixel centroids connected by that edge.Temporal Edges - In modeling the temporal relations, we aim to connect nodes thattend to describe the same objects in subsequent frames. To do so, we rely on the as-sumption that in subsequent frames, such superpixels are attributed similar colors andthe same spatial proximity. To achieve this, for each superpixel Rt,i, we construct aneighborhood Nt,ithat contains superpixels from its subsequent frame whose centroidshave a proximity of at most dproximity ∈(0,1]with respect to the euclidean distance.Then, we find the superpixel with the most similar color in this neighborhood. As aresult, the tthframe is associated with the set of edges Etemporalt→t+1 that model temporalrelations with its subsequent frame, formally:Nt,i={Rt+1,j|dt→t+1i,j < dproximity }, (2)neighbor (Rt,i) = argminRt+1,j∈Nt,i|Rt,i.c−Rt+1,j.c|2, (3)Etemporalt→t+1 ={(Rt,i, temporal, neighbor (Rt,i)|i∈[S]}. 
Equipped with these definitions, we define the set of temporal edges connecting nodes corresponding to superpixels in frame t to superpixels in frame t+1 as the union of the temporal edge sets generated for all the frames: E^{temporal} = \bigcup_{t=1}^{T-1} E^{temporal}_{t→t+1}.

Algorithm 1 Graph Generation
Input: v ∈ R^{T×C×H×W}  ▷ The input video clip
Parameters: S ∈ N  ▷ Number of superpixels per frame
            d_proximity ∈ (0, 1]  ▷ Diameter of neighborhoods
Output: G = (V, E, R)  ▷ A video-graph
  V, V_last, E^spatial, E^temporal ← ∅, ∅, ∅, ∅
  for t ∈ [T] do
    SP ← SLIC(v[t], S)
    V ← V ∪ SP
    E^spatial ← E^spatial ∪ regionAdjacentEdges(SP)
    E^temporal_{t−1→t} ← ∅
    for R_{t−1,i} ∈ V_last do
      N_{t−1,i} ← {R_{t,j} | d^{t−1→t}_{i,j} < d_proximity}
      nn_{t−1,i} ← argmin_{R_{t,j} ∈ N_{t−1,i}} |R_{t−1,i}.c − R_{t,j}.c|_2
      E^temporal_{t−1→t} ← E^temporal_{t−1→t} ∪ {(R_{t−1,i}, temporal, nn_{t−1,i})}
    end for
    E^temporal ← E^temporal ∪ E^temporal_{t−1→t}
    V_last ← SP
  end for
  return G = (V, E = E^spatial ∪ E^temporal, R = {spatial, temporal})

3.2 Model Architecture
In order to model both the spatial and temporal relations between superpixels, our model primarily relies on the Neural Relational Model [42], which is an extension of GCNs [26] to large-scale relational data. In a Neural Relational Model, the propagation model for calculating the forward-pass update of a node, denoted by v_i, is defined as

h_i^{(l+1)} = \sigma\Big( \sum_{r \in R} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)} \Big), (5)

where N_i^r denotes the set of neighbor indices of node i under relation r ∈ R (not to be confused with the notation N_{t,i} from Eq. 2), and c_{i,r} is a problem-specific normalization constant that can either be learned or chosen in advance (such as c_{i,r} = |N_i^r|). To incorporate edge features, we adapt the approach proposed in [10], which concatenates node and edge attributes as a layer's input, yielding

h_i^{(l+1)} = \sigma\Big( \sum_{r \in R} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} [h_j^{(l)}, e_{i,j}] + W_0^{(l)} h_i^{(l)} \Big), (6)

where e_{i,j} is the feature of the edge connecting nodes v_i and v_j.
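For illustration, a minimal implementation of the layer in Eq. (6) could look as follows. This is our simplification with c_{i,r} = |N_i^r| fixed in advance, not the exact layer used in our experiments; class and argument names are illustrative.

```python
# Minimal sketch of Eq. (6): relational graph convolution with edge-feature concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelEdgeConv(nn.Module):
    def __init__(self, in_dim, edge_dim, out_dim, num_relations=2):
        super().__init__()
        # One weight matrix W_r per relation, applied to [h_j, e_ij], plus self-loop W_0.
        self.rel = nn.ModuleList(
            nn.Linear(in_dim + edge_dim, out_dim, bias=False) for _ in range(num_relations)
        )
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, edge_index, edge_attr, edge_type):
        # h: [N, in_dim]; edge_index: [2, E] (source j -> target i);
        # edge_attr: [E, edge_dim]; edge_type: [E] with values in {0, ..., R-1}.
        out = self.self_loop(h)
        src, dst = edge_index
        for r, lin in enumerate(self.rel):
            m = edge_type == r
            msg = lin(torch.cat([h[src[m]], edge_attr[m]], dim=-1))
            ones = torch.ones_like(dst[m], dtype=h.dtype)
            deg = torch.zeros(h.size(0), dtype=h.dtype, device=h.device)
            deg.index_add_(0, dst[m], ones)           # c_{i,r} = |N_i^r|
            agg = torch.zeros_like(out)
            agg.index_add_(0, dst[m], msg)            # sum of messages per target node
            out = out + agg / deg.clamp(min=1).unsqueeze(-1)
        return F.elu(out)
```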
3.3 Augmentations
We introduce a few possible augmentations that we found useful for training our model, as they improved generalization.

Additive Gaussian Edge Noise (AGEN) - Edge attributes represent distances between superpixel centroids. The coordinates of those centroids may vary due to different superpixel shapes with different centers of mass. To compensate for this, we add a certain amount of noise to each edge attribute. Given a hyper-parameter σ_edge, for each edge attribute e_{u,v} and for each training iteration, we sample a normally distributed variable z_{u,v} ∼ N(0, σ_edge) that is added to the edge attribute.

Additive Gaussian Node Noise (AGNN) - Node attributes represent the colors of regions in each frame. Similar to edge attributes, the mean color of each segmented region may vary due to different superpixel shapes. To compensate for this, we add a certain amount of noise to each node attribute. Given a hyper-parameter σ_node, for each node attribute v.c of dimension d_c and for each training iteration, we sample a normally distributed variable z_v ∼ N_{d_c}(0, σ_node · I_{d_c}) that is added to the node attribute.

Random Removal of Spatial Edges (RRSE) - This augmentation mimics the regularization effect introduced in DropEdge [40]. Moreover, since the removal of edges leads to fewer message-passing operations in a GCN, this also accelerates training and inference. To perform this, we choose a probability p_edge ∈ [0, 1]; each edge e is then preserved with probability p_edge.

Random Removal of Superpixels (RRS) - SLIC [2] is sensitive to its initialization. Consequently, each video clip may have several graph representations during different training iterations and at inference. This can be mitigated by removing a certain amount of superpixels. The outcome is fewer nodes in the corresponding representative graph, as well as fewer edges. Similar to RRSE, we choose a probability p_node ∈ [0, 1] so that each superpixel is preserved with probability p_node.
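The following sketch illustrates how these four augmentations act on the node and edge attribute tensors. It is a simplification (and the names are ours): p_edge is applied to all edges rather than spatial edges only, and node removal is realized by masking a node's incident edges rather than reindexing the graph.

```python
# Minimal sketch of the four augmentations of Sec. 3.3; names are illustrative.
import torch

def augment(x, edge_index, edge_attr, sigma_node, sigma_edge, p_node, p_edge):
    x = x + sigma_node * torch.randn_like(x)                          # AGNN
    edge_attr = edge_attr + sigma_edge * torch.randn_like(edge_attr)  # AGEN
    keep_node = torch.rand(x.size(0)) < p_node                        # RRS
    keep_edge = torch.rand(edge_index.size(1)) < p_edge               # RRSE
    # Drop edges that touch a removed node (simplified node removal).
    keep_edge &= keep_node[edge_index[0]] & keep_node[edge_index[1]]
    return x, edge_index[:, keep_edge], edge_attr[keep_edge]
```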
3.4 Benefits of GraphVid
Invariance Qualification - The absence of coordinates leads to invariance in the spatial dimension of each frame. It is evident that such a representation is invariant to rotation, horizontal flip and vertical flip, since the relations between different parts of the image are solely characterized by distances. This, in turn, obviates the need to perform such augmentations during training.

Efficiency - We argue that our graph-based representation is more efficient than raw frames. To illustrate this, let T, C, H and W be the original dimensions of the video clip; that is, the number of frames, the number of channels in each frame, and the height and width of a frame, respectively. The raw representation thus requires T·C·H·W parameters to encode a single input. Now, to calculate the size of the video-graph representation, let S be the number of superpixels in a single frame. By construction, there are at most 4·S edges in each frame, because SLIC constrains each superpixel to have 4 adjacent superpixels. Each edge contains 3 values, corresponding to the distance on the image grid, the source node and the target node. Additionally, there are at most S edges between every temporal step. This results in 3·(4·S [intra-frame edges] + (T−1)·S [inter-frame edges]) + C·T·S [superpixels] parameters in total. For example, by this count a clip with T = 20, C = 3 and H = W = 224 requires 20·3·224·224 ≈ 3.0M raw values, whereas with S = 650 the graph representation requires 3·(4·650 + 19·650) + 3·20·650 = 83,850 values. Typically, the graph representation requires far fewer parameters because we choose S so that S ≪ H·W.

Prior Knowledge Incorporation - Optical flow and over-segmentation are encoded within the video-graph representation using the inter-frame and intra-frame edges. This incorporates strong prior knowledge within the resulting representation. For example, optical flow dramatically improved the accuracy in the two-stream methodology proposed in [44]. Additionally, over-segmentation using superpixels has been found useful for input features of machine learning models due to the limited loss of important details, accompanied by a dramatic reduction in processing time by means of reducing the number of elements of the input [21, 11, 5].

4 Experiments
We validated GraphVid on 2 human-action-classification benchmarks. The goal of human action classification is to determine the human-involved action that occurs within a video. The objectives of this empirical study were twofold:
– Analyze the impact of the various parameters on the accuracy of the model.
– As we first and foremost target efficiency, we sought to examine the resource consumption of GraphVid in terms of Floating Point Operations (FLOPs). We followed the conventional protocol [16], which uses single-clip FLOPs as a basic unit of computational cost. We show that we are able to achieve a significant improvement in efficiency over previous methods while preserving state-of-the-art performance.

4.1 Setup
Datasets - We utilize two commonly used datasets for action classification: Kinetics-400 (K400) [23] and Charades [43]. Kinetics-400 [23] is a large-scale video dataset released in 2017 that contains 400 classes, with each category consisting of more than 400 videos. It originally had, in total, around 240K, 19K, and 38K videos for the training, validation and testing subsets, respectively. Kinetics datasets are gradually shrinking over time due to videos being taken offline, making it difficult to compare against less recent works. We used a dataset containing 208K, 17K and 33K videos for training, validation and test respectively, and we report on the most recently available videos. Each video lasts approximately 10 seconds and is assigned a label. The Charades dataset [43] is composed of 9,848 videos of daily indoor activities, each of an average length of 30 seconds. In total, the dataset contains 66,500 temporal annotations for 157 action classes. In the standard split, there are 7,986 training videos and 1,863 validation videos, sampled at 12 frames per second. We follow prior art by reporting the Top-1 and Top-5 recognition accuracy for Kinetics-400 and mean average precision (mAP) for Charades.

Fig. 4: The general graph neural network architecture we use throughout our experimental study.

Network Architecture and Training - We use GNN variants as backbones for our experiments and feed each of them with our video-graph representation. Specifically, we consider Graph Convolutional Networks [26] (denoted GCNs), Graph Attention Networks [53] (denoted GATs) and Relational Graph Convolutional Networks [42] (denoted RGCNs). The general architecture of our backbones is depicted in Figure 4. It consists of 2 fully-connected (FC) layers with exponential linear unit (ELU) activations that transform the node feature vectors into a 256D feature space. Then come 4 layers of the corresponding GNN layer type (that is, either GCN, GAT or RGCN, along with the edge-feature concatenation from Eq. 6) with a hidden size of 512 and ELU activations, followed by global mean pooling, dropout with a probability of 0.2 and a linear layer whose output is the predicted logits. For the GAT layers, we use 4 attention heads in each layer, and average the attention heads' results to obtain the desired hidden layer size. For the RGCN layers, we specify 2 relations, which correspond to the spatial and temporal relations, as described in Section 3.
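A minimal sketch of this backbone follows, using PyTorch Geometric's RGCNConv; the edge-feature concatenation of Eq. (6) and the GCN/GAT variants are omitted for brevity, and the class name is illustrative. A GCN or GAT variant is obtained by swapping the convolution class.

```python
# Minimal sketch of the Figure 4 backbone; a simplification, not the exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv, global_mean_pool

class GraphVidBackbone(nn.Module):
    def __init__(self, node_dim=3, num_classes=400, num_relations=2):
        super().__init__()
        # Two FC layers with ELU mapping node features (mean RGB) into 256D.
        self.enc = nn.Sequential(nn.Linear(node_dim, 256), nn.ELU(),
                                 nn.Linear(256, 256), nn.ELU())
        # Four relational GNN layers with hidden size 512.
        self.convs = nn.ModuleList(
            RGCNConv(256 if i == 0 else 512, 512, num_relations) for i in range(4)
        )
        self.drop = nn.Dropout(0.2)
        self.head = nn.Linear(512, num_classes)

    def forward(self, x, edge_index, edge_type, batch):
        h = self.enc(x)
        for conv in self.convs:
            h = F.elu(conv(h, edge_index, edge_type))
        h = global_mean_pool(h, batch)   # one vector per video-graph
        return self.head(self.drop(h))   # predicted logits
```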
We use the Adam [25] optimizer with a learning rate of 1e−3 and do not change it throughout training.

We divide the videos into clips using a sliding window of 20 frames, with a stride of 2 between every 2 consecutive frames and a stride of 10 between every 2 consecutive video clips. In all the experiments, we used a fixed batch size of 200, which captures the context of a time window spanning 200×20 = 4000 frames per batch.

Inference - At the test phase, we use the same sliding-window methodology as in training. We follow the common practice of processing multiple views of a long video and averaging the per-view logits to obtain the final results. The number of views is determined on the validation dataset.

Implementation Details - All the experiments were run on an Ubuntu 18.04 machine with an Intel i9-10920X, 93GB RAM and 2 GeForce RTX 3090 GPUs. Our implementation of GraphVid is in Python3. Specifically, to generate superpixels, we use the SLIC [2] algorithm via its implementation fast-slic [24]. To generate graphs and train the graph neural models, we use PyTorch-Geometric [19]. We use a fixed seed for SLIC's initialization and cache the generated graphs during the first training epochs in order to further reduce the number of computations.

4.2 Ablation Study
We conduct an in-depth study on Kinetics-400 to analyze the performance gain contributed by incorporating the different components of GraphVid.

Graph Neural Network Variants and Number of Superpixels per Frame - We assess the performance of different GNN variants: GCN [26] is trained without edge relations (i.e. temporal and spatial edges are treated via the same weights). GAT [53] is trained by employing the attention mechanism for neighborhood aggregation, without edge relations. RGCN [42] is trained with edge relations, as described in Section 3.2.

The results of the action classification on Kinetics-400 are shown in Figure 5. In this series, the number of views is fixed at 8, which is the number of views found to be most effective on the validation set. For all variants, increasing the number of superpixels per frame (S) contributes to the accuracy of the model. We notice a significant improvement in accuracy over the lower range of the number of superpixels, while the accuracy begins to saturate for S ≥ 650. Increasing the number of superpixels further leads to bigger inputs, which require more computations. As our goal is to maximize efficiency, we do not experiment with larger inputs in this section.

Fig. 5: The effect of varying the number of superpixels per frame on Top-1 and Top-5 test accuracy on Kinetics-400, for the GCN, GAT and RGCN variants.

We further present in Table 1 the models' specifications for 800 superpixels, which is the best-performing configuration in this series of experiments. Not surprisingly, the simple GCN variant requires the least amount of computation among the three. Meanwhile, the RGCN variant requires fewer computations than GAT and achieves a higher level of accuracy.
We conclude that it is beneficial to incorporate edge relations when wishing to encode temporal and spatial relations in videos, and that those features are not easily learned by heavy computational models such as GAT.

Table 1: Comparison of model specifications for various architectures. We report the Top-1 and Top-5 accuracy on Kinetics-400.
Model   Top-1   Top-5   FLOPs (·10^9)   Params (·10^6)
GCN     50.1    61.6    28              2.08
GAT     54.7    64.5    56              3.93
RGCN    66.2    74.1    42              2.99

Fig. 6: The impact of the proposed augmentations on test accuracy on Kinetics-400: additive Gaussian edge noise (AGEN), additive Gaussian node noise (AGNN), random removal of spatial edges (RRSE) and random removal of superpixels (RRS).

Augmentations - We assessed the impact of the augmentations on test performance and their ability to alleviate over-fitting. For this purpose, we chose the best configuration obtained from the previous experiments, that is, RGCN with 800 superpixels per frame, and trained it while adding one type of augmentation at a time. The results of this series are depicted in Figure 6. Each graph shows the level of accuracy reached by training the model with one of the various parameters that control the augmentation.

We begin with the analysis of AGEN and AGNN. Both augmentations add Gaussian noise to the attributes of the edges and the nodes of the input graphs, with the corresponding parameters controlling the standard deviation of that Gaussian noise. The impact of these augmentations is less noticeable as these parameters head towards 0, since lower values reflect the scenarios in which little or no augmentation is performed. Continuously increasing the parameter brings about a gradual improvement in accuracy, until a turning point is reached, after which the level of accuracy starts to decline until it reaches ∼1/400, which resembles a random classifier. The decrease in accuracy stems from the noise obscuring the original signal, allegedly forcing the classifier to classify noise that is not generalizable to the test set. In the cases of RRSE and RRS, the random removal of spatial edges harms the accuracy of the model for all values of p_edge < 1. This finding leads us to conclude that spatial edges encode meaningful information about the spatial relations between the superpixel entities. Moreover, slightly removing nodes positively impacts the level of accuracy, reaching a peak at around p_node ≈ 0.8.
To conclude this series, we present the values that lead to the best Top-1 accuracy score in Table 2.

Table 2: Augmentation parameters and their optimized values.
Param   σ_edge   σ_node   p_edge   p_node
Value   0.4      0.2      1        0.8
Top-1   74.5     73       66       70
Top-5   85       83       74       76

Fig. 7: Model FLOPs vs. performance: (a) FLOPs vs. Kinetics-400 accuracy; (b) FLOPs vs. Charades mAP. Green bubbles indicate GraphVid variants from Table 3 and Table 4. Identities of the other models are omitted in order to avoid overloading the plot. GraphVid achieves performance comparable to the state of the art while greatly reducing the number of parameters and FLOPs. For Kinetics-400, RGCN-2000 with the full set of augmentations achieves almost the same performance as all the computationally heavy models on the plot, while requiring the least amount of parameters and FLOPs. For Charades, RGCN-2000 with the full set of augmentations and pretraining on Kinetics-400 is on par with the state of the art, with fewer compute requirements. Bubble radius indicates the number of parameters of the model.

4.3 Comparison to the State-of-the-Art
Kinetics-400 - We present the Kinetics-400 results for our RGCN model variant in Table 3 and Figure 7a, along with comparisons to prior art, including convolution-based and transformer-based methods. Our results are denoted RGCN-d, where d represents the number of superpixels. Additionally, we use the set of augmentations with the individually optimized hyper-parameters from Table 2 to train these models. First, when the RGCN-800 model is trained with the full set of augmentations (denoted Full Aug), it achieves a significantly higher Top-1 accuracy than when it is trained without any augmentation (denoted No Aug) or when each augmentation is applied individually. These results demonstrate the effectiveness of our model and show that our carefully designed augmentations can alleviate overfitting and improve generalization on the test set. Second, all our RGCNs require orders-of-magnitude fewer computations than the current state-of-the-art architectures, as well as more than ×10 fewer parameters.

Charades - We train 2 RGCN variants with 800 and 2000 superpixels per frame, with the same set of augmentations and hyper-parameters found in Table 2. Additionally, we follow prior art [17, 15] by pre-training on K-400, followed by replacing the last FC layer to match the output dimensionality and fine-tuning on Charades. Table 4 and Figure 7b show that when our RGCN model is trained with 2000 superpixels per frame, its mAP score is comparable to the current state of the art, but this score is reached with orders-of-magnitude fewer computations and considerably fewer parameters than prior art.

Table 3: Comparisons to the state-of-the-art on the Kinetics-400 dataset. We report the Top-1 and Top-5 accuracy scores. The top section of the table depicts convolution-based models.
The middle section depicts transformer-based models, and the bottom section represents our graph-based models.
Method                  Top-1   Top-5   Views   FLOPs (·10^9)   Params (·10^6)
SlowFast R101+N [17]    79.8    93.9    30      234             59.9
X3D-XXL R101+N [16]     80.4    94.6    30      144             20.3
MViT-B, 32×3 [15]       80.2    94.4    5       170             36.6
TimeSformer-L [6]       80.7    94.7    3       2380            121.4
ViT-B-VTN [35]          78.6    93.7    1       4218            11.04
ViViT-L/16x2 [4]        80.6    94.7    12      1446            310.8
Swin-S [32]             80.6    94.5    12      166             49.8
Swin-B [32]             82.7    95.5    12      282             88.1
RGCN-800 (No Aug)       66.2    74.1    8       42              2.57
RGCN-800 (Full Aug)     76.4    91.1    8       42              2.57
RGCN-2000 (Full Aug)    80.0    94.3    8       110             2.57

Table 4: Comparisons to the state-of-the-art on the Charades multi-label dataset. We report mAP scores, as more than one ground-truth action is possible.
Method                      mAP    FLOPs (·10^9)   Params (·10^6)
MoVieNet-A2 [27]            32.5   6.59            4.8
MoVieNet-A4 [27]            48.5   90.4            4.9
MoVieNet-A6 [27]            63.2   306             31.4
TVN-1 [39]                  32.2   13              11.1
TVN-4 [39]                  35.4   106             44.2
AssembleNet-50 [41]         53.0   700             37.3
AssembleNet-101 [41]        58.6   1200            53.3
SlowFast 16×8 R101 [17]     45.2   7020            59.9
RGCN-800 (No Aug)           37.4   42              2.57
RGCN-800 (Full Aug)         43.1   42              2.57
RGCN-2000 (Full Aug)        45.3   110             2.57
RGCN-2000 (Full Aug)+K400   49.4   110             2.57

5 Conclusions and Future Work
In this paper, we present GraphVid, a graph video representation that enables video processing via graph neural networks. Furthermore, we propose a relational graph convolutional model that suits this representation. Our experimental study demonstrates this model's efficiency in performing video-related tasks while achieving performance comparable to the current state of the art. An interesting avenue for future work is to explore new graph representations of videos, including learnable methods. Additionally, we consider the development of new dedicated graph neural models for processing the unique and dynamic structure of the video-graph an interesting research direction. Finally, unified models for image and video understanding that disregard temporal edges could be explored in order to take advantage of the amount of data in both worlds.

References
1. Abadal, S., Jain, A., Guirado, R., López-Alonso, J., Alarcón, E.: Computing graph neural networks: A survey from algorithms to accelerators. ACM Computing Surveys (CSUR) 54(9), 1–38 (2021)
2. Achanta, R., Shaji, A., Smith, K., Lucchi, A., Fua, P., Süsstrunk, S.: SLIC superpixels. Tech. rep. (2010)
3. Akbari, H., Yuan, L., Qian, R., Chuang, W.H., Chang, S.F., Cui, Y., Gong, B.: VATT: Transformers for multimodal self-supervised learning from raw video, audio and text. arXiv preprint arXiv:2104.11178 (2021)
4. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lučić, M., Schmid, C.: ViViT: A video vision transformer. arXiv preprint arXiv:2103.15691 (2021)
5. Avelar, P.H., Tavares, A.R., da Silveira, T.L., Jung, C.R., Lamb, L.C.: Superpixel image classification with graph attention networks. In: SIBGRAPI. pp. 203–209. IEEE (2020)
6. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095 (2021)
7. Cao, Y., Xu, J., Lin, S., Wei, F., Hu, H.: GCNet: Non-local networks meet squeeze-excitation networks and beyond. In: ICCV Workshops. pp. 0–0 (2019)
8. Chang, J., Wei, D., Fisher, J.W.: A video representation using temporal superpixels. In: CVPR. pp. 2051–2058 (2013)
9. Chen, Y., Kalantidis, Y., Li, J., Yan, S., Feng, J.: Multi-fiber networks for video recognition. In: ECCV. pp. 352–367 (2018)
10. Corso, G., Cavalleri, L., Beaini, D., Liò, P., Veličković, P.: Principal neighbourhood aggregation for graph nets. arXiv preprint arXiv:2004.05718 (2020)
11. Dadsetan, S., Pichler, D., Wilson, D., Hovakimyan, N., Hobbs, J.: Superpixels and graph convolutional neural networks for efficient detection of nutrient deficiency stress from aerial imagery. In: CVPR. pp. 2950–2959 (2021)
12. Dokania, S., Singh, V.: Graph representation learning for audio & music genre classification. arXiv preprint arXiv:1910.11117 (2019)
13. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
14. Du Tran, H.W., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition (2017)
15. Fan, H., Xiong, B., Mangalam, K., Li, Y., Yan, Z., Malik, J., Feichtenhofer, C.: Multiscale vision transformers. arXiv preprint arXiv:2104.11227 (2021)
16. Feichtenhofer, C.: X3D: Expanding architectures for efficient video recognition. In: CVPR. pp. 203–213 (2020)
17. Feichtenhofer, C., Fan, H., Malik, J., He, K.: SlowFast networks for video recognition. In: ICCV. pp. 6202–6211 (2019)
18. Fernández, D., Varas, D., Espadaler, J., Masuda, I., Ferreira, J., Woodward, A., Rodríguez, D., Giró-i Nieto, X., Carlos Riveiro, J., Bou, E.: ViTS: Video tagging system from massive web multimedia collections. In: ICCV Workshops. pp. 337–346 (2017)
19. Fey, M., Lenssen, J.E.: Fast graph representation learning with PyTorch Geometric. arXiv preprint arXiv:1903.02428 (2019)
20. Girdhar, R., Carreira, J., Doersch, C., Zisserman, A.: Video action transformer network. In: CVPR. pp. 244–253 (2019)
21. Gonzalo-Martin, C., Garcia-Pedrero, A., Lillo-Saavedra, M., Menasalvas, E.: Deep learning for superpixel-based classification of remote sensing images (September 2016), http://proceedings.utwente.nl/401/
22. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
23. Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al.: The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017)
24. Kim, A.: fast-slic. https://github.com/Algy/fast-slic (2019)
25. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
26. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)
27. Kondratyuk, D., Yuan, L., Li, Y., Zhang, L., Tan, M., Brown, M., Gong, B.: MoViNets: Mobile video networks for efficient video recognition. In: CVPR. pp. 16020–16030 (2021)
28. Kumar, A., Singh, S.S., Singh, K., Biswas, B.: Link prediction techniques, applications, and performance: A survey. Physica A: Statistical Mechanics and its Applications 553, 124289 (2020)
29. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
30. Li, X., Zhang, Y., Liu, C., Shuai, B., Zhu, Y., Brattoli, B., Chen, H., Marsic, I., Tighe, J.: VidTr: Video transformer without convolutions. arXiv preprint arXiv:2104.11746 (2021)
31. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft COCO: Common objects in context. In: European conference on computer vision. pp. 740–755. Springer (2014)
32. Liu, Z., Ning, J., Cao, Y., Wei, Y., Zhang, Z., Lin, S., Hu, H.: Video Swin transformer. arXiv preprint arXiv:2106.13230 (2021)
33. Mittal, S., et al.: A survey of accelerator architectures for 3D convolution neural networks. Journal of Systems Architecture p. 102041 (2021)
34. Monti, F., Boscaini, D., Masci, J., Rodola, E., Svoboda, J., Bronstein, M.M.: Geometric deep learning on graphs and manifolds using mixture model CNNs. In: CVPR. pp. 5115–5124 (2017)
35. Neimark, D., Bar, O., Zohar, M., Asselmann, D.: Video transformer network. arXiv preprint arXiv:2102.00719 (2021)
36. Oprea, S., Martinez-Gonzalez, P., Garcia-Garcia, A., Castro-Vargas, J.A., Orts-Escolano, S., Garcia-Rodriguez, J., Argyros, A.: A review on deep learning techniques for video prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)
37. Papon, J., Abramov, A., Schoeler, M., Worgotter, F.: Voxel cloud connectivity segmentation - supervoxels for point clouds. In: CVPR. pp. 2027–2034 (2013)
38. Pareek, P., Thakkar, A.: A survey on video-based human action recognition: recent updates, datasets, challenges, and applications. Artificial Intelligence Review 54(3), 2259–2322 (2021)
39. Piergiovanni, A., Angelova, A., Ryoo, M.S.: Tiny video networks. Applied AI Letters p. e38 (2019)
40. Rong, Y., Huang, W., Xu, T., Huang, J.: DropEdge: Towards deep graph convolutional networks on node classification. arXiv preprint arXiv:1907.10903 (2019)
41. Ryoo, M.S., Piergiovanni, A., Tan, M., Angelova, A.: AssembleNet: Searching for multi-stream neural connectivity in video architectures. arXiv preprint arXiv:1905.13209 (2019)
42. Schlichtkrull, M., Kipf, T.N., Bloem, P., Van Den Berg, R., Titov, I., Welling, M.: Modeling relational data with graph convolutional networks. In: ESWC. pp. 593–607. Springer (2018)
43. Sigurdsson, G.A., Varol, G., Wang, X., Farhadi, A., Laptev, I., Gupta, A.: Hollywood in homes: Crowdsourcing data collection for activity understanding. In: ECCV. pp. 510–526. Springer (2016)
44. Simonyan, K., Zisserman, A.: Two-stream convolutional networks for action recognition in videos. arXiv preprint arXiv:1406.2199 (2014)
45. Srivastava, N., Mansimov, E., Salakhudinov, R.: Unsupervised learning of video representations using LSTMs. In: ICML. pp. 843–852. PMLR (2015)
46. Stutz, D., Hermans, A., Leibe, B.: Superpixels: An evaluation of the state-of-the-art. Computer Vision and Image Understanding 166, 1–27 (2018)
47. Suarez, J.J.P., Naval Jr, P.C.: A survey on deep learning techniques for video anomaly detection. arXiv preprint arXiv:2009.14146 (2020)
48. Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jégou, H.: Training data-efficient image transformers & distillation through attention. In: ICML. pp. 10347–10357. PMLR (2021)
49. Tran, D., Bourdev, L., Fergus, R., Torresani, L., Paluri, M.: Learning spatiotemporal features with 3D convolutional networks. In: ICCV. pp. 4489–4497 (2015)
50. Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition. In: CVPR. pp. 6450–6459 (2018)
51. Ullah, A., Ahmad, J., Muhammad, K., Sajjad, M., Baik, S.W.: Action recognition in video sequences using deep bi-directional LSTM with CNN features. IEEE Access 6, 1155–1166 (2017)
52. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in neural information processing systems. pp. 5998–6008 (2017)
53. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., Bengio, Y.: Graph attention networks. arXiv preprint arXiv:1710.10903 (2017)
54. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR. pp. 7794–7803 (2018)
55. Wang, X., Gupta, A.: Videos as space-time region graphs. In: Proceedings of the European conference on computer vision (ECCV). pp. 399–417 (2018)
56. Xie, R., Liu, Z., Jia, J., Luan, H., Sun, M.: Representation learning of knowledge graphs with entity descriptions. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 30 (2016)
57. Xie, S., Sun, C., Huang, J., Tu, Z., Murphy, K.: Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In: ECCV. pp. 305–321 (2018)
58. Xu, C., Corso, J.J.: Evaluation of super-voxel methods for early video processing. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 1202–1209 (2012). https://doi.org/10.1109/CVPR.2012.6247802
59. Xu, C., Xiong, C., Corso, J.J.: Streaming hierarchical video segmentation. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) Computer Vision – ECCV 2012. pp. 626–639. Springer Berlin Heidelberg, Berlin, Heidelberg (2012)
60. Yue-Hei Ng, J., Hausknecht, M., Vijayanarasimhan, S., Vinyals, O., Monga, R., Toderici, G.: Beyond short snippets: Deep networks for video classification. In: CVPR. pp. 4694–4702 (2015)
61. Zhang, D., Yin, J., Zhu, X., Zhang, C.: Network representation learning: A survey. IEEE Transactions on Big Data 6(1), 3–28 (2018)
62. Zhang, S., Qin, Y., Sun, K., Lin, Y.: Few-shot audio classification with attentional graph neural networks. In: Interspeech. pp. 3649–3653 (2019)<|im_end|> <|im_start|>assistant
### Review Title
The idea presented in the paper is simple but can effectively speed up action recognition, therefore the paper should be accepted.
### Review Text
Summary:
- The authors propose an efficient graph video representation, GraphVid, that can be used for action recognition with reduced time and memory requirements. GraphVid results in a large efficiency gain without decreasing the performance.

Positive points:
+ The idea presented in the paper is interesting and can facilitate future work on action recognition.
+ The paper experiments with 4 augmentation strategies to improve the model performance.
+ Thorough experiments and an ablation study show GraphVid's effectiveness.
+ Competitive results on two action recognition benchmarks, Kinetics-400 and Charades.

Negative points:
- Relevant related work is missing.
GCNs have been used for video modelling before:
1) Yan, Sijie, Yuanjun Xiong, and Dahua Lin. "Spatial temporal graph convolutional networks for skeleton-based action recognition." Thirty-second AAAI conference on artificial intelligence. 2018.
2) Thakkar, Kalpit, and P. J. Narayanan. "Part-based graph convolutional network for action recognition." arXiv preprint arXiv:1809.04983 (2018).
3) Korban, Matthew, and Xin Li. "DDGCN: A dynamic directed graph convolutional network for action recognition." European Conference on Computer Vision. Springer, Cham, 2020.
4) Papadopoulos, Konstantinos, et al. "Vertex feature encoding and hierarchical temporal modeling in a spatial-temporal graph convolutional network for action recognition." arXiv preprint arXiv:1912.09745 (2019).
...
- Writing is sloppy and overly complex in places. The text can be simplified by removing sentences such as "to be defined hereunder" (line 207) and "The following is a description of how we utilize the superpixels to construct our video-graph representation." (lines 202-203)...
- Spatial edges and figure 5. It is unclear whether the spatial graph for each frame is complete. I do not see an explanation of edge selection; however, it seems that in figure 3 only "neighbouring super pixels" are connected.
- Missing algorithm time complexity for graph generation (i.e. extraction of super pixels, graph construction).
- Prior knowledge incorporation (line 369): I do not see how optical flow is currently encoded in the graph video representation, especially due to the absence of coordinates. For example, if an object moves fast within consecutive frames, the distance between the respective super pixels over time might be larger than d_proximity. This way, information about the object motion (direction) is lost completely.

Justification: The idea of using super pixels in combination with GCNs is, to my knowledge, novel. The experiments are thorough and show the effectiveness of the method. The paper needs some fixes in the text as indicated above. My rating is weak accept.
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|>
BVPowUU1cR
ICLR.cc/2021/Conference
2021
Assisting the Adversary to Improve GAN Training
["Andreas Munk", "William Harvey", "Frank Wood"]
Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a theoretically motivated penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.
["Generative Adversarial Networks", "GANs"]
ABSTRACT
Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a theoretically motivated penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.

1 INTRODUCTION
The generative adversarial network (GAN) framework (Goodfellow et al., 2014) trains a neural network known as a generator which maps from a random vector to an output such as an image. Key to training is another neural network, the adversary (sometimes called a discriminator or critic), which is trained to distinguish between "true" and generated data. This is done by maximizing one of the many different objectives proposed in the literature; see for instance Goodfellow et al. (2014); Arjovsky et al. (2017); Nowozin et al. (2016). The generator directly competes against the adversary: it is trained to minimize the same objective, which it does by making the generated data more similar to the true data. GANs are efficient to sample from, requiring a single pass through a deep network, and highly flexible, as they do not require an explicit likelihood. They are especially suited to producing photo-realistic images (Zhou et al., 2019) compared to competing methods like normalizing flows, which impose strict requirements on the neural network architecture (Kobyzev et al., 2020; Rezende & Mohamed, 2015), and VAEs (Kingma & Welling, 2014; Razavi et al., 2019; Vahdat & Kautz, 2020). Counterbalancing their appealing properties, GANs can have unstable training dynamics (Kurach et al., 2019; Goodfellow, 2017; Kodali et al., 2017; Mescheder et al., 2018).

Substantial research effort has been directed towards improving the training of GANs. These endeavors can generally be divided into two camps, albeit with significant overlap. The first develops better learning objectives for the generator/adversary to minimize/maximize. These are designed to have properties which improve training (Arjovsky et al., 2017; Li et al., 2017; Nowozin et al., 2016). The other camp develops techniques to regularize the adversary and improve its training dynamics (Kodali et al., 2017; Roth et al., 2017; Miyato et al., 2018). The adversary can then provide a better learning signal for the generator. Despite these contributions, stabilizing the training of GANs remains unsolved and continues to be an active research area.

An overlooked approach is to train the generator in a way that accounts for the adversary not being trained to convergence. One such approach was introduced by Mescheder et al. (2017) and later built on by Nagarajan & Kolter (2017).
The proposed method is a regularization term based on the norm of the gradients used to train the adversary. This is motivated as a means to improve the convergence properties of the minimax game. The purpose of this paper is to provide a new perspective as to why this regularizer is appropriate. Our perspective differs in that we view it as promoting updates that lead to a solution that satisfies a sufficient condition for the adversary to be optimal. To be precise, it encourages the generator to move towards points where the adversary's current parameters are optimal. Informally, this regularizer "assists" the adversary, and for this reason we refer to this regularization method as the Adversary's Assistant (AdvAs).

We additionally propose a version of AdvAs which is hyperparameter-free. Furthermore, we release a library which makes it simple to integrate into existing code. We demonstrate its application to a standard architecture with the WGAN-GP objective (Arjovsky et al., 2017; Gulrajani et al., 2017); the state-of-the-art StyleGAN2 architecture and objective introduced by Karras et al. (2020); and the AutoGAN architecture and objective introduced by Gong et al. (2019). We test these on the MNIST (Lecun et al., 1998), CelebA (Liu et al., 2015) and CIFAR10 (Krizhevsky et al., 2009) datasets respectively. We show that AdvAs improves training on all datasets, as measured by the Fréchet Inception Distance (FID) (Heusel et al., 2017), and the inception score (Salimans et al., 2016) where applicable.

2 BACKGROUND
A generator is a neural network g_θ: Z → X ⊆ R^{d_x} which maps from a random vector z ∈ Z to an output x ∈ X (e.g., an image). Due to the distribution over z, the function g_θ induces a distribution over its output x = g_θ(z). If g_θ were invertible and differentiable, the probability density function (PDF) over x could be computed from this "change of variables." This is not necessary for training GANs, meaning that no such restrictions need to be placed on the neural network g_θ. We denote this distribution p_θ(x), where θ ∈ R^{d_g} denotes the generator's parameters. The GAN is trained on a dataset x_1, ..., x_N, where each x_i is in X. We assume that this is sampled i.i.d. from a data-generating distribution p_true. The aim of training is then to learn θ so that p_θ is as close as possible to p_true. Section 2.1 will make precise what is meant by "close."

The adversary a_φ: X → A has parameters φ ∈ R^{d_a} which are typically trained alternately with the generator. It receives as input either the data or the generator's outputs. The set that it maps to, A, depends on the GAN type. For example, Goodfellow et al. (2014) define an adversary which maps from x ∈ X to the probability that x is a "real" data point from the dataset, as opposed to a "fake" from the generator. They therefore choose A to be [0, 1] and train the adversary by maximizing the associated log-likelihood objective,

h(p_\theta, a_\phi) = \mathbb{E}_{x \sim p_{true}}[\log a_\phi(x)] + \mathbb{E}_{x \sim p_\theta}[\log(1 - a_\phi(x))]. (1)

Using the intuition that the generator should generate samples that seem real and therefore "fool" the adversary, the generator is trained to minimize h(p_θ, a_φ). Since we find θ to minimize this objective while fitting φ to maximize it, training a GAN is equivalent to solving the minimax game,

\min_\theta \max_\phi h(p_\theta, a_\phi). (2)

Eq. (1) gives the original form for h(p_θ, a_φ) used by Goodfellow et al. (2014), but this form varies between different GANs, as we will discuss in Section 2.1. The minimization and maximization in Eq. (2) are performed with gradient descent in practice. To be precise, we define L_gen(θ, φ) = h(p_θ, a_φ) and L_adv(θ, φ) = −h(p_θ, a_φ). These are treated as losses for the generator and adversary respectively, and both are minimized. In other words, we turn the maximization of h(p_θ, a_φ) w.r.t. φ into a minimization of L_adv(θ, φ). Then on each iteration, φ and θ are updated one after the other using gradient descent steps along their respective gradients:

\nabla_\theta L_{gen}(\theta, \phi) = \nabla_\theta h(p_\theta, a_\phi), (3)
\nabla_\phi L_{adv}(\theta, \phi) = -\nabla_\phi h(p_\theta, a_\phi). (4)
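For concreteness, the following is a minimal PyTorch sketch of one such alternating update for the objective in Eq. (1). The networks, optimizers and data pipeline are assumed to exist, the adversary is assumed to output logits (so a_φ(x) = sigmoid of its output), and all names are illustrative.

```python
# Minimal sketch of one alternating GAN update, Eqs. (1)-(4); names are illustrative.
import torch
import torch.nn.functional as F

def gan_step(gen, adv, opt_gen, opt_adv, x_real, z_dim):
    # Adversary step: minimize L_adv(theta, phi) = -h(p_theta, a_phi), Eq. (4).
    z = torch.randn(x_real.size(0), z_dim, device=x_real.device)
    x_fake = gen(z).detach()                       # theta held fixed
    # log a(x) = logsigmoid(u); log(1 - a(x)) = logsigmoid(-u) for a logit u.
    h = F.logsigmoid(adv(x_real)).mean() + F.logsigmoid(-adv(x_fake)).mean()
    opt_adv.zero_grad()
    (-h).backward()
    opt_adv.step()

    # Generator step: minimize L_gen(theta, phi) = h(p_theta, a_phi), Eq. (3).
    # Only the second term of Eq. (1) depends on theta.
    x_fake = gen(torch.randn(x_real.size(0), z_dim, device=x_real.device))
    h_gen = F.logsigmoid(-adv(x_fake)).mean()
    opt_gen.zero_grad()
    h_gen.backward()
    opt_gen.step()
```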
2.1 GANS MINIMIZE DIVERGENCES
A common theme in the GAN literature is analysis based on what we call the optimal adversary assumption. This is the assumption that, before each generator update, we have found the adversary a_φ which maximizes h(p_θ, a_φ) given the current value of θ. To be precise, we define a class of permissible adversary functions F. This is often simply the space of all functions mapping X → A (Goodfellow et al., 2014), but in some GAN variants it is constrained by, e.g., a Lipschitz constant (Arjovsky et al., 2017). Then we call the adversary a_φ optimal for a particular value of θ if and only if h(p_θ, a_φ) = max_{a∈F} h(p_θ, a).

In practice, the neural network a_φ cannot represent every a ∈ F, and so it may not be able to parameterize an optimal adversary for a given θ. As is common in the literature, we assume that the neural network is expressive enough that this is not an issue, i.e., we assume that for any θ, there exists at least one φ ∈ Φ resulting in an optimal adversary. Then, noting that there may be multiple such φ ∈ Φ, we define Φ*(θ) to be the set of all optimal adversary parameters. That is, Φ*(θ) = {φ ∈ Φ | h(p_θ, a_φ) = max_{a∈F} h(p_θ, a)}, and the optimal adversary assumption says that before each update of θ we have found φ ∈ Φ*(θ). We emphasize that, in part due to the limited number of gradient updates performed on φ, this assumption is essentially never true in practice. This paper presents a method to improve the training of GANs by addressing this issue.

The optimal adversary assumption simplifies analysis of GAN training considerably. Instead of being a two-player game, it turns into a case of minimizing an objective with respect to θ alone. We denote this objective

M(p_\theta) = \max_{a \in F} h(p_\theta, a) = h(p_\theta, a_{\phi^*}) \ \text{where} \ \phi^* \in \Phi^*(\theta). (5)

For example, Goodfellow et al. (2014) showed that using the objective presented in Eq. (1) results in M(p_θ) = 2 JSD(p_true || p_θ) − log 4, where JSD is the Jensen-Shannon divergence. By making the optimal adversary assumption, they could prove that their GAN training procedure would converge, and would minimize the Jensen-Shannon divergence between p_true and p_θ.

A spate of research following the introduction of the original GAN objective has similarly made use of the optimal adversary assumption to propose GANs which minimize different divergences. For example, Wasserstein GANs (WGANs) (Arjovsky et al., 2017) minimize a Wasserstein distance. MMD GANs (Li et al., 2017) minimize a distance known as the maximum mean discrepancy. Nowozin et al. (2016) introduce f-GANs, which minimize f-divergences, a class including the Kullback-Leibler and Jensen-Shannon divergences. We emphasize that this is by no means an exhaustive list. Like these studies, this paper is motivated by the perspective that, under the optimal adversary assumption, GANs minimize a divergence. However, the GAN framework can also be viewed from a more game-theoretic perspective (Kodali et al., 2017; Grnarova et al., 2018).

3 DOES AN OPTIMAL ADVERSARY LEAD TO OPTIMAL GRADIENTS?
As introduced above, the training of an adversary does not need to be considered in any analysis if it is simply assumed to always be optimal.
From this perspective, the goal of training GANs can be seen as learning the generator to minimize M(p_θ). This leads to the question: assuming that we have an optimal adversary, can we compute the gradient required for the generator update, ∇_θ M(p_θ)? To clarify, assume that we have generator parameters θ_0, and have found φ* ∈ Φ*(θ_0) such that h(p_{θ_0}, a_{φ*}) and M(p_{θ_0}) are equal in value. We then want to take a gradient step on θ_0 to minimize M(p_{θ_0}). Virtually all GAN methods do this by assuming that M(p_θ) and h(p_θ, a_{φ*}) have equal gradients with respect to θ at θ_0. That is, it is assumed that ∇_θ M(p_θ)|_{θ=θ_0} is equal to the partial derivative¹ D_1 h(p_θ, a_{φ*})|_{θ=θ_0}. It is not immediately obvious that this is true.

In the GAN literature this concern has largely been overlooked, with a few treatments for specific GAN types; see e.g. Arjovsky et al. (2017); Goodfellow et al. (2014). In particular, Arjovsky et al. (2017) invoke (but do not explicitly prove) an extension of Theorem 1 in Milgrom & Segal (2002) to prove that the Wasserstein GAN has optimal gradients if the adversary is optimal, i.e. D_1 h(p_θ, a_{φ*})|_{θ=θ_0} = ∇_θ M(p_θ)|_{θ=θ_0}. We note that this extension can, in fact, be used to prove that GANs in general have this property under fairly weak assumptions:

Theorem 1. Let M(p_θ) = h(p_θ, a_{φ*}) for any φ* ∈ Φ*(θ), as defined in Eq. (5). Assuming that M(p_θ) is differentiable w.r.t. θ and h(p_θ, a_{φ*}) is differentiable w.r.t. θ for all φ* ∈ Φ*(θ), then if φ* ∈ Φ*(θ) we have

\nabla_\theta M(p_\theta) = D_1 h(p_\theta, a_{\phi^*}). (6)

¹We use D_1 h(p_θ, a_φ) to denote the partial derivative of h(p_θ, a_φ) with respect to θ with φ kept constant. Similarly, we will use D_2 h(p_θ, a_φ) to denote the derivative of h(p_θ, a_φ) with respect to φ, with θ held constant.

See Appendix D.1 for a proof. We emphasize that Theorem 1 applies only if the adversary is optimal. If this is not the case we cannot quantify, and so cannot directly minimize or account for, the discrepancy between ∇_θ M(p_θ) and D_1 h(p_θ, a_φ). Instead of attempting to do so, we consider an approach that drives the parameters towards regions where φ ∈ Φ*(θ), so that Theorem 1 can be invoked.

3.1 ADVERSARY CONSTRUCTORS
To see how we may impose the constraint that Eq. (6) is true, we consider a trivial relationship between any generator and the corresponding optimal adversary. If an optimal adversary exists for every θ ∈ Θ, then there exists some, possibly non-unique, function f: Θ → Φ that maps from any generator to a corresponding optimal adversary. That is, for all θ ∈ Θ, f(θ) = φ* ∈ Φ*(θ), in which case h(p_θ, a_{f(θ)}) = max_{a∈F} h(p_θ, a). We refer to such a function as an adversary constructor. In an ideal scenario, we could compute the output of an adversary constructor, f(θ), for any θ. We could then invoke Theorem 1, and the generator could be updated with the gradient ∇_θ M(p_θ) = D_1 h(p_θ, a_{f(θ)}). In practice, computing f(θ) is infeasible and we can only approximate the optimal adversary parameters with gradient descent. There is therefore a mismatch between GAN theory, where Theorem 1 is often invoked, and practice, where the conditions to invoke it are essentially never satisfied. How, then, can we address this problem? We look to the adversary constructors, which provide a condition that must be satisfied for the optimal adversary assumption to be true. Adversary constructors allow us to account for the influence of θ on φ by considering the total derivative ∇_θ h(p_θ, a_{f(θ)}). We prove in Appendix D.2 that a comparison with the result of Theorem 1 leads to Corollary 1. In the next section, we motivate AdvAs as an attempt to fulfill a condition suggested by this corollary.

Corollary 1. Let f: Θ → Φ be a differentiable mapping such that for all θ ∈ Θ, M(p_θ) = h(p_θ, a_{f(θ)}).
If the conditions in Theorem 1 are satisfied and the Jacobian matrix of f with respect to θ, J_θ(f), exists for all θ ∈ Θ, then

D_2 h(p_\theta, a_{f(\theta)})^T J_\theta(f) = 0. (7)

4 ASSISTING THE ADVERSARY
Corollary 1 tells us that D_2 h(p_θ, a_φ)^T J_θ(f) will be zero whenever Theorem 1 can be invoked. This makes Eq. (7) a necessary, but not sufficient, condition for the invocation of Theorem 1. This suggests that the magnitude of D_2 h(p_θ, a_φ)^T J_θ(f) could be a measure of how "close" D_1 h(p_θ, a_φ) is to the desired gradient ∇_θ M(p_θ). However, the Jacobian J_θ(f) is not tractable, so D_2 h(p_θ, a_φ)^T J_θ(f) cannot be computed. The only term we can calculate in practice is D_2 h(p_θ, a_φ), exactly the gradient used to train the adversary. If D_2 h(p_θ, a_φ) is zero, then D_2 h(p_θ, a_φ)^T J_θ(f) is zero. The magnitude of D_2 h(p_θ, a_φ) could therefore be an approximate measure of "closeness" instead of D_2 h(p_θ, a_φ)^T J_θ(f). This leads to an augmented generator loss, which regularizes generator updates to reduce the magnitude of D_2 h(p_θ, a_φ). It has a scalar hyperparameter λ ≥ 0, but Section 4.3 provides a heuristic which can remove the need to set this hyperparameter:

L_{gen}^{AdvAs}(\theta, \phi) = L_{gen}(\theta, \phi) + \lambda r(\theta, \phi), (8)
\text{with} \ r(\theta, \phi) = \| \nabla_\phi L_{adv}(\theta, \phi) \|_2^2, (9)

recalling that ∇_φ L_adv(θ, φ) = −D_2 h(p_θ, a_φ). We emphasize that r(θ, φ) is the same as that found in previous work (Mescheder et al., 2017; Nagarajan & Kolter, 2017).

Figuratively, AdvAs changes the generator updates to move in a conservative direction that does not over-exploit the adversary's sub-optimality. Consider the generator and adversary as two players attempting to out-maneuver one another. From Eq. (2), we see that the generator should learn to counteract the best possible adversary, rather than the current adversary. If the current adversary is sub-optimal, allowing it to catch up would yield better updates to the generator. One way to achieve this is to update the generator in a way that helps make the current adversary optimal. This behavior is exactly what AdvAs encourages. In this sense, it assists the adversary, leading to its name, the Adversary's Assistant. We emphasize that using AdvAs involves making only a small modification to a GAN training algorithm, but for completeness we include pseudocode in Appendix A.
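To illustrate, a minimal PyTorch sketch of the AdvAs generator loss (Eqs. (8)-(10)) follows; this is not our released library, and the function names are illustrative. The key detail is computing the adversary's gradient with create_graph=True, so that the squared norm remains differentiable w.r.t. the generator parameters.

```python
# Minimal sketch of the AdvAs generator loss, Eqs. (8)-(10); names are illustrative.
import torch

def advas_generator_loss(gen_loss_fn, adv_loss_fn, gen, adv, batch, lam):
    """gen_loss_fn / adv_loss_fn return minibatch estimates of L_gen and L_adv."""
    gen_loss = gen_loss_fn(gen, adv, batch)

    # r~(theta, phi) = || grad_phi L~_adv(theta, phi) ||_2^2, Eq. (10).
    # create_graph=True keeps the graph so r depends differentiably on theta.
    adv_loss = adv_loss_fn(gen, adv, batch)
    grads = torch.autograd.grad(adv_loss, list(adv.parameters()), create_graph=True)
    r = sum(g.pow(2).sum() for g in grads)

    return gen_loss + lam * r   # Eq. (8)
```

Note that adv_loss_fn must evaluate L_adv on generator samples without detaching them; otherwise r carries no gradient back to θ. Backpropagating this loss and stepping only the generator's optimizer then implements the AdvAs update.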
Figure 1: FID scores throughout training for the WGAN-GP objective on MNIST, estimated using 60 000 samples from the generator, plotted against iterations (left) and wall-clock time (right). We plot up to a maximum of 40 000 iterations; when plotting against time, this means some lines end before the two hours we show. The blue line shows the results with AdvAs, while the others are baselines with different values of n_adv.

4.1 ADVAS PRESERVES CONVERGENCE RESULTS
AdvAs has several desirable properties which support its use as a regularizer: (1) it does not interfere with the update on φ, and recall that perfectly optimizing φ leads to h(p_θ, a_φ) = M(p_θ). (2) Under mild conditions, ∇_θ r(θ, φ*)|_{φ*∈Φ*(θ)} is zero for an optimal φ*, and so ∇_θ L^{AdvAs}_gen(θ, φ*) = ∇_θ M(p_θ). These properties imply that, under the optimal adversary assumption, optimizing L^{AdvAs}_gen is in fact equivalent to optimizing L_gen. See Appendix D.4 for a proof. Therefore any convergence analysis which relies on the optimal adversary assumption is equally applicable when AdvAs is included in the loss. Regarding the mild conditions in property (2), we require that φ* be a stationary point of h(p_θ, a_φ). This is true as long as h is differentiable w.r.t. φ at (θ, φ*) and φ* does not lie on a boundary of Φ. The optimal adversary parameters, φ*, cannot lie on a boundary unless, for example, weight clipping is used as in Arjovsky et al. (2017). In such cases, we cannot speak to the efficacy of applying AdvAs.

We make the additional observation that for some GAN objectives, minimizing r(θ, φ) alone (as opposed to L_gen or L^{AdvAs}_gen) may match p_θ and p_true. We show this in Appendix D.3 for the WGAN objective (Arjovsky et al., 2017). In particular, for all θ ∈ Θ, r(θ, φ) is zero and at a global minimum whenever p_θ = p_true. Experimental results in Appendix C.3 support this observation. However, the results appear worse than those obtained by optimizing either L_gen or L^{AdvAs}_gen.

4.2 ESTIMATING THE ADVAS LOSS
It is not always possible, and seldom computationally feasible, to compute the AdvAs regularization term r(θ, φ) exactly. We instead use a stochastic estimate. This is computed by simply estimating the gradient ∇_φ L_adv(θ, φ) with a minibatch and then taking the squared L2-norm of this gradient estimate. That is, defining L̃_adv(θ, φ) as an unbiased estimate of the adversary's loss, we estimate r(θ, φ) with

\tilde{r}(\theta, \phi) = \| \nabla_\phi \tilde{L}_{adv}(\theta, \phi) \|_2^2. (10)

Although the gradient estimate is unbiased, taking the norm results in a biased estimate of r(θ, φ). However, comparisons with a more computationally expensive unbiased estimate² did not reveal a significant difference in performance.

²Computing an unbiased estimate can be done using the following: consider two independent and unbiased estimates of ∇_φ L_adv(θ, φ) denoted X, X'. Then E[X^T X'] = E[X]^T E[X'] = ||∇_φ L_adv(θ, φ)||²₂. This implies that multiplying two estimates using independent samples is unbiased.

4.3 REMOVING THE HYPERPARAMETER λ

Table 1: FID and IS scores on CIFAR10 using AutoGAN with and without AdvAs.
                        IS           FID
AdvAs (n_adv = 2)       8.4 ± 0.1    14.5 ± 1.0
Baseline (n_adv = 5)    8.3 ± 0.1    15.0 ± 0.7

Figure 2: FID scores on CIFAR10 using AutoGAN from baselines and AdvAs, plotted with a log y-axis against running time for different values of n_adv. We see that AdvAs with n_adv = 2 yields the lowest FID scores at every point during training.

Eq. (8) introduces a hyperparameter, λ, which we would prefer not to perform a grid-search on. Setting λ too high can destabilize training. Conversely, setting it too low gives similar results to not using AdvAs. We therefore introduce a heuristic which can be used to avoid setting the hyperparameter. Our experiments suggest that this is often a good choice, although manually tuning λ may yield greater gains. This heuristic involves considering the magnitudes of three gradients, and so we first define the notation

g_{orig}(\theta, \phi) = \nabla_\theta L_{gen}(\theta, \phi),
g_{AdvAs}(\theta, \phi) = \nabla_\theta \tilde{r}(\theta, \phi),
g_{total}(\theta, \phi, \lambda) = \nabla_\theta L_{gen}^{AdvAs}(\theta, \phi) = g_{orig}(\theta, \phi) + \lambda\, g_{AdvAs}(\theta, \phi).

The heuristic can be interpreted as choosing λ at each iteration to prevent the total gradient, g_total(θ, φ, λ), from being dominated by the AdvAs term. Specifically, we ensure that the magnitude of λ g_AdvAs(θ, φ) is less than or equal to the magnitude of g_orig by setting

\lambda = \min\left( 1, \ \frac{\| g_{orig}(\theta, \phi) \|_2}{\| g_{AdvAs}(\theta, \phi) \|_2} \right) (11)

at every iteration. We then perform gradient descent along g_total(θ, φ, λ). This technique ensures that λ is bounded above by 1.
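In the same illustrative PyTorch setting as above, the heuristic of Eq. (11) amounts to comparing two global gradient norms; the small eps below is our addition for numerical safety and is not part of Eq. (11).

```python
# Minimal sketch of the lambda heuristic, Eq. (11); names are illustrative.
import torch

def advas_lambda(g_orig, g_advas, eps=1e-12):
    """g_orig, g_advas: lists of per-parameter gradients w.r.t. theta."""
    norm_orig = torch.sqrt(sum(g.pow(2).sum() for g in g_orig))
    norm_advas = torch.sqrt(sum(g.pow(2).sum() for g in g_advas))
    return torch.clamp(norm_orig / (norm_advas + eps), max=1.0)   # Eq. (11)

# The generator then steps along g_total = g_orig + lam * g_advas.
```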
5 EXPERIMENTS

We demonstrate the effect of incorporating AdvAs into GAN training using several GAN architectures, objectives, and datasets. Our experiments complement those of Nagarajan & Kolter (2017) and Mescheder et al. (2017). In each case, we compare GANs trained with AdvAs with baselines that do not use AdvAs but are otherwise identical. We first demonstrate the use of AdvAs in conjunction with the WGAN-GP objective (Gulrajani et al., 2017) to model MNIST (Lecun et al., 1998). In this experiment, we compare the performance gains achieved by AdvAs to a reasonable upper bound on the gains achievable with this type of regularization. We further support these findings with experiments on CIFAR10 (Krizhevsky et al., 2009) using AutoGAN (Gong et al., 2019), an architecture found through neural architecture search. We then demonstrate that AdvAs can improve training on larger images using StyleGAN2 (Karras et al., 2020) on CelebA (Liu et al., 2015). We quantify each network's progress throughout training using the FID score (Heusel et al., 2017). Since AdvAs increases the computation time per iteration, we plot training progress against time for each experiment. We also present inception scores (IS) (Salimans et al., 2016) where applicable. We estimate scores in each case with 5 random seeds and report the standard deviation as a measure of uncertainty.

AdvAs aims to improve performance by coming closer to having an optimal adversary. Another common way to achieve this is to use a larger number of adversary updates ($n_{adv}$) before each generator update. For each experiment, we show baselines with the value of $n_{adv}$ suggested in the literature. Noting that the computational complexity is $O(n_{adv})$ and that keeping $n_{adv}$ low is therefore desirable, we find that AdvAs can work well with lower values of $n_{adv}$ than the baseline. For a fair comparison, we also report baselines trained with these values of $n_{adv}$.

Figure 3: Bottom: FID scores throughout training estimated with 1000 samples, plotted against number of epochs (left) and training time (right). FID scores for AdvAs decrease more on each iteration at the start of training and converge to be 7.5% lower. Top: The left two columns show uncurated samples with and without AdvAs after 2 epochs. The rightmost two columns show uncurated samples from networks at the end of training. In each grid of images, each row is generated by a network with a different training seed and shows 3 images generated by passing a different random vector through this network. AdvAs leads to obvious qualitative improvement early in training.

For MNIST and CelebA, we avoid setting the hyperparameter $\lambda$ by using the heuristic proposed in Section 4.3. We found for CIFAR10 that manually tuning $\lambda$ gave better performance, and so set $\lambda = 0.01$. Additionally, on MNIST and CelebA, the methods we consider use regularization in the form of a gradient penalty (Gulrajani et al., 2017) for training the adversary. This is equivalent to including a regularization term $\ell_{adv}(\theta, \phi)$, the gradient penalty, in the definition of $L_{adv}$; that is, $L_{adv}(\theta, \phi) = -h(p_\theta, a_\phi) + \ell_{adv}(\theta, \phi)$. Following Eq. (9), this regularization term is included in the AdvAs term $r(\theta, \phi)$. Another practical detail is that AutoGAN and StyleGAN2 are trained with a hinge loss (Lim & Ye, 2017). That is, when computing the adversary's loss $L_{adv}(\theta, \phi)$, its output $a_\phi(x)$ is truncated to be below +1 for real images, or above -1 for generated images. This prevents it from receiving gradient feedback when its predictions are both accurate and confident. Intuitively, this stops its outputs becoming too large and damaging the generator's training. However, this truncation is not present when updating the generator. This means that the generator minimizes a different objective to the one maximized by the adversary, and so it is not exactly a minimax game. It is not clear that it is beneficial to calculate the AdvAs regularization term using this truncation. We found that better performance was obtained by computing $r(\theta, \phi)$ without truncation, and we do this in the reported experiments.
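To illustrate the distinction, the following sketch contrasts a truncated (hinge) adversary loss with an untruncated, Wasserstein-style counterpart of the kind one might use when computing $r(\theta, \phi)$; these are generic textbook formulations, not the exact losses from the AutoGAN or StyleGAN2 codebases.

```python
import torch.nn.functional as F

def hinge_adversary_loss(real_scores, fake_scores):
    # Scores are truncated at +1 (real) / -1 (fake): once the adversary is
    # confident and correct, the relu saturates and no gradient flows.
    return F.relu(1.0 - real_scores).mean() + F.relu(1.0 + fake_scores).mean()

def untruncated_adversary_loss(real_scores, fake_scores):
    # No truncation: every sample always contributes gradient feedback.
    return fake_scores.mean() - real_scores.mean()
```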
5.1 WGAN-GP ON MNIST

We use a simple neural architecture: the generator consists of a fully-connected layer followed by two transposed convolutions. The adversary has three convolutional layers. Both use instance normalization (Ulyanov et al., 2017) and ReLU non-linearities; see Appendix E for details. We compare using AdvAs with $n_{adv} = 1$ against the baseline for $n_{adv} \in \{1, 5\}$, where $n_{adv} = 5$ is suggested by Gulrajani et al. (2017). Fig. 1 shows the FID scores for each method throughout training. We see that using AdvAs with $n_{adv} = 1$ leads to better performance on convergence; even compared to the baseline with $n_{adv} = 5$, the best FID score reached is improved by 28%.

5.2 AUTOGAN ON CIFAR10

We next experiment on the generation of CIFAR10 (Krizhevsky et al., 2009) images. We use AutoGAN (Gong et al., 2019), which has a generator architecture optimized for CIFAR10 using neural architecture search. It is trained with a hinge loss, as described previously, an exponential moving average of generator weights, and typically uses $n_{adv} = 5$. Figure 2 shows FID scores throughout training for various values of $n_{adv}$, with and without AdvAs, each computed with 1000 samples. Table 1 shows FID scores at the end of training for the best performing value of $n_{adv}$ for each method, estimated with 50,000 samples. For a fixed $n_{adv}$ of either 1 or 2, using AdvAs improves the FID score. In fact, with $n_{adv} = 2$, the performance with AdvAs is indistinguishable from the baseline with the suggested setting of $n_{adv} = 5$. Unlike for MNIST, AdvAs does not outperform the baseline with high enough $n_{adv}$. We hypothesize that this is because, with an architecture highly optimized for $n_{adv} = 5$, the adversary is closer to being optimal when trained with $n_{adv} = 5$. Assuming this is the case, we would not expect AdvAs to improve training compared to a baseline with sufficient $n_{adv}$. Still, our results show that applying AdvAs allows the same performance with a lower $n_{adv}$.

5.3 STYLEGAN2 ON CELEBA

To demonstrate that AdvAs improves state-of-the-art GAN architectures and training procedures, we consider StyleGAN2 (Karras et al., 2020). We train this as proposed by Karras et al. (2020) with a WGAN-like objective with gradient penalty (Gulrajani et al., 2017), an exponential moving average of the generator weights, and various forms of regularization including path length, R1, and style-mixing regularization. More detail on these can be found in Karras et al. (2020), but we merely wish to emphasize that considerable effort has been put into tuning this training procedure. For this reason, we do not attempt to further tune $n_{adv}$, which is 1 by default. Any improvements from applying AdvAs indicate a beneficial effect not provided by the other forms of regularization used.

Figure 3 compares the training of StyleGAN2 on CelebA at 64×64 resolution with and without the AdvAs regularizer. Using AdvAs has two main effects: (1) the generated images show bigger improvements per epoch at the start of training; and (2) the final FID score is improved by 7.5%. Even accounting for its greater time per iteration, the FID scores achieved by AdvAs overtake the baseline after one day of training. We verify that the baseline performance is similar to that reported by Zhou et al. (2019) with a similar architecture.
6 RELATED WORK

We motivated AdvAs from the perspective of the optimal adversary assumption. In this sense, it is similar to a large body of work aiming to improve and stabilize GAN training by better training the adversary. AdvAs differs fundamentally due to its focus on training the generator rather than the adversary. This other work generally affects the discriminator in one of two broad ways: weight constraints and gradient penalties (Brock et al., 2019). Weight normalization involves directly manipulating the parameters of the adversary, such as through weight clipping (Arjovsky et al., 2017) or spectral normalization (Miyato et al., 2018). Gradient penalties (Kodali et al., 2017; Roth et al., 2017; Gulrajani et al., 2017) impose soft constraints on the gradients of the adversary's output with respect to its input. Various forms exist with different motivations; see Mescheder et al. (2018) for a summary and analysis. AdvAs may appear similar to a gradient penalty, as it operates on gradients of the adversary. However, the gradients are w.r.t. the adversary's parameters rather than its input. Furthermore, AdvAs is added to the generator's loss and not the adversary's.

Regularizing generator updates has recently received more attention in the literature (Chu et al., 2020; Zhang et al., 2019; Brock et al., 2019). Chu et al. (2020) show theoretically that the effectiveness of different forms of regularization for both the generator and adversary is linked to the smoothness of the objective function. They present a set of conditions on the generator and adversary that ensure a smooth objective function, which they argue will stabilize GAN training. However, they leave the imposition of the required regularization on the generator to future work. Zhang et al. (2019) and Brock et al. (2019) consider applying spectral normalization (Miyato et al., 2018) to the generator, and find empirically that this improves performance.

7 DISCUSSION AND CONCLUSIONS

We have shown that AdvAs addresses the mismatch between theory, where the adversary is assumed to be trained to optimality, and practice, where this is never the case. We show improved training across three datasets, architectures, and GAN objectives, indicating that it successfully reduces this disparity. This can lead to substantial improvements in final performance. We note that, while applying AdvAs in preliminary experiments with BEGAN (Berthelot et al., 2017) and LSGAN (Mao et al., 2017), we did not observe either a significant positive effect, or a significant negative effect other than the increased time per iteration. Nevertheless, AdvAs is simple to apply and will, in many cases, improve both training speed and final performance.
l83gld8C01
review
6: Marginally above acceptance threshold
This paper proposes a new regularizer to improve GAN training. Noticing that the discriminator does not always reach its optimum at each iteration, this paper proposes the Adversary's Assistant (AdvAs) to help the discriminator satisfy this condition. Interestingly, compared to previous methods for improving GAN training, this work applies the regularizer to the generator (rather than the discriminator) and is theoretically motivated. Experiments on several GAN objectives, datasets and network architectures are provided to support the effectiveness of AdvAs.

*Pros
(1) This paper is clearly written. Even though I am not an expert in GANs, I did not encounter many difficulties in understanding the whole paper.
(2) The whole framework is theoretically motivated. Given that the discriminator is not always at an optimal point during training, this paper derives several theorems and corollaries, which lead to the finding that adding a regularization term on the generator could satisfy a necessary condition for training GANs optimally.
(3) Empirical results are provided to show that the proposed AdvAs can help GAN training under different settings.

*Cons
(1) This paper uses a minibatch to approximately compute the regularizer $r(\theta,\phi)$. I am wondering whether the proposed AdvAs is sensitive to the estimation quality of $r(\theta,\phi)$. For example, if a large batch size is used, will the results be better? If yes, then what is the "minimal" batch size needed to train a good GAN with AdvAs (i.e., one that outperforms the baseline)?
(2) I appreciate that this paper honestly states that the proposed AdvAs cannot help BEGAN and LSGAN. I encourage the authors to delve deeper into this observed phenomenon and provide a brief discussion explaining the possible reasons why AdvAs does not help in these cases.
(3) As the main purpose of AdvAs is to encourage the value of Eq. (7) to be close to 0, the authors are encouraged to also plot the value of $D_2h(p_\theta, a_\phi)$ during training, as direct evidence supporting the effectiveness of AdvAs.

**Overall, I think it is an interesting paper, with good theoretical motivation and strong empirical results; therefore I tend to accept it at this time. Nonetheless, I am not an expert in GANs and cannot accurately assess the value/importance of this paper. I am open to increasing/decreasing my score if other expert reviewers provide positive/negative comments.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
Sym_tDJwM
ICLR.cc/2018/Workshop
2018
TVAE: Triplet-Based Variational Autoencoder using Metric Learning
["Haque Ishfaq", "Assaf Hoogi", "Daniel Rubin"]
Deep metric learning has been demonstrated to be highly effective in learning semantic representations and encoding information that can be used to measure data similarity, by relying on the embedding learned from metric learning. At the same time, the variational autoencoder (VAE) has been widely used for approximate inference and has proved to perform well for directed probabilistic models. However, for the traditional VAE, data label or feature information is intractable. Similarly, traditional representation learning approaches fail to represent many salient aspects of the data. In this project, we propose a novel integrated framework to learn latent embeddings in a VAE by incorporating deep metric learning. The features are learned by optimizing a triplet loss on the mean vectors of the VAE in conjunction with the standard evidence lower bound (ELBO) of the VAE. This approach, which we call the Triplet-based Variational Autoencoder (TVAE), allows us to capture more fine-grained information in the latent embedding. Our model is tested on the MNIST data set and achieves a high triplet accuracy of 95.60%, while the traditional VAE (Kingma & Welling, 2013) achieves a triplet accuracy of 75.08%.
["Metric Learning", "Variational Autoencoder", "Representation Learning", "Deep learning", "Semi-Supervised Learning"]
ABSTRACT

Deep metric learning has been demonstrated to be highly effective in learning semantic representations and encoding information that can be used to measure data similarity, by relying on the embedding learned from metric learning. At the same time, the variational autoencoder (VAE) has been widely used for approximate inference and has proved to perform well for directed probabilistic models. However, for the traditional VAE, data label or feature information is intractable. Similarly, traditional representation learning approaches fail to represent many salient aspects of the data. In this project, we propose a novel integrated framework to learn latent embeddings in a VAE by incorporating deep metric learning. The features are learned by optimizing a triplet loss on the mean vectors of the VAE in conjunction with the standard evidence lower bound (ELBO) of the VAE. This approach, which we call the Triplet-based Variational Autoencoder (TVAE), allows us to capture more fine-grained information in the latent embedding. Our model is tested on the MNIST data set and achieves a high triplet accuracy of 95.60%, while the traditional VAE (Kingma & Welling, 2013) achieves a triplet accuracy of 75.08%.

1 INTRODUCTION

Learning semantic similarity between pairs of images is a core part of visual competence and learning. When applied to proper embeddings of input data, similarity metric functions such as Euclidean distance, Mahalanobis distance, and cosine similarity result in superior metrics for similarity measurement and reduce many complex classification problems to simple nearest-neighbor problems. But these same similarity metric functions perform poorly when applied to raw, complex input datasets. Image embeddings learned as part of a larger classification task using deep nets have various practical limitations in several scenarios. In extreme classification problems (Choromanska et al., 2013; Bengio et al., 2010), where the number of possible categories is very large or possibly unknown, conventional classification learning approaches are essentially useless, since training examples for each class become scarce, if not totally unavailable. Hence, a new line of approach, namely metric learning (Schroff et al., 2015; Oh Song et al., 2016; Huang & Peng, 2017), has gained much popularity for its ability to learn image embeddings directly using the concept of relative distances rather than relying on specific category information. This way, it is able to learn a metric space where nearest-neighbor-based methods naturally give superior performance due to the higher-quality representation of input images in the learned embedding space. This approach has the potential to improve the way generative models such as Variational Autoencoders (Kingma & Welling, 2013; Rezende et al., 2014) are learned. While the VAE can perform extremely efficient approximate inference in latent Gaussian models, the latent embedding space it learns lacks many salient aspects of the original data. Motivated by the Triplet Network explained in Hoffer & Ailon (2015), in this project we propose a new architecture and a loss function for training VAEs, which is capable of two tasks at the same time: learning latent image representations with fine-grained information and doing stochastic inference.

*These two authors contributed equally

Figure 1: Model overview. As input, a triplet of digit images (7, 7, 5) is given to three identical encoder networks.
The mean latent vectors of the three input images are used to calculate the triplet loss, and the images reconstructed by the identical decoders are used to calculate the reconstruction error.

2 TRIPLET-BASED VARIATIONAL AUTOENCODER

Our proposed hybrid model in Fig. 1 is motivated as a way to improve the VAE so that it can learn a latent representation enriched with more fine-grained information. To achieve this, we optimize the network by simultaneously minimizing the upper bound on the expected negative log-likelihood of the data and the triplet loss.

The encoder in the VAE encodes an image $x$ to a latent vector $z = \mathrm{Encoder}(x) \sim q(z|x)$. The decoder decodes the latent vector $z$ back to an image $\hat{x} = \mathrm{Decoder}(z) \sim p(x|z)$. To regularize the encoder, the VAE imposes a prior over the latent distribution, $p(z)$. The VAE loss consists of two parts: the reconstruction loss and the KL divergence loss. The reconstruction loss $L_{rec} = -\mathbb{E}_{q(z|x)}[\log p(x|z)]$ is the negative expected log-likelihood of the observations in $x$. The KL divergence loss $L_{KL} = \mathrm{KL}[\,q(z|x)\,\|\,p(z)\,]$ characterizes the distance between the distribution $q(z|x)$ and the prior distribution.

In each iteration of training, the input triplet $(x_a, x_p, x_n)$ is randomly sampled from the training set in such a way that the anchor $x_a$ is more similar to the positive $x_p$ than to the negative $x_n$. The triplet of three images is then fed into the encoder network simultaneously to get the mean latent embeddings $f(x_a)$, $f(x_p)$ and $f(x_n)$. We then define a loss function $L_{triplet}(\cdot)$ over triplets to model the similarity structure over the images, as in Wang et al. (2014). The triplet loss can be expressed as
$$L_{triplet}(x_a, x_p, x_n) = \max\{0,\; D(x_a, x_p) - D(x_a, x_n) + m\}, \quad (1)$$
where $D(x_i, x_j) = \lVert f(x_i) - f(x_j) \rVert_2$ is the Euclidean distance between the mean latent vectors of images $x_i$ and $x_j$, and $m$ is the threshold margin. Thus our final loss function for an input triplet is given by
$$L_{TVAE} = L_{rec} + L_{KL} + L_{triplet}. \quad (2)$$
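As an illustration of Eqs. (1) and (2), the sketch below computes the combined TVAE loss for one triplet, assuming an `encoder` that returns the mean and log-variance of $q(z|x)$ and a `decoder` mapping $z$ back to image space; the interface names, the reduction choices, and the default margin are our assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def tvae_loss(x_a, x_p, x_n, encoder, decoder, margin=1.0):
    mean_vectors, elbo_terms = [], 0.0
    for x in (x_a, x_p, x_n):
        mu, logvar = encoder(x)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Pixel-to-pixel L2 reconstruction loss (L_rec).
        recon = F.mse_loss(decoder(z), x, reduction='sum')
        # KL divergence from N(mu, sigma^2) to the N(0, I) prior (L_KL).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        elbo_terms = elbo_terms + recon + kl
        mean_vectors.append(mu)
    f_a, f_p, f_n = mean_vectors
    # Euclidean distances D between mean latent vectors, per example.
    d_ap = (f_a - f_p).pow(2).sum(dim=1).sqrt()
    d_an = (f_a - f_n).pow(2).sum(dim=1).sqrt()
    # Eq. (1): hinge on the distance gap with margin m.
    triplet = F.relu(d_ap - d_an + margin).sum()
    # Eq. (2): L_TVAE = L_rec + L_KL + L_triplet.
    return elbo_terms + triplet
```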
3 EXPERIMENTS

We focus our experiments on the preservation of semantic structure in the learned latent embedding and on image-generation ability compared to the original VAE in Kingma & Welling (2013). For experiments on MNIST (LeCun et al., 1998), we adopted a simple network structure with two fully connected layers as encoder and decoder, and used a pixel-to-pixel $L_2$ distance loss function as the reconstruction loss. The dimension of the latent embedding space was 20.

Table 1: Triplet accuracy on MNIST

Model                            Triplet Accuracy
VAE (Kingma & Welling, 2013)     75.08%
Triplet VAE                      95.60%

4 RESULTS

We visually explore the learned embedding distribution for the mean vector. With the additional triplet loss term, the mean vectors from different groups are more compactly clustered, as shown in Fig. 2b. On the other hand, without the added triplet loss, the image clusters are less compact and seem to spread out in the spatial space, as seen in Fig. 2a. In this case, we also observe that images from one class are more likely to be divided into multiple small clusters, and that images from different clusters overlap with each other more often.

(a) VAE (Kingma & Welling, 2013)  (b) Triplet-based VAE
Figure 2: t-SNE projection of the latent mean vectors for the MNIST test dataset.

In order to evaluate the structure quality in terms of the preserved relative distances among different classes, we analyze the learned latent embeddings of unseen triplets. In Table 1 we report the triplet accuracy, defined as the percentage of triplets that incur a loss of zero in Eq. 1. We see that using the TVAE, for 95.60% of test triplets we obtain learned latent embeddings that maintain the relative distances among classes. On the other hand, the traditional VAE preserves these relative distances for only 75.08% of test triplets.

Figure 3: Comparison of reconstructed images from the MNIST dataset. The first row shows input images from the MNIST test set. The second row shows the reconstructed images generated by the plain VAE. The third row shows the reconstructed images generated by the TVAE.

5 DISCUSSION

Triplet-based Variational Autoencoders (TVAEs) provide a new set of tools for learning latent embeddings and performing approximate inference that leverage both traditional VAE and deep metric learning techniques. By incorporating a triplet constraint in the learning process, TVAEs can learn an interpretable latent representation that preserves the semantic structure of the original dataset. Our method provides an initial framework for learning latent embeddings that can encode various notions of similarity. We demonstrate that the TVAE generates samples of as high quality as the traditional VAE while encoding more semantic structural information in the latent embedding. Our future work will include analysis of medical datasets.

ACKNOWLEDGMENTS

This work was supported in part by grants from the National Cancer Institute, National Institutes of Health, 1U01CA190214 and 1U01CA187947.
B1hgWa-Fz
A possibly interesting addition to VAE. Experiments are insufficient.
3: Clear rejection
The paper proposes adding a supervised signal to the VAE objective, meant to improve the usability of the latent space vectors. Specifically, the proposal is a triplet-based loss first used by Frome et al. (2007), which has since been widely used in metric learning (e.g. Weinberger & Saul 2009). This loss relies on weak supervision telling us "A is more similar to B than it is to C". The strength of this loss function is in cases where direct labels are unavailable, but it can be used in fully labeled datasets as well, as done in this paper. The paper would be improved by finding cases where direct supervision is not available but relative comparisons are, e.g. ranking- or preference-feedback tasks. Pros: 1. Possibly interesting idea for making the latent space vectors of a VAE more useful. Cons: 1. Supervised or semi-supervised VAEs could just as well create better representations of the classes. There is no comparison with them, nor in fact any mention of this possibility. 2. Tying in with the previous point, the TVAE has strictly more information than the VAE, since it has access to label information through the triplet loss, making comparisons less meaningful. 3. The triplet loss in itself is not interesting at all. I would like to see experiments comparing performance on tasks for which a good metric on the latent space is useful. For example, NN classification using latent vectors, one-shot learning of new classes, or interpretability tasks.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title TVAE: Triplet-Based Variational Autoencoder using Metric Learning ### Paper Abstract Deep metric learning has been demonstrated to be highly effective in learning semantic representation and encoding information that can be used to measure data similarity, by relying on the embedding learned from metric learning. At the same time, the variational autoencoder (VAE) has widely been used to approximate inference and proved to have good performance for directed probabilistic models. However, for the traditional VAE, data label or feature information is intractable. Similarly, traditional representation learning approaches fail to represent many salient aspects of the data. In this project, we propose a novel integrated framework to learn latent embedding in a VAE by incorporating deep metric learning. The features are learned by optimizing a triplet loss on the mean vectors of the VAE in conjunction with the standard evidence lower bound (ELBO) of the VAE. This approach, which we call Triplet-based Variational Autoencoder (TVAE), allows us to capture more fine-grained information in the latent embedding. Our model is tested on the MNIST data set and achieves a high triplet accuracy of 95.60%, while the traditional VAE (Kingma & Welling, 2013) achieves a triplet accuracy of 75.08%. ### Paper Keywords ["Metric Learning", "Variational Autoencoder", "Representation Learning", "Deep learning", "Semi-Supervised Learning"] ### Paper Content ABSTRACT: Deep metric learning has been demonstrated to be highly effective in learning semantic representation and encoding information that can be used to measure data similarity, by relying on the embedding learned from metric learning. At the same time, the variational autoencoder (VAE) has widely been used to approximate inference and proved to have good performance for directed probabilistic models. However, for the traditional VAE, data label or feature information is intractable. Similarly, traditional representation learning approaches fail to represent many salient aspects of the data. In this project, we propose a novel integrated framework to learn latent embedding in a VAE by incorporating deep metric learning. The features are learned by optimizing a triplet loss on the mean vectors of the VAE in conjunction with the standard evidence lower bound (ELBO) of the VAE. This approach, which we call Triplet-based Variational Autoencoder (TVAE), allows us to capture more fine-grained information in the latent embedding. Our model is tested on the MNIST data set and achieves a high triplet accuracy of 95.60%, while the traditional VAE (Kingma & Welling, 2013) achieves a triplet accuracy of 75.08%.

1 INTRODUCTION

Learning semantic similarity between pairs of images is a core part of visual competence and learning. When applied on a proper embedding of the input data, similarity metric functions such as Euclidean distance, Mahalanobis distance, cosine similarity, etc. result in a superior metric for similarity measure and reduce many complex classification problems to simple nearest-neighbor problems. But these same similarity metric functions perform poorly when applied on raw complex input datasets. Image embeddings learned as part of a larger classification task using deep nets have various practical limitations for several scenarios.
In extreme classification problems (Choromanska et al., 2013; Bengio et al., 2010), where the number of possible categories is very large or possibly unknown, conventional classification learning approaches are essentially useless, since the availability of training examples for each class becomes scarce, if not totally unavailable. Hence, a new line of approach, namely metric learning (Schroff et al., 2015; Oh Song et al., 2016; Huang & Peng, 2017), has gained much popularity for its ability to learn image embedding directly using the concept of relative distances rather than relying on specific category information. This way, it is able to learn a metric space where nearest-neighbor-based methods naturally give superior performance due to the higher-quality representation of input images in the learned embedding space. This approach has the potential to improve the way generative models such as Variational Autoencoders (Kingma & Welling, 2013; Rezende et al., 2014) are learned. While the VAE can perform extremely efficient approximate inference in latent Gaussian models, the latent embedding space it learns lacks many salient aspects of the original data. Motivated by the Triplet Network as explained in Hoffer & Ailon (2015), in this project we propose a new architecture and a loss function for training a VAE, which is capable of two tasks at the same time - learning latent image representations with fine-grained information and doing stochastic inference. (These two authors contributed equally.)

Figure 1: Model overview. As input, a triplet of digit images (7,7,5) is given to three identical encoder networks. The mean latent vectors of the three input images are used to calculate the triplet loss, and the reconstructed images from the identical decoders are used to calculate the reconstruction error.

2 TRIPLET-BASED VARIATIONAL AUTOENCODER

Our proposed hybrid model in Fig. 1 is motivated as a way to improve the VAE so that it can learn a latent representation enriched with more fine-grained information. To achieve this, we optimize the network by minimizing the upper bound on the expected negative log-likelihood of the data and the triplet loss simultaneously.

The encoder in the VAE encodes an image $x$ to a latent vector $z = \mathrm{Encoder}(x) \sim q(z|x)$. The decoder decodes the latent vector $z$ back to an image $\hat{x} = \mathrm{Decoder}(z) \sim p(x|z)$. To regularize the encoder, the VAE imposes a prior over the latent distribution, $p(z)$. The VAE loss consists of two parts: the reconstruction loss and the KL-divergence loss. The reconstruction loss $\mathcal{L}_{rec} = -\mathbb{E}_{q(z|x)}[\log p(x|z)]$ is the negative expected log-likelihood of the observations in $x$. The KL-divergence loss $\mathcal{L}_{KL} = \mathrm{KL}[q(z|x) \,\|\, p(z)]$ characterizes the distance between the distribution $q(z|x)$ and the prior distribution.

In each iteration of training, the input triplet $(x_a, x_p, x_n)$ is randomly sampled from the training set in such a way that the anchor $x_a$ is more similar to the positive $x_p$ than to the negative $x_n$. The triplet of three images is fed into the encoder network simultaneously to get the mean latent embeddings $f(x_a)$, $f(x_p)$ and $f(x_n)$. We then define a loss function $\mathcal{L}_{triplet}(\cdot)$ over triplets to model the similarity structure over the images, as in Wang et al. (2014). The triplet loss can be expressed as

$\mathcal{L}_{triplet}(x_a, x_p, x_n) = \max\{0,\, D(x_a, x_p) - D(x_a, x_n) + m\},$ (1)

where $D(x_i, x_j) = \|f(x_i) - f(x_j)\|_2$ is the Euclidean distance between the mean latent vectors of images $x_i$ and $x_j$, and $m$ is the threshold margin.
Thus our final loss function for an input triplet is given by:

$\mathcal{L}_{TVAE} = \mathcal{L}_{rec} + \mathcal{L}_{KL} + \mathcal{L}_{triplet}$ (2)

3 EXPERIMENTS

We focus our experiments on the preservation of semantic structure in the learned latent embedding and on image generation ability compared to the original VAE of Kingma & Welling (2013). For experiments on MNIST (LeCun et al., 1998), we adopted a simple network structure with two fully connected layers as encoder and decoder, and used a pixel-to-pixel $L_2$ distance as the reconstruction loss. The dimension of the latent embedding space was 20.

Table 1: Triplet accuracy on MNIST
Model | Triplet Accuracy
VAE (Kingma & Welling, 2013) | 75.08%
Triplet VAE | 95.60%

4 RESULTS

We visually explore the learned embedding distribution for the mean vector. With the additional triplet loss term, the mean vectors from different groups are more compactly clustered, as shown in Fig. 2b. On the other hand, without the added triplet loss, the image clusters are less compact and spread out in the latent space, as seen in Fig. 2a. In this case, we also observe that images from one class are more likely to be divided into multiple small clusters, and images from different clusters overlap with each other more often.

Figure 2: t-SNE projection of the latent mean vector for the MNIST test dataset. (a) VAE (Kingma & Welling, 2013); (b) Triplet-based VAE.

In order to evaluate the structure quality in terms of preserved relative distances among different classes, we analyze the learned latent embedding of unseen triplets. In Table 1 we calculate triplet accuracy, which is defined as the percentage of triplets that incur a loss of zero in Eq. 1. We see that using TVAE, for 95.60% of test triplets we get a learned latent embedding that maintains the relative distances among classes. On the other hand, the traditional VAE preserves these relative distances for only 75.08% of test triplets.

Figure 3: Comparison of reconstructed images from the MNIST dataset. The first row is the input images from the MNIST test set. The second row is the reconstructed images generated by the plain VAE. The third row is the reconstructed images generated by the TVAE.

5 DISCUSSION

Triplet-based Variational Autoencoders (TVAEs) provide a new set of tools for learning latent embeddings and performing approximate inference that leverage both traditional VAE and deep metric learning techniques. By incorporating a triplet constraint in the learning process, TVAEs can learn an interpretable latent representation that preserves the semantic structure of the original dataset. Our method provides an initial framework for learning latent embeddings able to encode various notions of similarity. We demonstrate that TVAE generates samples of as high quality as the traditional VAE while encoding more semantic structural information in the latent embedding. Our future work will include analysis of medical datasets.

ACKNOWLEDGMENTS

This work was supported in part by grants from the National Cancer Institute, National Institutes of Health, 1U01CA190214 and 1U01CA187947.<|im_end|> <|im_start|>assistant ### Review Title A possibly interesting addition to VAE. Experiments are insufficient. ### Review Text The paper proposes adding a supervised signal to the VAE objective, meant to improve the usability of the latent space vectors. Specifically, the proposal is a triplet-based loss first used by Frome et al. (2007), which has since been widely used in metric learning (e.g. Weinberger & Saul 2009).
This loss relies on weak supervision telling us "A is more similar to B than it is to C". The strength of this loss function is in cases where direct labels are unavailable, but it can be used in fully labeled datasets as well, as done in this paper. The paper would be improved by finding cases where direct supervision is not available but relative comparisons are, e.g. ranking- or preference-feedback tasks. Pros: 1. Possibly interesting idea for making the latent space vectors of a VAE more useful. Cons: 1. Supervised or semi-supervised VAEs could just as well create better representations of the classes. There is no comparison with them, nor in fact any mention of this possibility. 2. Tying in with the previous point, the TVAE has strictly more information than the VAE, since it has access to label information through the triplet loss, making comparisons less meaningful. 3. The triplet loss in itself is not interesting at all. I would like to see experiments comparing performance on tasks for which a good metric on the latent space is useful. For example, NN classification using latent vectors, one-shot learning of new classes, or interpretability tasks. ### Review Rating 3: Clear rejection ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
B1gd0nEFwS
ICLR.cc/2020/Conference
2020
Universal Source-Free Domain Adaptation
["Jogendra Nath Kundu", "Naveen Venkat", "Rahul M V", "R. Venkatesh Babu"]
There is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift. Existing domain adaptation (DA) approaches are not equipped for practical DA scenarios as a result of their reliance on the knowledge of source-target label-set relationship (e.g. Closed-set, Open-set or Partial DA). Furthermore, almost all the prior unsupervised DA works require coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation. Devoid of such highly impractical assumptions, we propose a novel two-stage learning process. Initially, in the procurement-stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift. To achieve this, we enhance the model's ability to reject out-of-source distribution samples by leveraging the available source data, in a novel generative classifier framework. Subsequently, in the deployment-stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples. To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighting mechanism, named the Source Similarity Metric (SSM). A thorough evaluation shows the practical usability of the proposed learning framework with superior DA performance even over state-of-the-art source-dependent approaches.
["unsupervised domain adaptation", "knowledge transfer", "source-free adaptation"]
ABSTRACT

There is a strong incentive to develop versatile learning techniques that can transfer the knowledge of class-separability from a labeled source domain to an unlabeled target domain in the presence of a domain-shift. Existing domain adaptation (DA) approaches are not equipped for practical DA scenarios as a result of their reliance on the knowledge of the source-target label-set relationship (e.g. Closed-set, Open-set or Partial DA). Furthermore, almost all the prior unsupervised DA works require coexistence of source and target samples even during deployment, making them unsuitable for incremental, real-time adaptation. Devoid of such highly impractical assumptions, we propose a novel two-stage learning process. Initially, in the procurement stage, the objective is to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift. To achieve this, we enhance the model's ability to reject out-of-source-distribution samples by leveraging the available source data, in a novel generative classifier framework. Subsequently, in the deployment stage, the objective is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps, with no access to the previously seen source samples. To achieve this, in contrast to the usage of complex adversarial training regimes, we define a simple yet effective source-free adaptation objective by utilizing a novel instance-level weighting mechanism, named the Source Similarity Metric (SSM). A thorough evaluation shows the practical usability of the proposed learning framework, with superior DA performance even over state-of-the-art source-dependent approaches.

1 INTRODUCTION

Deep learning models have proven to be highly successful over a wide variety of tasks (Krizhevsky et al., 2012; Ren et al., 2015). However, a majority of these remain heavily dependent on access to a huge amount of labeled samples to achieve a reliable level of generalization. A recognition model trained on a certain distribution of labeled samples (source domain) often fails to generalize (Chen et al., 2017) when deployed in a new environment (target domain) in the presence of a discrepancy in the input distribution (Shimodaira, 2000). Domain adaptation (DA) algorithms seek to minimize this discrepancy either by learning a domain-invariant feature representation (Long et al., 2015; Kumar et al., 2018; Ganin et al., 2016; Tzeng et al., 2015), or by learning independent domain transformations (Long et al., 2016) to a common latent representation through adversarial distribution matching (Tzeng et al., 2017; Nath Kundu et al., 2018), in the absence of target label information.

Most of the existing approaches (Zhang et al., 2018c; Tzeng et al., 2017) assume a common label-set shared between the source and target domains (i.e. $\mathcal{C}_s = \mathcal{C}_t$), which is often regarded as Closed-set DA (see Fig. 1). Though this assumption helps to analyze various insights of DA algorithms, such an assumption rarely holds true in real-world scenarios. Recently, researchers have independently explored two broad adaptation settings by partly relaxing the above assumption. In the first kind, Partial DA (Zhang et al., 2018b; Cao et al., 2018a;b), the target label space is considered a subset of the source label space (i.e. $\mathcal{C}_t \subset \mathcal{C}_s$). This setting is more suited for large-scale universal source datasets, which will almost always subsume the label-set of a wide range of target domains.
However, the availability of such a universal source is highly questionable for a wide range of input domains and tasks. In the second kind, regarded as Open-set DA (Baktashmotlagh et al., 2019; Ge et al., 2017), the target label space is considered a superset of the source label space (i.e. $\mathcal{C}_t \supset \mathcal{C}_s$). The major challenge in this setting is attributed to the detection of target samples from the unobserved categories in a fully-unsupervised scenario. Apart from the above two extremes, certain works define a partly mixed scenario by allowing a "private" label-set for both source and target domains (i.e. $\mathcal{C}_s \setminus \mathcal{C}_t \neq \emptyset$ and $\mathcal{C}_t \setminus \mathcal{C}_s \neq \emptyset$), but with extra supervision such as few-shot labeled data (Luo et al., 2017) or access to the knowledge of common categories (Panareda Busto & Gall, 2017).

Figure 1: Various label-set relationships (category-gap): Closed-set, Partial, Open-set, and ours (Universal), illustrated via shared and private source/target classes.

Most of the prior approaches consider each scenario in isolation and propose independent solutions. Thus, they require access to the knowledge of the label-set relationship (or category-gap) to carefully choose a DA algorithm that would be suitable for the problem at hand. Furthermore, all the prior unsupervised DA works require coexistence of source and target samples even during deployment, hence they are not source-free. This is highly impractical, as labeled source data may not be accessible after deployment due to several reasons such as privacy concerns, restricted access to proprietary data, accidental loss of source data, or other computational limitations in real-time deployment scenarios.

Acknowledging the aforementioned shortcomings, we propose one of the most convenient DA frameworks, which is ingeniously equipped to address source-free DA for all kinds of label-set relationships, without any prior knowledge of the associated category-gap (i.e. universal DA). We not only focus on identifying the key complications associated with the challenging problem setting, but also devise insightful ideas to tackle such complications by adopting learning techniques much different from the available DA literature. This leads us to realize a holistic solution which achieves superior DA performance even over prior source-dependent approaches.

2 RELATED WORK

We briefly review the available domain adaptation methods under the three major divisions according to the assumption on the label-set relationship. a) Closed-set DA. The cluster of previous works under this setting focuses on minimizing the domain gap at some intermediate feature level, either by minimizing well-defined statistical distance functions (Wang & Schneider, 2014; Duan et al., 2012; Zhang et al., 2013; Saenko et al., 2010) or by formalizing it as an adversarial distribution matching problem (Tzeng et al., 2017; Kang et al., 2018; Long et al., 2018; Hu et al., 2018; Hoffman et al., 2018) inspired from the Generative Adversarial Nets (Goodfellow et al., 2014). Certain prior works (Sankaranarayanan et al., 2018; Zhu et al., 2017; Hoffman et al., 2018) use the GAN framework to explicitly generate target-like images translated from the source image samples, which is also regarded as pixel-level adaptation (Bousmalis et al., 2017), in contrast to other feature-level adaptation works (Nath Kundu et al., 2018; Tzeng et al., 2017; Long et al., 2015; 2016). b) Partial DA. Focusing on Partial DA, Cao et al.
(2018a) proposed to achieve adversarial class-level matching by utilizing multiple domain discriminators furnishing class-level and instance-level weighting for individual data samples. Zhang et al. (2018b) proposed to utilize importance weights for source samples depending on their similarity to the target domain data, using an auxiliary discriminator. To effectively address the problem of negative transfer (Wang et al., 2019), Cao et al. (2018b) employed a single discriminator to achieve both adversarial adaptation and class-level weighting of source samples. c) Open-set DA. Saito et al. (2018b) proposed a more general open-set adaptation setting without accessing the knowledge of the source-private label-set, in contrast to the prior work (Panareda Busto & Gall, 2017). They extended the source classifier to accommodate an additional "unknown" class, which is trained adversarially against the other source classes. Universal DA. You et al. (2019) proposed Universal DA, which requires no prior knowledge of the label-set relationship, similar to the proposed setting, but considers access to both source and target samples during adaptation.

3 PROPOSED APPROACH

The problem setting for source-free domain adaptation is broadly divided into a two-stage process.

a) Procurement stage. In this stage, we are given full access to the labeled samples of the source domain, $\mathcal{D}_s = \{(x_s, y_s) : x_s \sim p,\ y_s \in \mathcal{C}_s\}$, where $p$ is the distribution of source samples and $\mathcal{C}_s$ denotes the label-set of the source domain. Here, the objective is to equip the model for the second stage, i.e. the Deployment stage, in the presence of a discrepancy in the distribution of input target samples. To achieve this we rely on an artificially generated negative dataset, $\mathcal{D}_n = \{(x_n, y_n) : x_n \sim p_n,\ y_n \in \mathcal{C}_n\}$, where $p_n$ is the distribution of negative source samples such that $\mathcal{C}_n \cap \mathcal{C}_s = \emptyset$.

Figure 2: Latent-space cluster arrangement during adaptation (see Section 3.1.1): classical Open-set and Partial DA rearrange source clusters and class boundaries to accommodate target clusters, whereas the proposed approach learns tight class boundaries with intra-class compactness and inter-class separability using simulated negative classes, keeping the source clusters frozen.

b) Deployment stage. After obtaining a trained model from the Procurement stage, the model will have its first encounter with the unlabeled target domain samples from the deployed environment. We denote the unlabeled target data by $\mathcal{D}_t = \{x_t : x_t \sim q\}$, where $q$ is the distribution of target samples. Note that access to the source dataset $\mathcal{D}_s$ from the previous stage is fully restricted during adaptation in the Deployment stage. Suppose that $\mathcal{C}_t$ is the "unknown" label-set of the target domain. We define the common label space between the source and target domains as $\mathcal{C} = \mathcal{C}_s \cap \mathcal{C}_t$. The private label-sets of the source and the target domains are represented as $\overline{\mathcal{C}}_s = \mathcal{C}_s \setminus \mathcal{C}_t$ and $\overline{\mathcal{C}}_t = \mathcal{C}_t \setminus \mathcal{C}_s$ respectively.
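For concreteness, these label-set quantities reduce to plain set operations. A tiny illustrative sketch, where the class names are hypothetical placeholders rather than anything from the paper:

```python
# Hypothetical label-sets; only the set relations matter.
C_s = {"bike", "mug", "laptop", "shelf"}        # source label-set
C_t = {"bike", "mug", "monitor", "keyboard"}    # (unknown) target label-set

C = C_s & C_t              # shared label space, C = C_s ∩ C_t
C_s_private = C_s - C_t    # source-private classes
C_t_private = C_t - C_s    # target-private classes (to be flagged "unknown")
```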
3.1 LEARNING IN THE PROCUREMENT STAGE

3.1.1 Challenges. The available DA techniques heavily rely on the adversarial discriminative strategy (Tzeng et al., 2017; Saito et al., 2018a). Thus, they require access to the source samples to reliably characterize the source domain distribution. Moreover, these approaches are not equipped to operate in a source-free setting. Though a generative model can be used as a memory network (Sankaranarayanan et al., 2018; Bousmalis et al., 2017) to realize source-free adaptation, such a solution is not scalable for large-scale source datasets (e.g. ImageNet (Russakovsky et al., 2015)), as it introduces unnecessary extra parameters in addition to the associated training difficulties (Salimans et al., 2016). This calls for a fresh analysis of the requirements, beyond the solutions found in the literature.

In a general DA scenario, with access to source samples in the Deployment stage (specifically for Open-set or Partial DA), a widely adopted approach is to learn domain-invariant features. In such approaches the placement of source category clusters is learned in the presence of unlabeled target samples, which obliquely provides supervision regarding the relationship between $\mathcal{C}_s$ and $\mathcal{C}_t$. For instance, in case of Open-set DA, the source clusters may have to disperse to make space for the clusters from the target-private set $\overline{\mathcal{C}}_t$ (see Fig. 2a to 2b). Similarly, in Partial DA, the source clusters may have to rearrange themselves to keep all the target shared clusters ($\mathcal{C} = \mathcal{C}_t$) separated from the source-private set $\overline{\mathcal{C}}_s$ (see Fig. 2a to 2c). However, in a complete source-free framework, we do not have the liberty to leverage such information, as source and target samples never coexist during training. Motivated by the adversarial discriminative DA technique (Tzeng et al., 2017), we hypothesize that inculcating the ability to reject samples that are out of the source data distribution can facilitate future source-free domain alignment using this discriminatory knowledge. Therefore, in the Procurement stage the overarching objective is two-fold.

- Firstly, we must aim to learn a certain placement of source clusters best suited for all kinds of category-gap scenarios, acknowledging the fact that a source-free scenario does not allow us to modify the placement in the presence of target samples during adaptation (see Fig. 2d).
- Secondly, the learned embedding must have the ability to reject out-of-distribution samples, which is an essential requirement for unsupervised adaptation in the presence of domain-shift.

3.1.2 Solution. In the presence of source data, we aim to restrain the model's domain and category bias, which is generally inculcated as a result of over-confident supervised learning paradigms (see Fig. 4A). To achieve this goal, we adopt two regularization strategies, viz. i) regularization via generative modeling and ii) utilization of a labeled simulated negative source dataset to generalize for the latent regions not covered by the given positive source samples (see Fig. 4C).

How to configure the negative source dataset? While configuring $\mathcal{D}_n$, the following key properties have to be met. Firstly, latent clusters formed by the negative categories must lie in between the latent clusters of positive source categories, to enable a higher degree of intra-class compactness with inter-class separability (Fig. 4C). Secondly, the negative source samples must enrich the source domain
distribution without forming a new domain by themselves. This rules out the use of Mixup (Zhang et al., 2018a) or adversarial noise (Shu et al., 2018) as negative samples in this scenario. Thus, we propose the following two ways to synthesize the desired negative source dataset.

Figure 3: A) Simulated labeled negative samples using randomly created spline segments; B) Proposed architecture of the two-stage method, with a frozen backbone CNN, feature extractors, and a classifier producing softmax output probabilities and the SSM; C) t-SNE of the latent space, showing that the Procurement stage yields compact source clusters on experimental data.

Figure 4: Achieving intra-class compactness and inter-class separability using the negative dataset $\mathcal{D}_n$: over-confident, over-discriminative supervised learning produces non-compact classifier boundaries (A, B), while the Procurement stage encourages intra-class compactness and inter-class separability as training progresses (C).

a) Image-composition as negative dataset $\mathcal{D}_n^{(a)}$. One of the key characteristics shared between the samples from the source and unknown target domains is the semantics of the local part-related features, specifically for image-based object recognition tasks. Relying on this assumption, we propose a systematic procedure to simulate the samples of $\mathcal{D}_n^{(a)}$ by randomly compositing local regions between a pair of images drawn from the positive source dataset $\mathcal{D}_s$ (see Fig. 3A and appendix, Algo. 2). Intuitively, composite samples $x_n$ created from image pairs of different source categories are expected to lie in between the two positive source clusters in the latent space, thereby introducing a combinatorial number of new class labels, i.e. $|\mathcal{C}_n| = \binom{|\mathcal{C}_s|}{2}$.

b) Latent-simulated negative dataset $\mathcal{D}_n^{(b)}$. As an alternative approach, in the absence of domain knowledge (e.g. non-image datasets, or tasks beyond image recognition such as pose estimation), we propose to sample virtual negative instances $u_n$ from the latent space which are away from the high-confidence regions (3-sigma) of the positive source clusters (Fig. 4B). For each negative sample, we assign a negative class label (one of $|\mathcal{C}_n| = \binom{|\mathcal{C}_s|}{2}$) corresponding to the pair of most confident source classes predicted by the classifier. Thus, we obtain $\mathcal{D}_n^{(b)} = \{(u_n, y_n) : u_n \sim p_{u_n},\ y_n \in \mathcal{C}_n\}$, where $p_{u_n}$ is the distribution of negative samples in the latent $u$-space (more details in appendix, Algo. 3).

Training procedure. The generative source classifier is divided into three stages: i) backbone model $M$, ii) feature extractor $F_s$, and iii) classifier $D$ (see Fig. 3B). The output of the backbone model is denoted as $v = M(x)$, where $x$ is drawn from either $\mathcal{D}_s$ or $\mathcal{D}_n$. Following this, the outputs of $F_s$ and $D$ are represented as $u$ and $d$ respectively. $D$ outputs a $K$-dimensional logit vector denoted as $d^{(k)}$ for $k = 1, 2, \dots, K$, with $K = |\mathcal{C}_s| + |\mathcal{C}_n|$. The individual class probabilities $\hat{y}^{(k)}$ are obtained by applying softmax over the logits, i.e. $\hat{y}^{(k)} = \exp(d^{(k)}) / \sum_{k=1}^{K} \exp(d^{(k)}) = \sigma^{(k)}(D \circ F_s \circ M(x))$.
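Before describing the class priors, here is a minimal PyTorch-style sketch of the latent-simulated negative sampling $\mathcal{D}_n^{(b)}$ described above. It is a simplified stand-in for the appendix's Algo. 3 (which we do not have access to): it assumes diagonal covariances and applies the 3-sigma test per dimension:

```python
import torch

def sample_latent_negatives(mus, sigmas, classifier_D, n, scale=4.0):
    """Sketch of D_n^(b): latent negatives away from all positive priors.

    mus, sigmas: [num_cls, d] per-class mean / std of F_s(M(x_s)).
    classifier_D: maps a latent u to K = |Cs|+|Cn| logits (assumed interface).
    Returns negatives u_n and their pair labels (i, j), i < j, which index
    one of the C(|Cs|, 2) negative classes.
    """
    d = mus.size(1)
    # Propose points broadly around the source embedding, then keep only
    # those outside the 3-sigma region of *every* positive class.
    u = mus.mean(0) + scale * sigmas.max(0).values * torch.randn(8 * n, d)
    dist = (u[:, None, :] - mus[None]).abs() / sigmas[None]  # [8n, cls, d]
    outside = (dist.max(dim=2).values > 3.0).all(dim=1)      # 3-sigma test
    u_n = u[outside][:n]
    # Label each negative by its two most confident positive classes.
    logits = classifier_D(u_n)[:, : mus.size(0)]
    top2 = logits.topk(2, dim=1).indices.sort(dim=1).values  # (i, j) pairs
    return u_n, top2
```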
Additionally, we define priors over only the positive source classes as $P(u_s|c_i) = \mathcal{N}(u_s;\, \mu_{c_i}, \Sigma_{c_i})$ for $i = 1, 2, \dots, |\mathcal{C}_s|$ at the intermediate embedding $u_s = F_s \circ M(x_s)$. Here, the parameters of the normal distributions are computed during training, as shown in line 10 of Algo. 1. A cross-entropy loss over these prior distributions is defined as $\mathcal{L}_p$ (line 7 in Algo. 1), to effectively enforce intra-class compactness with inter-class separability (progression from Fig. 4B to 4C). Motivated by the generative variational autoencoder (VAE) setup (Kingma & Welling, 2013), we introduce a feature decoder $G$, which aims to minimize the cyclic reconstruction loss selectively for the samples from the positive source categories $v_s$ and randomly drawn samples $u_r$ from the corresponding class priors (i.e. $\mathcal{L}_v$ and $\mathcal{L}_u$, line 6 in Algo. 1). This, along with a lower weightage $\alpha$ for the negative source categories (i.e. at the cross-entropy loss $\mathcal{L}_{CE}$, line 6 in Algo. 1), is incorporated to deliberately bias $F_s$ towards the positive source samples, considering the level of unreliability of the generated negative dataset.

Algorithm 1: Training algorithm in the Procurement stage
1: input: $(x_s, y_s) \in \mathcal{D}_s$, $(x_n, y_n) \in \mathcal{D}_n$; $\theta_{F_s}$, $\theta_D$, $\theta_G$: parameters of $F_s$, $D$ and $G$ respectively.
2: initialization: pretrain $\{F_s, D\}$ using cross-entropy loss on $(x_s, y_s)$, followed by initialization of the sample mean $\mu_{c_i}$ and covariance $\Sigma_{c_i}$ (at $u$-space) of $F_s \circ M(x_s)$ for $x_s$ from class $c_i$, $i = 1, 2, \dots, |\mathcal{C}_s|$.
3: for iter < MaxIter do
4:   $v_s = M(x_s)$; $u_s = F_s(v_s)$; $\hat{v}_s = G(u_s)$; $u_r \sim \mathcal{N}(\mu_{c_i}, \Sigma_{c_i})$ for $i = 1, 2, \dots, |\mathcal{C}_s|$; $\hat{u}_r = F_s \circ G(u_r)$
5:   $\hat{y}_s^{(k_s)} = \sigma^{(k_s)}(D \circ F_s \circ M(x_s))$, and $\hat{y}_n^{(k_n)} = \sigma^{(k_n)}(D \circ F_s \circ M(x_n))$, where $k_s$ and $k_n$ are the indices of the ground-truth labels $y_s$ and $y_n$ respectively.
6:   $\mathcal{L}_{CE} = -\log \hat{y}_s^{(k_s)} - \alpha \log \hat{y}_n^{(k_n)}$; $\mathcal{L}_v = |v_s - \hat{v}_s|$; $\mathcal{L}_u = |u_r - \hat{u}_r|$
7:   $\mathcal{L}_p = -\log\big(\exp(P(u_s|c_{k_s})) / \sum_{i=1}^{|\mathcal{C}_s|} \exp(P(u_s|c_i))\big)$, where $P(u_s|c_i) = \mathcal{N}(u_s;\, \mu_{c_i}, \Sigma_{c_i})$
8:   Update $\theta_{F_s}$, $\theta_D$, $\theta_G$ by minimizing $\mathcal{L}_{CE}$, $\mathcal{L}_v$, $\mathcal{L}_u$, and $\mathcal{L}_p$ alternately, using separate optimizers.
9:   if (iter % UpdateIter == 0) then
10:    Recompute the sample mean ($\mu_{c_i}$) and covariance ($\Sigma_{c_i}$) of $F_s \circ M(x_s)$ for $x_s$ from class $c_i$, $i = 1, 2, \dots, |\mathcal{C}_s|$. (For $\mathcal{D}_n^{(b)}$: generate fresh latent-simulated negative samples using the updated priors.)
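As a concrete illustration of line 7 of Algorithm 1, here is a minimal PyTorch-style sketch of $\mathcal{L}_p$. One assumption on our part: per-class Gaussian log-densities are used in place of raw densities inside the softmax, for numerical stability:

```python
import torch
import torch.nn.functional as F
from torch.distributions import MultivariateNormal

def prior_loss(u_s, labels, mus, covs):
    """Sketch of L_p: cross-entropy over class priors N(u; mu_ci, Sigma_ci)
    evaluated at the embedding u_s = F_s(M(x_s)).

    u_s: [batch, d]; labels: [batch] positive-class indices;
    mus: [cls, d]; covs: [cls, d, d] (recomputed every UpdateIter steps).
    """
    log_probs = torch.stack(
        [MultivariateNormal(mu, cov).log_prob(u_s)   # [batch] per class
         for mu, cov in zip(mus, covs)], dim=1)      # -> [batch, cls]
    # Softmax over the per-class density scores, then NLL of the true class.
    return F.cross_entropy(log_probs, labels)
```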
3.2 LEARNING IN THE DEPLOYMENT STAGE

3.2.1 Challenges. We hypothesize that the large number of negative source categories, along with the positive source classes, i.e. $\mathcal{C}_s \cup \mathcal{C}_n$, can be interpreted as a universal source dataset, which can subsume the label-set $\mathcal{C}_t$ of a wide range of target domains. Moreover, we seek to realize a unified adaptation algorithm which can work for a wide range of category-gaps. However, a forceful adaptation of target samples to positive source categories will cause target-private samples to be classified as an instance of the source-private or the common label-set, instead of being classified as "unknown", i.e. one of the negative categories in $\mathcal{C}_n$.

3.2.2 Solution. In contrast to domain-agnostic architectures (You et al., 2019; Cao et al., 2018a; Saito et al., 2018a), we resort to an architecture supporting domain-specific features (Tzeng et al., 2017), as we must avoid disturbing the placement of source clusters obtained from the Procurement stage. This is an essential requirement to retain the task-dependent knowledge gathered from the source dataset. Thus, we introduce a domain-specific feature extractor denoted as $F_t$, whose parameters are initialized from the fully trained $F_s$ (see Fig. 3B). Further, we aim to exploit the learned generative classifier from the Procurement stage to substitute for the separate ad-hoc networks (critic or discriminator) utilized by prior works (You et al., 2019; Cao et al., 2018b).

a) Source Similarity Metric (SSM). We define a weighting factor (the SSM) for each target sample $x_t$ as $w(x_t)$. A higher value of this metric indicates $x_t$'s similarity towards the positive source categories, specifically inclined towards the common label space $\mathcal{C}$. Similarly, a lower value of this metric indicates $x_t$'s similarity towards the negative source categories $\mathcal{C}_n$, showing its inclination towards the private target labels $\overline{\mathcal{C}}_t$. Let $\bar{p}_s$, $\bar{q}_t$ be the distributions of source and target samples with labels in $\overline{\mathcal{C}}_s$ and $\overline{\mathcal{C}}_t$ respectively. We define $p_c$ and $q_c$ to denote the distributions of samples from the source and target domains belonging to the shared label-set $\mathcal{C}$. Then, the SSM for the positive and negative source samples should lie at the two extremes, forming the following inequality:

$\mathbb{E}_{x_n \sim p_n} w(x_n) \leq \mathbb{E}_{x_t \sim \bar{q}_t} w(x_t) < \mathbb{E}_{x_t \sim q_c} w(x_t) < \mathbb{E}_{x_s \sim p_c} w(x_s) \leq \mathbb{E}_{x_s \sim \bar{p}_s} w(x_s)$ (1)

To formalize the SSM criterion we rely on the class probabilities defined at the output of the source model, only for the positive class labels, i.e. $\hat{y}^{(k)}$ for $k = 1, 2, \dots, |\mathcal{C}_s|$. Note that $\hat{y}^{(k)}$ is obtained by performing softmax over $|\mathcal{C}_s| + |\mathcal{C}_n|$ categories, as discussed in the Procurement stage. Finally, the SSM and its complement are defined as

$w(x_t) = \max_{i=1,\dots,|\mathcal{C}_s|} \exp(\hat{y}^{(i)})$, and $w'(x_t) = \max_{i=1,\dots,|\mathcal{C}_s|} \exp(1 - \hat{y}^{(i)})$ (2)

We hypothesize that the above definition will satisfy Eq. 1, as a result of the generative learning strategy adopted in the Procurement stage. In Eq. 2 the exponent is used to further amplify the separation between target samples from the shared set $\mathcal{C}$ and those from the private set $\overline{\mathcal{C}}_t$ (see Fig. 5A).

b) Source-free domain adaptation. To perform domain adaptation, the objective function aims to move the target samples with higher SSM values towards the clusters of positive source categories, and vice versa, in the frozen source embedding $u$-space (from the Procurement stage). To achieve this, the parameters of only the $F_t$ network are allowed to be trained in the Deployment stage. However, the decision of weighting the loss on target samples towards the positive or negative source clusters is computed using the source feature extractor $F_s$, i.e. the SSM in Eq. 2. We define the deployment model as $h = D \circ F_t \circ M(x_t)$ using the target feature extractor, with softmax predictions over $K$ categories obtained as $\hat{z}^{(k)} = \sigma^{(k)}(h)$. Thus, the primary loss function for adaptation is defined as

$\mathcal{L}_{d1} = -w(x_t) \log\big(\sum_{k=1}^{|\mathcal{C}_s|} \hat{z}^{(k)}\big) - w'(x_t) \log\big(\sum_{k=|\mathcal{C}_s|+1}^{|\mathcal{C}_s|+|\mathcal{C}_n|} \hat{z}^{(k)}\big)$ (3)

Additionally, in the absence of label information, there will be uncertainty in the predictions $\hat{z}^{(k)}$ as a result of distributed class probabilities. This leads to higher entropy for such samples. Entropy minimization (Grandvalet & Bengio, 2005; Long et al., 2016) is adopted in such scenarios to move the target samples close to the highly confident regions (i.e. the positive and negative cluster centers from the Procurement stage) of the classifier's feature space. However, it has to be done separately for the positive and negative source categories, based on the SSM values of individual target samples, to effectively distinguish the target-private set from the full target dataset. To achieve this, we define two different class probability vectors, separately for the positive and negative source classes, denoted as $\tilde{z}_s^{(i)} = \exp(h^{(i)}) / \sum_{j=1}^{|\mathcal{C}_s|} \exp(h^{(j)})$ and $\tilde{z}_n^{(i)} = \exp(h^{(i+|\mathcal{C}_s|)}) / \sum_{j=1}^{|\mathcal{C}_n|} \exp(h^{(j+|\mathcal{C}_s|)})$ respectively (see Fig. 3B).
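A minimal sketch of Eqs. 2-3, assuming `y_hat_pos` and `z_hat` are the softmax outputs computed as described above; we read $w'$ as the complement of the most confident positive-class probability (the entropy term of the next paragraph would be added analogously):

```python
import torch

def ssm_weights(y_hat_pos):
    """SSM (Eq. 2). y_hat_pos: [batch, |Cs|] probabilities of the positive
    classes, taken from a softmax over all |Cs|+|Cn| classes."""
    top = y_hat_pos.max(dim=1).values
    w = torch.exp(top)             # similarity to positive source classes
    w_comp = torch.exp(1.0 - top)  # complement, high for target-private
    return w, w_comp

def adaptation_loss_d1(z_hat, n_pos, w, w_comp):
    """Primary deployment loss L_d1 (Eq. 3).
    z_hat: [batch, |Cs|+|Cn|] softmax outputs of D(F_t(M(x_t)))."""
    p_pos = z_hat[:, :n_pos].sum(dim=1)   # mass on positive classes
    p_neg = z_hat[:, n_pos:].sum(dim=1)   # mass on negative classes
    return (-w * torch.log(p_pos) - w_comp * torch.log(p_neg)).mean()
```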
The entropy of the target samples in the positive and negative regimes of the source classifier is obtained as $H_s(x_t) = -\sum_{i=1}^{|\mathcal{C}_s|} \tilde{z}_s^{(i)} \log \tilde{z}_s^{(i)}$ and $H_n(x_t) = -\sum_{i=1}^{|\mathcal{C}_n|} \tilde{z}_n^{(i)} \log \tilde{z}_n^{(i)}$ respectively. Consequently, the entropy minimization loss is formalized as

$\mathcal{L}_{d2} = w(x_t)\, H_s(x_t) + w'(x_t)\, H_n(x_t)$ (4)

Thus, the final loss function for adapting the parameters of $F_t$ is presented as $\mathcal{L}_d = \mathcal{L}_{d1} + \beta \mathcal{L}_{d2}$. Here $\beta$ is a hyper-parameter controlling the importance of entropy minimization during adaptation.

4 EXPERIMENTS

We perform a thorough evaluation of the proposed source-free, universal domain adaptation framework against prior state-of-the-art models across multiple datasets. We also provide a comprehensive ablation study to establish the generalizability of the approach across a variety of label-set relationships and to justify the various model components.

4.1 EXPERIMENTAL SETUP

Datasets. For all the following datasets, we resort to the experimental settings in line with the recent work by You et al. (2019) (UAN). The Office-Home (Venkateswara et al., 2017) dataset consists of images from 4 different domains - Artistic (Ar), Clip-art (Cl), Product (Pr) and Real-world (Rw). Alphabetically, the first 10 classes are selected as $\mathcal{C}$, the next 5 classes as $\overline{\mathcal{C}}_s$, and the rest 50 as $\overline{\mathcal{C}}_t$. The VisDA2017 (Peng et al., 2018) dataset comprises 12 categories, with synthetic images as the source domain and natural images as the target domain, out of which the first 6 are chosen as $\mathcal{C}$, the next 3 as $\overline{\mathcal{C}}_s$ and the rest as $\overline{\mathcal{C}}_t$. The Office-31 (Saenko et al., 2010) dataset contains images from 3 distinct domains - Amazon (A), DSLR (D) and Webcam (W). We use the 10 classes shared by Office-31 and Caltech-256 (Gong et al., 2012) to construct the shared label-set $\mathcal{C}$ and alphabetically select the next 10 as $\overline{\mathcal{C}}_s$, with the remaining 11 classes contributing to $\overline{\mathcal{C}}_t$. To evaluate scalability, ImageNet-Caltech is also considered, with 84 common classes in line with the setting in You et al. (2019).

Table 1: Average per-class accuracy ($T_{avg}$) for universal-DA tasks on the Office-Home dataset (with $|\mathcal{C}| / |\mathcal{C}_s \cup \mathcal{C}_t| = 0.15$). Scores for the prior works are directly taken from UAN (You et al., 2019). The last two rows are source-free during adaptation.

Method | Ar→Cl | Ar→Pr | Ar→Rw | Cl→Ar | Cl→Pr | Cl→Rw | Pr→Ar | Pr→Cl | Pr→Rw | Rw→Ar | Rw→Cl | Rw→Pr | Avg
ResNet (He et al., 2016) | 59.37 | 76.58 | 87.48 | 69.86 | 71.11 | 81.66 | 73.72 | 56.30 | 86.07 | 78.68 | 59.22 | 78.59 | 73.22
IWAN (Zhang et al., 2018b) | 52.55 | 81.40 | 86.51 | 70.58 | 70.99 | 85.29 | 74.88 | 57.33 | 85.07 | 77.48 | 59.65 | 78.91 | 73.39
PADA (Zhang et al., 2018b) | 39.58 | 69.37 | 76.26 | 62.57 | 67.39 | 77.47 | 48.39 | 35.79 | 79.60 | 75.94 | 44.50 | 78.10 | 62.91
ATI (Busto et al., 2017) | 52.90 | 80.37 | 85.91 | 71.08 | 72.41 | 84.39 | 74.28 | 57.84 | 85.61 | 76.06 | 60.17 | 78.42 | 73.29
OSBP (Saito et al., 2018b) | 47.75 | 60.90 | 76.78 | 59.23 | 61.58 | 74.33 | 61.67 | 44.50 | 79.31 | 70.59 | 54.95 | 75.18 | 63.90
UAN (You et al., 2019) | 63.00 | 82.83 | 87.85 | 76.88 | 78.70 | 85.36 | 78.22 | 58.59 | 86.80 | 83.37 | 63.17 | 79.43 | 77.02
Ours USFDA-a | 63.35 | 83.30 | 89.35 | 70.96 | 72.34 | 86.09 | 78.53 | 60.15 | 87.35 | 81.56 | 63.17 | 88.23 | 77.03
Ours USFDA-b | 62.46 | 82.71 | 88.26 | 71.10 | 70.88 | 85.75 | 78.21 | 59.18 | 86.05 | 82.17 | 63.22 | 87.68 | 76.47

Simulation of labeled negative samples. To simulate negative labeled samples for training in the Procurement stage, we first sample a pair of images, each from different categories of $\mathcal{C}_s$, to create unique negative classes in $\mathcal{C}_n$. Note that we impose no restriction on how the hypothetical classes are created (e.g. one can composite a non-animal with an animal).
A random mask is defined, which splits the images into two complementary regions using a quadratic spline passing through a central image region (see Appendix Algo. 2). Then, the negative image is created by merging alternate mask regions, as shown in Fig. 3A. For the I→C task of ImageNet-Caltech, the source domain (ImageNet), consisting of 1000 classes, results in a large number of possible negative classes (i.e. $|\mathcal{C}_n| = \binom{|\mathcal{C}_s|}{2}$). We address this by randomly selecting only 600 of these negative classes for ImageNet (I), and 200 negative classes for Caltech (C) in the task C→I. In a similar fashion, we generate latent-simulated negative samples only for the selected negative classes in these datasets. Consequently, we compare two models with different Procurement-stage training: (i) USFDA-a, using image-composition as the negative dataset, and (ii) USFDA-b, using latent-simulated negative samples as the negative dataset. We use USFDA-a for most of our ablation experiments unless mentioned explicitly.

4.2 EVALUATION METHODOLOGY

Average accuracy on the target dataset, $T_{avg}$. We resort to the evaluation protocol proposed in the VisDA2018 Open-Set Classification challenge. Accordingly, all the target-private classes are grouped into a single "unknown" class, and the metric reports the average of per-class accuracy over $|\mathcal{C}_s| + 1$ classes. In the proposed framework a target sample is marked as "unknown" if it is classified ($\arg\max_k \hat{z}^{(k)}$) into any of the $|\mathcal{C}_n|$ negative classes out of the total $|\mathcal{C}_s| + |\mathcal{C}_n|$ categories. In contrast, UAN (You et al., 2019) relies on a sensitive hyperparameter, a threshold on the sample-level weighting, to mark a target sample as "unknown". Also note that our method is completely source-free during the Deployment stage, while all other methods have access to the full source data.

Accuracy on target-unknown data, $T_{unk}$. We evaluate the target-unknown accuracy, $T_{unk}$, as the proportion of actual target-private samples (i.e. $\{(x_t, y_t) : y_t \in \overline{\mathcal{C}}_t\}$) being classified as "unknown" after adaptation. Note that UAN (You et al., 2019) does not report $T_{unk}$, which is a crucial metric to evaluate the vulnerability of the model after its deployment in the target environment. The $T_{avg}$ metric fails to capture this as a result of class imbalance in the Open-set scenario (Saito et al., 2018b). Hence, to realize a common evaluation ground, we train the UAN implementation provided by the authors (You et al., 2019) and denote it as UAN* in further sections of this paper. We observe that the UAN (You et al., 2019) training algorithm is often unstable, with a decreasing trend of $T_{unk}$ and $T_{avg}$ over increasing training iterations. We thus report the mean and standard deviation of the peak values of $T_{unk}$ and $T_{avg}$ achieved by UAN* over 5 separate runs on the Office-31 dataset (see Table 7).

Implementation details. We implement our network in PyTorch and use ResNet-50 (He et al., 2016) as the backbone model $M$, pre-trained on ImageNet (Russakovsky et al., 2015), in line with UAN (You et al., 2019). The complete architecture of the other components, with fully-connected layers, is provided in the Supplementary. A sensitivity analysis of the major hyper-parameters used in the proposed framework is provided in Fig. 5B-C and Appendix Fig. 8B. In all our ablations across the datasets, we fix the hyperparameter values as $\alpha = 0.2$ and $\beta = 0.1$. We utilize the Adam optimizer (Kingma & Ba, 2014) with a fixed learning rate of 0.0001 for training in both the Procurement and Deployment stages (see Appendix for the code).
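Complementing the appendix code pointer above, the following is a simplified NumPy sketch of the image-composition step from Sec. 4.1 (a stand-in for Appendix Algo. 2, which we do not have access to); the spline is reduced to a single quadratic curve fitted through three random points:

```python
import numpy as np

def composite_negative(img_a, img_b, rng=np.random.default_rng()):
    """Create one negative sample by compositing two positive source images
    drawn from different classes (cf. Fig. 3A). img_a, img_b: HxWxC arrays
    of the same shape. The resulting sample would carry the negative label
    associated with that class pair."""
    h, w = img_a.shape[:2]
    # Fit a quadratic through three random points: one on each side border
    # and one inside the central third of the image, as a rough analogue of
    # a spline passing through a central image region.
    xs = np.array([0, int(rng.integers(w // 3, 2 * w // 3)), w - 1])
    ys = rng.integers(h // 4, 3 * h // 4, size=3)
    coeffs = np.polyfit(xs, ys, deg=2)
    boundary = np.polyval(coeffs, np.arange(w))   # curve height per column
    rows = np.arange(h)[:, None]                  # h x 1
    mask = rows < boundary[None, :]               # True above the curve
    # Merge the two complementary regions from the two images.
    return np.where(mask[..., None], img_a, img_b)
```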
For the implementation of UAN*, we use the hyper-parameter value $w_0 = -0.5$, as specified by the authors for the task A→D in the Office-31 dataset.

4.3 DISCUSSION

Table 2: $T_{avg}$ on Office-31 (with $|\mathcal{C}| / |\mathcal{C}_s \cup \mathcal{C}_t| = 0.32$), VisDA (with $|\mathcal{C}| / |\mathcal{C}_s \cup \mathcal{C}_t| = 0.50$), and ImageNet-Caltech (with $|\mathcal{C}| / |\mathcal{C}_s \cup \mathcal{C}_t| = 0.07$). Here, SF denotes support for source-free adaptation.

Method | SF | A→W | D→W | W→D | A→D | D→A | W→A | Avg | S→R | I→C | C→I
ResNet (He et al., 2016) | ✗ | 75.94 | 89.60 | 90.91 | 80.45 | 78.83 | 81.42 | 82.86 | 52.80 | 70.28 | 65.14
IWAN (Zhang et al., 2018b) | ✗ | 85.25 | 90.09 | 90.00 | 84.27 | 84.22 | 86.25 | 86.68 | 58.72 | 72.19 | 66.48
PADA (Zhang et al., 2018b) | ✗ | 85.37 | 79.26 | 90.91 | 81.68 | 55.32 | 82.61 | 79.19 | 44.98 | 65.47 | 58.73
ATI (Busto et al., 2017) | ✗ | 79.38 | 92.60 | 90.08 | 84.40 | 78.85 | 81.57 | 84.48 | 54.81 | 71.59 | 67.36
OSBP (Saito et al., 2018b) | ✗ | 66.13 | 73.57 | 85.62 | 72.92 | 47.35 | 60.48 | 67.68 | 30.26 | 62.08 | 55.48
UAN (You et al., 2019) | ✗ | 85.62 | 94.77 | 97.99 | 86.50 | 85.45 | 85.12 | 89.24 | 60.83 | 75.28 | 70.17
UAN* ($T_{avg}$) | ✗ | 83.00±1.8 | 94.17±0.3 | 95.40±0.5 | 83.43±0.7 | 86.90±1.0 | 87.18±0.6 | 88.34 | 54.21 | 74.77 | 71.51
Ours USFDA-a ($T_{avg}$) | ✓ | 85.56±1.6 | 95.20±0.3 | 97.79±0.1 | 88.47±0.3 | 87.50±0.9 | 86.61±0.6 | 90.18 | 63.92 | 76.85 | 72.13
Ours USFDA-b ($T_{avg}$) | ✓ | 83.21±1.2 | 95.33±0.3 | 96.37±0.3 | 86.84±0.4 | 87.91±0.6 | 86.74±0.5 | 89.40 | 62.77 | 76.74 | 72.25
UAN* ($T_{unk}$) | ✗ | 20.72±11.7 | 53.53±2.4 | 51.57±5.0 | 34.43±3.3 | 51.88±4.8 | 43.11±1.3 | 42.54 | 19.68 | 33.43 | 31.24
Ours USFDA-a ($T_{unk}$) | ✓ | 73.98±7.5 | 85.64±2.2 | 80.00±1.1 | 82.23±2.7 | 78.59±3.2 | 75.52±1.5 | 79.32 | 36.25 | 51.21 | 48.76
Ours USFDA-b ($T_{unk}$) | ✓ | 70.22±8.8 | 85.89±2.3 | 78.29±1.7 | 84.66±3.1 | 76.22±2.8 | 73.91±1.6 | 78.19 | 34.84 | 51.10 | 48.20

Figure 5: Ablative analysis on the task A→D in the Office-31 dataset. A) Histogram of SSM values of $x_t$, separately for target-private and target-shared samples, at Procurement iteration 100 (top) and 500 (bottom). B) The sensitivity curve for $\beta$ shows marginally stable adaptation accuracy for a wide range of values. C) A marginal increase in $T_{avg}$ is observed with an increase in $|\mathcal{C}_n|$.

a) Comparison with prior arts. We compare our approach with UAN (You et al., 2019) and other prior methods. The results are presented in Table 1 and Table 2. Clearly, our framework achieves state-of-the-art results, even in a source-free setting, on several tasks. Particularly in Table 2, we present the target-unknown accuracy $T_{unk}$ on various datasets. It also holds the mean and standard deviation of both accuracy metrics, computed over 5 random initializations on the Office-31 dataset (the last six rows). Our method is able to achieve a much higher $T_{unk}$ than UAN* (You et al., 2019), highlighting our superiority as a result of the novel learning approach incorporated in both the Procurement and Deployment stages. Note that both USFDA-a and USFDA-b yield similar performance across a wide range of standard benchmarks. We also perform a characteristic comparison of algorithm complexity in terms of the number of learnable parameters and training time. In contrast to UAN, the proposed framework offers a much simpler adaptation algorithm, devoid of ad-hoc networks like an adversarial discriminator and additional fine-tuning of the ResNet-50 backbone. Parameter size and training time: a) our Procurement stage (USFDA-a): [11.1M, 380s]; b) our Deployment stage: [3.5M, 44s]; c) UAN (You et al., 2019): [26.7M, 450s] (in a consistent setting).
The significant computational advantage in the Deployment stage makes USFDA highly suitable for real-time adaptation.

b) Does SSM satisfy the expected inequality? The effectiveness of the proposed learning algorithm, in case of source-free deployment, relies on the formulation of the SSM, which is expected to satisfy Eq. 1. Fig. 5A shows a histogram of the SSM, separately for samples from the target-shared (blue) and target-private (red) label space. The success of this metric is attributed to the generative nature of the Procurement stage, which enables the source model to distinguish the marginally more negative target-private samples from the samples of the shared label space.

c) Sensitivity to hyper-parameters. As we tackle DA in a source-free setting while simultaneously intending to generalize across varied category-gaps, a low sensitivity to hyperparameters further enhances our practical usability. To this end, we fix certain hyperparameters for all our ablations (also in Fig. 6C), even across datasets (i.e. $\alpha = 0.2$, $\beta = 0.1$). Thus, one can treat them as global constants, with $|\mathcal{C}_n|$ being the only hyperparameter, as variations in one while fixing the others yield a complementary effect on regularization in the Procurement stage. A thorough analysis reported in the appendix (Fig. 8) clearly demonstrates the low sensitivity of our model to these hyperparameters.

d) Generalization across category-gap. One of the key objectives of the proposed framework is to effectively operate in the absence of knowledge of the label-set relationships. To evaluate this in the most compelling manner, we propose the tabular form shown in Fig. 6A. We vary the number of private classes for the target and source along the x- and y-axes respectively, with a fixed $|\mathcal{C}_s \cup \mathcal{C}_t| = 31$. We compare the $T_{avg}$ metric at the corresponding table instances, shown in Fig. 6B-C. The results clearly highlight the superiority of the proposed framework, specifically for the more practical scenarios (close to the diagonal instances), as compared to the unrealistic Closed-set setting ($|\overline{\mathcal{C}}_s| = |\overline{\mathcal{C}}_t| = 0$).

Figure 6: Comparison across varied label-set relationships for the task A→D in the Office-31 dataset. A) Visual representation of label-set relationships and $T_{avg}$ at the corresponding instances for B) UAN* (You et al., 2019) (non-source-free) and C) our source-free model. Effectively, the direction along the x-axis (blue horizontal arrow) characterizes increasing Open-set complexity. The direction along the y-axis (red vertical arrow) shows increasing complexity of the Partial DA scenario. The pink diagonal arrow denotes the effect of a decreasing shared label space.

e) DA in the absence of shared categories. In universal adaptation, we seek to transfer the knowledge of the "class-separability criterion" obtained from the source domain to the deployed target environment. More concretely, it is attributed to the segregation of data samples based on some expected characteristics, such as classification of objects according to their pose, color, or shape.
To quantify this, we consider an extreme case where $\mathcal{C}_s \cap \mathcal{C}_t = \emptyset$ (A→D in Office-31 with $|\mathcal{C}_s| = 15$, $|\mathcal{C}_t| = 16$). Allowing access to a single labeled target sample from each category in $\mathcal{C}_t = \overline{\mathcal{C}}_t$, we aim to obtain a one-shot recognition accuracy (assignment of cluster index or class label using the one-shot samples as the cluster centers at $F_t \circ M(x_t)$) to quantify the above metric. We obtain 64.72% accuracy for the proposed framework as compared to 13.43% for UAN* (You et al., 2019). This strongly validates our superior knowledge-transfer capability as a result of the generative classifier, with labeled negative samples complementing the target-private categories.

f) Dependency on the simulated negative dataset. Conceding that a combinatorial number of negative labels can be created, we evaluate the scalability of the proposed approach by varying the number of negative classes in the Procurement stage, selecting 0, 4, 8, 64, 150 and 190 negative classes, as reported on the X-axis of Fig. 5C. For the case of 0 negative classes, denoted as $|\mathcal{C}_n| = 0$ in Fig. 5C, we synthetically generate random negative features at the intermediate level $u$, which are at least 3-sigma away from each of the positive source priors $P(u_s|c_i)$. We then make use of these feature samples, along with positive image samples, to train a $(|\mathcal{C}_s| + 1)$-class Procurement model with a single negative class. The results are reported in Fig. 5C on the A→D task of Office-31, with the category relationship in line with the setting in Table 7. We observe an acceptable drop in accuracy with a decrease in the number of negative classes, hence validating the scalability of the approach for large-scale classification datasets (such as ImageNet). Similarly, we also evaluated our framework by combining three or more images to form such negative classes. An increasing number of negative classes ($\binom{|\mathcal{C}_s|}{3} > \binom{|\mathcal{C}_s|}{2}$) leads to under-fitting on the positive source categories (similar to Fig. 5C, where accuracy reduces beyond a certain limit because of over-regularization).

5 CONCLUSION

We have introduced a novel source-free, universal domain adaptation framework, acknowledging practical domain adaptation scenarios devoid of any assumption on the source-target label-set relationship. In the proposed two-stage framework, learning in the Procurement stage is found to be highly crucial, as it aims to exploit the knowledge of class-separability in the most general form, with enhanced robustness to out-of-distribution samples. Besides this, success in the Deployment stage is attributed to the well-designed learning objectives effectively utilizing the source similarity criterion. This work can serve as a pilot study towards learning efficient inheritable models in the future.
B1lOc2F5OS
Official Blind Review #3
3: Weak Reject
The paper studies a domain adaptation problem in which, in addition to the previous universal domain adaptation setting, the source domain is not available during adaptation. The paper employs a two-stage training procedure to solve the problem. In the first, procurement stage, the paper aims to extend the source classification boundary to out-of-domain data and improve the ability of the source classifier to reject out-of-distribution samples. The proposed method generates negative data lying between clusters and far from the cluster centers, and uses them to train the feature extractor, the classifier, and the backbone network. The idea of exploiting generated data has been used by previous works [1], but the paper proposes a novel generation method, which is new and useful for universal domain adaptation. However, the generation process has some problems. First, the paper only considers samples between two clusters. What about samples lying between several clusters? For data in D_{n}^{(b)}, when the probability of the data is similar among three or more classes, labeling it with the two largest-probability classes is sub-optimal. Secondly, when the target domain is far from the source domain, target data belonging to the shared classes will lie in the area of D_{n}^{(b)}. The classifier will classify these target data as the classes in C_n, which will assign a high w'(x_t) in the second, deployment stage and degrade the performance. Thirdly, there is no theoretical guarantee of how many samples are enough, or of how to sample a complete set that covers all out-of-distribution areas. In the second, deployment stage, the paper uses the sum of the exponential probability of shared and private classes. As stated above, after the first stage, the target shared data are not guaranteed to have a low probability in the private classes. The performance can be degraded. The loss for the second stage is more like a pseudo-labeling method using the binary pseudo-label of 'shared' or 'private' obtained from the model in the first stage. The paper needs to provide an error bound on the final classification result with respect to the error in pseudo-labeling. The paper fails to report the influence of the number of samples on the final performance, which is essential for generative-model-based methods. Minor points: the notation is a little messy; for example, what is the relation between D_{n} and D_{n}^{(a)}, D_{n}^{(b)}? [1] Hoffman, Judy, et al. "Cycada: Cycle-consistent adversarial domain adaptation." arXiv preprint arXiv:1711.03213 (2017).
ByJHuTgA-
ICLR.cc/2018/Conference
2018
On the State of the Art of Evaluation in Neural Language Models
["G\u00e1bor Melis", "Chris Dyer", "Phil Blunsom"]
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
["rnn", "language modelling"]
ABSTRACT

Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.

1 INTRODUCTION

The scientific process by which the deep learning research community operates is guided by empirical studies that evaluate the relative quality of models. Complicating matters, the measured performance of a model depends not only on its architecture (and data), but it can strongly depend on hyperparameter values that affect learning, regularisation, and capacity. This hyperparameter dependence is an often inadequately controlled source of variation in experiments, which creates a risk that empirically unsound claims will be reported.

In this paper, we use a black-box hyperparameter optimisation technique to control for hyperparameter effects while comparing the relative performance of language modelling architectures based on LSTMs, Recurrent Highway Networks (Zilly et al., 2016) and NAS (Zoph & Le, 2016). We specify flexible, parameterised model families with the ability to adjust embedding and recurrent cell sizes for a given parameter budget and with fine-grained control over regularisation and learning hyperparameters.

Once hyperparameters have been properly controlled for, we find that LSTMs outperform the more recent models, contra the published claims. Our result is therefore a demonstration that replication failures can happen due to poorly controlled hyperparameter variation, and this paper joins other recent papers in warning of the under-acknowledged existence of replication failure in deep learning (Henderson et al., 2017; Reimers & Gurevych, 2017). However, we do show that careful controls are possible, albeit at considerable computational cost.

Several remarks can be made in light of these results. First, as (conditional) language models serve as the central building block of many tasks, including machine translation, there is little reason to expect that the problem of unreliable evaluation is unique to the tasks discussed here. However, in machine translation, carefully controlling for hyperparameter effects would be substantially more expensive because standard datasets are much larger. Second, the research community should strive for more consensus about appropriate experimental methodology that balances costs of careful experimentation with the risks associated with false claims. Finally, more attention should be paid to hyperparameter sensitivity.
Models that introduce many new hyperparameters or which perform well only in narrow ranges of hyperparameter settings should be identified as such as part of standard publication practice.

[Figure 1: Recurrent networks with optional down-projection (trapezoids), per-step and per-sequence dropout (dashed and solid lines). (a) Two-layer LSTM/NAS with skip connections; (b) RHN with two processing steps per input.]

2 MODELS

Our focus is on three recurrent architectures:

- The Long Short-Term Memory (Hochreiter & Schmidhuber, 1997) serves as a well known and frequently used baseline.
- The recently proposed Recurrent Highway Network (Zilly et al., 2016) is chosen because it has demonstrated state-of-the-art performance on a number of datasets.
- Finally, we also include NAS (Zoph & Le, 2016), because of its impressive performance and because its architecture was the result of an automated reinforcement learning based optimisation process.

Our aim is strictly to do better model comparisons for these architectures and we thus refrain from including techniques that are known to push perplexities even lower, but which are believed to be largely orthogonal to the question of the relative merits of these recurrent cells. In parallel work with a remarkable overlap with ours, Merity et al. (2017) demonstrate the utility of adding a Neural Cache (Grave et al., 2016). Building on their work, Krause et al. (2017) show that Dynamic Evaluation (Graves, 2013) contributes similarly to the final perplexity.

As pictured in Fig. 1a, our models with LSTM or NAS cells have all the standard components: an input embedding lookup table, and recurrent cells stacked as layers with additive skip connections combining the outputs of all layers to ease optimisation. There is an optional down-projection, whose presence is governed by a hyperparameter, from this combined output to a smaller space, which reduces the number of output embedding parameters. Unless otherwise noted, input and output embeddings are shared; see Inan et al. (2016) and Press & Wolf (2016).

Dropout is applied to feedforward connections denoted by dashed arrows in the figure. From the bottom up: to embedded inputs (input dropout), to connections between layers (intra-layer dropout), and to the combined and the down-projected outputs (output dropout). All these dropouts have random masks drawn independently per time step, in contrast to the dropout on recurrent states, where the same mask is used for all time steps in the sequence.

RHN-based models are typically conceived of as a single horizontal "highway" to emphasise how the recurrent state is processed through time. In Fig. 1b, we choose to draw their schema in a way that makes the differences from LSTMs immediately apparent. In a nutshell, the RHN state is passed from the topmost layer to the lowest layer of the next time step. In contrast, each LSTM layer has its own recurrent connection and state.

The same dropout variants are applied to all three model types, with the exception of intra-layer dropout, which does not apply to RHNs since only the recurrent state is passed between the layers. For the recurrent states, all architectures use either variational dropout (Gal & Ghahramani, 2016; state dropout)¹ or recurrent dropout (Semeniuta et al., 2016), unless explicitly noted otherwise.

¹ Of the two parameterisations, we used the one in which there is further sharing of masks between gates rather than independent noise for the gates.
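To make the per-step versus per-sequence (variational) masking distinction concrete, the following is a minimal NumPy sketch, not the authors' implementation; tensor shapes and names are illustrative assumptions.

```python
import numpy as np

def per_step_dropout(x, rate, rng):
    # Input/intra-layer/output dropout: a fresh mask at every time step.
    mask = rng.binomial(1, 1.0 - rate, size=x.shape) / (1.0 - rate)
    return x * mask

def per_sequence_dropout(x, rate, rng):
    # Variational/state dropout: one mask per sequence, reused at all steps.
    mask = rng.binomial(1, 1.0 - rate, size=(1,) + x.shape[1:]) / (1.0 - rate)
    return x * mask  # broadcasts the same mask over the time dimension

rng = np.random.default_rng(0)
x = np.ones((35, 64, 512))              # (time, batch, features)
y1 = per_step_dropout(x, 0.3, rng)      # masks differ across the 35 steps
y2 = per_sequence_dropout(x, 0.3, rng)  # identical mask at every step
```

Sharing one mask across the time dimension is what makes the recurrent-state dropout "variational" in the sense of Gal & Ghahramani (2016).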
3 EXPERIMENTAL SETUP

3.1 DATASETS

We compare models on three datasets. The smallest of them is the Penn Treebank corpus by Marcus et al. (1993) with preprocessing from Mikolov et al. (2010). We also include another word level corpus: Wikitext-2 by Merity et al. (2016). It is about twice the size of Penn Treebank with a larger vocabulary and much lighter preprocessing. The third corpus is Enwik8 from the Hutter Prize dataset (Hutter, 2012). Following common practice, we use the first 90 million characters for training, and the remaining 10 million evenly split between validation and test.

4 TRAINING DETAILS

When training word level models we follow common practice and use a batch size of 64, truncated backpropagation with 35 time steps, and we feed the final states from the previous batch as the initial state of the subsequent one. At the beginning of training and test time, the model starts with a zero state. To bias the model towards being able to easily start from such a state at test time, during training, with probability 0.01 a constant zero state is provided as the initial state.

Optimisation is performed by Adam (Kingma & Ba, 2014) with β1 = 0 but otherwise default parameters (β2 = 0.999, ε = 10⁻⁹). Setting β1 = 0 turns off the exponential moving average for the estimates of the means of the gradients and brings Adam very close to RMSProp without momentum, but due to Adam's bias correction, larger learning rates can be used.

Batch size is set to 64. The learning rate is multiplied by 0.1 whenever validation performance does not improve during 30 consecutive checkpoints. These checkpoints are performed after every 100 and 200 optimisation steps for Penn Treebank and Wikitext-2, respectively.

For character level models (i.e. Enwik8), the differences are: truncated backpropagation is performed with 50 time steps; Adam's parameters are β2 = 0.99, ε = 10⁻⁵; batch size is 128; checkpoints are only every 400 optimisation steps; and embeddings are not shared.
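The optimiser and learning-rate schedule described above translate into only a few lines; this is a hedged PyTorch-style sketch (the paper's implementation is in TensorFlow, and the learning rate itself was tuned, so the value below is a placeholder).

```python
import torch

model = torch.nn.LSTM(input_size=512, hidden_size=512, num_layers=2)

# beta1 = 0 disables the moving average of gradient means, making Adam
# close to RMSProp without momentum (but with bias correction).
opt = torch.optim.Adam(model.parameters(), lr=2e-3, betas=(0.0, 0.999), eps=1e-9)

best_valid, stale = float("inf"), 0

def on_checkpoint(valid_ppl):
    """Multiply the learning rate by 0.1 after 30 checkpoints without improvement."""
    global best_valid, stale
    if valid_ppl < best_valid:
        best_valid, stale = valid_ppl, 0
    else:
        stale += 1
        if stale >= 30:
            for group in opt.param_groups:
                group["lr"] *= 0.1
            stale = 0
```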
5 EVALUATION

For evaluation, the checkpoint with the best validation perplexity found by the tuner is loaded and the model is applied to the test set with a batch size of 1. For the word based datasets, using the training batch size makes results worse by 0.3 PPL, while Enwik8 is practically unaffected due to its evaluation and training sets being much larger. Preliminary experiments indicate that MC averaging would bring a small improvement of about 0.4 in perplexity and 0.005 in bits per character, similar to the results of Gal & Ghahramani (2016), while being about 1000 times more expensive, which is prohibitive on larger datasets. Therefore, throughout we use the mean-field approximation for dropout at test time.

5.1 HYPERPARAMETER TUNING

Hyperparameters are optimised by Google Vizier (Golovin et al., 2017), a black-box hyperparameter tuner based on batched GP bandits using the expected improvement acquisition function (Desautels et al., 2014). Tuners of this nature are generally more efficient than grid search when the number of hyperparameters is small. To keep the problem tractable, we restrict the set of hyperparameters to learning rate, input embedding ratio, input dropout, state dropout, output dropout and weight decay. For deep LSTMs, there is an extra hyperparameter to tune: intra-layer dropout. Even with this small set, thousands of evaluations are required to reach convergence.

Parameter budget. Motivated by recent results from Collins et al. (2016), we compare models on the basis of the total number of trainable parameters as opposed to the number of hidden units. The tuner is given control over the presence and size of the down-projection, and thus over the tradeoff between the number of embedding vs. recurrent cell parameters. Consequently, the cells' hidden size and the embedding size are determined by the actual parameter budget, depth and the input embedding ratio hyperparameter.

For Enwik8 there are relatively few parameters in the embeddings since the vocabulary size is only 205. Here we choose not to share embeddings and to omit the down-projection unconditionally.
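As a back-of-the-envelope illustration of this budget-driven sizing (a sketch under simplifying assumptions: shared embeddings, no down-projection, and a plain single-direction LSTM parameter count; this is not the tuner's actual logic):

```python
def size_model(budget, vocab, embed_ratio, depth):
    """Pick embedding and hidden sizes to hit a trainable-parameter budget.

    Assumes shared input/output embeddings and per-layer LSTM cost
    4 * h * (in + h + 1), where 'in' is e for the first layer, h after.
    """
    def n_params(e, h):
        emb = vocab * e
        cells = sum(4 * h * ((e if layer == 0 else h) + h + 1)
                    for layer in range(depth))
        return emb + cells

    # Largest (e, h) pair with e = embed_ratio * h that fits the budget.
    h = 1
    while n_params(int(embed_ratio * (h + 1)), h + 1) <= budget:
        h += 1
    return int(embed_ratio * h), h

e, h = size_model(budget=10_000_000, vocab=10_000, embed_ratio=1.0, depth=1)
print(e, h)  # -> roughly 650 hidden units under these assumptions
```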
Shallow LSTMs do especially well here. Deeper models have gradually degrading perplexity, with RHNs lagging all of them by a significant margin. NAS is not quite up there with the LSTMs, suggesting its architecture might have overfitted to Penn Treebank, but data for deeper variants would be necessary to draw this conclusion.

6.3 ENWIK8

In contrast to the previous datasets, our numbers on this task (reported in BPC, following convention) are slightly off the state of the art. This is most likely due to optimisation being limited to 14 epochs, which is about a tenth of what the model of Zilly et al. (2016) was trained for. Nevertheless, we match their smaller RHN with our models, which are very close to each other. NAS lags the other models by a surprising margin at this task.

7 ANALYSIS

On two of the three datasets, we improved previous results substantially by careful model specification and hyperparameter optimisation, but the improvement for RHNs is much smaller compared to that for LSTMs. While it cannot be ruled out that our particular setup somehow favours LSTMs, we believe it is more likely that this effect arises due to the original RHN experimental condition having been tuned more extensively (this is nearly unavoidable during model development).

Naturally, NAS benefitted only to a limited degree from our tuning, since the numbers of Zoph & Le (2016) were already produced by employing similar regularisation methods and a grid search. The small edge can be attributed to the suboptimality of grid search (see Section 7.3).

In summary, the three recurrent cell architectures are closely matched on all three datasets, with minuscule differences on Enwik8 where regularisation matters the least. These results support the claims of Collins et al. (2016) that the capacities of various cells are very similar and their apparent differences result from trainability and regularisation. While comparing three similar architectures cannot prove this point, the inclusion of NAS certainly gives it more credence. This way we have two of the best human designed cells and one machine optimised cell that was the top performer among thousands of candidates.

Model | Size | Depth | Valid | Test
Stacked LSTM, Graves (2013) | 21M | 7 | - | 1.67
Grid LSTM, Kalchbrenner et al. (2015) | 17M | 6 | - | 1.47
MI-LSTM, Wu et al. (2016) | 17M | 1 | - | 1.44
LN HM-LSTM, Chung et al. (2016) | 35M | 3 | - | 1.32
ByteNet, Kalchbrenner et al. (2016) | - | 25 | - | 1.31
VD RHN, Zilly et al. (2016) | 23M | 5 | - | 1.31
VD RHN, Zilly et al. (2016) | 21M | 10 | - | 1.30
VD RHN, Zilly et al. (2016) | 46M | 10 | - | 1.27
LSTM | 27M | 4 | 1.29 | 1.31
RHN | 27M | 5 | 1.30 | 1.31
NAS | 27M | 4 | 1.38 | 1.40
LSTM | 46M | 4 | 1.28 | 1.30
RHN | 46M | 5 | 1.29 | 1.30
NAS | 46M | 4 | 1.32 | 1.33

Table 3: Validation and test set BPCs on Enwik8 from the Hutter Prize dataset.

7.1 THE EFFECT OF INDIVIDUAL FEATURES

Down-projection was found to be very beneficial by the tuner for some depth/budget combinations. On Penn Treebank, it improved results by about 2–5 perplexity points at depths 1 and 2 at 10M, and depth 1 at 24M, possibly by equipping the recurrent cells with more capacity. The very same models benefited from down-projection on Wikitext-2, but even more so, with gaps of about 10–18 points, which is readily explained by the larger vocabulary size.

We further measured the contribution of other features of the models in a series of experiments. See Table 4.
To limit the number of resources used, in these experiments only individual features were evaluated (not their combinations) on Penn Treebank at the best depth for each architecture (LSTM or RHN) and parameter budget (10M or 24M) as determined above.

First, we untied input and output embeddings, which made perplexities worse by about 6 points across the board, consistent with the results of Inan et al. (2016).

Second, without variational dropout the RHN models suffer quite a bit, since there remains no dropout at all in between the layers. The deep LSTM also sees a similar loss of perplexity, as having intra-layer dropout does not in itself provide enough regularisation.

Third, we were also interested in how recurrent dropout (Semeniuta et al., 2016) would perform in lieu of variational dropout. Dropout masks were shared between time steps in both methods, and our results indicate no consistent advantage to either of them.

Model | Depth (10M) | Valid | Test | Depth (24M) | Valid | Test
LSTM | 1 | 61.8 | 59.6 | 4 | 60.9 | 58.3
- Shared Embeddings | 1 | 67.6 | 65.2 | 4 | 65.6 | 63.2
- Variational Dropout | 1 | 62.9 | 61.2 | 4 | 66.3 | 64.5
+ Recurrent Dropout | 1 | 62.8 | 60.6 | 4 | 65.2 | 62.9
+ Untied gates | 1 | 61.4 | 58.9 | 4 | 64.0 | 61.3
+ Tied gates | 1 | 61.7 | 59.6 | 4 | 60.4 | 58.0
RHN | 5 | 66.0 | 63.5 | 5 | 64.8 | 62.2
- Shared Embeddings | 5 | 72.3 | 69.5 | 5 | 67.4 | 64.6
- Variational Dropout | 5 | 74.4 | 71.7 | 5 | 74.7 | 71.7
+ Recurrent Dropout | 5 | 65.5 | 63.0 | 5 | 63.4 | 61.0

Table 4: Validation and test set perplexities on Penn Treebank for variants of our best LSTM and RHN models of two sizes (the first Depth/Valid/Test group is for the 10M budget, the second for the 24M budget).

7.2 MODEL SELECTION

With a large number of hyperparameter combinations evaluated, the question of how much the tuner overfits arises. There are multiple sources of noise in play:
(a) non-deterministic ordering of floating-point operations in optimised linear algebra routines,
(b) different initialisation seeds,
(c) the validation and test sets being finite samples from an infinite population.

To assess the severity of these issues, we conducted the following experiment: models with the best hyperparameter settings for Penn Treebank and Wikitext-2 were retrained from scratch with various initialisation seeds, and the validation and test scores were recorded. If during tuning a model just got a lucky run due to a combination of (a) and (b), then retraining with the same hyperparameters but with different seeds would fail to reproduce the same good results.

There are a few notable things about the results. First, in our environment (Tensorflow with a single GPU), even with the same seed as the one used by the tuner, the effect of (a) is almost as large as that of (a) and (b) combined. Second, the variance induced by (a) and (b) together is roughly equivalent to an absolute difference of 0.4 in perplexity on Penn Treebank and 0.5 on Wikitext-2. Third, the validation perplexities of the best checkpoints are about one standard deviation lower than the sample mean of the reruns, so the tuner could fit the noise only to a limited degree.

Because we treat our corpora as a single sequence, test set contents are not i.i.d., and we cannot apply techniques such as the bootstrap to assess (c). Instead, we looked at the gap between validation and test scores as a proxy and observed that it is very stable, contributing variance of 0.12–0.3 perplexity to the final results on Penn Treebank and Wikitext-2, respectively.

We have not explicitly dealt with the unknown uncertainty remaining in the Gaussian Process that may affect model comparisons, apart from running it until apparent convergence. All in all, our findings suggest that a gap in perplexity of 1.0 is a statistically robust difference between models trained in this way on these datasets. The distribution of results was approximately normal with roughly the same variance for all models, so we still report numbers in a tabular form instead of plotting the distribution of results, for example in a violin plot (Hintze & Nelson, 1998).
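As a back-of-the-envelope check of this rule of thumb (our own illustrative arithmetic, assuming independent Gaussian retraining noise and using only the 0.4 PPL figure quoted above):

```python
# Rerun noise from sources (a)+(b): ~0.4 PPL std on Penn Treebank.
rerun_std = 0.4
# Std of the difference between two independently retrained models.
diff_std = (2 * rerun_std ** 2) ** 0.5          # ~0.57 PPL
print(f"a 1.0 PPL gap is {1.0 / diff_std:.1f} stds of retraining noise")
```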
7.3 SENSITIVITY

To further verify that the best hyperparameter setting found by the tuner is not a fluke, we plotted the validation loss against the hyperparameter settings. Fig. 2 shows one such typical plot, for a 4-layer LSTM. We manually restricted the ranges around the best hyperparameter values to around 15–25% of the entire tuneable range, and observed that the vast majority of settings in that neighbourhood produced perplexities within 3.0 of the best value. Widening the ranges further leads to quickly deteriorating results.

[Figure 2: Average per-word negative log-likelihoods of hyperparameter combinations in the neighbourhood of the best solution for a 4-layer LSTM with 24M weights on the Penn Treebank dataset.]

Satisfied that the hyperparameter surface is well behaved, we considered whether the same results could have possibly been achieved with a simple grid search. Omitting the input embedding ratio, because the tuner found having a down-projection suboptimal almost unconditionally for this 4-layer LSTM, there remain six hyperparameters to tune. If there were 5 possible values on the grid for each hyperparameter (with one value in every 20% interval), then we would need $6^5$, nearly 8000 trials to get within 3.0 of the best perplexity achieved by the tuner in about 1500 trials.

7.4 TYING LSTM GATES

Normally, LSTMs have two independent gates controlling the retention of cell state and the admission of updates (Eq. 1). A minor variant which reduces the number of parameters at the loss of some flexibility is to tie the input and forget gates as in Eq. 2. A possible middle ground that keeps the number of parameters the same but ensures that values of the cell state $c$ remain in $[-1, 1]$ is to cap the input gate as in Eq. 3.

$c_t = f_t \odot c_{t-1} + i_t \odot j_t$  (1)
$c_t = f_t \odot c_{t-1} + (1 - f_t) \odot j_t$  (2)
$c_t = f_t \odot c_{t-1} + \min(1 - f_t, i_t) \odot j_t$  (3)

where the equations are based on the formulation of Sak et al. (2014). All LSTM models in this paper use the third variant, except those titled "Untied gates" and "Tied gates" in Table 4, corresponding to Eq. 1 and 2, respectively.

The results show that LSTMs are insensitive to these changes and the results vary only slightly, even though more hidden units are allocated to the tied version to fill its parameter budget. Finally, the numbers suggest that deep LSTMs benefit from bounded cell states.
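A minimal PyTorch sketch of the three cell-state updates in Eq. 1–3 (illustrative code of ours; the gate preactivations, sigmoids and tanh candidate are assumed to be computed elsewhere, as in the formulation of Sak et al. (2014)):

```python
import torch

def cell_update(c_prev, f, i, j, variant="capped"):
    """One LSTM cell-state update.

    c_prev: previous cell state; f, i: forget/input gates in (0, 1)
    (post-sigmoid); j: candidate update in (-1, 1) (post-tanh).
    """
    if variant == "untied":    # Eq. 1: independent input and forget gates
        return f * c_prev + i * j
    if variant == "tied":      # Eq. 2: input gate tied to 1 - forget gate
        return f * c_prev + (1.0 - f) * j
    if variant == "capped":    # Eq. 3: cap keeps the cell state in [-1, 1]
        return f * c_prev + torch.minimum(1.0 - f, i) * j
    raise ValueError(variant)

# example: tied-gate update on random gate activations
c = cell_update(torch.zeros(8), torch.rand(8), torch.rand(8),
                torch.tanh(torch.randn(8)), variant="tied")
```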
8 CONCLUSION

During the transitional period when deep neural language models began to supplant their shallower predecessors, effect sizes tended to be large, and robust conclusions about the value of the modelling innovations could be made, even in the presence of poorly controlled "hyperparameter noise." However, now that the neural revolution is in full swing, researchers must often compare competing deep architectures. In this regime, effect sizes tend to be much smaller, and more methodological care is required to produce reliable results. Furthermore, with so much work carried out in parallel by a growing research community, the costs of faulty conclusions are increased.

Although we can draw attention to this problem, this paper does not offer a practical methodological solution beyond establishing reliable baselines that can be the benchmarks for subsequent work. Still, we demonstrate how, with a huge amount of computation, noise levels of various origins can be carefully estimated and models meaningfully compared. This apparent tradeoff between the amount of computation and the reliability of results seems to lie at the heart of the matter. Solutions to the methodological challenges must therefore make model evaluation cheaper by, for instance, reducing the number of hyperparameters and the sensitivity of models to them, employing better hyperparameter optimisation strategies, or by defining "leagues" with predefined computational budgets for a single model representing different points on the tradeoff curve.
HkGW8A2gG
Important big-picture work in a fast-moving field
8: Top 50% of accepted papers, clear accept
The authors perform a comprehensive validation of LSTM-based word and character language models, establishing that recent claims that other structures can consistently outperform the older stacked LSTM architecture result from failure to fully explore the hyperparameter space. Instead, with more thorough hyperparameter search, LSTMs are found to achieve state-of-the-art results on many of these language modeling tasks. This is a significant result in language modeling and a milestone in deep learning reproducibility research. The paper is clearly motivated and authoritative in its conclusions but it's somewhat lacking in detailed model or experiment descriptions. Some further points:
- There are several hyperparameters set to the "standard" or "default" value, like Adam's beta parameter and the batch size/BPTT length. Even if it would be prohibitive to include them in the overall hyperparameter search, the community is curious about their effect and it would be interesting to hear if the authors' experience suggests that these choices are indeed reasonably well-justified.
- The description of the model is ambiguous on at least two points. First, it wasn't completely clear to me what the down-projection is (if it's simply projecting down from the LSTM hidden size to the embedding size, it wouldn't represent a hyperparameter the tuner can set, so I'm assuming it's separate and prior to the conventional output projection). Second, the phrase "additive skip connections combining outputs of all layers" has a couple possible interpretations (e.g., skip connections that jump from each layer to the last layer or (my assumption) skip connections between every pair of layers?).
- Fully evaluating the "claims of Collins et al. (2016), that capacities of various cells are very similar and their apparent differences result from trainability and regularisation" would likely involve adding a fourth cell to the hyperparameter sweep, one whose design is more arbitrary and is neither the result of human nor machine optimization.
- The reformulation of the problem of deciding embedding and hidden sizes into one of allocating a fixed parameter budget towards the embedding and recurrent layers represents a significant conceptual step forward in understanding the causes of variation in model performance.
- The plot in Figure 2 is clear and persuasive, but for reproducibility purposes it would also be nice to see an example set of strong hyperparameters in a table. The history of hyperparameter proposals and their perplexities would also make for a fantastic dataset for exploring the structure of RNN hyperparameter spaces. For instance, it would be helpful for future work to know which hyperparameters' effects are most nearly independent of other hyperparameters.
- The choice between tied and clipped (Sak et al., 2014) LSTM gates, and their comparison to standard untied LSTM gates, is discussed only minimally, although it represents a significant difference between this paper and the most "standard" or "conventional" LSTM implementation (e.g., as provided in optimized GPU libraries). In addition to further discussion on this point, this result also suggests evaluating other recently proposed "minor changes" to the LSTM architecture such as multiplicative LSTM (Krause et al., 2016).
- It would also have been nice to see a comparison between the variational/recurrent dropout parameterization "in which there is further sharing of masks between gates" and the one with "independent noise for the gates," as described in the footnote. There has been some confusion in the literature as to which of these parameterizations is better or more standard; simply justifying the choice of parameterization a little more would also help.
3: The reviewer is fairly confident that the evaluation is correct
GHCu1utcBvX
ICLR.cc/2021/Conference
2021
Transferability of Compositionality
["Yuanpeng Li", "Liang Zhao", "Joel Hestness", "Ka Yee Lun", "Kenneth Church", "Mohamed Elhoseiny"]
Compositional generalization is the algebraic capacity to understand and produce a large amount of novel combinations from known components. It is a key element of human intelligence for out-of-distribution generalization. To equip neural networks with such ability, many algorithms have been proposed to extract compositional representations from the training distribution. However, it has not been discussed whether the trained model can still extract such representations in the test distribution. In this paper, we argue that the extraction ability does not transfer naturally, because the extraction network suffers from the divergence of distributions. To address this problem, we propose to use an auxiliary reconstruction network with regularized hidden representations as input, and optimize the representations during inference. The proposed approach significantly improves accuracy, showing more than a 20% absolute increase in various experiments compared with baselines. To our best knowledge, this is the first work to focus on the transferability of compositionality, and it is orthogonal to existing efforts of learning compositional representations in the training distribution. We hope this work will help to advance compositional generalization and artificial intelligence research.
["Compositionality"]
ABSTRACT

Compositional generalization is the algebraic capacity to understand and produce a large amount of novel combinations from known components. It is a key element of human intelligence for out-of-distribution generalization. To equip neural networks with such ability, many algorithms have been proposed to extract compositional representations from the training distribution. However, it has not been discussed whether the trained model can still extract such representations in the test distribution. In this paper, we argue that the extraction ability does not transfer naturally, because the extraction network suffers from the divergence of distributions. To address this problem, we propose to use an auxiliary reconstruction network with regularized hidden representations as input, and optimize the representations during inference. The proposed approach significantly improves accuracy, showing more than a 20% absolute increase in various experiments compared with baselines. To our best knowledge, this is the first work to focus on the transferability of compositionality, and it is orthogonal to existing efforts of learning compositional representations in the training distribution. We hope this work will help to advance compositional generalization and artificial intelligence research. The code is in the supplementary materials.

1 INTRODUCTION

Human intelligence (Minsky, 1986; Lake et al., 2017) exhibits compositional generalization, the algebraic capacity to understand and produce a large amount of novel combinations from known components (Chomsky, 1957; Montague, 1970). This capacity helps humans to recognize the world efficiently and to be imaginative. It is also beneficial to design machine learning algorithms with compositional generalization skills. Current neural network models, however, generally lack such ability. Compositional generalization is a type of out-of-distribution generalization (Bengio, 2017), where the training and test distributions are different. A sample in such a setting is a combination of several components, and the generalization is enabled by recombining the seen components of the unseen combination during inference. In the image domain, an object is a combination of many parts or properties. In the language domain, a compound word is a combination of multiple words. As an example, we consider two digits that are overlapped (Figure 1). Each digit is a component, and it appears in training. A test example has a new combination of two digits.

The main approach for compositional generalization is to learn compositional representations (Bengio, 2013), which contain several component representations. Each of them depends only on the underlying generative factor, and does not change when other factors change. We call this the compositionality property, and will formally introduce it in Section 3. In the digit example, this means that the representation of one digit does not change when the other digit changes.

Multiple approaches have been proposed to learn compositional representations in the training distribution. However, little discussion has focused on whether the model can still extract the representations in the test distribution. We find that the extraction ability does not transfer naturally, because the extraction network suffers from the divergence of distributions (Bengio, 2017; Pleiss et al., 2020), so that each extracted representation shifts away from the corresponding one in training.
Our experiment on the digit example shows that the accuracy drops from 89.6% in training to 49.3% in test (Table 1 in Section 5).

To address the problem, we hope each representation is consistent with the training one while reflecting the test sample. We use an auxiliary network, which has the hidden representations as inputs and the original input as output. For a test sample, we regularize each hidden representation in its training manifold, and optimize them to recover the original input. Then we use the optimized representations for prediction. Experimental results show that the proposed approach has more than a 20% absolute increase in various experiments compared to baselines, and even outperforms humans on the overlapping digit task. Our contributions can be summarized as follows.

- We raise and investigate the problem of transferability of compositionality to the test distribution. This work is orthogonal to many efforts of learning compositionality in the training distribution.
- We propose to address the problem by using an auxiliary reconstruction network with regularized hidden representations as input, and optimize the representations during inference.
- We empirically show that the transferability problem exists and that the proposed approach has significant improvements over baselines.

[Figure 1: Examples of compositional generalization with overlapping digits. (a) Training samples; (b) test samples. Each sample is a horizontal block with three images and two digits. The middle image X is the input and the right two digits Y = Y_1, Y_2 are the output. The left two images X_1, X_2 are hidden components. X_1 is in its original form, and X_2 is flipped over the left-top to right-bottom diagonal. The sum of the digits is even in train, and odd in test. We hope to learn a prediction model in training, and transfer it to test.]

2 RELATED WORK

Compositional generalization (Chomsky, 1957; Montague, 1970) is critical in human cognition (Minsky, 1986; Lake et al., 2017; Johnson et al., 2017; Higgins et al., 2018; Lake et al., 2019). It helps humans to understand and produce a large amount of novel combinations from known components. Broadly speaking, compositional generalization is a type of out-of-distribution (o.o.d.) transferring or generalization, which is also called domain adaptation (Redko et al., 2020) or concept drift (Gama et al., 2014). This is different from the traditional i.i.d. setting, where the training and the test distributions are identical. The transferring requires prior knowledge of how the distribution is changed, and compositional generalization has a particular form of such change, as mentioned in the later section.

Compositional generalization is also a desirable property for deep neural networks. Human-level compositional learning (Marcus, 2003; Lake & Baroni, 2018) has been an important open challenge (Yang et al., 2019; Keysers et al., 2020), although there is a long history of studying compositionality in neural networks. The classic view (Fodor & Pylyshyn, 1988; Marcus, 1998; Fodor & Lepore, 2002) considers that conventional neural networks lack systematic compositionality. With the breakthroughs in deep neural networks, there are more contemporary attempts to encode compositionality in deep neural networks. Compositionality in neural networks is actively explored for systematic behaviour (Wong & Wang, 2007; Brakel & Frank, 2009), counting ability (Rodriguez & Wiles, 1998; Weiss et al., 2018) and sensitivity to hierarchical structure (Linzen et al., 2016).
Researchers have also proposed multiple related tasks (Lake & Baroni, 2018; Loula et al., 2018; Lake et al., 2019) and methods (Lake et al., 2017; Lake & Baroni, 2018; Loula et al., 2018; Kliegl & Xu, 2018; Li et al., 2019; Lake, 2019; Gorden et al., 2020) for learning compositionality in the training distribution. Another line of related work is independent disentangled representation learning (Higgins et al., 2017; Burgess et al., 2018; Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2017; Hsieh et al., 2018; Locatello et al., 2019; 2020). Its main assumption is that the expected components are statistically independent in the training data. This setting does not have the transferring problem in test, because all combinations have positive joint probabilities in training (please see Section 3).

More recently, there have been approaches for better compositional learning in NLP tasks by elaborating RNN models (Bastings et al., 2018), by using pointer networks (Kliegl & Xu, 2018), or by using two representations of a sentence (Russin et al., 2019; Li et al., 2019). In these tasks, the input can be divided into words, which have consistent information in different distributions. However, such a property is not always available, e.g. for the overlapping digits. In this paper, we propose to acquire the compositional representations with optimization during inference. We will discuss more in the following sections.

3 COMPOSITIONALITY AND TRANSFERABILITY

In this section, we describe compositionality as a model property. We then argue that compositionality may not transfer to the test distribution, and discuss the ideas behind the proposed approach.

Compositionality. Compositional generalization has different training and test distributions. Samples in both training and test data are combinations of K components. For example, in Figure 1, input X has two digits $X_1, X_2$, and output Y has corresponding labels $Y_1, Y_2$. While a test sample's combination does not appear in training, each component of the test sample appears in training.

A key for compositional generalization is to have a compositional representation, which has multiple component representations. Each component representation corresponds to an underlying input component, and it does not change when other components change. We call this property compositionality. A compositional representation is computed from an entangled input, and the extraction network needs to output correct component representations. If the extraction network transfers to the test distribution, the representations can be correctly extracted in test, so that compositional generalization is enabled by recombining them.

Transferability. As mentioned above, to enable compositional generalization, there is an assumption that the property of compositionality should transfer to the test distribution. Since $X_1, \ldots, X_K$ are entangled, the model has the entire X as input. However, input X has different distributions in training and test, so the extraction network suffers from the divergence of distributions (Bengio, 2017; Pleiss et al., 2020). Hence, even if the model is trained to fit the compositionality property in the training distribution, the property is not guaranteed to transfer to the test distribution.

We propose to obtain compositional representations not from the extractor but reversely from an auxiliary network. We extract compositional representations with optimization, and introduce regularization to keep each hidden representation in the corresponding training manifold.
We then use the optimized hidden representations for prediction. More details are provided in the next section.

4 APPROACH

In this section, we introduce the proposed approach from the model architecture, training and inference perspectives. The architecture contains three modules, trained with routine end-to-end optimization. The inference, unlike the conventional procedure, includes three steps: extract initial hidden representations; optimize the hidden representations as module input; predict the output. Figure 2 contains the overall flowcharts, and Algorithm 1 summarizes the approach. We describe the details here.

4.1 MODEL ARCHITECTURE

The model takes a sample with input X and label Y. We have a representation extractor g with parameters θ, which takes X as input and outputs K hidden representations H = H_1, ..., H_K: H = g(X; θ). We also have a prediction network f with parameters φ, which takes the hidden representations H as input and outputs Ŷ: Ŷ = f(H; φ). These can be existing networks for compositionality learning. In addition to them, we have an auxiliary network h with parameters ψ, which takes the hidden representations H as input and combines them to output X̂: X̂ = h(H; ψ).

Figure 2: Flowcharts of the proposed approach. X is the input, Y is the output, and H is the hidden representation. The architecture has three modules: g, h, f. (a) Training flowchart: the three modules are trained with end-to-end optimization. (b) Inference flowchart: (left) initial hidden representation extraction; (middle) optimization of the hidden representations as module input; (right) output prediction.

Algorithm 1: The proposed approach for training (left) and inference (right). σ, α, β, γ are hyperparameters, K is the number of components, and M is the number of instances in memory.

Training (sample X, Y):
1: H = H_1, ..., H_K = g(X; θ)
2: H′ = H + ε, ε ∼ N(0, σ²I)
3: Ŷ = f(H′; φ), X̂ = h(H′; ψ)
4: L_train = CE(Y, Ŷ) + α·L2(X, X̂) + β·L2(H)
5: θ̂, φ̂, ψ̂ = argmin_{θ,φ,ψ} L_train(X, Y; θ, φ, ψ)
6: Mem_m = g(X_m; θ̂), m = 1, ..., M

Inference (sample X):
1: H_init = g(X; θ̂)
2: X̂ = h(H; ψ̂)
3: L_manf(H) = Σ_{k=1}^{K} min_m L2(H_k, Mem_{m,k})
4: L_infer(X, H) = L2(X, X̂) + γ·L_manf(H)
5: Ĥ = argmin_H L_infer(X, H), starting from H_0 = H_init
6: Ŷ = f(Ĥ; φ̂)

4.2 TRAINING

In training, we sequentially use the extractor g and the predictor f, by setting the output H of the extractor as the input of the predictor. We have a loss L_original (containing regularization terms), such as the cross entropy CE(Y, Ŷ), to train a model with compositionality using existing algorithms.

On top of that, we train h with H as input and X̂ as output. We set the auxiliary loss as the difference (L2 distance) between X and X̂: L_auxiliary = L2(X, X̂). We also regularize the L2 norm of H, L_hidden = L2(H), and add noise, H′ = H + ε, ε ∼ N(0, σ²I), to avoid memorizing X; σ is a hyperparameter. The whole training loss L_train combines the original loss, the auxiliary loss L_auxiliary, and the regularization L_hidden, with coefficients α, β:

L_train = L_original + α·L_auxiliary + β·L_hidden

We train the model in an end-to-end manner. This is standard training for neural networks:

θ̂, φ̂, ψ̂ = argmin_{θ,φ,ψ} L_train(X, Y; θ, φ, ψ)

After training, we store the hidden representations of M training samples. They are used to restrict the test representation manifold during inference:

Mem_m = g(X_m; θ̂), ∀m = 1, ..., M
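To make the training procedure concrete, here is a minimal PyTorch sketch of one training step. The module interfaces (an extractor g returning a list of K hidden tensors, a predictor f returning K logit tensors, an auxiliary network h returning a reconstruction) and the hyperparameter values sigma, alpha, beta are illustrative assumptions of this sketch, not the exact settings of the experiments.

```python
import torch
import torch.nn.functional as F

def training_step(g, f, h, X, Y, optimizer, sigma=0.1, alpha=1.0, beta=0.01):
    """One step of end-to-end training.

    g: extractor, maps X -> list of K hidden tensors of shape (batch, d)
    f: predictor, maps hidden list -> list of K logit tensors (batch, n_classes)
    h: auxiliary network, maps hidden list -> reconstruction of X
    X: batch of inputs; Y: LongTensor of shape (batch, K) with component labels
    """
    H = g(X)
    # Add Gaussian noise so the hidden code cannot simply memorize X.
    H_noisy = [Hk + sigma * torch.randn_like(Hk) for Hk in H]

    logits = f(H_noisy)
    X_hat = h(H_noisy)

    # L_original: cross entropy summed over the K components.
    loss_original = sum(F.cross_entropy(lk, Y[:, k]) for k, lk in enumerate(logits))
    # L_auxiliary: squared L2 reconstruction error.
    loss_aux = F.mse_loss(X_hat, X)
    # L_hidden: L2 regularization on the hidden representations.
    loss_hidden = sum(Hk.pow(2).mean() for Hk in H_noisy)

    loss = loss_original + alpha * loss_aux + beta * loss_hidden
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```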
4.3 INFERENCE

We use optimization to acquire the hidden representations during inference. Given a test sample X, the model predicts its output Ŷ. We use the auxiliary network h(H; ψ̂) to search for hidden representations H such that h can output an X̂ that is close to the original input X. This is achieved by optimizing H with the auxiliary loss L_auxiliary. The initial value H_init is the output of the extractor g: H_init = g(X; θ̂).

We also add a manifold regularization term L_manf to constrain each hidden representation to lie in the corresponding training manifold. For a test sample, we compute the minimum of the L2 distance between each of its hidden representations H_k and the corresponding representations in memory Mem_{m,k}. We then use the sum of the distances as the regularization:

L_manf(H) = Σ_{k=1}^{K} min_{m=1,...,M} L2(H_k, Mem_{m,k})

The inference loss L_infer combines the auxiliary loss and the regularization, with γ as a coefficient:

L_infer(X, H) = L_auxiliary(X, X̂) + γ·L_manf(H)

Then, we obtain the hidden representations by optimization:

Ĥ = argmin_H L_infer(X, H), starting from H_0 = H_init

We get the prediction from the optimized hidden representations: Ŷ = f(Ĥ; φ̂).
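The inference procedure can be sketched with the same placeholder interfaces as the training sketch above; the optimizer choice, the number of steps and the learning rate are assumptions, as the paper defers such details to Appendix A.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_memory(g, train_inputs):
    """Mem[m][k]: k-th hidden representation of the m-th stored training sample."""
    return [g(x.unsqueeze(0)) for x in train_inputs]

def infer(g, f, h, X, memory, gamma=1.0, steps=100, lr=0.1):
    """Predict for a single test input X (batch size 1).

    Only the hidden representations H are optimized; the trained
    networks g, f and h stay fixed throughout.
    """
    # Step 1: initialize H from the extractor.
    H = [Hk.detach().clone().requires_grad_(True) for Hk in g(X)]
    opt = torch.optim.Adam(H, lr=lr)
    # Step 2: optimize H to reconstruct X while staying near the training manifold.
    for _ in range(steps):
        X_hat = h(H)
        loss_aux = F.mse_loss(X_hat, X)
        # Manifold term: distance of each H_k to its nearest stored training code.
        loss_manf = 0.0
        for k, Hk in enumerate(H):
            dists = torch.stack([((Hk - mem[k]) ** 2).sum() for mem in memory])
            loss_manf = loss_manf + dists.min()
        loss = loss_aux + gamma * loss_manf
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Step 3: predict from the optimized representations.
    with torch.no_grad():
        logits = f(H)
    return [lk.argmax(dim=-1) for lk in logits]
```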
5 EXPERIMENTS

In this section, we show examples where, given a model with compositionality in the training distribution, the compositionality does not transfer to the test distribution, and how the proposed approach applies to these cases. As our focus is orthogonal to learning a compositional model in the training distribution, we obtain this model by directly providing true labels for each compositional component during training. Then we evaluate the transferability to the test distribution.

Since this is the first work on the transferability problem of compositionality, there are no previous baselines, so we use standard deep neural network models as baselines, and also compare with variations of the proposed approach. The main changes in the proposed approach are the noise, the auxiliary network and the manifold regularization. Each variation removes one of these changes, so it also serves as an ablation experiment. The details of the hyperparameters and experiment settings can be found in Appendix A. For all the experiments, we use accuracy as the metric. A prediction is correct if and only if all the components are correctly predicted. We repeat each experiment five times and report the mean and variance.

Table 1: Evaluation accuracy (%). We see that the proposed method has a significant improvement over the standard DNN on all three datasets and outperforms humans on the overlapping digit dataset.

Method               | Overlapping digits      | Compound words          | Colored digits
                     | Test dist. | Train dist.| Test dist. | Train dist.| Test dist. | Train dist.
Human                | 10.0 ± 7.7 | -          | -          | -          | -          | -
Standard DNN         | 49.3 ± 0.9 | 89.6 ± 0.3 | 27.0 ± 6.7 | 100.0 ± 0.0| 48.8 ± 4.3 | 99.0 ± 0.2
Variation -noise     | 44.2 ± 0.9 | 79.8 ± 0.9 | 11.3 ± 6.2 | 100.0 ± 0.0| 37.7 ± 4.0 | 96.9 ± 0.6
Variation -auxiliary | 51.1 ± 0.7 | 88.7 ± 1.1 | 46.8 ± 3.7 | 100.0 ± 0.0| 10.6 ± 3.5 | 98.7 ± 0.3
Variation -manifold  | 60.8 ± 3.7 | 69.8 ± 2.3 | 51.9 ± 3.3 | 99.8 ± 0.5 | 81.8 ± 1.4 | 92.4 ± 1.0
Proposed             | 69.4 ± 0.3 | 81.1 ± 1.0 | 51.2 ± 3.0 | 100.0 ± 0.0| 91.2 ± 0.6 | 96.2 ± 0.4

5.1 EXPERIMENTS ON OVERLAPPING DIGITS

The first experiment is on overlapping handwritten digit recognition, as shown in Figure 1. We construct the dataset from MNIST (LeCun et al., 1998). A sample is made by overlapping and taking the average of two original images, the first one in its original form, and the second flipped over the up-left to down-right diagonal. The output is a vector of the two labels Y = Y_1, Y_2 (not exchangeable). Each original label has 10 possible values, so the output has 100 possible values. To evaluate compositional generalization, we use different distributions in training and test. In train, the sum of the two labels is even, i.e. (Y_1 + Y_2) mod 2 = 0. In test, the sum is odd.

As a baseline, we use a standard neural network with two sub-networks, one for each output. Each of the networks is a three-layer convolutional neural network. We train the model with cross entropy on both outputs.

The proposed method uses an auxiliary network that takes the hidden layer as input and outputs the reconstruction of the original input. The auxiliary network has two sub-networks, each with one hidden representation as input. Each sub-network is a three-layer trans-convolutional neural network, and we average the outputs to recover the original input. We use the L2 loss as the auxiliary loss.

We also collect human performance data through crowd sourcing. There are 27 participants, and each person works on 20 fixed samples randomly selected from the test data. Please refer to Appendix B for more details.

The results in Table 1 (left) show that the proposed method has a significant improvement over the baseline, by about a 20% absolute increase, and it also outperforms humans. The ablation study shows that performance drops in each of the variations, indicating that all the modifications in the proposed approach are necessary to achieve the result.
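For reference, the overlapping-digit data described above could be generated from MNIST along the following lines; the pairing of consecutive images and the tensor shapes are assumptions of this sketch, not necessarily the exact construction used in the paper.

```python
import torch
from torchvision import datasets, transforms

def make_overlap_dataset(train=True, root="./data"):
    """Build overlapping-digit samples: average of one image and one transposed image.

    Train keeps pairs with an even label sum; test keeps pairs with an odd sum.
    """
    mnist = datasets.MNIST(root, train=train, download=True,
                           transform=transforms.ToTensor())
    images = torch.stack([img for img, _ in mnist])      # (N, 1, 28, 28)
    labels = torch.tensor([y for _, y in mnist])
    parity = 0 if train else 1
    samples, targets = [], []
    for i in range(0, len(images) - 1, 2):               # pair consecutive images
        y1, y2 = labels[i].item(), labels[i + 1].item()
        if (y1 + y2) % 2 != parity:
            continue
        x1 = images[i]
        x2 = images[i + 1].transpose(-2, -1)             # flip over the main diagonal
        samples.append((x1 + x2) / 2)                    # average the two images
        targets.append((y1, y2))
    return torch.stack(samples), torch.tensor(targets)
```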
5.2 EXPERIMENTS ON COMPOUND WORDS

We also conduct experiments for language processing. Language has natural units of words, which carry consistent information across different distributions, so we design a setting in which this property cannot be used. We consider a problem that converts a compound word into two words. We construct compound words from two month names (January to October), e.g. "julyfebruary". The output label is the zero-based index of the month (0 for January). We take each character as an input unit, and assign a one-hot representation to it. The other problem settings are the same as in the previous experiment. Please see Table 2 for more examples.

Table 2: Examples of compound word experiments. The output labels align with the corresponding hidden words. Upper rows are train, lower rows are test.

Input           | Outputs | Hidden
januarymarch    | 0, 2    | january march
februaryoctober | 1, 9    | february october
januaryfebruary | 0, 1    | january february
augustmay       | 7, 4    | august may

As the baseline, we use two feed-forward neural networks, one for each output. For the auxiliary network, we also use two feed-forward neural networks, and average their outputs. Each feed-forward network has three hidden layers. The training and other settings for the baseline, the proposed approach and the ablations are the same as in the previous experiment.

The results listed in Table 1 (middle) demonstrate that the proposed method is significantly better than the baseline, by around a 24% absolute increase. We find that removing the manifold regularization in the ablation study shows a slight improvement over the proposed approach, while the other ablation experiments show significant reductions. This might be because regularizing the manifold is not important when the inputs (characters) are discrete.

5.3 EXPERIMENTS ON COLORED DIGITS

We also explore the capability of the proposed approach on another handwritten digit problem with different types of components: digits and colors. We construct the dataset from MNIST (LeCun et al., 1998) by changing the color of the digits. We use the digit label Y_1 as one component (0-9), and the color label Y_2 as another (0-2). The color label is 0, 1, 2 for red, blue, green, respectively. In training, we use label combinations with Y_1 mod 3 ≠ Y_2. In test, we use the rest of the combinations. Please refer to Figure 3 for examples.

Figure 3: Examples of input for the colored digits experiments. The output is the digit label and the color label. Upper rows are train, lower rows are test.

We use two compositional neural networks, one for each output. Each network has three hidden layers. For the auxiliary network, we concatenate the hidden representations as input, and use a three-layer trans-convolutional neural network. The other settings for the methods and the ablations are the same as in the overlapping digit experiment.

The results shown in Table 1 (right) demonstrate that the proposed method is significantly better than the baseline, by more than a 40% absolute increase. It also outperforms the ablations, indicating that all the modifications are necessary. Among them, the auxiliary network contributes the most to the performance improvement.

6 DISCUSSIONS

In this section, we perform visualization and error analysis to better understand the experimental results and the behavior of the proposed approach.

6.1 DISTRIBUTION VISUALIZATION

We visualize the hidden representations of both the baseline and the proposed approach for the overlapping digit experiments (Figure 6). We use t-SNE (Maaten & Hinton, 2008) to reduce each of the two hidden representations to one dimension, and jointly plot them along the horizontal and vertical coordinates, respectively. Training samples are blue, and test samples are orange.

Figure 4: The out-of-distribution problem has different distributions in training and test. We hope to learn a model in the training distribution (blue), and use it in the test distribution (orange).

Our expectation is a chessboard-like distribution, similar to the true underlying distribution (Figure 4). Note that though there are 10 labels, the expected result may not be 10×10 colored blocks, because the labels, along with the representations, may not be in order (e.g. switching labels 5 and 6 removes two block lines), and the first and the second hidden representations may differ for the same digit.

We find that the visualization of the proposed approach (Figure 6b) is closer to the expectation than the baseline one (Figure 6a). The proposed approach has fewer empty areas, indicating that it can recombine the components in the out-of-distribution setting. This analysis demonstrates that the proposed approach works in the expected way.

Figure 6: Visualization of hidden representations for (a) the baseline approach and (b) the proposed approach. Each representation is reduced to one dimension via t-SNE (Maaten & Hinton, 2008). We plot them jointly, training in blue, and test in orange. The proposed approach (b) is close to the expected chessboard-like result, similar to Figure 4.

6.2 SAMPLE VISUALIZATION

We also hope to visualize concrete samples for the proposed approach. Since we have the auxiliary network with two sub-networks, one for each hidden component representation, we visualize their outputs and compare them with the ground truth (Figure 7). The result shows that the original input and the hidden components are reasonably recovered for both training and test samples. This means that the proposed approach is able to extract information for each component in the training distribution, and to transfer this ability to the test distribution.

Figure 7: Visualization for the proposed approach on train samples (left) and test samples (right). The first row is the ground truth; the second row is the images recovered from the auxiliary network. The first column is the overlapping image; the second and third columns are the images for the first and the second components, respectively. The results show that the proposed approach is able to learn, and to transfer, the ability to extract the correct components.

6.3 TRANSFER ERROR ANALYSIS

We analyze errors to show that the inference optimization process helps to address the transfer problem. When the model makes mistakes in test, the predicted output may correspond to in-distribution or out-of-distribution label pairs. We investigate how frequently errors are associated with the distribution transfer. We define a metric, the Transfer Error Rate (TER), to measure this. It is the number of errors predicted to be in the wrong distribution, over all errors. In this setting, the sum of the labels is odd in test and even in training, so TER is the number of errors with an even predicted label sum over all errors. If this rate is high (100% is the upper bound by definition), it means that most of the errors are associated with the transfer. If it is around 50%, it means that the errors are balanced across the distributions.
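Under this definition, TER can be computed directly from the predicted and gold label pairs; a small sketch for the overlapping-digit setting follows.

```python
def transfer_error_rate(pred, gold):
    """pred, gold: lists of (y1, y2) label pairs from the test set.

    An error is a "transfer error" if the predicted pair lies in the
    training distribution (even label sum), while test pairs have odd sums.
    Returns TER in percent.
    """
    errors = [p for p, g in zip(pred, gold) if p != g]
    if not errors:
        return 0.0
    transfer = sum(1 for y1, y2 in errors if (y1 + y2) % 2 == 0)
    return 100.0 * transfer / len(errors)
```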
TER is 89.5 ± 0.7% for the baseline, and 68.3 ± 2.8% for the proposed approach, with 500 test samples. Also, before the inference optimization in the proposed approach, the value is 88.8 ± 2.0%. The results show that most of the errors of the baseline are transfer errors, and that the proposed approach reduces them significantly. Moreover, the reduction happens during the optimization in inference, because the value is close to the baseline before the optimization. This indicates that the proposed approach is effective at reducing TER, and that the optimization is an important factor in it. Please refer to Figure 5 for more details.

Figure 5: Transfer Error Rate (TER) during inference optimization. 100% is the upper bound by definition; 50% means the errors are balanced between in-distribution and out-of-distribution.

7 CONCLUSION

In this paper, we discuss our finding that compositionality does not transfer naturally from the training to the test data distribution. We further propose to address this problem with an auxiliary reconstruction network and a regularized optimization procedure during the inference stage. Experimental results show that the proposed approach achieves more than a 20% absolute increase in various experiments compared with baselines, and even outperforms humans on the overlapping digit recognition task. We hope this work will reshape our thoughts on the transferability of compositionality and help to advance compositional generalization and artificial intelligence research.
xGHcvStyzNG
the paper considers an interesting general problem, but the concrete supervised learning instantiation is problematic
4: Ok but not good enough - rejection
The paper introduces a “transferability of compositionality” problem and proposes an approach to alleviate it. The said problem may arise when one trains neural models to produce “compositional” representations of the input. In the paper, “compositional representations” consist of multiple vectors which are supposed to correspond to semantically meaningful aspects of the input, for example different objects in the case of images or different parts of compound words in the case of linguistic inputs. The transferability problem arises when there is a difference between training and test distributions, namely when certain combinations of objects have different probabilities in training & testing. The proposed solution at inference time is to project object representations to the manifold of individual object representations. The manifold is estimated by saving individual object representations from training time.

The problem that the paper considers is an interesting one. There have been a lot of papers on learning object-oriented representations recently [1, 2], and an implicit assumption in all these works is that there is no statistical dependency between the objects that occur in the scenes. There is also the literature on disentangled representations that the paper extensively cites, where the independence assumption is also common.

My concerns regarding the paper are as follows:
- Positioning with respect to the prior work. The literature on learning object-oriented representations is not cited. The work on disentangled representations is cited, but still new setups are created from scratch.
- Related to the previous point, the use of full supervision (in the form of labels) in the proposed tasks strikes me as a deviation from most previously used setups. Previous work aimed to learn compositional representations without supervision, often positioning their efforts as a cog-sci-style inquiry in building human-like models. The use of labels makes this look less like a cog-sci and more like a machine learning paper. Viewing the work as an ML paper, one thing that stands out is the lack of connections to any applied ML problem.
- I think the negative results in the paper would look stronger if pretrained image- and language-processing models were used in all experiments (e.g. Contrastive Predictive Coding & BERT).

The proposed method seems appropriate for the tasks that the paper considers. The experiments appear to be technically sound. My main concerns are thus focused on the motivation of the proposed tasks themselves and the positioning with respect to the prior work. I think a great direction to improve the paper would be to add experiments without supervised learning, using 3D-rendered images with multiple objects as is done e.g. in [1] and [2].

A few comments on the writing:
- Algorithm 1 is very confusing because sample-level steps 1-4 are mixed with dataset-level steps 5 and 6.
- A confusing sentence in the intro: “For a test sample, we regularize each hidden representation in its training manifold, and optimize them to recover the original input”.
- For the colored digits experiments you might want to compare to and cite [3].

[1] “Multi-Object Representation Learning with Iterative Variational Inference” by Greff et al, 2020
[2] “MONet: Unsupervised Scene Decomposition and Representation” by Burgess et al, 2019
[3] “Invariant Risk Minimization” by Arjovsky et al, 2019
3: The reviewer is fairly confident that the evaluation is correct
NTEz-6wysdb
ICLR.cc/2021/Conference
2021
Distilling Knowledge from Reader to Retriever for Question Answering
["Gautier Izacard", "Edouard Grave"]
The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.
["question answering", "information retrieval"]
ABSTRACT

The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages the attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results.

1 INTRODUCTION

Information retrieval is an important component for many natural language processing tasks, such as question answering (Voorhees et al., 1999) or fact checking (Thorne et al., 2018). For example, many real world question answering systems start by retrieving a set of support documents from a large source of knowledge such as Wikipedia. Then, a finer-grained model processes these documents to extract the answer. Traditionally, information retrieval systems were based on hand-crafted sparse representations of text documents, such as TF-IDF or BM25 (Jones, 1972; Robertson et al., 1995). Recently, methods based on dense vectors and machine learning have shown promising results (Karpukhin et al., 2020; Khattab et al., 2020). Deep neural networks based on pre-training, such as BERT (Devlin et al., 2019), have been used to encode documents into fixed-size representations. These representations are then queried using approximate nearest neighbors (Johnson et al., 2019). These techniques have led to improved performance on various question answering tasks.

A challenge of applying machine learning to information retrieval is to obtain training data for the retriever. To train such models, one needs pairs of queries and the corresponding lists of documents that contain the information corresponding to the queries. Unfortunately, hand-labeling data to that end is time consuming, and many datasets and applications lack such annotations. An alternative approach is to resort to heuristics, or weakly supervised learning, for example by considering that all documents containing the answer are positive examples. However, these approaches suffer from the following limitations. First, frequent answers or entities might lead to false positive examples. As an example, consider the question “where was Ada Lovelace born?”. The sentence “Ada Lovelace died in 1852 in London” would be considered as a positive example, because it contains the answer “London”. A second limitation is that for some tasks, such as fact checking or long form question answering, such heuristics might not be applicable directly.

In this paper, we propose a procedure to learn retriever systems without strong supervision in the form of pairs of queries and documents. Following previous work (Chen et al., 2017), our approach uses two models: the first one retrieves documents from a large source of knowledge (the retriever); the second one processes the support documents to solve the task (the reader). Our method is inspired by knowledge distillation (Hinton et al., 2015), and uses the reader model to obtain synthetic labels to train the retriever model.
More precisely, we use a sequence-to-sequence model as the reader, and use the attention activations over the input documents as synthetic labels to train the retriever. Said otherwise, we assume that attention activations are a good proxy for the relevance of documents. We then train the retriever to reproduce the ranking of documents corresponding to that metric.

We make the following contributions:
- First, we show that attention scores from a sequence-to-sequence reader model are a good measure of document relevance (Sec. 3.2);
- Second, inspired by knowledge distillation, we propose to iteratively train the retriever from these activations, and compare different loss functions (Sec. 3.4);
- Finally, we evaluate our method on three question-answering benchmarks, obtaining state-of-the-art results (Sec. 4).

Our code is available at: github.com/facebookresearch/FiD.

2 RELATED WORK

We briefly review information retrieval based on machine learning. We refer the reader to Manning et al. (2008) and Mitra et al. (2018) for a more exhaustive introduction to the subject.

Vector space models. In traditional information retrieval systems, documents and queries are represented as sparse vectors, each dimension corresponding to a different term. Different schemes have been considered to weight the different terms, the most well known being based on inverse document frequency, or term specificity (Jones, 1972). This technique was later extended, leading to the BM25 weighting scheme which is still widely used today (Robertson et al., 1995). A limitation of sparse representations is that the terms of the query need to match the terms of the returned documents. To overcome this, Deerwester et al. (1990) proposed to use latent semantic analysis for indexing, leading to low-dimensional dense representations of documents.

Neural information retrieval. Following the success of deep learning for other natural language processing tasks, neural networks were applied to the task of information retrieval. Huang et al. (2013) proposed a deep bag-of-words model, where queries and documents were embedded independently, a technique known as the bi-encoder. Documents were then ranked by their cosine similarity with the query, and the model was trained on clickthrough data from a search engine. This technique was later extended by using convolutional neural networks (Shen et al., 2014) and recurrent neural networks (Palangi et al., 2016). A limitation of independently embedding documents and queries is that it does not capture fine-grained interactions between the query and the documents. This led Nogueira & Cho (2019) and Yang et al. (2019) to use a BERT model to jointly embed documents and query, a technique known as the cross-encoder.

End-to-end retrieval. Most of the methods described in the previous paragraphs were used to re-rank a small number of documents, usually returned by traditional IR systems. In the context of ad-hoc document retrieval, Gillick et al. (2018) showed that bi-encoder models could be competitive with traditional IR systems. For open domain question answering, Karpukhin et al. (2020) introduced dense passage retrieval (DPR), which uses dense embeddings and nearest neighbors search. More precisely, question and passage embeddings are obtained using a BERT-based bi-encoder model, which is trained on a small dataset of question and passage pairs. Then, the full knowledge source (Wikipedia) is encoded using this model, and passages are retrieved by computing the k-nearest neighbors of the embedding of the question.
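As a rough illustration of this embed-then-search pipeline, the following sketch builds an inner-product index with FAISS (Johnson et al., 2019) over precomputed passage embeddings. The flat (exact) index is used here for simplicity; at scale, approximate indexes would typically replace it, and the bi-encoder producing the embeddings is assumed to be given.

```python
import numpy as np
import faiss

def build_index(passage_embeddings: np.ndarray) -> faiss.Index:
    """passage_embeddings: float32 array of shape (num_passages, d)."""
    index = faiss.IndexFlatIP(passage_embeddings.shape[1])  # inner-product search
    index.add(passage_embeddings)
    return index

def retrieve(index: faiss.Index, question_embedding: np.ndarray, k: int = 100):
    """question_embedding: float32 array of shape (1, d).

    Returns the ids and scores of the k passages with the largest dot product.
    """
    scores, ids = index.search(question_embedding, k)
    return ids[0], scores[0]
```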
End-to-end retrieval. Most of the methods described in the previous paragraph were used to re-rank a small number of documents, usually returned by traditional IR systems. In the context of ad-hoc document retrieval, Gillick et al. (2018) showed that bi-encoder models could be competitive with traditional IR systems. For open domain question-answering, Karpukhin et al. (2020) introduced dense passage retrieval (DPR), which uses dense embeddings and nearest neighbors search. More precisely, question and passage embeddings are obtained using a BERT-based bi-encoder model, which is trained on a small dataset of question and passage pairs. Then, the full knowledge source (Wikipedia) is encoded using this model, and passages are queried by computing the k-nearest neighbors of the embedding of the question. Jointly embedding the query and documents makes the application of cross-encoder models intractable for large databases. To address this limitation, Humeau et al. (2019) introduced the poly-encoder architecture, in which each document is represented by multiple vectors instead of one. Similarly, Khattab et al. (2020) proposed a scoring function where each term of the query and documents is represented by a single vector. To make the method tractable, their system retrieves documents with an approximate score, which are then re-ranked with the exact one. Finally, Luan et al. (2020) conduct a theoretical and empirical study of sparse, dense and cross-attention information retrieval systems.

Unsupervised learning. Closest to our work, there is a growing body of work trying to learn information retrieval systems from unsupervised data. Lee et al. (2019) introduced the inverse cloze task for pre-training retrievers, which can then be fine-tuned end-to-end on question-answering tasks. This pre-training scheme was later evaluated for ad-hoc document retrieval by Chang et al. (2020). Guu et al. (2020) proposed to augment language model pre-training with a retriever module, which is trained using the masked language modeling objective. Similarly, Lewis et al. (2020a) introduced a sequence-to-sequence model that is pre-trained by generating a target text, after retrieving a set of related texts. Lewis et al. (2020b) further train the retriever obtained in Karpukhin et al. (2020) by backpropagating to the retriever the error between the generated output and the gold answer.

Simultaneously to our work, Yang & Seo (2020) propose to train a retriever with knowledge distillation. The main difference with our method is the nature of the synthetic labels that are used to train the retriever. Yang & Seo (2020) use the DPR reader, which includes a classifier that predicts which passage contains the answer, and can be seen as a cross-encoder reranker. This technique thus performs the distillation of a cross-encoder retriever to a bi-encoder retriever. In contrast, our method uses the internal attention scores of the reader, which does not require additional supervision besides pairs of question and answer.

3 Methodology

Our system is composed of two modules, the retriever and the reader, following the standard pipeline for open-domain question answering. Given an input question, these modules are used in a two-step process to generate an answer. First the retriever selects support passages in a large knowledge source. Then these passages are processed by the reader, along with the question, to generate an answer. For the reader module we use the Fusion-in-Decoder model (Izacard & Grave, 2020), which achieves state-of-the-art performance when combined with BM25 or DPR (Karpukhin et al., 2020). It is based on a sequence-to-sequence architecture, and is initialized from pre-trained models such as T5 or BART (Raffel et al., 2019; Lewis et al., 2019).

The focus of this work is to train the retriever without strong supervision or weakly supervised learning based on heuristics. For this we propose to train the retriever by learning to approximate the attention score of the reader.
The training scheme outlined here can be seen as a student-teacher pipeline, where the teacher, the reader module, produces targets which are used to train a student network, the retriever. By doing so, we hope to leverage the signal extracted from the question-answer pairs by the reader. Since the goal of the retriever is to retrieve the most relevant passages, by training the retriever to estimate the reader attention scores, we implicitly make the assumption that these scores are a good proxy for the usefulness of a passage to answer the question.

In this section we will first describe the Fusion-in-Decoder architecture, before elaborating on the signal which is used to train the retriever, the design of the retriever, and how this module is trained.

3.1 Cross-Attention Mechanism

First, let us briefly review the Fusion-in-Decoder model (FiD, Izacard & Grave, 2020). The underlying architecture is a sequence-to-sequence model, composed of an encoder and a decoder. The encoder independently processes $n_p$ different text inputs $(s_k)_{1 \le k \le n_p}$. In the case of open-domain question answering based on Wikipedia, each input $s_k$ is the concatenation of the question $q$ and a support passage, with special tokens question:, title: and context: added before the question, the title of the Wikipedia article and the text of each passage. The output representations of the encoder are then concatenated to form a global representation $X$ of dimension $(\sum_k \ell_k) \times d$, where $\ell_k$ is the length of the $k$-th segment and $d$ is the dimension of the embeddings and hidden representations of the model. Then, the decoder processes this representation as a regular autoregressive model, alternating self-attention, cross-attention and feed-forward modules.

Only the cross-attention module explicitly takes as input the global output representation $X$ of the encoder. If $H \in \mathbb{R}^d$ denotes the output of the previous self-attention layer of the decoder, the cross-attention operation consists in the following operations. First, queries $Q$, keys $K$ and values $V$ are computed by applying linear transformations:
$$Q = W_Q H, \quad K = W_K X, \quad V = W_V X.$$
Then a similarity score between the query at position $i$, $Q_i$, and the key at position $j$, $K_j$, is obtained by computing the dot-product between these two elements, and normalized over the dimension:
$$\alpha_{i,j} = Q_i^\top K_j, \qquad \tilde{\alpha}_{i,j} = \frac{\exp(\alpha_{i,j})}{\sum_m \exp(\alpha_{i,m})}.$$
A new representation is obtained as a sum of the values, weighted by the attention probabilities, before going through a final linear transformation $W_O$:
$$O_i = W_O \sum_j \tilde{\alpha}_{i,j} V_j.$$
The operations described above are performed in parallel with different linear transformations in the case of multi-head attention. Finally a normalization layer is applied, and this pipeline is wrapped by a skip connection. See Vaswani et al. (2017) for more details on the structure of Transformers.
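For concreteness, the following is a minimal single-head NumPy sketch of the cross-attention computation above. The row-vector convention (projections applied as `H @ W_Q`) is an implementation choice, not the paper's notation, and the real model is multi-head inside a full Transformer.

```python
import numpy as np

def cross_attention(H, X, W_Q, W_K, W_V, W_O):
    """Single-head cross-attention.
    H: (l_out, d) decoder states; X: (l_in, d) concatenated encoder outputs."""
    Q, K, V = H @ W_Q, X @ W_K, X @ W_V           # linear projections
    scores = Q @ K.T                              # pre-attention scores alpha_{i,j}
    z = scores - scores.max(axis=1, keepdims=True)        # numerical stability
    attn = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # softmax over input tokens
    return (attn @ V) @ W_O, scores               # O_i = W_O sum_j attn_{i,j} V_j
```

Multi-head attention runs several such maps in parallel with different projection matrices; the returned pre-softmax scores are the quantities aggregated in Sec. 3.2.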
3.2 Cross-Attention Score as a Relevance Measure for Passage Retrieval

In some sense, the attention scores $\alpha_{:,j}$ involving the $j$-th key measure the importance of this key, and corresponding value, to compute the next representation. We hypothesize that it is a good proxy to estimate the relevance of a passage: the more the tokens in a text segment are attended to, the more relevant the text segment is to answer the question.

Given the reader model, an input question $q$ and a corresponding set of support passages $D_q = (p_k)_{1 \le k \le n}$, we obtain relevance scores $(G_{q,p_k})_{1 \le k \le n}$ for each passage by aggregating attention scores. In particular, the score $G_{q,p_k}$ is obtained by averaging the pre-attention scores $\alpha_{0,:}$ over all the tokens in the input $s_k$ corresponding to the passage $p_k$, all the layers and all the heads of the decoder. Note that the FiD decoder jointly processes the passages, and thus the score $G_{q,p_k}$ depends on the other support passages. We consider other pooling operators, such as max, to aggregate attention scores over layers, heads and tokens, and empirically compare them in Sec. 5.2.

Before we proceed, let us consider the following simple experiment, which is a first indication that reader attention scores are indeed a strong relevance signal. Given a question and 100 passages retrieved with DPR, our goal is to select the 10 best passages. When using the top 10 passages from DPR instead of the top 100, the performance of our reader drops from 48.2 EM to 42.9 EM. On the other hand, if we select the top 10 documents according to the attention scores, the performance only drops to 46.8 EM.
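A sketch of this aggregation, assuming the pre-attention scores of every decoder layer and head have been collected into a single array; the span bookkeeping is our assumption about how such scores would be gathered in practice.

```python
import numpy as np

def passage_scores(scores, spans):
    """scores: (n_layers, n_heads, out_len, in_len) pre-attention scores
    collected from every cross-attention module of the decoder.
    spans: list of (start, end) input-token ranges, one per support passage.
    Returns G_{q,p}: mean over layers, heads and the passage's input tokens,
    keeping only the first output token (i = 0), as in scheme (0) of Sec. 5.2."""
    first_token = scores[:, :, 0, :]              # alpha_{0, j, k, h}
    return np.array([first_token[..., s:e].mean() for s, e in spans])
```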
3.3 Dense Bi-Encoder for Passage Retrieval

Ideally, we would like to rank passages according to the reader cross-attention scores. In practice however, since the passages and the question need to be processed simultaneously by the reader module, it is impractical to query a large knowledge source this way. Thus, we use a retriever model composed of an embedder function $E$ that maps any text passage to a $d$-dimensional vector, such that the similarity score between a question $q$ and a passage $p$ is defined as $S(q,p) = E(q)^\top E(p)$. This similarity metric enables us to index all passages in the knowledge source as a preprocessing step. Then at runtime, passages with the highest similarity score with the input question are retrieved, by using an efficient similarity search library such as FAISS (Johnson et al., 2019).

For the embedder we use BERT and follow DPR by considering that the encodings $E(q)$ and $E(p)$ are obtained by extracting the representation of the initial [CLS] token. This leads to a representation of dimension $d = 768$ in the case of a base model. Differently from DPR, we use the same encoding function $E$ for the questions and passages by sharing parameters.
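The sketch below illustrates the index-then-search pattern just described; `embed` stands in for the shared BERT encoder returning the [CLS] representation, and the brute-force NumPy search stands in for FAISS.

```python
import numpy as np

def build_index(passages, embed):
    # Preprocessing step: embed every passage once with the shared encoder E.
    return np.stack([embed(p) for p in passages])   # (n_passages, d)

def retrieve(question, index, embed, k=100):
    # S(q, p) = E(q)^T E(p); FAISS would replace this exact search at scale.
    scores = index @ embed(question)
    top = np.argsort(-scores)[:k]
    return top, scores[top]
```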
3.4 Distilling the Cross-Attention Score to a Bi-Encoder

In this section, we describe how to train the retriever model, based on the relevance scores obtained in Sec. 3.2. For the training objective of the retriever, we propose to minimize the KL-divergence between the output $S_\theta(q,p)$ and the score $G_{q,p}$ after normalization:
$$\mathcal{L}_{KL}(\theta, \mathcal{Q}) = \sum_{q \in \mathcal{Q},\, p \in D_q} \tilde{G}_{q,p} \left( \log \tilde{G}_{q,p} - \log \tilde{S}_\theta(q,p) \right),$$
where
$$\tilde{G}_{q,p} = \frac{\exp(G_{q,p})}{\sum_{p' \in D_q} \exp(G_{q,p'})}, \qquad \tilde{S}_\theta(q,p) = \frac{\exp(S_\theta(q,p))}{\sum_{p' \in D_q} \exp(S_\theta(q,p'))}.$$
In Sec. 5.1 we present results obtained when using alternatives to this training objective. We consider two other objectives which have been used in Dehghani et al. (2017), where BM25 is used as a teacher model to train a neural ranker. A first option consists in training the retriever with a regression approach by minimizing the mean squared error:
$$\mathcal{L}_{MSE}(\theta, \mathcal{Q}) = \sum_{q \in \mathcal{Q},\, p \in D_q} \left( S_\theta(q,p) - G_{q,p} \right)^2.$$
The second option we consider is to use a max-margin loss that explicitly penalizes inversions in the ranking estimated by the retriever:
$$\mathcal{L}_{ranking}(\theta, \mathcal{Q}) = \sum_{q \in \mathcal{Q},\, p_1, p_2 \in D_q} \max\left(0,\; \gamma - \operatorname{sign}(G_{q,p_1} - G_{q,p_2})\left( S_\theta(q,p_1) - S_\theta(q,p_2) \right)\right).$$
In words, if $p_1$ is more relevant to answer the question $q$ than $p_2$, i.e. $G_{q,p_1} > G_{q,p_2}$, the loss pushes the retriever score of $p_1$ to be larger than the score of $p_2$ by at least a margin of $\gamma$.
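Minimal PyTorch versions of the three objectives, written for the passages of a single question: `S` holds the retriever scores $S_\theta(q,p)$ and `G` the aggregated reader scores $G_{q,p}$, treated as constant targets.

```python
import torch
import torch.nn.functional as F

def kl_loss(S, G):
    # KL(softmax(G) || softmax(S)); G is detached, gradients only reach S.
    return F.kl_div(F.log_softmax(S, dim=-1),
                    F.softmax(G.detach(), dim=-1), reduction="sum")

def mse_loss(S, G):
    return ((S - G.detach()) ** 2).sum()

def margin_loss(S, G, gamma=0.2):
    # Hinge on every ordered pair (p1, p2), p1 != p2: penalize retriever
    # rankings that contradict the reader scores by less than margin gamma.
    dS = S[:, None] - S[None, :]
    sgn = torch.sign(G.detach()[:, None] - G.detach()[None, :])
    pair = torch.clamp(gamma - sgn * dS, min=0)
    off_diag = ~torch.eye(len(S), dtype=torch.bool)
    return pair[off_diag].sum()
```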
3.5 Iterative Training

In this section, we explain how iterative training can be used with the student-teacher scheme described in the previous section, similarly to Khattab et al. (2020). This iterative procedure can be interpreted as using the current retriever to sample negative examples, in order to train a new retriever. When learning a retriever with discriminative training, negative samples play an important role, and various strategies have been considered in previous work. Karpukhin et al. (2020) compared random sampling with using the top-k passages from BM25 which do not contain the answer, and with using the positive passages from other queries. Consider that for each question, we have an initial set of support documents $D_q^0$. We propose to use an iterative pipeline where each iteration can be described as the following 4-step process:

1. Train the reader $R$ using the set of support documents for each question $D_q^0$.
2. Compute aggregated attention scores $(G_{q,p})_{q \in \mathcal{Q},\, p \in D_q^0}$ with the reader $R$.
3. Train the retriever $E$ using the scores $(G_{q,p})_{q \in \mathcal{Q},\, p \in D_q^0}$.
4. Retrieve top-k passages with the newly trained retriever $E$.

This multi-step procedure can be repeated multiple times. A critical point of the training procedure is the initial set of documents corresponding to each question. In Sec. 4, we compare retrievers obtained by starting from documents obtained using BM25 or cosine similarity from a BERT model. In particular, we show that while the initial performance with BERT is low, the iterative procedure allows us to greatly improve the performance of the model.
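The loop below sketches this pipeline; `train_reader`, `attention_scores`, `train_retriever` and `retrieve_topk` are hypothetical helpers standing for the procedures of Sec. 3.1-3.4, not library functions.

```python
def iterative_training(questions, D0, knowledge_source, n_iters=4, k=100):
    """D0 maps each question to its initial support passages (e.g. from BM25)."""
    D, retriever = D0, None          # None: retriever starts from BERT base
    for _ in range(n_iters):
        reader = train_reader(questions, D)           # step 1 (reinit from T5)
        G = attention_scores(reader, questions, D)    # step 2: scores G_{q,p}
        retriever = train_retriever(retriever, G, D)  # step 3: KL distillation
        D = {q: retrieve_topk(retriever, q, knowledge_source, k)
             for q in questions}                      # step 4: refresh supports
    return reader, retriever
```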
4 Experiments

In this section we evaluate the student-teacher training procedure from the previous section. We show that we obtain competitive performance without strong supervision for support documents.

4.1 Experimental Setting

Datasets. We perform experiments on TriviaQA (Joshi et al., 2017) and NaturalQuestions (Kwiatkowski et al., 2019), two standard benchmarks for open-domain question answering. TriviaQA is made of questions from trivia and quiz league websites, and does not contain gold support documents. NaturalQuestions contains questions corresponding to web search queries, and gold support documents from Wikipedia. Following the setting from Lee et al. (2019) and Karpukhin et al. (2020), we use the original evaluation set as test set, and keep 10% of the training data for validation. We use the Wikipedia dump from Dec. 20, 2018 for support documents, splitting articles into non-overlapping passages of 100 tokens, and applying the same preprocessing as Chen et al. (2017).

We also evaluate on NarrativeQA (Kočiský et al., 2018), using a publicly available preprocessed version (https://cs.nyu.edu/~kcho/NarrativeQA). This is a reading comprehension dataset built on a corpus of books and movie scripts. For each story, questions are generated by human annotators based on a summary of the given document. We consider the full story setting, where the task is to answer questions given the entire story and not the summary used to generate question-answer pairs. Here the knowledge source is not the same for all questions: given a question, the retrieval operation is performed on all passages of the associated story. These passages are obtained by dividing the story into chunks of 100 words. These stories are long documents, with an average of 60k words. While part of the documents could be processed entirely by the Fusion-in-Decoder module, it is interesting to limit the number of support passages to reduce the computational cost of the reading step.

While answers in TriviaQA and NaturalQuestions are short, NarrativeQA answers are about five words long on average, with medium-length answers such as "He dismantles it and attaches it to his mother's jeep", which answers the question "What does Mark do with his radio station?". Notably, a significant number of answers do not correspond to spans in the story. It is thus not straightforward to train the retriever with heuristics using question-answer pairs. In our case we use the same pipeline as for TriviaQA and NaturalQuestions, demonstrating the flexibility of our approach.

Evaluation. The model performance is assessed in two ways. First, following previous work such as DPR and ColBERT-QA, we report the top-k retrieval accuracy (P@k), which is the percentage of questions for which at least one passage of the top-k retrieved passages contains the gold answer. It is unclear how well this metric evaluates the retriever performance, since the answer can be contained in a passage without being related to the question. This is notably true for common words or entities. We also report the final end-to-end performance of the question answering system composed of the retriever and reader modules. This is the metric we are fundamentally interested in. For TriviaQA and NaturalQuestions, predicted answers are evaluated with the standard exact match metric (EM), as introduced by Rajpurkar et al. (2016). For NarrativeQA we report the metrics proposed in the original paper: ROUGE-L, BLEU-1, BLEU-4 and METEOR.
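A minimal sketch of the top-k retrieval accuracy just defined; `has_answer` is assumed to test whether a gold answer string occurs in a passage.

```python
def top_k_retrieval_accuracy(retrieved, answers, k, has_answer):
    """P@k: share of questions where >= 1 of the top-k passages has the answer.
    retrieved: per-question ranked passage lists; answers: gold answers."""
    hits = sum(any(has_answer(p, a) for p in passages[:k])
               for passages, a in zip(retrieved, answers))
    return 100.0 * hits / len(retrieved)
```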
4.2 Technical Details

Initialization. Similarly to DPR, we initialize the retriever with the BERT base model, pretrained with uncased text. The Fusion-in-Decoder reader is initialized with the T5 base model. A critical component of the iterative training procedure is the initialization of the support passages $D_q^0$ associated with each question $q$. For this we consider different options. The first one is to use passages retrieved using BM25. We use the implementation from Apache Lucene (lucene.apache.org) with default parameters, and tokenize questions and passages with SpaCy (spacy.io). We also use passages obtained with BERT as a retriever without fine-tuning; this leads to poor initial performance. Finally, in Table 2 we show that initializing $D_q^0$ with passages obtained with DPR (Karpukhin et al., 2020) outperforms the two previous initializations. We train all retrievers using 100 passages. For the reader, we use 100 passages for NaturalQuestions and TriviaQA and 20 passages for NarrativeQA.

Iterative training. We apply the iterative training procedure on each dataset independently. Both the reader and the retriever are fine-tuned using the Adam algorithm (Kingma & Ba, 2014), with a batch of size 64. The reader is trained for 10k gradient steps with a constant learning rate of $10^{-4}$, and the best model is selected based on the validation performance. The retriever is trained with a constant learning rate of $5 \cdot 10^{-5}$ until the performance saturates. To monitor the performance of the retriever during training, we measure the similarity between the reader and the retriever rankings. At each new training iteration the reader is reinitialized from T5 base, while we pursue the training of the retriever. We found that restarting from T5 base is important for the first iterations when starting with BERT documents. We have not tried to reinitialize the retriever between each iteration. More details on the hyperparameters and the training procedure are reported in Appendix A.2.

4.3 Results

In Table 1, we report the performance of our approach for different numbers of self-training iterations.

Table 1: Iterative training starting with documents retrieved with BERT and BM25. Iteration 0 corresponds to the performance of the reader trained on the set of initial support documents. We report all metrics on the validation set.

                          BERT                       BM25
Iter.          P@20   P@100   Dev EM      P@20   P@100   Dev EM
NaturalQuestions
0               4.8    12.0     9.8        59.3   74.0    41.2
1              32.2    45.8    16.9        76.4   84.3    46.8
2              51.1    62.6    28.6        80.4   86.7    47.9
3              67.8    76.8    39.3        80.0   86.3    46.2
TriviaQA
0               4.6    12.0     9.7        75.0   82.3    65.3
1              37.1    59.4    19.6        79.0   85.5    66.7
2              60.8    73.4    43.3        82.1   86.5    67.5
3              72.0    83.2    52.0        81.6   86.6    67.7
4              76.4    84.6    62.3        -      -       -

Generally, we observe that the accuracy of our system increases with the number of iterations, obtaining strong performance after a few iterations. Interestingly, while the initial performance with documents retrieved with BERT is very poor, our method still reaches competitive scores on TriviaQA, and to a lesser extent, NaturalQuestions. However, a second observation is that the quality of the initial document sets plays an important role in the performance of the end system. Indeed, we observe that starting the procedure from BM25 documents, which are of higher quality as indicated by the performance of the system at iteration 0, leads to stronger results than using BERT documents. An interesting research question would be to explore pre-training of the initial BERT model for retrieval, for example by using the inverse cloze task.

In Table 2, we report the performance of our approach, as well as existing state-of-the-art systems, on TriviaQA and NaturalQuestions.

Table 2: Comparison to state-of-the-art models on NaturalQuestions and TriviaQA.

                                                         NQ            TriviaQA
Model                                                dev.   test     dev.   test
DPR (Karpukhin et al., 2020)                          -     41.5      -     57.9
RAG (Lewis et al., 2020b)                             -     44.5      -     56.1
ColBERT-QA (Khattab et al., 2020)                     -     48.2      -     63.2
Fusion-in-Decoder (T5 base) (Izacard & Grave, 2020)   -     48.2      -     65.0
Fusion-in-Decoder (T5 large) (Izacard & Grave, 2020)  -     51.4      -     67.6
Ours (starting from BERT, T5 base)                   39.3   40.0     62.5   62.7
Ours (starting from BM25, T5 base)                   47.9   48.9     67.7   67.7
Ours (starting from DPR, T5 base)                    48.0   49.6     68.6   68.8
Ours (starting from DPR, T5 large)                   51.9   53.7     71.9   72.1

In addition to initializing our method with documents retrieved with BM25 and BERT, we also train a system by starting from DPR documents. First, we observe that our method improves the performance over the state of the art, even when starting from BM25 documents. This validates our assumption that it is possible to obtain strong retrievers without the need of supervision for the documents. Second, when starting from DPR passages, our method leads to a +4.5 EM improvement on TriviaQA and a +2.3 EM improvement on NaturalQuestions when the final evaluation is carried out with a large reader.

In Table 3, we report the performance of our method on the NarrativeQA dataset. We use the setting where the knowledge source corresponds to the whole document, and in particular, we do not use the summary. We compare our results to the best ones reported in the original paper for this setting. Similar to results obtained on NaturalQuestions and TriviaQA, we observe that training the retriever by using the attention scores of the reader leads to improvements, compared to the BM25 baseline.

Table 3: Performance on NarrativeQA.

                                        Rouge-L       Bleu-1       Bleu-4       Meteor
Method                          Iter.  dev.  test    dev.  test   dev.  test   dev.  test
Best from Kočiský et al. (2018)   -    14.5  14.0    20.0  19.1   2.23  2.1    4.6   4.4
DPR + FiD                         -    29.7  30.8    33.0  34.0   6.7   6.9    10.3  10.8
Ours starting from BM25           0    29.9  30.3    34.6  33.7   7.1   6.5    10.5  10.4
Ours starting from BM25           1    31.6  32.0    34.9  35.3   7.6   7.5    11.0  11.1

5 Ablations

In this section, we investigate design choices regarding two key elements of our approach: the training objective and the aggregation of cross-attention scores. For all experiments, we consider a simplified experimental setting: a single training iteration is performed on NaturalQuestions, starting from BM25 passages.

5.1 Training Objectives

In Table 4 we report the performance of our model trained with the different training objectives described in Sec. 3.4. We observe that using the KL-divergence between the aggregated scores of the reader and the scores of the retriever outperforms the other objective functions.

Table 4: Comparison of training objectives on NaturalQuestions after one iteration. We report all the metrics on the validation set.

Method                       P@5    P@20   P@100   Dev EM
Mean squared error           46.5   61.2   73.9    40.6
Max-margin loss, γ = 1       60.3   73.6   82.7    45.4
Max-margin loss, γ = 0.2     60.3   73.5   82.6    45.8
Max-margin loss, γ = 0.1     60.2   73.5   82.6    45.1
KL-divergence                64.7   76.4   84.3    46.8

5.2 How to Aggregate Cross-Attention Scores?

In Section 4 the cross-attention scores are aggregated in a specific way, in order to obtain a single scalar used to train the retriever. Formally, let us denote by $\alpha_{i,j,k,h}$ the cross-attention score between token $i$ of the output and token $j$ of the input, for the $k$-th layer and $h$-th head. Then, the scores $G_{q,p}$ for $p \in D_q$ used in Section 4 are computed as follows:
$$G_{q,p} = \operatorname{mean}_{j,k,h}\, \alpha_{0,j,k,h},$$
where $j$ describes the input tokens corresponding to $p$. In Table 5 we explore alternatives to this choice by considering different aggregation schemes. In particular, we consider (1) taking the max over the input tokens corresponding to passage $p$ instead of the average, (2) taking the average over the output tokens instead of taking the score of the first token, (3) taking the mean over the last six layers instead of all the layers, (4) taking the max over the layers instead of the average, (5) taking the max over the heads instead of the average. We observe that the performance of our approach is relatively stable to the choice of aggregation, and that the best result is obtained by averaging, except over the output tokens, where it is best to only consider the first token.

Table 5: Comparison of attention aggregation schemes on NaturalQuestions after one iteration. The index i corresponds to output tokens, j corresponds to input tokens of a given passage, h to heads and k to layers of the decoder. We report all metrics on the validation set.

Method                                    P@5    P@20   P@100   Dev EM
(0) mean_{j,k,h} α_{0,j,k,h}              64.7   76.4   84.3    46.8
(1) mean_{k,h} max_j α_{0,j,k,h}          61.2   72.5   81.0    46.0
(2) mean_{i,j,k,h} α_{i,j,k,h}            63.5   75.3   83.1    45.8
(3) mean_{7<=k<=12, j,h} α_{0,j,k,h}      64.1   75.7   83.8    46.4
(4) mean_{j,h} max_k α_{0,j,k,h}          63.9   75.5   83.7    46.5
(5) mean_{j,k} max_h α_{0,j,k,h}          64.2   76.1   83.9    46.8

6 Conclusion

In this paper, we introduce a method to train an information retrieval module for downstream tasks, without using pairs of queries and documents as annotations. Our approach is inspired by knowledge distillation, where the retriever module corresponds to the student model and the reader module corresponds to the teacher model. In particular, we use the cross-attention scores, from a sequence-to-sequence reader, to obtain synthetic targets for the retriever. We compare different ways to aggregate the scores, as well as different training objectives to learn the retriever. We show that iteratively training the reader and the retriever leads to better performance, and we obtain state-of-the-art performance on competitive question answering benchmarks. In the future, we would like to explore better pre-training strategies for the retriever module, as well as better scoring functions for the retriever.
PWS7IuRysr
Interesting technique
6: Marginally above acceptance threshold
The authors propose a training technique for information retrieval models in the context of (open domain) question answering. Assuming the existence of some reader model, the idea is to use internal information of that model as a training signal for a retriever. Specifically, they use the attention activations over the input documents as synthetic labels for the retriever. The paper is written well, and proposes an interesting idea. The technique is well motivated; I particularly like that they motivate the use of the cross-attention score through a simple experiment, where they compare the top 10 DPR passages with the top 10 passages by attention score. The results over the SOTA seem moderate though, and the number of iterations seems to be an important (and potentially underexplored) variable there. I support the acceptance of the paper, because I believe the technique and the choice of model score (cross-attention score) are both interesting contributions. As the authors say, training retrieval systems is tricky, since there is usually not sufficient labelled data available and it might depend heavily on the task. The iterative training that exploits the internal state of the "downstream" model that they describe is an interesting idea that deserves attention from the community. In Table 1, only up to 4 iterations are shown, which still exhibit large improvements from one to the next. It would be interesting to know at what point no additional gains are seen.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Distilling Knowledge from Reader to Retriever for Question Answering ### Paper Abstract The task of information retrieval is an important component of many natural language processing systems, such as open domain question answering. While traditional methods were based on hand-crafted features, continuous representations based on neural networks recently obtained competitive results. A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents. In this paper, we propose a technique to learn retriever models for downstream tasks, inspired by knowledge distillation, and which does not require annotated pairs of query and documents. Our approach leverages attention scores of a reader model, used to solve the task based on retrieved documents, to obtain synthetic labels for the retriever. We evaluate our method on question answering, obtaining state-of-the-art results. ### Paper Keywords ["question answering", "information retrieval"] ### Paper Content ABSTRACTThe task of information retrieval is an important component of many natural lan-guage processing systems, such as open domain question answering. While tra-ditional methods were based on hand-crafted features, continuous representationsbased on neural networks recently obtained competitive results. A challenge ofusing such methods is to obtain supervised data to train the retriever model, cor-responding to pairs of query and support documents. In this paper, we propose atechnique to learn retriever models for downstream tasks, inspired by knowledgedistillation, and which does not require annotated pairs of query and documents.Our approach leverages attention scores of a reader model, used to solve the taskbased on retrieved documents, to obtain synthetic labels for the retriever. We eval-uate our method on question answering, obtaining state-of-the-art results.1 I NTRODUCTIONInformation retrieval is an important component for many natural language processing tasks, suchas question answering (V oorhees et al., 1999) or fact checking (Thorne et al., 2018). For example,many real world question answering systems start by retrieving a set of support documents from alarge source of knowledge such as Wikipedia. Then, a finer-grained model processes these docu-ments to extract the answer. Traditionally, information retrieval systems were based on hand-craftedsparse representations of text documents, such as TF-IDF or BM25 (Jones, 1972; Robertson et al.,1995). Recently, methods based on dense vectors and machine learning have shown promising re-sults (Karpukhin et al., 2020; Khattab et al., 2020). Deep neural networks based on pre-training,such as BERT (Devlin et al., 2019), have been used to encode documents into fixed-size representa-tions. These representations are then queried using approximate nearest neighbors (Johnson et al.,2019). These techniques have lead to improved performance on various question answering tasks.A challenge of applying machine learning to information retrieval is to obtain training data for theretriever. To train such models, one needs pairs of queries and the corresponding list of documentsthat contains the information corresponding to the queries. Unfortunately, hand-labeling data to thatend is time consuming, and many datasets and applications lack such annotations. 
An alternativeapproach is to resort to heuristics, or weakly supervised learning, for example by considering that alldocuments containing the answer are positive examples. However, these approaches suffer from thefollowing limitations. First, frequent answers or entities might lead to false positive examples. Asan example, consider the question “where was Ada Lovelace born?” . The sentence “Ada Lovelacedied in 1852 in London” would be considered as a positive example, because it contains the answer“London” . A second limitation is that for some tasks, such as fact checking or long form questionanswering, such heuristics might not be applicable directly.In this paper, we propose a procedure to learn retriever systems without strong supervision in theform of pairs of queries and documents. Following previous work (Chen et al., 2017), our approachuses two models: the first one retrieves documents from a large source of knowledge ( the retriever ),the second one processes the support documents to solve the task ( the reader ). Our method isinspired by knowledge distillation (Hinton et al., 2015), and uses the reader model to obtain syntheticlabels to train the retriever model. More precisely, we use a sequence-to-sequence model as thereader, and use the attention activations over the input documents as synthetic labels to train theretriever. Said otherwise, we assume that attention activations are a good proxy for the relevance of1Published as a conference paper at ICLR 2021documents. We then train the retriever to reproduce the ranking of documents corresponding to thatmetric.We make the following contributions:First, we show that attention scores from a sequence-to-sequence reader model are a goodmeasure of document relevance (Sec. 3.2) ;Second, inspired by knowledge distillation, we propose to iteratively train the retriever fromthese activations, and compare different loss functions (Sec. 3.4) ;Finally, we evaluate our method on three question-answering benchmarks, obtaining state-of-the-art results (Sec. 4).Our code is available at: github.com/facebookresearch/FiD .2 R ELATED WORKWe briefly review information retrieval based on machine learning. We refer the reader to Manninget al. (2008) and Mitra et al. (2018) for a more exhaustive introduction to the subject.Vector space models. In traditional information retrieval systems, documents and queries are rep-resented as sparse vectors, each dimension corresponding to a different term. Different schemeshave been considered to weigh the different term, the most well known being based on inverse doc-ument frequency, or term specificity (Jones, 1972). This technique was later extended, leading tothe BM25 weighting scheme which is still widely used today (Robertson et al., 1995). A limita-tion of sparse representations is that the terms of the query need to match the terms of the returneddocuments. To overcome this, Deerwester et al. (1990) proposed to use latent semantic analysis forindexing, leading to low-dimension dense representations of documents.Neural information retrieval. Following the success of deep learning for other natural process-ing tasks, neural networks were applied to the task of information retrieval. Huang et al. (2013)proposed a deep bag-of-words model, where queries and documents were embedded independently,a technique known as bi-encoder. Documents were then ranked by using the cosine similarity withthe query, and the model was trained on clickthrough data from a search engine. 
This technique waslater extended by using convolutional neural networks (Shen et al., 2014) and recurrent neural net-works (Palangi et al., 2016). A limitation of independently embedding documents and query is thatit does not capture fine-grained interactions between the query and documents. This lead Nogueira& Cho (2019) and Yang et al. (2019) to use a BERT model to jointly embed documents and query,a technique known as cross-encoder.End-to-end retrieval. Most of the methods described in the previous paragraph were used to re-rank a small number of documents, usually returned by a traditional IR systems. In the contextofad-hoc document retrieval, Gillick et al. (2018) showed that bi-encoder models could be com-petitive with traditional IR systems. For open domain question-answering, Karpukhin et al. (2020)introduced dense passage retrieval (DPR), which uses dense embeddings and nearest neighborssearch. More precisely, question and passage embeddings are obtained using a BERT-based bi-encoder model, which is trained on a small dataset of question and passage pairs. Then, the fullknowledge source (Wikipedia) is encoded using this model, and passages are queried by computingthe k-nearest neighbors of the embedding of the question. Jointly embedding the query and docu-ments makes the application of cross-encoder models intractable to large database. To address thislimitation, Humeau et al. (2019) introduced the poly-encoder architecture, in which each documentsis represented by multiple vectors instead of one. Similarly, Khattab et al. (2020) proposed a scoringfunction where each term of the query and documents is represented by a single vector. To makethe method tractable, their system retrieves documents with an approximate score, which are thenre-ranked with the exact one. Finally, Luan et al. (2020) conducts a theoretical and empirical studyof sparse, dense and cross-attention information retrieval systems.Unsupervised learning. Closest to our work, there is growing body of work trying to learn infor-mation retrieval systems from unsupervised data. Lee et al. (2019) introduced the inverse cloze task2Published as a conference paper at ICLR 2021for pre-training retrievers, which can then be fine-tuned end-to-end on question-answering tasks.This pre-training scheme was later evaluated for ad-hoc document retrieval by Chang et al. (2020).Guu et al. (2020) proposed to augment language model pre-training with a retriever module, whichis trained using the masked language modeling objective. Similarly, Lewis et al. (2020a) introduceda sequence-to-sequence model that is pre-trained by generating a target text, after retrieving a set ofrelated texts. Lewis et al. (2020b) further train the retriever obtained in Karpukhin et al. (2020) bybackpropagating to the retriever the error between the generated output and the gold answer.Simultaneously to our work, Yang & Seo (2020) proposes to train a retriever with knowledge dis-tillation. The main difference with our method is the nature of the synthetic labels that are used totrain the retriever. Yang & Seo (2020) uses the DPR reader, which includes a classifier that predictswhich passage contains the answer, and can be seen as a cross-encoder reranker. This techniquethus performs the distillation of a cross-encoder retriever to a bi-encoder retriever. 
In contrast, ourmethod uses the internal attention scores of the reader, which does not require additional supervisionbesides pairs of question and answer.3 M ETHODOLOGYOur system is composed of two modules, the retriever and the reader, following the standard pipelinefor open-domain question answering. Given an input question these modules are used in a two-stepprocess to generate an answer. First the retriever selects support passages in a large knowledgesource. Then these passages are processed by the reader, along with the question, to generate ananswer. For the reader module we use the Fusion-in-Decoder model (Izacard & Grave, 2020), whichachieves state-of-the-art performance when combined with BM25 or DPR (Karpukhin et al., 2020).It is based on a sequence-to-sequence architecture, and is initialized from pre-trained models suchas T5 or BART (Raffel et al., 2019; Lewis et al., 2019).The focus of this work is to train the retriever without strong supervision or weakly supervisedlearning based on heuristics. For this we propose to train the retriever by learning to approximatethe attention score of the reader. The training scheme outlined here can be seen as a student-teacherpipeline, where the teacher, the reader module, produces targets which are used to train a studentnetwork, the reader. By doing so, we hope to leverage the signal extracted from the question-answerpairs by the reader. Since the goal of the retriever is to retrieve the most relevant passages, bytraining the retriever to estimate the reader attention scores, we implicitly make the assumption thatthese scores are a good proxy for the usefulness of a passage to answer the question.In this section we will first describe the Fusion-in-Decoder architecture, before elaborating on thesignal which is used to train the retriever, the design of the retriever, and how this module is trained.3.1 C ROSS -ATTENTION MECHANISMFirst, let us briefly review the Fusion-in-Decoder model (FiD, Izacard & Grave, 2020). The under-lying architecture is a sequence-to-sequence model, composed of an encoder and a decoder. Theencoder independently processes npdifferent text inputs (sk)1knp. In the case of open-domainquestion answering based on Wikipedia, each input skis the concatenation of the question qanda support passage, with special tokens question: ,title: andcontext: added before thequestion, the title of the Wikipedia article and the text of each passage. The output representationsof the encoder are then concatenated to form a global representation Xof dimension (Pk`k)d,where`kis the length of the k-th segment and dis the dimension of the embeddings and hidden rep-resentations of the model. Then, the decoder processes this representation as a regular autoregressivemodel, alternating self-attention, cross-attention and feed-forward modules.Only the cross-attention module explicitly takes as input the global output representation Xof theencoder. If H2Rddenotes the output of the previous self-attention layer of the decoder, the cross-attention operation consists in the following operations. 
First, queries Q, keysKand values Varecomputed by applying linear transformations:Q=WQH;K=WKX;V=WVX:3Published as a conference paper at ICLR 2021Then a similarity score between the query at position i,Qi, and the key at position j,Kj, is obtainedby computing the dot-product between these two elements, and normalized over the dimension:i;j=QTiKj; ~i;j=exp(i;j)Pmexp(i;m):A new representation is obtained as a sum of the values, weighted by the attention probabilities,before going through a final linear transformation Wo:Oi=WOXj~i;jVi;jThe operations described above are performed in parallel with different linear transformations in thecase of multi-head attention. Finally a normalization layer is applied, and this pipeline is wrappedby a skip connection. See Vaswani et al. (2017) for more details on the structure of Transformers.3.2 C ROSS -ATTENTION SCORE AS A RELEVANCE MEASURE FOR PASSAGE RETRIEVALIn some sense, the attention scores :;jinvolving the j-th key measures the importance of this key,and corresponding value, to compute the next representation. We hypothesize that it is good proxyto estimate the relevance of a passage — the more the tokens in a text segment are attended to, themore relevant the text segment is to answer the question.Given the reader model, an input question qand a corresponding set of support passagesDq= (pk)1kn, we obtain relevance scores (Gq;pk)1knfor each passage by aggregatingattention scores. In particular, the score Gq;pkis obtained by averaging the pre-attention scores 0;:over all the tokens in the input skcorresponding to the passage pk, all the layers and all the heads ofthe decoder. Note that the FiD decoder jointly processes the passages, and thus the score Gq;pkde-pends on the other support passages. We consider other pooling operators, such as max, to aggregateattention scores over layers, heads and tokens and empirically compare them in Sec. 5.2.Before we proceed, let us consider the following simple experiment, which is a first indication thatreader attention scores are indeed a strong relevance signal. Given a question and 100 passagesretrieved with DPR, our goal is to select the 10 best passages. When using the top 10 passages fromDPR instead of the top 100, the performance of our reader drops from 48.2 EM to 42.9 EM. Onthe other hand, if we select the top 10 documents according to the attention scores, the performanceonly drops to 46.8 EM.3.3 D ENSE BI -ENCODER FOR PASSAGE RETRIEVALIdeally, we would like to rank passages according to the reader cross-attention scores. In practicehowever, since the passages and the question need to be processed simultaneously by the readermodule it is impractical to query a large knowledge source this way. Thus, we use a retriever modelcomposed of an embedder function Ethat maps any text passage to a d-dimensional vector, such thatthe similarity score between a question qand a passage pis defined as S(q;p) =E(q)TE(p). Thissimilarity metric enables us to index all passages in the knowledge source as a preprocessing step.Then at runtime, passages with the highest similarity score with the input question are retrieved, byusing an efficient similarity search library such as FAISS (Johnson et al., 2019).For the embedder we use BERT and follow DPR by considering that the encodings E(q)andE(p)are obtained by extracting the representation of the initial [CLS] token. This leads to a represen-tation of dimension d= 768 in the case of a base model. 
Differently from DPR, we use the sameencoding function Efor the questions and passages by sharing parameters.3.4 D ISTILLING THE CROSS -ATTENTION SCORE TO A BI -ENCODERIn this section, we describe how to train the retriever model, based on the relevance scores obtainedin Sec. 3.2. For the training objective of the retriever, we propose to minimize the KL-divergencebetween the output S(q;p)and the score Gq;pafter normalization:LKL(;Q) =Xq2Q;p2Dq~Gq;p(log ~Gq;plog~S(q;p));4Published as a conference paper at ICLR 2021where~Gq;p=exp(Gq;p)Pp02Dqexp(Gq;p0); ~S(q;p) =exp(S(q;p))Pp02Dqexp(S(q;p0)):In Sec. 5.1 we present results obtained when using alternatives to this training objective. We con-sider two other objectives which have been used in Dehghani et al. (2017), where BM25 is usedas a teacher model to train a neural ranker. A first option consists in training the retriever with aregression approach by minimizing the mean squared error:LMSE(;Q) =Xq2Q;p2Dq(S(q;p)Gq;p)2:The second option we consider is to use a max-margin loss that explicitly penalizes inversions in theranking estimated by the retriever:Lranking (;Q) =Xq2Q;p1;p22Dqmax (0;sign(Gq;p1Gq;p2)(S(q;p1)S(q;p2))):In words, ifp1is more relevant to answer the question qthanp2, i.e.Gq;p1>Gq;p2, the loss pushesthe retriever score of p1to be larger than the score of p2by at least a margin of .3.5 I TERATIVE TRAININGIn this section, we explain how iterative training can be used with the student-teacher scheme de-scribed in the previous section, similarly to Khattab et al. (2020). This iterative procedure can beinterpreted as using the current retriever to sample negative examples, in order to train a new re-triever. When learning a retriever with discriminative training, negative samples play an importantrole, and various strategies have been considered in previous work. Karpukhin et al. (2020) com-pared random sampling with using the top-k passages from BM25 which do not contain the answerand with using the positive passages from other queries. Consider that for each question, we havean initial set of support documents D0q. We propose to use an iterative pipeline where each iterationcan be described as the following 4-step process:1. Train the reader Rusing the set of support documents for each question D0q.2. Compute aggregated attention scores (Gq;p)q2Q;p2D0qwith the reader R.3. Train the retriever Eusing the scores (Gq;p)q2Q;p2D0q.4. Retrieve top-passages with the new trained retriever E.This multi-step procedure can be repeated multiple times. A critical point of the training procedureis the initial set of documents corresponding to each question. In Sec. 4, we compare retrieversobtained by starting from documents obtained using BM25 or cosine similarity from a BERT model.In particular, we show that while the initial performance with BERT is low, the iterative procedureallows to greatly improve the performance of the model.4 E XPERIMENTSIn this section we evaluate the student-teacher training procedure from the previous section. Weshow that we obtain competitive performance without strong supervision for support documents.4.1 E XPERIMENTAL SETTINGDatasets. We perform experiments on TriviaQA (Joshi et al., 2017) and NaturalQues-tions (Kwiatkowski et al., 2019), two standard benchmarks for open-domain question answering.TriviaQA is made of questions from trivia and quiz league websites, and does not contain goldsupport documents. 
NaturalQuestions contains questions corresponding to web search queries, andgold support documents from Wikipedia. Following the setting from Lee et al. (2019); Karpukhin5Published as a conference paper at ICLR 2021BERT BM25Iter. P@20 P@100 Dev EM P@20 P@100 Dev EMNaturalQuestions0 4.8 12.0 9.8 59.3 74.0 41.21 32.2 45.8 16.9 76.4 84.3 46.82 51.1 62.6 28.6 80.4 86.7 47.93 67.8 76.8 39.3 80.0 86.3 46.2TriviaQA0 4.6 12.0 9.7 75.0 82.3 65.31 37.1 59.4 19.6 79.0 85.5 66.72 60.8 73.4 43.3 82.1 86.5 67.53 72.0 83.2 52.0 81.6 86.6 67.74 76.4 84.6 62.3 - - -Table 1: Iterative training starting with documents retrieved with BERT and BM25. Iteration 0corresponds to the performance of the reader trained on the set of initial support documents. Wereport all metrics on the validation set.et al. (2020), we use the original evaluation set as test set, and keep 10% of the training data forvalidation. We use the Wikipedia dump from Dec. 20, 2018 for support documents, splitting articlesinto non-overlapping passages of 100 tokens, and applying the same preprocessing as Chen et al.(2017).We also evaluate on NarrativeQuestions (Ko ˇcisk`y et al., 2018), using a publicly available prepro-cessed version.1This is a reading comprehension dataset built on a corpus of books and moviescripts. For each story, questions are generated by human annotators based on a summary of thegiven document. We consider the full story setting, where the task is to answer questions giventhe entire story and not the summary used to generate question-answer pairs. Here the knowledgesource is not the same for all questions: given a question the retrieval operation is performed onall passages of the associated story. These passages are obtained by dividing the story in chunksof 100 words. These stories are long documents, with an average of 60k words. While part of thedocuments could be processed entirely by the Fusion-in-Decoder module, it is interesting to limitthe number of support passages to reduce the computational cost of the reading step.While answers in TriviaQA and NaturalQuestions are short, NarrativeQA answers are about fivewords long on average, with medium length answers such as ”He dismantles it and attaches it to hismother’s jeep” which answers the question ”What does Mark do with his radio station?” . Notably asignificant number of answers do not correspond to spans in the story. It is thus not straightforward totrain the retriever with heuristics using question-answer pairs. In our case we use the same pipelineas for TriviaQA and NaturalQuestions, demonstrating the flexibility of our approach.Evaluation. The model performance is assessed in two ways. First, following previous work suchas DPR and ColbertQA, we report the top- kretrieval accuracy (P@k), which is the percentage ofquestions for which at least one passage of the top- kretrieved passages contains the gold answer. It isunclear how well this metric evaluates the retriever performance, since the answer can be containedin a passage without being related to the question. This is notably true for common words or entities.We also report the final end-to-end performance of the question answering system composed of theretriever and reader modules. This is the metric we are fundamentally interested in. For TriviaQAand NaturalQuestions, predicted answers are evaluated with the standard exact match metric (EM),as introduced by Rajpurkar et al. (2016). 
For NarrativeQA we report the metrics proposed in theoriginal paper: ROUGE-L, BLEU-1, BLEU-4 and METEOR.4.2 T ECHNICAL DETAILSInitialization. Similarly to DPR, we initialize the retriever with the BERT base model, pretrainedwith uncased text. The Fusion-in-Decoder reader is initialized with the T5 base model. A criticalcomponent of the iterative training procedure is the initialization of the support passages D0qasso-1https://cs.nyu.edu/ ̃kcho/NarrativeQA6Published as a conference paper at ICLR 2021Model NQ TriviaQAdev. test dev. testDPR (Karpukhin et al., 2020) - 41.5 - 57.9RAG (Lewis et al., 2020b) - 44.5 - 56.1ColBERT-QA (Khattab et al., 2020) - 48.2 - 63.2Fusion-in-Decoder (T5 base) (Izacard & Grave, 2020) - 48.2 - 65.0Fusion-in-Decoder (T5 large) (Izacard & Grave, 2020) - 51.4 - 67.6Ours (starting from BERT, T5 base) 39.3 40.0 62.5 62.7Ours (starting from BM25, T5 base) 47.9 48.9 67.7 67.7Ours (starting from DPR, T5 base) 48.0 49.6 68.6 68.8Ours (starting from DPR, T5 large) 51.9 53.7 71.9 72.1Table 2: Comparison to state-of-the-art models on NaturalQuestions and TriviaQA.ciated with each question q. For this we consider different options. The first one is to use passagesretrieved using BM25. We use the implementation from Apache Lucene2with default parameters,and tokenize questions and passages with SpaCy3. We also use passages obtained with BERT as aretriever without fine-tuning, this leads to poor initial performance. Finally in Table 2 we show thatinitializingD0qwith passages obtained with DPR (Karpukhin et al., 2020) outperforms the two pre-vious initializations. We train all retrievers using 100 passages. For the reader, we use 100 passagesfor NaturalQuestions and TriviaQA and 20 passages for NarrativeQA.Iterative training. We apply the iterative training procedure on each dataset independently. Boththe reader and the retriever are fine-tuned using the ADAM algorithm (Kingma & Ba, 2014), with abatch of size 64. The reader is trained for 10k gradient steps with a constant learning rate of 104,and the best model is selected based on the validation performance. The retriever is trained with aconstant learning rate of 5105until the performance saturates. To monitor the performance of theretriever during training, we measure the similarity between the reader and the retriever rankings. Ateach new training iteration the reader is reinitialized from T5 base, while we pursue the training ofthe retriever. We found that restarting from T5 base is important for the first iterations when startingwith BERT documents. We have not tried to reinitialize the retriever between each iteration. Moredetails on the hyperparameters and the training procedure are reported in Appendix A.2.4.3 R ESULTSIn Table 1, we report the performance of our approach for different number of self-training iterations.Generally, we observe that the accuracy of our system increases with the number of iterations,obtaining strong performance after a few iterations. Interestingly, while the initial performance withdocuments retrieved with BERT is very poor, our method still reach competitive scores on TriviaQA,and to a lesser extent, NaturalQuestions. However, a second observation is that the quality of theinitial document sets plays an important role on the performance of the end system. 
Indeed, weobserve that starting the procedure from BM25 documents, which are higher quality as indicated bythe performance of the system at iteration 0, leads to stronger results than using BERT documents.An interesting research question would be to explore pre-training of the initial BERT model forretrieval, for example by using the inverse cloze task.In Table 2, we report the performance of our approach, as well as existing state-of-the-art systemson TriviaQA and NaturalQuestions. In addition to initializing our method with documents retrievedwith BM25 and BERT, we also train a system by starting from DPR documents. First, we observethat our method improve the performance over the state-of-the-art, even when starting from BM25documents. This validates our assumption that it is possible to obtain strong retrievers without theneed of supervision for the documents. Second, when starting from DPR passages, our method leadsto a +4.5 EM improvement on TriviaQA and +2.3 EM improvement on NaturalQuestions when thefinal evaluation is carried out with a large reader.2lucene.apache.org3spacy.io7Published as a conference paper at ICLR 2021Method Iter. Rouge-L Bleu-1 Bleu-4 Meteordev. test dev. test dev. test dev. testBest from Ko ˇcisk`y et al. (2018) - 14.5 14.0 20.0 19.1 2.23 2.1 4.6 4.4DPR + FiD - 29.7 30.8 33.0 34.0 6.7 6.9 10.3 10.8Ours starting from BM25 0 29.9 30.3 34.6 33.7 7.1 6.5 10.5 10.4Ours starting from BM25 1 31.6 32.0 34.9 35.3 7.6 7.5 11.0 11.1Table 3: Performance on NarrativeQA.In Table 3, we report the performance of our method on the NarrativeQA dataset. We use the settingwhere the knowledge source corresponds to the whole document, and in particular, we do not usethe summary. We compare our results to the best ones reported in the original paper for this setting.Similar to results obtained on NaturalQuestions and TriviaQA, we observe that training the retrieverby using the attention scores of the reader leads to improvements, compared to the BM25 baseline.5 A BLATIONSIn this section, we investigate design choices regarding two key elements of our approach: thetraining objective and the aggregation of cross-attention scores. For all experiments, we consider asimplified experimental setting: a single training iteration is performed on NaturalQuestions, startingfrom BM25 passages.5.1 T RAINING OBJECTIVESIn Table 4 we report the performance of our model trained with the different training objectivesdescribed in Sec. 3.3. We observe that using the KL-divergence between the aggregated scores ofthe reader and the scores of the retriever outperforms the other objective functions.Method P@5 P@20 P@100 Dev EMMean Squared Error 46.5 61.2 73.9 40.6Max-margin loss, = 1 60.3 73.6 82.7 45.4Max-margin loss, = 0:260.3 73.5 82.6 45.8Max-margin loss, = 0:160.2 73.5 82.6 45.1KL-divergence 64.7 76.4 84.3 46.8Table 4: Comparison of training objectives on NaturalQuestions after one iteration. We report allthe metrics on the validation set.5.2 H OW TO AGGREGATE CROSS -ATTENTION SCORES ?In Section 4 the cross-attention scores are aggregated in a specific way, in order to obtain a singlescalar used to train the retriever. Formally let us denote by i;j;k;h the cross-attention scores betweentokeniof the output and token jof the input, for the k-th layer and h-th head. Then, the scores Gq;pforp2Dqused in Section 4 are computed as follows:Gq;p= meanj;k;h0;j;k;h;wherejdescribes the input tokens corresponding to p. In Table 5 we explore alternatives to thischoice by considering different aggregation schemes. 
In Table 5 we explore alternatives to this choice by considering different aggregation schemes. In particular, we consider (1) taking the max over the input tokens corresponding to passage $p$ instead of the average, (2) taking the average over the output tokens instead of taking the score of the first token, (3) taking the mean over the last six layers instead of all the layers, (4) taking the max over the layers instead of the average, and (5) taking the max over the heads instead of the average. We observe that the performance of our approach is relatively stable to the choice of aggregation, and that the best result is obtained by averaging, except over the output tokens, where it is best to only consider the first token.

| Method | P@5 | P@20 | P@100 | Dev EM |
|---|---|---|---|---|
| (0) $\text{mean}_{j,k,h}\,\alpha_{0,j,k,h}$ | 64.7 | 76.4 | 84.3 | 46.8 |
| (1) $\text{mean}_{k,h}\max_j \alpha_{0,j,k,h}$ | 61.2 | 72.5 | 81.0 | 46.0 |
| (2) $\text{mean}_{i,j,k,h}\,\alpha_{i,j,k,h}$ | 63.5 | 75.3 | 83.1 | 45.8 |
| (3) $\text{mean}_{7 \le k \le 12,\,j,h}\,\alpha_{0,j,k,h}$ | 64.1 | 75.7 | 83.8 | 46.4 |
| (4) $\text{mean}_{j,h}\max_k \alpha_{0,j,k,h}$ | 63.9 | 75.5 | 83.7 | 46.5 |
| (5) $\text{mean}_{j,k}\max_h \alpha_{0,j,k,h}$ | 64.2 | 76.1 | 83.9 | 46.8 |

Table 5: Comparison of attention aggregation schemes on NaturalQuestions after one iteration. The index $i$ corresponds to output tokens, $j$ corresponds to input tokens of a given passage, $h$ to heads and $k$ to layers of the decoder. We report all metrics on the validation set.

6 CONCLUSION

In this paper, we introduce a method to train an information retrieval module for downstream tasks, without using pairs of queries and documents as annotations. Our approach is inspired by knowledge distillation, where the retriever module corresponds to the student model and the reader module corresponds to the teacher model. In particular, we use the cross-attention scores, from a sequence-to-sequence reader, to obtain synthetic targets for the retriever. We compare different ways to aggregate the scores, as well as different training objectives to learn the retriever. We show that iteratively training the reader and the retriever leads to better performance, and obtain state-of-the-art performance on competitive question answering benchmarks. In the future, we would like to explore better pre-training strategies for the retriever module, as well as better scoring functions for the retriever.<|im_end|>
<|im_start|>assistant
### Review Title
Interesting technique
### Review Text
The authors propose a training technique for information retrieval models in the context of (open domain) question answering. Assuming the existence of some reader model, the idea is to use internal information of that model as a training signal for a retriever. Specifically, they use the attention activations over the input documents as synthetic labels for the retriever. The paper is well written and proposes an interesting idea. The technique is well motivated; I particularly like that they motivate the use of the cross-attention score through a simple experiment, where they compare the top 10 DPR passages with the top 10 passages by attention score. The improvements over the SOTA seem moderate though, and the number of iterations seems to be an important (and potentially underexplored) variable there. I support the acceptance of the paper, because I believe the technique and the choice of model score (cross-attention score) are both interesting contributions. As the authors say, training retrieval systems is tricky, since there is usually not sufficient labelled data available and it might depend heavily on the task. The iterative training that exploits the internal state of the "downstream" model that they describe is an interesting idea that deserves attention from the community.
The table shows only up to 4 iterations, which still show large improvements from one to the next. It would be interesting to know at what point no additional gains are seen.
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
<|im_end|>
PcUprce4TM2
ICLR.cc/2021/Conference
2021
CAFE: Catastrophic Data Leakage in Federated Learning
["Xiao Jin", "Ruijie Du", "Pin-Yu Chen", "Tianyi Chen"]
Private training data can be leaked through the gradient sharing mechanism deployed in machine learning systems, such as federated learning (FL). Increasing batch size is often viewed as a promising defense strategy against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack to efficiently recover batch data from the shared aggregated gradients. We name our proposed method *catastrophic data leakage in federated learning (CAFE)*. Compared to existing data leakage attacks, CAFE demonstrates the ability to perform large-batch data leakage attacks with high data recovery quality. Experimental results on vertical and horizontal FL settings have validated the effectiveness of CAFE in recovering private data from the shared aggregated gradients. Our results suggest that data participating in FL, especially in the vertical case, have a high risk of being leaked from the training gradients. Our analysis implies unprecedented and practical data leakage risks in those learning settings.
["cafe", "catastrophic data leakage", "aggregated gradients", "federated", "gradient", "mechanism", "machine", "systems"]
ABSTRACT

Private training data can be leaked through the gradient sharing mechanism deployed in machine learning systems, such as federated learning (FL). Increasing batch size is often viewed as a promising defense strategy against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack to efficiently recover batch data from the shared aggregated gradients. We name our proposed method catastrophic data leakage in federated learning (CAFE). Compared to existing data leakage attacks, CAFE demonstrates the ability to perform large-batch data leakage attacks with high data recovery quality. Experimental results on vertical and horizontal FL settings have validated the effectiveness of CAFE in recovering private data from the shared aggregated gradients. Our results suggest that data participating in FL, especially in the vertical case, have a high risk of being leaked from the training gradients. Our analysis implies unprecedented and practical data leakage risks in those learning settings.

1 INTRODUCTION

Federated learning (FL) (Chilimbi et al., 2014; Shokri & Shmatikov, 2015) is an emerging machine learning framework where a central server and multiple workers collaboratively train a machine learning model. Most existing FL methods consider the setting where each worker has data of a different set of subjects but their data share many common features. This setting is also referred to as data-partitioned or horizontal FL (HFL). Unlike the HFL setting, in many learning scenarios, multiple workers handle data about the same set of subjects, but each has a different set of features. This case arises in financial and healthcare applications (Chen et al., 2020). In these examples, data owners (e.g., financial institutions and hospitals) have different records of those users in their joint user base, so, by combining their features, they can establish a more accurate model. We refer to this setting as feature-partitioned or vertical FL (VFL).

Compared with existing distributed learning paradigms, FL raises new challenges including the heterogeneity of data and the privacy of data (McMahan et al., 2017). To protect data privacy, only model parameters and the change of parameters (e.g., gradients) are exchanged between server and workers (Li, 2014; Iandola et al., 2015). Recent works have studied how a malicious worker can embed backdoors or replace the global model in FL (Bagdasaryan et al., 2018; Bhagoji et al., 2019; Xie et al., 2020). As exchanging gradients is often viewed as a privacy-preserving protocol, little attention has been paid to information leakage from publicly shared gradients and batch identities.

In this context, inferring private user data from the gradients has received growing interest (Fredrikson et al., 2015; Hitaj et al., 2017; Melis et al., 2018). A popular method termed deep leakage from gradients (DLG) has been developed in (Zhu et al., 2019) that infers training data in an efficient way without using any generative models or prior information. However, DLG lacks generalizability with respect to model architecture and weight initialization (Geiping et al., 2020). In Zhao et al. (2020), an analytical approach has been developed to extract accurate labels from the gradients. Wang et al. (2020) proposed a novel gradient difference as a distance measure to improve recovery accuracy.
However, all of them cannot scale up to the large-batch data leakage setting.

The contributions of this paper are summarized in the following.
1) We develop an advanced data leakage attack that we term CAFE to overcome the limitations of current data leakage attacks on FL. CAFE is able to recover large-scale data both in VFL and HFL.
2) Our large-batch data recovery is based on the novel use of data index alignment and internal representation alignment in FL, which can significantly improve the recovery performance.
3) The effectiveness and practical risk of our data leakage algorithm are justified in the dynamic FL training setting, in which all parameters of the model are updated at every iteration.

[Figure 1: Illustration of large-batch data leakage on CIFAR-10 from shared gradients in FL: (a) original images, (b) DLG (batch size = 40), (c) CAFE (batch size = 10 × 4).]

2 PRELIMINARY

FL can be categorized into horizontal and vertical FL settings (Kairouz et al., 2019). In this section, we provide the necessary background on FL.

Horizontal FL. In HFL, data are distributed among local workers holding the same feature space. Suppose that there are $M$ workers participating in the FL process and that the number of samples in the dataset $X$ is $N$. The dataset is denoted as $X := [X_1, \ldots, X_m, \ldots, X_M]^T$, where $X_m \in \mathbb{R}^{N_m \times p}$ is the local data partitioned to worker $m$, $p$ is the dimension of the data feature space, $N_m$ is the number of data samples partitioned to local worker $m$, and $\sum_{m=1}^{M} N_m = N$. Since all local data share the same feature space, each local worker computes the gradients independently and uploads them to the server. The server receives the gradients from each local worker and uses gradient aggregation methods such as FedAvg (Konečný et al., 2016). Let the parameters of the model be $\theta$ and the loss function be $L$. Then the objective function of HFL can be defined as:

$$\min_{\theta} \frac{1}{N} \sum_{m=1}^{M} L(\theta; X_m) \quad \text{with} \quad L(\theta; X_m) := \sum_{n \in N_m} L(\theta; x_n) \qquad (1)$$
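To make the HFL setup in Eq. (1) concrete, here is a minimal PyTorch sketch of one synchronous aggregation round; the function name and the plain SGD update on the server are our own illustrative assumptions.

```python
import torch

def hfl_round(model, loss_fn, worker_batches, lr=0.01):
    """One synchronous HFL round in the spirit of Eq. (1): every worker
    computes the gradient of the loss on its own horizontal shard, and the
    server averages the per-worker gradients before updating theta."""
    worker_grads = []
    for X_m, y_m in worker_batches:  # one (data, label) batch per worker
        model.zero_grad()
        loss_fn(model(X_m), y_m).backward()
        worker_grads.append([p.grad.detach().clone() for p in model.parameters()])
    with torch.no_grad():  # server-side aggregation (FedAvg-style mean)
        for i, p in enumerate(model.parameters()):
            p -= lr * torch.stack([g[i] for g in worker_grads]).mean(dim=0)
```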
Vertical FL. Different from HFL, in VFL each local worker $m$ is associated with a unique set of features. Each data sample $x_n$ in the dataset $X$ can be written as

$$x_n = [x_{n1}^T, \ldots, x_{nm}^T, \ldots, x_{nM}^T]^T \qquad (2)$$

where $x_{nm} \in \mathbb{R}^{p_m}$ is the data partitioned to worker $m$ and $p_m$ is the data dimension at local worker $m$. The label space $\{y_n\}_{n=1}^{N}$ can be regarded as a special feature and is partitioned to the server or to a certain local worker. Similar to (1), the objective function of VFL can be written as:

$$\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} L(\theta; x_{n1}, \ldots, x_{nM}) \qquad (3)$$

3 CATASTROPHIC DATA LEAKAGE FROM BATCH GRADIENTS

To realize large-scale data recovery from aggregated gradients, we propose our algorithm, named CAFE: Catastrophic dAta leakage in Federated lEarning. While CAFE can be applied to any type of data, without loss of generality we use image datasets throughout the paper.

[Figure 2: Overview of CAFE in VFL.]

3.1 WHY IS LARGE-BATCH DATA LEAKAGE DIFFICULT?

We start by providing some intuition on the difficulty of performing large-batch data leakage from aggregated gradients, based on the formulation of DLG (Zhu et al., 2019). Assume that $N$ images are selected as the input for a certain learning iteration. We define the data batch as $X = \{x_n, y_n \mid x_n \in \mathbb{R}^{H \times W \times C},\ n = 1, 2, \ldots, N\}$, where $H$, $W$, $C$ represent the height, the width and the number of channels of each image. Likewise, the batched 'recovered data' is denoted by $\hat{X} = \{\hat{x}_n, \hat{y}_n \mid \hat{x}_n \in \mathbb{R}^{H \times W \times C},\ n = 1, 2, \ldots, N\}$, which has the same dimension as $X$. Then the objective function is

$$\hat{X} = \arg\min_{\hat{X}} \left\| \frac{1}{N} \sum_{n=1}^{N} \nabla L(\theta; x_n, y_n) - \frac{1}{N} \sum_{n=1}^{N} \nabla L(\theta; \hat{x}_n, \hat{y}_n) \right\|^2 \qquad (4)$$

Note that in (4) the dimension of the aggregated gradients is fixed. However, as $N$ increases, the dimensions of $\hat{X}$ and $X$ rise. When $N$ is sufficiently large, it becomes more challenging to find the "right" solution $\hat{X}$ of (4) corresponding to the ground-truth dataset $X$. On the other hand, CAFE addresses this large-batch issue by data index alignment for batch data recovery, which can effectively exclude undesired solutions. We discuss a specific example in Appendix A.

As a motivating example, Figure 1 compares our proposed attack with DLG on a batch of 40 images. The recovery quality of DLG is far from satisfactory, while CAFE can successfully recover all images in the batch. It is worth noting that, because DLG is not effective for large-batch recovery, it is suggested in Zhu et al. (2019) that increasing the batch size could be a promising defense. However, the successful recovery by CAFE shows that such a defense premise gives a false sense of security against data leakage and that current FL is at risk, as large-batch data recovery can be accomplished.

3.2 CAFE IN VFL

In VFL, the server sends a public key to the local workers and decides the data indices in each iteration of training and evaluation (Yang et al., 2019; Cheng et al., 2019). During the training process, local workers exchange their intermediate results with others to compute gradients and upload them. Therefore, the server has access to both the model parameters and their gradients. Notably, CAFE can be readily applied to existing VFL protocols where the batch data index is assigned.

Figure 2 gives an overview of CAFE in the VFL setting. The blue part represents a normal VFL paradigm and the red part represents the CAFE attack. Since data are vertically partitioned among different workers, data index alignment turns out to be an inevitable step in the vertical training process, which gives the server (the attacker) an opportunity to control the selected batch data index. Suppose that there are $M$ workers participating in FL and that the batch size is $N_b$. The aggregated gradients can be denoted by

$$\nabla L(\theta; X^t) = \frac{1}{N_b} \sum_{n=1}^{N_b} \nabla L(\theta; X_n^t) \quad \text{with} \quad X_n^t = [x_{n1}^t, x_{n2}^t, \ldots, x_{nM}^t] \qquad (5)$$

where the superscript $t$ indexes the training iteration and $N_b$ denotes the batch size.

Algorithm 1: CAFE in VFL (regular VFL protocol and CAFE protocol)
1: Initialize the model parameters $\theta$ and generate fake data $\hat{X}$
2: for $t = 1, 2, \ldots, T$ do
3:   Server broadcasts the global model to all local workers (a total of $M$ workers)
4:   for $m = 1, 2, \ldots, M$ do
5:     Worker $m$ takes its real batch data
6:     Worker $m$ computes the intermediate results and exchanges them with the other workers
7:     Worker $m$ uses the exchanged intermediate results to compute the local aggregated gradients
8:     Worker $m$ uploads the real local aggregated gradients to the server
9:   end for
10:  Server computes the real global aggregated gradients $\nabla L(\theta; X^t)$
11:  Server computes the fake global aggregated gradients $\nabla L(\theta; \hat{X}^t)$
12:  Server computes the CAFE loss $D(X^t, \hat{X}^t)$ and $\nabla_{\hat{X}^t} D(X^t, \hat{X}^t)$
13:  Server updates the fake batch data $\hat{X}^t$ with $\nabla_{\hat{X}^t} D(X^t, \hat{X}^t)$
14:  Server updates the model parameters $\theta$ with $\nabla L(\theta; X^t)$
15: end for
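A minimal PyTorch sketch of the attacker's inner step (lines 10-13 of Algorithm 1) follows. It uses the squared $\ell_2$ gradient-matching loss defined in Eq. (7) below and, for simplicity, assumes the labels are known so that only the fake images are optimized; the function name and the plain gradient-descent update are our own assumptions.

```python
import torch

def cafe_step(model, loss_fn, real_batch, fake_batch, attack_lr=0.1):
    """One server-side CAFE update (lines 10-13 of Algorithm 1): match the
    gradients induced by the fake batch to the real aggregated gradients,
    then descend on the fake images. fake_batch = (x_hat, y) where x_hat is
    a leaf tensor with requires_grad=True and the labels are assumed known."""
    x, y = real_batch
    x_hat, _ = fake_batch
    real_grads = torch.autograd.grad(loss_fn(model(x), y), model.parameters())
    fake_grads = torch.autograd.grad(loss_fn(model(x_hat), y),
                                     model.parameters(), create_graph=True)
    # Eq. (7): squared l2 distance between real and fake aggregated gradients
    d = sum(((g - r.detach()) ** 2).sum() for g, r in zip(fake_grads, real_grads))
    grad_x = torch.autograd.grad(d, x_hat)[0]  # gradient w.r.t. the fake images
    with torch.no_grad():
        x_hat -= attack_lr * grad_x
    return float(d)
```

Because the update only touches the fake data and the legitimate model update (line 14) is unchanged, the attack is invisible to the workers.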
A benign server will perform the legitimate computations designed by the FL protocol. However, as shown in Figure 2, a curious server can provide the same legitimate computation as a benign server while simultaneously performing data recovery in a stealthy manner. The server symmetrically generates fake images corresponding to the real ones. Once a batch of original data is selected, the server takes the corresponding fake batch and obtains the fake gradients as

$$\nabla L(\theta; \hat{X}^t) = \frac{1}{N_b} \sum_{n=1}^{N_b} \nabla L(\theta; \hat{X}_n^t) \quad \text{with} \quad \hat{X}_n^t = [\hat{x}_{n1}^t, \hat{x}_{n2}^t, \ldots, \hat{x}_{nM}^t] \qquad (6)$$

Algorithm 1 gives pseudo-code that implements our CAFE attack in the VFL case. The key part of our algorithm is aligning the real data batch indices with the fake ones. We define the squared $\ell_2$-norm of the difference between the real and fake aggregated gradients in (7). Since the server has access to the model parameters, the attacker is able to compute the gradient of the fake data from the loss in (7) and optimize the fake data for the purpose of recovering the real data.

$$D(X^t, \hat{X}^t) = \left\| \nabla L(\theta; X^t) - \nabla L(\theta; \hat{X}^t) \right\|^2 \qquad (7)$$

3.3 AUXILIARY REGULARIZERS

In addition to the gradient matching loss in (7), we introduce two further regularization terms: internal representation regularization and the total variation (TV) norm. Motivated by (Geiping et al., 2020), where the input vectors of the first fully connected layer are shown to be directly derivable from the gradients, we define the real and fake inputs of the first fully connected layer at the $t$-th iteration as $Z^t, \hat{Z}^t \in \mathbb{R}^{N \times P}$, and we use the norm of their difference as what we call internal representation regularization. To promote the smoothness of the fake images, we assume the TV norm of the real images as a constant, $\gamma$, and compare it with the TV norm of the fake ones, $TV(\hat{X})$. For each image $x \in \mathbb{R}^{H \times W \times C}$ in the data batch $X^t$, its TV norm is denoted by $TV(x) = \sum_c \sum_{h,w} |x_{h+1,w,c} - x_{h,w,c}| + |x_{h,w+1,c} - x_{h,w,c}|$. As a result, the loss at the $t$-th iteration, $D(X^t, \hat{X}^t)$, can be rewritten as:

$$D(X^t, \hat{X}^t) = \left\| \nabla L(\theta; X^t) - \nabla L(\theta; \hat{X}^t) \right\|^2 + \alpha\, TV(\hat{X})\, \mathbb{1}\{TV(\hat{X}) - \gamma \geq 0\} + \beta \left\| Z^t - \hat{Z}^t \right\|_F^2 \qquad (8)$$

where $\alpha$ and $\beta$ are coefficients and $\mathbb{1}\{TV(\hat{X}) - \gamma \geq 0\}$ is the indicator function. We provide an ablation study in Section 4.5 to demonstrate the utility of these regularizers.

[Figure 3: CAFE on Linnaeus at epochs 50, 100, 200, 300, 450 and 600, followed by the original data.]

3.4 CAFE IN HFL

Similarly, we can apply our CAFE algorithm to HFL. Let $X_m^t$ denote the original batch data taken by local worker $m$ at the $t$-th iteration. The gradients of the parameters at the $t$-th iteration are

$$\nabla L(\theta; X^t) = \frac{1}{M} \sum_{m=1}^{M} \nabla L(\theta; X_m^t), \quad X^t = \{X_1^t, X_2^t, \ldots, X_m^t, \ldots, X_M^t\} \qquad (9)$$

Symmetrically, we define the batched fake data and the fake aggregated gradients as

$$\nabla L(\theta; \hat{X}^t) = \frac{1}{M} \sum_{m=1}^{M} \nabla L(\theta; \hat{X}_m^t), \quad \hat{X}^t = \{\hat{X}_1^t, \hat{X}_2^t, \ldots, \hat{X}_m^t, \ldots, \hat{X}_M^t\} \qquad (10)$$

Due to space limitations, we provide the CAFE algorithm for HFL in Appendix B.

4 PERFORMANCE EVALUATION

4.1 EXPERIMENT SETUPS AND DATASETS

We conduct experiments on the CIFAR-10, CIFAR-100 and Linnaeus 5 datasets in both HFL and VFL settings. All the fake data are initialized uniformly and optimized by the normalized gradient descent method. Our algorithm can recover all the data participating in FL with a relatively large batch size (more than 40). Scaling up to our hardware limits, CAFE can leak as many as 2000 images in the VFL setting with 4 workers.

Evaluation metrics. To measure the data leakage performance, we use the peak signal-to-noise ratio (PSNR), based on the mean squared error (MSE), as defined in (11) and (12). A higher PSNR value of the leaked data indicates better data recovery.

$$MSE_c(x, \hat{x}) = \frac{1}{HW} \sum_{i=1}^{H} \sum_{j=1}^{W} \left[ x_{ijc} - \hat{x}_{ijc} \right]^2 \qquad (11)$$

$$PSNR(x, \hat{x}) = \frac{1}{C} \sum_{c=1}^{C} \left[ 20 \log_{10}\left(\max_{i,j} x_{ijc}\right) - 10 \log_{10}\left(MSE_c(x, \hat{x})\right) \right] \qquad (12)$$
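Eqs. (11)-(12) translate directly into the following NumPy sketch; the function name is our own.

```python
import numpy as np

def psnr(x, x_hat):
    """PSNR of a recovered image, following Eqs. (11)-(12): the per-channel
    terms 20*log10(max pixel value) - 10*log10(MSE) are averaged over the
    C channels. x and x_hat are arrays of shape (H, W, C)."""
    H, W, C = x.shape
    terms = []
    for c in range(C):
        mse = np.mean((x[:, :, c] - x_hat[:, :, c]) ** 2)  # Eq. (11)
        terms.append(20 * np.log10(x[:, :, c].max()) - 10 * np.log10(mse))
    return float(np.mean(terms))  # Eq. (12)
```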
Baseline methods for comparison. We compare CAFE with three other baselines: (i) DLG (Zhu et al., 2019), (ii) DLG given labels (iDLG) (Zhao et al., 2020), and (iii) using the cosine similarity between the real and fake gradients (Geiping et al., 2020). We implement the original DLG and our CAFE under the same model and optimization methods. We run DLG on 50 single images and compute the average number of iterations required to bring the PSNR value of a single leaked image above 30. We also compute the expected iteration count per leaked image for our CAFE algorithm. Furthermore, we fix the batch size and compare the PSNR value obtained by CAFE with that of DLG. We also test the impact of given labels on CAFE by using the techniques in (Zhao et al., 2020). Moreover, we compare the performance of CAFE under different loss functions: i) replacing the squared $\ell_2$-norm term with the cosine similarity of the two gradients (CAFE with cosine similarity); ii) the loss proposed in (Geiping et al., 2020), which only contains the TV norm regularizer.

[Figure 4: CAFE loss ratio and PSNR curves on CIFAR-10, CIFAR-100 and Linnaeus over 7000 iterations: (a) HFL loss ratio L/L0, (b) HFL PSNR, (c) VFL loss ratio L/L0, (d) VFL PSNR.]

Table 1: CAFE vs DLG.

(a) Comparison of data leakage speed (average iterations per leaked image; lower is faster):

| Batch size | CIFAR-10 | CIFAR-100 | Linnaeus |
|---|---|---|---|
| 1 (DLG) | 284.4 | 266.9 | 366.7 |
| 10 × 4 | 9.50 | 6.00 | 9.50 |
| 20 × 4 | 6.75 | 3.86 | 4.75 |
| 30 × 4 | 4.83 | 3.41 | 3.17 |
| 40 × 4 | 3.75 | 3.75 | 2.375 |

(b) Comparison of leakage performance (PSNR; higher is better; batch size = 40):

| Algorithm | CIFAR-10 | CIFAR-100 | Linnaeus |
|---|---|---|---|
| CAFE | 35.03 | 36.90 | 36.37 |
| DLG | 10.09 | 10.79 | 10.10 |

Table 2: PSNR via loss function.

(a) HFL (4 workers, batch ratio = 0.1, batch size 10 × 4):

| Loss | CIFAR-10 | CIFAR-100 | Linnaeus |
|---|---|---|---|
| CAFE (Eq. (8)) | 35.03 | 36.90 | 36.37 |
| CAFE with cosine similarity | 30.15 | 31.38 | 30.76 |
| Loss in (Geiping et al., 2020) | 16.95 | 19.74 | 16.42 |

(b) VFL (4 workers, batch ratio = 0.1, batch size 40):

| Loss | CIFAR-10 | CIFAR-100 | Linnaeus |
|---|---|---|---|
| CAFE (Eq. (8)) | 43.31 | 48.10 | 35.06 |
| CAFE with cosine similarity | 30.96 | 43.68 | 34.90 |
| Loss in (Geiping et al., 2020) | 12.76 | 10.85 | 10.46 |

4.2 CAFE IN HFL SETTINGS

In the HFL setting, we use a neural network consisting of 2 convolutional layers and 3 fully connected layers. The numbers of output channels of the convolutional layers are 64 and 128, respectively. The numbers of nodes in the first two fully connected layers are 512 and 256. The last layer is the softmax classification layer. We assume that 4 workers are involved in HFL and each of them holds a dataset of 100 images. The batch size of each worker in training is 10, so there are 40 (10 × 4) images in total participating per iteration. For each experiment, we initialize the fake data using a uniform distribution and optimize them for 800 epochs.
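A PyTorch sketch of this architecture is given below; only the layer widths (64 and 128 channels; 512 and 256 hidden units) come from the text, while the kernel sizes, the pooling and the 32x32x3 input resolution are our assumptions.

```python
import torch.nn as nn

class HFLNet(nn.Module):
    """The 2-conv / 3-FC network described above (widths from the text;
    kernel sizes, pooling and input resolution are assumptions)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 512), nn.ReLU(),  # 32x32 input pooled twice -> 8x8
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, n_classes),  # softmax is applied inside the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```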
Figures 4a and 4b show the CAFE loss curves and the PSNR curves on the three datasets in the HFL case. In the loss-ratio curves, the $y$-axis is the ratio of the current CAFE loss to the initial CAFE loss, $L(\theta; X^t)/L(\theta; X^0)$. The PSNR values are always above 35 at the end of each CAFE attacking process, suggesting high data recovery quality (see Figure 1 as an example). Figure 3 shows the attacking process of CAFE on Linnaeus. Under CAFE, the PSNR reaches 35 at the 450th epoch, where the private training data are completely leaked visually.

Comparison with DLG baseline. In Table 1a, we set the batch ratio in CAFE to 0.1 and compare it with DLG under different batch sizes. Clearly, CAFE outperforms DLG thanks to our novel design of the large-batch data leakage attack. As shown in Table 1b, DLG cannot obtain satisfactory results when the batch size increases to 40, while CAFE successfully recovers all the images. Due to the similarity between iDLG and DLG, the corresponding results are in Appendix C.

Comparison with cosine similarity. Table 2a shows that the PSNR values are still above 30 if we use cosine similarity instead of the $\ell_2$-norm. The slight drop in PSNR value may result from the scaling ambiguity in cosine similarity. There is a performance gap between the loss of CAFE and the loss in Geiping et al. (2020), which validates the importance of our proposed auxiliary regularizers.

Table 3: PSNR via batch size.

(a) HFL (4 workers, batch ratio = 0.1):

| Batch size | CIFAR-10 | CIFAR-100 | Linnaeus |
|---|---|---|---|
| 10 per worker | 35.03 | 36.90 | 36.37 |
| 20 per worker | 33.14 | 33.99 | 36.32 |
| 30 per worker | 32.31 | 33.21 | 35.96 |
| 40 per worker | 30.59 | 30.70 | 35.49 |

(b) VFL (4 workers, batch ratio = 0.2):

| Batch size | CIFAR-10 | CIFAR-100 | Linnaeus |
|---|---|---|---|
| 8 | 41.80 | 44.42 | 39.96 |
| 40 | 59.51 | 65.00 | 41.37 |
| 80 | 57.20 | 63.10 | 43.66 |
| 160 | 54.74 | 64.75 | 38.72 |

Table 4: PSNR via batch ratio.

| Batch ratio | CIFAR-10 (HFL) | Linnaeus | CIFAR-10 (VFL) |
|---|---|---|---|
| 0.1 | 34.10 | 35.38 | 48.78 |
| 0.05 | 34.49 | 32.92 | 55.46 |
| 0.02 | 37.96 | 35.66 | 48.45 |
| 0.01 | 35.39 | 36.56 | 46.46 |

[Figure 5: PSNR and training loss curves for the 'attacking while learning' experiment (log-scale iterations).]

4.3 CAFE IN VFL SETTINGS

Since DLG cannot be applied in the VFL protocol, we test the performance of CAFE with respect to various factors. We slice each image into 4 small pieces. Each worker holds one piece, and the feature space dimension of each piece is 16 × 16 × 3. The model is composed of 2 parts. The first part consists of 2 convolutional layers and 3 fully connected layers for each worker. The second part only consists of the softmax layer. In the training process, the pieces are fed into the first part separately and turned into vectors as intermediate results. Local workers then exchange their intermediate results, concatenate them, and feed them into the second part. We set the batch size to 40 in VFL. Figures 4c and 4d show the CAFE loss curves and the PSNR curves on the three datasets in the VFL case. The data recovery is even better than the results in HFL. The PSNR values for CIFAR-10 and CIFAR-100 rise above 40. As in the HFL part, we defer the comparison with iDLG to Appendix C.

Comparison with cosine similarity. From Table 2b, we can conclude that the PSNR values still stay close to the ones obtained by CAFE. The scaling ambiguity in cosine similarity may also cause the drop in PSNR value. The performance gap between the loss of CAFE and the loss in Geiping et al. (2020) is much larger than the one in HFL, which indicates the utility of our auxiliary regularizers.

[Figure 6: Effect of the auxiliary regularizers on CIFAR-10, CIFAR-100 and Linnaeus: full CAFE, $\alpha = 0$, $\beta = 0$, $\alpha = \beta = 0$, and the original data.]

4.4 ATTACKING WHILE FL

Previous works have shown that DLG performs better on an untrained model than on a trained one (Geiping et al., 2020). We also implement CAFE in the 'attacking while learning' mode, in which the FL process is ongoing. When the network is training, the selected batch data and the parameters of the model change every iteration, which may cause the attack loss to diverge. To address this issue, for each real data batch, we compute the real gradients and optimize the corresponding fake data $k$ times. We demonstrate this on the Linnaeus dataset, set $k = 10$, and stop CAFE after 1000 iterations (100 epochs).
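A minimal sketch of this interleaved schedule is shown below; it reuses the hypothetical cafe_step() from the sketch in Section 3.2 and assumes a plain SGD step for the legitimate FL update.

```python
import torch

def attack_while_training(model, loss_fn, real_batches, fake_batches,
                          k=10, lr=0.01, attack_lr=0.1):
    """'Attacking while FL' schedule (Sec. 4.4): for each real batch, run k
    CAFE updates on the index-aligned fake batch before the model takes its
    legitimate training step, so the attack tracks the moving parameters."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for real, fake in zip(real_batches, fake_batches):
        for _ in range(k):  # k attack steps per real batch
            cafe_step(model, loss_fn, real, fake, attack_lr)
        x, y = real
        opt.zero_grad()
        loss_fn(model(x), y).backward()  # the normal FL update
        opt.step()
```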
Figure 5 gives the curves of the training loss and the corresponding PSNR value. The PSNR value can still be raised to a relatively high value. This indicates that CAFE remains a practical data leakage attack in a dynamic FL training environment.

4.5 ABLATION STUDY

We test CAFE under different batch sizes and batch ratios, and with and without the auxiliary regularizers.

PSNR via batch size. Table 3 shows that the PSNR values stay above 30 when the batch size increases with a fixed number of workers and batch ratio. The result implies that increasing the batch size has little influence on the data leakage performance of CAFE.

PSNR via batch ratio. In HFL, 4 workers participate in the learning setting and we fix the amount of data held by each worker at 500. In the VFL case, we run CAFE on a total of 800 images. In Table 4, we change the batch ratio from 0.1 to 0.01 while keeping the number of trained epochs at 800. For both settings, the data leakage performance stays at the same level.

Impact of the auxiliary regularizers. Table 6 in Appendix D demonstrates the impact of the auxiliary regularizers. From Figure 6, adjusting the threshold $\gamma$ prevents images from being over-blurred during the reconstruction process. The TV norm can eliminate the noisy patterns on the recovered images and increase the PSNR. Images leaked without regularizing the Frobenius norm of the difference between the internal representations $Z$ and $\hat{Z}$ may lose some details, which causes a drop in PSNR.

5 CONCLUSIONS

In this paper, we uncover the risk of catastrophic data leakage in federated learning (CAFE) through an algorithm that can perform large-batch data leakage with high data recovery quality. Extensive experimental results demonstrate that CAFE can recover large-scale private data from the shared aggregated gradients in both vertical and horizontal FL settings, overcoming the batch limitation problem of current data leakage attacks. Our advanced data leakage attack and its stealthy nature suggest practical data privacy concerns in FL and pose new challenges for future defenses.
-b7PWDT8np
Simple, but unclear.
4: Ok but not good enough - rejection
This work introduces CAFE, a novel training algorithm to leak training data in a federated learning setup. Extending "deep leakage from gradients", fake images are optimised with respect to the difference between the gradients observed from the clients (i.e., computed on the real images) and those computed with the current version of the fake images. However, DLG does not work when the mini-batch size increases, due to a messy gradient representation. In this work, the authors propose to keep track of the batch index. Indeed, it may happen that the server decides the batch index corresponding to the training data that will be used by the client during the local training. Under such conditions, a malicious server can easily store fake images corresponding to specific indices and therefore correctly optimise each fake image w.r.t. the corresponding real image.

It is clear from the obtained results that this method works, and that images are recovered. However, I am unsure about the relevance of the experimental protocol.
1. If the server does not ask for specific indices (and that is pretty common), the method is equivalent to DLG (i.e., it does not work well) with large batches.
2. What if we don't have the gradients? A common way of doing FL is to simply communicate the locally trained weights (with multiple local epochs). As specified in the introduction (point 3), the proposed method wouldn't work in this realistic scenario.

Then, I found Section 3.3 unclear in some aspects. Do the two proposed regularisation methods rely on the "real" image? If so, isn't this a strong bias? (We are not expected to have the input images.) I suppose that this comes from the citation of the work of Geiping et al., 2020. However, these two paragraphs should be re-written to clearly explain how we can extract the input vector and how it relates to Eq. 8. "To promote the smoothness of the fake images, we assume the TV norm of the real images as a constant," -> We can't use the real image here, so it is not valid.

Pros:
+ In the given conditions, CAFE clearly outperforms the other approaches to leaking training data from gradients during FL.
+ Very simple attack to implement.

Cons:
- The conditions necessary for the success of the proposed method seem to be quite strong and not really connected to a realistic FL framework.
- Small ideas can lead to drastic changes in the field, but the core idea of the paper is solely to store batch indices.

Remarks:
- "In this section, we provide necessary background of FL [in this section]."
- Figure 2 should be checked: "Aggreaged"; "upload fake gradient" should appear only once.
- What are t and b in Eq. 5?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
Skltqh4KvB
ICLR.cc/2020/Conference
2020
Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes?
["Ella M. Gale", "Nicholas Martin", "Ryan Blything", "Anh Nguyen", "Jeffrey S. Bowers"]
Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks. In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification.
["neural networks", "localist coding", "selectivity", "object detectors", "CCMAS", "CNNs", "activation maximisation", "information representation", "network dissection", "interpretabillity", "signal detection"]
ABSTRACT
Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity (Bowers et al., 2014), precision (Zhou et al., 2015), class-conditional mean activity selectivity (CCMAS) (Morcos et al., 2018), network dissection (Zhou et al., 2018a), the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with the precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks. In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification.
1 INTRODUCTION
There have been recent attempts to understand how neural networks (NNs) work by analyzing hidden units one-at-a-time using various measures such as localist selectivity (Bowers et al., 2014), class-conditional mean activity selectivity (CCMAS) (Morcos et al., 2018), precision (Zhou et al., 2015), network dissection (Zhou et al., 2018a), and activation maximization (AM) (Erhan et al., 2009). These measures are all taken to provide evidence that some units respond highly selectively to categories of objects under some conditions. Not only are these findings surprising given the widespread assumption that NNs only learn highly distributed and entangled representations, they also raise a host of questions, including the functional importance of these selective representations (Zhou et al., 2018b), the conditions in which they are learned (e.g., Morcos et al., 2018), and the relation between these representations and the selective neurons observed in cortex (Bowers, 2009).
To answer these questions, it is necessary to have a better understanding of what these metrics actually measure and how they relate to one another. Accordingly, we directly compare these measures of selectivity on the same set of units, and we also adopt standard signal-detection measures, in an attempt to provide better measures of single-unit selectivity to object category. In addition, to provide a more intuitive assessment of selectivity, we report jitterplots for a few of the most selective units that visually display how each unit responds to the different image categories.
We focus on AlexNet (Krizhevsky et al., 2012) trained on ImageNet (Deng et al., 2009) because many authors have studied the selectivity of single hidden units in this model using a range of quantitative (Zhou et al., 2018a; 2015) and qualitative (Nguyen et al., 2017; Yosinski et al., 2015; Simonyan et al., 2013) methods. But we also compare different selectivity measures on specific units in VGG-16 (Simonyan and Zisserman, 2014) and GoogLeNet (Szegedy et al., 2015) trained on the ImageNet and Places-365 datasets that were characterized by Zhou et al. (2018a) as "object detectors" based on their Network Dissection method. Our main findings are:
1. The precision and CCMAS measures are misleading, with near-maximum selectivity scores associated with units that strongly respond to many different image categories. By contrast, the signal-detection measures more closely capture the level of selectivity displayed in the jitterplots (Sec. 3.1).
2. Units with interpretable AM images do not correspond to highly selective representations (Sec. 3.2).
3. The Network Dissection method also provides a misleading measure for "object detectors" (Sec. 3.3).
In one line of research, Bowers et al. (2014; 2016) assessed the selectivity of single hidden units in recurrent neural networks (RNNs) designed to model human short-term memory. They reported many 'localist' or 'grandmother cell' units that were 100% selective for specific letters or words, where all members of the selective category were more active than, and disjoint from, all non-members, as can be shown in jitterplots (Berkeley et al., 1995) (see Fig. 1 for a unit selective to the letter 'j'). The authors argued that the network learned these representations in order to co-activate multiple letters or words at the same time in short-term memory without producing ambiguous blends of overlapping distributed patterns (the so-called 'superposition catastrophe'). Consistent with this hypothesis, localist units did not emerge when the model was trained on letters or words one-at-a-time (Bowers et al., 2014) (see Fig. 1 for an example of a non-selective unit).
In parallel, researchers have reported selective units in the hidden layers of various CNNs trained to classify images into one of multiple categories (Zhou et al., 2015; Morcos et al., 2018; Zeiler and Fergus, 2014; Erhan et al., 2009); for a review see Bowers (2017). For example, Zhou et al. (2015) assessed the selectivity of units in the pool5 layer of two CNNs trained to classify images into 1000 object and 205 scene categories, respectively. They reported many highly selective units that they characterized as 'object detectors' in both networks. Similarly, Morcos et al. (2018) reported that CNNs trained on CIFAR-10 and ImageNet learned many highly selective hidden units, with CCMAS scores approaching the maximum of 1.0. These later findings appear to be inconsistent with Bowers et al. (2016), who failed to observe selective representations in fully connected NNs trained on stimuli one-at-a-time (see Fig. 1), but the measures of selectivity that have been applied across studies are different, and accordingly, it is difficult to directly compare results.
A better understanding of the relation between selectivity measures is vital given that different measures are frequently used to address similar issues.
For example, both the human interpretability of generated images (Le, 2013) and localist selectivity (Bowers et al., 2014) have been used to make claims about 'grandmother cells', but it is not clear whether they provide similar insights into unit selectivity. Similarly, based on their precision metric, Zhou et al. (2015) claim that the object detectors learned in CNNs play an important role in identifying specific objects, whereas Morcos et al. (2018) challenge this conclusion based on their finding that units with high CCMAS measures were not especially important to the performance of their CNNs, and concluded: "...it implies that methods for understanding neural networks based on analyzing highly selective single units, or finding optimal inputs for single units, such as activation maximization (Erhan et al., 2009) may be misleading". This makes a direct comparison between selectivity measures all the more important.
In order to directly compare and better understand the different selectivity measures, we assessed (1) localist, (2) precision, and (3) CCMAS selectivity of the conv5, fc6, and fc7 units of AlexNet trained on ImageNet, and in addition, we employed a range of signal-detection methods on these units, namely, (4) recall with 100% and 95% precision, (5) maximum informedness, (6) specificity at maximum informedness, and (7) recall (also called sensitivity) and false-alarm rates at maximum informedness (described in Sec. 2). We also assessed the selectivity of a few units in VGG-16 and GoogLeNet models trained on the ImageNet and Places-365 datasets that were highly selective according to the Network Dissection method (Zhou et al., 2018a). We show that the precision and CCMAS measures often provide misleadingly high estimates of object selectivity compared to other measures, and we do not find any units that can be reasonably described as 'object detectors' given that the most selective units show a low hit-rate or a high false-alarm rate (or both) when classifying images. At best, the most selective units in CNNs are sensitive to some unknown feature that is weakly associated with the class in question.
Figure 1: Examples of selectivity measures used. Top left: jitterplot of unit 113 in an RNN (under the superposition constraint) selective to the letter 'j' (Bowers et al., 2016). Top middle: jitterplot of a non-selective unit 160 found in an RNN trained on words one-at-a-time, from (Bowers et al., 2016). Top right: activation maximization image of unit conv5-9 of AlexNet that resembles a lighthouse (Nguyen et al., 2016). Bottom: highest-activation images for a 'lamp' detector with 84% precision in the layer conv5 of AlexNet; from (Zhou et al., 2015).
In addition to these quantitative measures and jitterplots, we assessed selectivity with a common qualitative measure, namely, human interpretation of images generated by a state-of-the-art activation maximization (AM) method (Nguyen et al., 2017). AM images are generated to strongly activate individual units, and some of them are interpretable by humans (e.g., a generated image that looks like a lighthouse, see Fig. 1). For the first time, we systematically evaluated the interpretability of the AM images and compared these ratings with the selectivity measures for the corresponding units.
We show that the few hidden units with interpretable AM images are not highly selective.
2 METHODS
Network and Dataset. All 1.3M photos from the ImageNet ILSVRC 2012 dataset (Deng et al., 2009) were cropped to 277 × 277 pixels and classified by the pre-trained AlexNet CNN (Krizhevsky et al., 2012) shipped with Caffe (Jia et al., 2014), resulting in 721,536 correctly classified images. Once classified, the images are not re-cropped nor subject to any changes. We analyzed the fully connected (fc) layers fc6 and fc7 (4096 units each), and the top convolutional layer conv5, which has 256 filters. We only recorded the activations of correctly classified images. The activation files are stored in .h5 format and will be available at http://anonymizedForReview . We randomly selected 233 conv5, 2738 fc6, and 2239 fc7 units for analysis.
Localist selectivity. Following Bowers et al. (2014), we define a unit to be localist for class A if the set of activations for class A is higher than, and disjoint from, those of ¬A. Localist selectivity is easily depicted with jitterplots (Berkeley et al., 1995), in which a scatter plot for each unit is generated (see Figs. 1 and 3). Each point in a plot corresponds to a unit's activation in response to a single image, and only correctly classified images are plotted. The level of activation is coded along the x-axis, and an arbitrary value is assigned to each point on the y-axis.
Precision. Precision refers to the proportion of items above some threshold that are from a given class. The precision method of finding object detectors involves identifying a small subset of images that most strongly activate a unit and then identifying the critical part of these images that is responsible for driving the unit. Zhou et al. (2015) took the 60 images that activated a unit the most strongly and asked independent raters to interpret the critical image patches (e.g., if 50 of the 60 images were labeled as 'lamp', the unit would have a precision index of 50/60 or 83%; see Fig. 1). Object detectors were defined as units with a precision score > 75%; they reported multiple such detectors. Here, we approximate this approach by considering the 60 images that most strongly activate a given unit and assessing the highest percentage of images from a given output class.
CCMAS. Morcos et al. (2018) introduced a selectivity index called the Class-conditional Mean Activation Selectivity (CCMAS). The CCMAS for class A compares the mean activation over all images in class A, μ_A, with the mean activation over all images not in class A, μ_¬A, and is given by (μ_A − μ_¬A) / (μ_A + μ_¬A). Here, we assessed class selectivity for the class with the highest mean activation.
Activation Maximization. We harnessed an activation maximization method called Plug & Play Generative Networks (Nguyen et al., 2017), in which an image generator network is used to generate images (AM images) that highly activate a unit in a target network. We used the public code released by Nguyen et al. (2017) and their default hyperparameters.¹ We generated 100 separate images that maximally activated each unit in the conv5, fc6, and fc8 layers of AlexNet and asked participants to judge whether they could identify any repeating objects, animals, or places in the images in a behavioral experiment (Sec. 3.2). Readers can test themselves at: https://research.sc/participant/login/dynamic/63907FB2-3CB9-45A9-B4AC-EFFD4C4A95D5
¹ https://github.com/Evolving-AI-Lab/ppgn
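To make the three quantitative measures above concrete, here is a minimal NumPy sketch for a single unit. The variable names are hypothetical (acts holds the unit's activations over all correctly classified images, labels the corresponding integer class ids); this illustrates the definitions and is not the authors' released code.

```python
import numpy as np

def is_localist(acts: np.ndarray, labels: np.ndarray, cls: int) -> bool:
    """Localist for cls: every activation for cls exceeds every activation
    produced by any other class (higher and disjoint)."""
    return acts[labels == cls].min() > acts[labels != cls].max()

def precision_top60(acts: np.ndarray, labels: np.ndarray) -> float:
    """Largest fraction of a single class among the 60 most-activating images."""
    top = labels[np.argsort(acts)[-60:]]  # labels of the top-60 images
    return np.bincount(top).max() / 60.0  # assumes non-negative integer ids

def ccmas(acts: np.ndarray, labels: np.ndarray) -> float:
    """CCMAS = (mu_A - mu_notA) / (mu_A + mu_notA) for the class A with the
    highest mean activation."""
    classes = np.unique(labels)
    means = np.array([acts[labels == c].mean() for c in classes])
    best = classes[means.argmax()]
    mu_a = means.max()
    mu_not_a = acts[labels != best].mean()
    return (mu_a - mu_not_a) / (mu_a + mu_not_a)
```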
Recall with perfect and 95% precision. Recall with perfect and 95% precision are related to localist selectivity except that they provide a continuous rather than a discrete measure. For recall with perfect precision, we identified the image that activated a given unit the most and counted the number of images from the same class that were more active than all images from all other classes. We then divided this result by the total number of correctly identified images from this class. A recall-with-perfect-precision score of 1 is equivalent to a localist representation. Recall with 95% precision allows 5% false alarms.
Maximum informedness. Maximum informedness identifies the class and threshold where the highest proportion of images above the threshold and the lowest proportion of images below the threshold are from that class (Powers, 2011). The informedness is computed for each class at each threshold, with the highest value selected. Informedness summarises the diagnostic performance of a unit for a given class at a certain threshold based on the recall [True Positives / (True Positives + False Negatives)] and specificity [True Negatives / (True Negatives + False Positives)] in the formula [informedness = recall + specificity − 1] (Powers, 2011).
Sensitivity or Recall at Maximum Informedness. For the threshold and class selected by maximum informedness, recall (or hit-rate) is the proportion of items from the given class that are above the threshold. Also known as the true positive rate.
Specificity at Maximum Informedness. For the threshold and class selected by maximum informedness, the proportion of items that are not from the given class that are below the threshold. Also known as the true negative rate.
False Alarm Rate at Maximum Informedness. For the threshold and class selected by maximum informedness, the proportion of items that are not from the given class that are above the threshold.
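The signal-detection measures can be sketched in the same style, using the same hypothetical acts/labels convention; sweeping the observed activation values as candidate thresholds is an assumption here, since the text does not specify a threshold grid.

```python
import numpy as np

def recall_at_perfect_precision(acts: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of the top class's images lying strictly above every
    activation from all other classes (a score of 1 is a localist unit)."""
    cls = labels[acts.argmax()]  # class of the most-activating image
    other_max = acts[labels != cls].max()
    return float(np.mean(acts[labels == cls] > other_max))

def max_informedness(acts: np.ndarray, labels: np.ndarray, cls: int) -> dict:
    """Best informedness (= recall + specificity - 1) for one class over all
    candidate thresholds, with the associated signal-detection quantities."""
    pos, neg = acts[labels == cls], acts[labels != cls]
    best = None
    for t in np.unique(acts):           # candidate thresholds
        recall = np.mean(pos > t)       # hit rate at this threshold
        specificity = np.mean(neg <= t) # true negative rate
        j = recall + specificity - 1.0
        if best is None or j > best["informedness"]:
            best = {"informedness": j, "threshold": float(t),
                    "recall": recall, "specificity": specificity,
                    "false_alarm_rate": 1.0 - specificity}
    return best
```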
Network Dissection. To assess the selectivity of a unit with the Network Dissection technique, Zhou et al. (2018a) compute the Intersection over Union (IoU) between an annotation mask L_c of an input image, for each concept c in a set of 'concepts', and a spatial activation map M_k indicating where a unit k is active. A unit k is taken as a detector for concept c if its IoU_{k,c} exceeds a pre-defined threshold T. See Zhou et al. (2018a) for more details.
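A deliberately simplified, per-image sketch of the IoU test follows; the full Network Dissection pipeline also upsamples activation maps to the input resolution, derives the activation threshold from the unit's dataset-wide distribution, and aggregates over images, so treat this as an illustration of the criterion only.

```python
import numpy as np

def iou(unit_mask: np.ndarray, concept_mask: np.ndarray) -> float:
    """IoU between a thresholded activation map M_k and a concept
    annotation mask L_c (boolean arrays of the same spatial size)."""
    unit_mask, concept_mask = unit_mask.astype(bool), concept_mask.astype(bool)
    union = np.logical_or(unit_mask, concept_mask).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(unit_mask, concept_mask).sum() / union)

def is_detector(unit_mask: np.ndarray, concept_mask: np.ndarray,
                threshold: float = 0.04) -> bool:
    """Unit counts as a detector for the concept if IoU exceeds T; 0.04 is
    the value commonly quoted for Network Dissection, used as a default here
    and worth verifying against the original paper."""
    return iou(unit_mask, concept_mask) > threshold
```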
3 RESULTS
3.1 COMPARISON OF SELECTIVITY MEASURES IN ALEXNET
The results from the various selectivity measures applied to the conv5, fc6, and fc7 layers of AlexNet are displayed in Fig. 2a–i. We did not plot the localist selectivity as there were no localist 'grandmother units'. The first point to note is that multiple units in the fc6 and fc7 layers had near-100% precision scores and multiple units had CCMAS scores approaching 1. For example, in layer fc7, we found 14 units with a precision > 0.9, and 1487 units with a CCMAS > 0.9. The second point is that the other measures provided much reduced estimates of selectivity. For example, the highest recall-with-perfect-precision score was only .08 (unit 255, responding to images of Monarch butterflies), and the unit with the top maximum informedness score (unit 3290, also responding to images of Monarch butterflies, with a score of 0.91) had a false alarm rate above its optimal threshold of > 99% (indeed, the minimum false alarm rate was 0.96).
To illustrate the contrasting measures of selectivity, consider unit fc6-1199, depicted in Fig. 3, which has a precision score of 98% and a CCMAS score of .92. By Zhou et al.'s criterion, this is a 'Monarch Butterfly' detector (its precision score is > 75%). By contrast, the scatter plot and signal-detection scores show this is a mischaracterisation of this unit, given that the false alarm rate at maximum informedness was greater than 99% and the modal response to Monarch butterflies was zero.
Figure 2: Different selectivity measures across the conv5, fc6, and fc7 layers of AlexNet (panels a–i: precision; no. of classes in the top 100; CCMAS; recall at precision = 1; recall at precision = 0.95; max. informedness; and specificity, recall, and false alarm proportion at max. informedness). Red line: median of the data; top and bottom box edges are the 25th and 75th percentiles; whiskers extend to the extreme edges of the distribution not considered outliers; red crosses are outliers. Green points and dashed lines are the means of the distributions with standard errors. The high levels of selectivity observed with the precision and CCMAS measures are in stark contrast with the low levels of selectivity observed with the recall with perfect precision and the high false-alarm rates at maximum informedness.
What level of selectivity is required before a unit can be considered an 'object detector' for a given category? In the end, this is a terminological point. On an extreme view, one might limit the term to the 'grandmother units' that categorize objects with perfect recall and specificity; alternatively, it might seem reasonable to describe a unit as a detector for a specific object category if there is some threshold of activation that supports more hits than misses (the unit is strongly activated by the majority of images from a given category) and, at the same time, supports more hits than false alarms (the unit is strongly activated by items from the given category more often than by items from other categories). Or perhaps a lower standard could be defended, but in our view, the term "object detector" suggests a higher level of selectivity than 8% recall at perfect precision. That said, our results show that some units respond strongly to some (unknown) features that are weakly correlated with an object category. For instance, unit fc6-1199 is responding to features that occur more frequently in Monarch Butterflies than in other categories. This can also be seen in a recent ablation study in which removing the most selective units tended to impair the CNN's performance in identifying the corresponding object categories more than other categories (Zhou et al., 2018b). But again, the pattern of performance is not consistent with the units being labeled 'object detectors'.
Figure 3: Data for unit fc6-1199. Left: activation jitterplot; black diamonds: Monarch butterfly images; grey circles: all other classes; white dashed line: threshold at maximum informedness for the butterfly class; blue solid line: threshold for the top 60 activations. Middle: histogram of activations of Monarch butterflies; red dashed line: threshold at maximum informedness for the butterfly class; black solid line: threshold for the top 60 activations. Inset: zoomed-in histogram of all activations across all ImageNet classes of unit fc6-1199 (N.B. this plot shows only the highest 121,586 activations; there are 596,734 activations at 0). There are Monarch butterfly images covering the whole range of values, with 72 images (5.8% of the total) having an activation of 0. Right: example ImageNet images with activations of 0 (top), the mean, 39.2 ± 0.6 (middle), and the maximum, 95 (bottom), of the range. Although the high precision score suggests that this unit is a butterfly detector, this is misleading given that there are butterfly images over the entire activation range (including 0).
3.2 HUMAN INTERPRETATION OF ACTIVATION MAXIMIZATION IMAGES FOR ALEXNET UNITS
Activation Maximization is one of the most commonly used interpretability methods for explaining what a single unit has learned in many artificial CNNs and even biological neural networks (see Nguyen et al. (2019) for a survey). Our behavioral experiment provides the first quantitative assessment of AM images and compares AM interpretability to other selectivity measures.
Table 1: Human judgements of whether AM images look like familiar objects in layers conv5, fc6, and fc8 in AlexNet.
layer | (a) % 'yes' responses | (b) % units with ≥ 80% 'yes' responses | (c) % overlap between humans | (d) % overlap with most active object class | (e) % overlap with CCMAS class
conv5 | 21.7% (±1.1%) | 4.3% (±1.3%) | 89.5% (±5.7%) | 34.1% (±14.4%) | 0%
fc6 | 21.0% (±0.4%) | 3.1% (±0.4%) | 80.4% (±4.1%) | 23.3% (±5.9%) | 18.9% (±5.9%)
fc8 | 71.2% (±0.6%) | 59.3% (±1.6%) | 96.5% (±0.4%) | 95.4% (±0.6%) | 94.6% (±0.7%)
We generated 100 AM images for every unit in the layers conv5, fc6, and fc8 in AlexNet, as in Nguyen et al. (2017), and displayed them as 10 × 10-image panels. A total of 3,299 image panels were used in the experiment (995 fc8, 256 conv5, and 2048 randomly selected fc6 image panels), divided into 64 counterbalanced lists for testing. To assess the interpretability of these units as object detectors, 333 paid volunteers were asked to look at the image panels and asked whether the images had an object, animal, or place in common. If the answer was 'yes', they were asked to write down a generic name for that object (e.g., "fish" rather than "goldfish"). Analyses of common responses were done for any units where over 80% of humans agreed there was an object present.
The results are summarized in Table 1. Not surprisingly, the AM images for the output fc8 units are the most human-recognizable as objects across the AlexNet layers (71.2%; Table 1a). In addition, when they were given a consistent interpretation, they almost always (95.4%; Table 1d) matched the corresponding ImageNet category. By contrast, less than 5% of units in conv5 or fc6 were associated with consistently interpretable images (Table 1b), and the interpretations only weakly matched the category associated with the highest-activation images or the CCMAS selectivity (Table 1d–e). Apart from showing that there are few interpretable units in the hidden layers of AlexNet, our findings show that the interpretability of images does not imply a high level of selectivity given the signal-detection results (Fig. 2d–h). See Fig. 4 for examples of the types of images that participants rated as objects or non-objects.
Figure 4: Example AM images for units (a) conv5-183, (b) fc6-319, (c) fc8-969, (d) conv5-65, (e) fc6-103, and (f) fc8-865, which were either judged by all participants to contain objects (a–c) or to be uninterpretable as objects (d–f). The human label for unit conv5-183 (a) was 'dogs'; the most active image was of a 'flat-coated retriever'; the CCMAS class was 'monitor'. For fc6-319 (b), subjects reported 'green peppers' or 'apples' (all classified as the same broad class in our analysis); both the most active item and the CCMAS class were 'Granny Smith apples'.
For fc8-969 (c), humans suggested 'beverage' or 'drink'; both the most active item and the CCMAS class were 'eggnog'.
3.3 COMPARING SELECTIVITY MEASURES IN OTHER CNNS
Figure 5: The units with the highest Network Dissection scores for the category 'bus'. (a) GoogLeNet on ImageNet, unit inception4e-494: precision 0.0, CCMAS 0.52, μ_A = 72.50, μ_¬A = 22.81. (b) GoogLeNet on Places-365, unit inception4e-824: precision 0.27, CCMAS 0.55, μ_A = 40.99, μ_¬A = 11.78. (c) VGG-16 on Places-365, unit conv5_3-20: precision 0.53, CCMAS 0.82, μ_A = 157.6, μ_¬A = 15.2. The scatter plots, precision, and CCMAS scores all indicate a low selectivity for this category. Blue squares: 'school bus'; red pentagons: 'trolleybus'; green stars: 'minibus'; grey circles: other classes.
Thus far we have assessed the selectivity of hidden units in AlexNet and shown that no units can reasonably be characterized as object detectors despite the high precision and CCMAS scores of some units. This raises the question as to whether more recent CNNs learn object detector units. In order to address this, we display jitterplots for three units that have the highest IoU scores according to Network Dissection for the category BUS in (a) GoogLeNet trained on ImageNet, (b) GoogLeNet trained on Places-365, and (c) VGG-16 trained on Places-365, respectively (Zhou et al., 2018a). Models trained on the Places-365 dataset learn to categorize images into scenes (e.g., bedrooms, kitchens, etc.) rather than into object categories, and nevertheless, Zhou et al. (2018a) reported more object detectors in the latter models. We illustrate the selectivity of the BUS category because it is an output category in ImageNet, so we can easily plot the jitterplots for these units.
As was the case with AlexNet, the jitterplots show that the most selective units show some degree of selectivity, with the BUS images more active on average compared to non-buses, and the percentage of nonzero activations for BUS higher than for the non-BUS categories (see Tables A3–A5 in the appendix for a summary of more units). But the units are no more selective than the units we observed in AlexNet. Indeed, the precision measure of selectivity for the first unit is 0.0, with none of the units having a precision of .75, which was the criterion for object detectors in Zhou et al. (2015), and the CCMAS scores for the first two units were roughly similar to the mean CCMAS score for AlexNet units in conv5 (and much lower than the means in fc6 and fc7). The most selective VGG-16 unit trained on Places-365 has lower precision and CCMAS scores than the Monarch Butterfly unit depicted in Figure 3. So again, different measures of selectivity support different conclusions, and even the most selective units are far from the selective units observed in recurrent networks as reported in Figure 1a. See Tables A3–A5 in the appendix for more details about these three units.
4 DISCUSSIONS AND CONCLUSIONS
Our central finding is that different measures of single-unit selectivity for objects support very different conclusions when applied to the same units in AlexNet. In contrast with the precision (Zhou et al., 2015) and CCMAS (Morcos et al., 2018) measures that suggest some highly selective units for objects in layers conv5, fc6, and fc7, the recall with perfect precision and the false alarm rates at maximum informedness show low levels of selectivity.
Indeed, the most selective units have a poor hit-rate or a high false-alarm rate (or both) for identifying an object class. The same outcome was observed with units in VGG-16 and GoogLeNet trained on either ImageNet or the Places-365 dataset.
Not only do the different measures provide very different assessments of selectivity, the precision, CCMAS, and Network Dissection measures provide highly misleading estimates of selectivity that have led to mistaken conclusions. For example, unit fc6-1199 in AlexNet trained on ImageNet is considered a Monarch Butterfly detector according to Zhou et al. (2015), with a precision score of 98% (and a CCMAS score of .93). But the jitterplot in Fig. 3 and the signal detection scores (e.g., the high false alarm rate at maximum informedness) show this is a mischaracterisation of this unit. In the same way, the Network Dissection method identified many object detectors in the VGG-16 and GoogLeNet CNNs, but the jitterplots in Fig. 5 (and the precision scores) show that this conclusion is unjustified. For additional problems with the CCMAS score, see Figure 5 in Appendix C. Similarly, the images generated by Activation Maximization also provided a misleading estimate of selectivity given that interpretable images were associated with very low selectivity scores. This has led to confusions that have delayed theoretical progress. For example, describing single units in CNNs as "object detectors" in response to high precision measures (Zhou et al.) suggests similar types of representations are learned in CNNs and RNNs. Indeed, we are not aware of anyone in the machine learning community who has even considered the hypothesis that selectivity is reduced in CNNs compared to RNNs. Our findings highlight the contrasting results.
What should be made of the finding that localist representations are sometimes learned in RNNs (units with perfect specificity and recall), but not in AlexNet and related CNNs? The failure to observe localist units in the hidden layers of these CNNs is consistent with Bowers et al. (2014)'s claim that these units emerge in order to support the co-activation of multiple items at the same time in short-term memory. That is, localist representations may be the solution to the superposition catastrophe, and these CNNs only have to identify one image at a time. The pressure to learn highly selective representations in response to the superposition constraint may help explain the reports of highly selective neurons in cortex, given that the cortex needs to co-activate multiple items at the same time in order to support short-term memory (Bowers et al., 2016).
Note, the RNNs that learned localist units were very small in scale compared to the CNNs we have studied here, and accordingly, it is possible that the contrasting results reflect the size of the networks rather than the superposition catastrophe per se. Relevant to this issue, a number of authors have reported the existence of selective units in larger RNNs with long short-term memory (LSTM) units (Karpathy et al., 2016; Radford et al., 2017; Lakretz et al., 2019; Na et al., 2019). Indeed, Lakretz et al. (2019) use the term 'grandmother cell' to describe the units they observed.
It will be interesting to apply our measures of selectivity to these larger RNNs and see whether these units are indeed 'grandmother units'.
It should also be noted that there are recent reports of impressively selective representations in Generative Adversarial Networks (Bau et al., 2019) and Variational Autoencoders (Burgess et al., 2018), where the superposition catastrophe is not an issue. Again, it will be interesting to assess the selectivity of these units according to signal detection measures in order to see whether there are additional computational pressures to learn highly selective or even grandmother cells. We will be exploring these issues in future work.
B1e6QO3SKH
Official Blind Review #2
3: Weak Reject
The paper empirically studies the category selectivity of individual cells in hidden units of CNNs. It is a sort of "meta-study" and comparison of different metrics proposed to identify cells with a preference for a specific target category. The claimed finding is that there are no cells that are "sufficiently" selective to be called object detectors. The paper is seemingly motivated by the authors' perceiving a contradiction: it is assumed that the power of neural networks is (among other things) due to the distributed representation, whereas the presence of object detectors would, in the extreme case, mean that the representation is disentangled into a separate unit per category. It may be a matter of terminology, but this is where my disagreement with the authors starts. I do not see a simplistic dichotomy where one could or should determine which of the two interpretations is "right" or "wrong". In my view, which I believe is the mainstream interpretation, a distributed representation does not contradict the presence of specialised units. Some categories probably are easily identified by a few distinctive features, so there will be more detector-like units; others are complex and hence more diffusely spread through the network; and of course there is no guarantee that the learned "object detectors" are tuned to exactly the target categories - after all, it is the purpose of the network to gradually translate the data distribution to the label distribution; if the categories were directly apparent in the data, nearest-neighbour would be enough. So it is not only possible, but rather likely, that the learned "object detectors" are to some degree driven by the statistics of the data, not the labels - e.g., there could be a highly selective "bird" unit which nevertheless has a high false positive rate for any of the more specific bird species categories in the ImageNet nomenclature. And vice versa, there could be a highly specific "Ferrari" detector that is so specialised that it has low recall for the "sports car" class (this case includes, among others, the case of viewpoint-specific detectors for certain categories). In the words of the paper, the "selective units are sensitive to some feature that is frequently, but not exclusively associated with the class" - I thought this was the standard majority view, not a surprising finding. In this context terminology matters: the study effectively tries to disprove that the network learns "near-perfect single output-category detectors", but who claims that it would do that? I agree with the authors that there is by now a zoo of selectivity metrics that are not always highly correlated. But is that a problem? We have a zoo of quality metrics for many machine learning problems - that is not necessarily a weakness, but simply reflects the obvious fact that a single number is not enough to characterise performance in a complex cognitive task. It is the job of the researcher/user to choose the metric that is most suitable for their specific question, and to correctly interpret its numerical value. Regarding the methodology, the paper did a lot of work to systematically crunch the numbers and analyse network units. It is laudable that someone took on that job. A few technical decisions are unclear to me. Why analyse only some of the units? If one collects statistics over >2000 units of a fully connected layer, one might as well do the complete job and use all 4096 units. Similarly, why analyse only the correctly classified images?
While it is clear that one must look at them separately, the activations on incorrectly classified images could also provide valuable insights. E.g., do false positives of class X on average activate a certain "class X detector"? Why choose only the class with the highest mean activation for CCMAS? That might be unrepresentative; e.g., a neuron might, for that particular class, always have high activation due to some very common background context, and still not be selective at all. Regarding the results, I find them much less clear-cut than the paper claims. For example, I find it quite remarkable that some unit has 8% recall at perfect precision. After all, only approximately 0.1% of the images are in the correct category, so a unit that flags 8% of them without making a mistake is a pretty good detector for (part of) the target class, cf. Fig. 3. Also regarding Fig. 2 / maximum informedness, the statistics actually do not look bad. Of course the false alarm proportion remains high - but the chance level here is 99.9%, so even a 99% false alarm rate means that your unit can, on its own, reject 90% of the true negatives. I find the proposed "minimum condition" for an object detector (>50% recall at >50% precision) unrealistic: the top-1 accuracy of AlexNet is, to my knowledge, <63%. Even the complete network probably never reaches 50% recall for most classes. Especially the user study - which is again a commendable effort - in my view does not confirm the claims. According to that study, almost 60% of all fc8 units are "object detectors", with very high coherence between humans and selectivity metrics. Overall, while it is an interesting study, it remains unclear to me what I should learn from it. I don't see why different measures provide "misleading conclusions" that need to be rectified. Conclusions are the responsibility of the researcher interpreting the numbers, not of the formula used to calculate some statistical performance metric. I am in a difficult situation here: the study is one of those things (like determining human performance on ImageNet, or re-coding some baseline where the original code is not available) where I find it valuable that someone in the community did them, but I still don't think they need a reviewed paper. A note on a blog, or on arXiv, is enough.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes? ### Paper Abstract Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks. In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification. ### Paper Keywords ["neural networks", "localist coding", "selectivity", "object detectors", "CCMAS", "CNNs", "activation maximisation", "information representation", "network dissection", "interpretabillity", "signal detection"] ### Paper Content ABSTRACTVarious methods of measuring unit selectivity have been developed with the aimof better understanding how neural networks work. But the different measuresprovide divergent estimates of selectivity, and this has led to different conclusionsregarding the conditions in which selective object representations are learned andthe functional relevance of these representations. In an attempt to better characterizeobject selectivity, we undertake a comparison of various selectivity measureson a large set of units in AlexNet, including localist selectivity (Bowers et al.,2014), precision (Zhou et al., 2015), class-conditional mean activity selectivity(CCMAS) (Morcos et al., 2018), network dissection (Zhou et al., 2018a), thehuman interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimatesof object selectivity, with precision and CCMAS measures providing misleadinglyhigh estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. Wefail to find any units that are even remotely as selective as the ‘grandmother cell’units reported in recurrent neural networks. In order to generalize these results,we compared selectivity measures on a few units in VGG-16 and GoogLeNettrained on the ImageNet or Places-365 datasets that have been described as ‘objectdetectors’. 
Again, we find poor hit-rates and high false-alarm rates for objectclassification.1 I NTRODUCTIONThere have been recent attempts to understand how neural networks (NNs) work by analyzinghidden units one-at-a-time using various measures such as localist selectivity (Bowers et al., 2014),class-conditional mean activity selectivity (CCMAS) (Morcos et al., 2018), precision (Zhou et al.,2015), network dissection (Zhou et al., 2018a), and activation maximization (AM) (Erhan et al.,2009). These measures are all taken to provide evidence that some units respond highly selectivelyto categories of objects under some conditions. Not only are these findings surprising given thewidespread assumption that NNs only learn highly distributed and entangled representations, theyraise a host of questions, including the functional importance of these selective representations (Zhouet al., 2018b), the conditions in which they are learned (e.g., Morcos et al., 2018), and the relationbetween these representations and the selective neurons observed in cortex (Bowers, 2009).To answer these question, it is necessary to have a better understanding of what these metrics actuallymeasure, and how they relate to one another. Accordingly, we directly compare these measures ofselectivity on the same set of units as well as adopt standard signal-detection measures in an attemptto provide better measures of single-unit selectivity to object category. In addition, to provide amore intuitive assessment of selectivity, we report jitterplots for a few of the most selective unitsthat visually display how the unit responds to the different image categories. We focus on AlexNet(Krizhevsky et al., 2012) trained on ImageNet (Deng et al., 2009) because many authors have studiedthe selectivity of single hidden units in this model using a range of quantitative (Zhou et al., 2018a;2015) and qualitative (Nguyen et al., 2017; Yosinski et al., 2015; Simonyan et al., 2013) methods.But we also compare different selectivity measures on specific units in VGG-16 (Simonyan andZisserman, 2014) and GoogLeNet (Szegedy et al., 2015) trained on the the ImageNet and Places-3651Under review as a conference paper at ICLR 2020datasets that were characterized by Zhou et al. (2018a) as “object detectors” based on their NetworkDissection method (Zhou et al., 2018a). Our main findings are:1.Theprecision and CCMAS measures are misleading with near-maximum selectivity scoresassociated with units that strongly respond to many different image categories. By contrast,the signal-detection measures more closely capture the level of selectivity displayed in thejitterplots (Sec. 3.1).2.Units with interpretable AM images do not correspond to highly selective representations(Sec. 3.2).3. The Network Dissection method also provides a misleading measure for “object detectors”(Sec. 3.3).In one line of research, Bowers et al. (2014; 2016) assessed the selectivity of single hidden unitsin recurrent neural networks (RNNs) designed to model human short-term memory. They reportedmany ‘localist’ or ‘grandmother cell’ units that were 100% selective for specific letters or words,where all members of the selective category were more active than and disjoint from all non-members,as can be shown in jitterplots (Berkeley et al., 1995) (see Fig. 
1 for a unit selective to the letter ‘j’).The authors argued that the network learned these representations in order to co-activate multipleletters or words at the same time in short-term memory without producing ambiguous blends ofoverlapping distributed patterns (the so-called ‘superposition catastrophe’). Consistent with thishypothesis, localist units did not emerge when the model was trained on letters or words one-at-a-time(Bowers et al., 2014) (see Fig. 1 for an example of a non-selective unit).In parallel, researchers have reported selective units in the hidden layers of various CNNs trained toclassify images into one of multiple categories (Zhou et al., 2015; Morcos et al., 2018; Zeiler andFergus, 2014; Erhan et al., 2009), for a review see Bowers (2017). For example, Zhou et al. (2015)assessed the selectivity of units in the pool5 layer of two CNNs trained to classify images into 1000objects and 205 scene categories, respectively. They reported many highly selective units that theycharacterized as ‘object detectors’ in both networks. Similarly, Morcos et al. (2018) reported thatCNNs trained on CIFAR-10 and ImageNet learned many highly selective hidden units, with CCMASscores approaching the maximum of 1.0. These later findings appear to be inconsistent with Bowerset al. (2016) who failed to observe selective representations in fully connected NNs trained on stimulione-at-a-time (see Fig. 1), but the measures of selectivity that have been applied across studies aredifferent, and accordingly, it is difficult to directly compare results.A better understanding of the relation between selectivity measures is vital given that differentmeasures are frequently used to address similar issues. For example, both the human interpretabilityof generated images (Le, 2013) and localist selectivity (Bowers et al., 2014) have been used tomake claims about ‘grandmother cells’, but it is not clear whether they provide similar insights intounit selectivity. Similarly, based on their precision metric, Zhou et al. (2015) claim that the objectdetectors learned in CNNs play an important role in identifying specific objects, whereas Morcos et al.(2018) challenge this conclusion based on their finding that units with high CCMAS measures werenot especially important in the performance of their CNNs and concluded: “...it implies that methodsfor understanding neural networks based on analyzing highly selective single units, or finding optimalinputs for single units, such as activation maximization (Erhan et al., 2009) may be misleading". Thismakes a direct comparison between selectivity measures all the more important.In order to directly compare and have a better understanding of the different selectivity measures weassessed (1) localist, (2) precision , and (3) CCMAS selectivity of the conv5 ,fc6, and fc7of AlexNettrained on ImageNet, and in addition, we employed a range of signal detection methods on theseunits, namely, (4) recall with 100% and 95% precision, (5) maximum informedness, (6) specificity atmaximum informedness , and (7) recall (also called sensitivity ) at maximum informedness, and falsealarm rates at maximum informedness (described in Sec. 2). We also assessed the selectivity of a fewunits in VGG-16 and GoogLeNet models trained on the ImageNet and Places-365 dataset that werehighly selective according to the Network Dissection method (Zhou et al., 2018a). 
We show thattheprecision and CCMAS measures often provide misleadingly high estimates of object selectivitycompared to other measures, and we do not find any units that can be reasonably described as ‘objectdetectors’ given that the most selective units show a low hit-rate or a high false-alarm rate (or both)when classifying images. At best, the most selective units in CNNs are sensitive to some unknownfeature that is weakly associated with the class in question.2Under review as a conference paper at ICLR 2020Figure 1: Examples of selectivity measures used. Top left: jitterplot of unit 113in an RNN (underthe superposition constraint) selective to the letter ‘j’ (Bowers et al., 2016). Top middle: jitterplot ofa non-selective unit 160found in an RNN trained on words one-at-a-time from (Bowers et al., 2016).Top right: Activation maximization image of unit conv5 9AlexNet that resembles a lighthouse(Nguyen et al., 2016). Bottom: highest-activation images for a ‘lamp’ detector with 84% precisionin the layer conv5 of AlexNet; from (Zhou et al., 2015).In addition to these quantitative measures and jitterplots we assessed selectivity with a commonqualitative measure, namely, human interpretation of images generated by a state-of-the-art activationmaximization (AM) method (Nguyen et al., 2017). AM images are generated to strongly activateindividual units, and some of them are interpretable by humans (e.g., a generated image that lookslike a lighthouse, see Fig. 1). For the first time, we systematically evaluated the interpretability ofthe AM images and compare these ratings with the selectivity measures for corresponding units. Weshow that the few hidden units with interpretable AM images are not highly selective.2 M ETHODSNetwork and Dataset All1.3M photos from the ImageNet ILSVRC 2012 dataset (Deng et al.,2009) were cropped to 277277pixels and classified by the pre-trained AlexNet CNN (Krizhevskyet al., 2012) shipped with Caffe (Jia et al., 2014), resulting in 721,536 correctly classified images.Once classified, the images are not re-cropped nor subject to any changes. We analyzed the fullyconnected ( fc) layers: fc6andfc7(4096 units), and the top convolutional layer conv5 which has 256filters. We only recorded the activations of correctly classified images. The activation files are storedin .h5 format and will be available at http://anonymizedForReview . We randomly selected233conv5 , 2738 fc6, 2239 fc7units for analysis.Localist selectivity Following Bowers et al. (2014), we define a unit to be localist for class Aif theset of activations for class Awas higher and disjoint with those of :A. Localist selectivity is easilydepicted with jitterplots (Berkeley et al., 1995) in which a scatter plot for each unit is generated (seeFigs. 1 and 3). Each point in a plot corresponds to a unit’s activation in response to a single image,and only correctly classified images are plotted. The level of activation is coded along the x-axis, andan arbitrary value is assigned to each point on the y-axis.Precision Precision refers to the proportion of items above some threshold from a given class. Theprecision method of finding object detectors involves identifying a small subset of images that moststrongly activate a unit and then identifying the critical part of these images that are responsible fordriving the unit. Zhou et al. 
(2015) took the 60 images that activated a unit the most strongly andasked independent raters to interpret the critical image patches (e.g., if 50 of the 60 images werelabeled as ‘lamp’, the unit would have a precision index of 50/60 or 83%; see Fig. 1). Object detectorswere defined as units with a precision score>75%: they reported multiple such detectors. Here, weapproximate this approach by considering the 60 images that most strongly activate a given unit andassess the highest percentage of images from a given output class.CCMAS Morcos et al. (2018) introduced a selectivity index called the Class-conditional MeanActivation Selectivity (CCMAS). The CCMAS for class Acompares the mean activation of allimages in class A,A, with the mean activation of all images not in class A,:A, and is given by:(A:A)=(A+:A). Here, we assessed class selectivity for the highest mean activation class.3Under review as a conference paper at ICLR 2020Activation Maximization We harnessed an activation maximization method called Plug & PlayGenerative Networks (Nguyen et al., 2017) in which an image generator network was used togenerate images (AM images) that highly activate a unit in a target network. We used the public codereleased by Nguyen et al. (2017) and their default hyperparameters.1We generated 100 separateimages that maximally activated each unit in the conv5 ,fc6, and fc8layers of AlexNet and askedparticipants to judge whether they could identify any repeating objects, animals, or places in imagesin a behavioral experiment (Sec. 3.2). Readers can test themselves at: https://research.sc/participant/login/dynamic/63907FB2-3CB9-45A9-B4AC-EFFD4C4A95D5Recall with perfect and 95% precision Recall with perfect and 95% precision are related to localistselectivity except that they provide a continuous rather than discrete measure. For recall with perfectprecision we identified the image that activated a given unit the most and counted the number ofimages from the same class that were more active than all images from all other classes. We thendivided this result by the total number of correctly identified images from this class. A recall with aperfect precision score of 1 is equivalent to a localist representation. Recall with a 95% precisionallows 5% false alarms.Maximum informedness Maximum informedness identifies the class and threshold where thehighest proportion of images above the threshold and the lowest proportion of images below thethreshold are from that class (Powers, 2011). The informedness is computed for each class at eachthreshold, with the highest value selected. Informedness summarises the diagnostic performanceof unit for a given class at a certain threshold based on the recall [ True Positives / (True Positives +False Negatives )] and specificity [True Negatives / (True Negatives +False Positives)] in the formula[informedness =recall +specificity1](Powers, 2011).Sensitivity or Recall at Maximum Informedness For the threshold and class selected by MaximumInformedness, recall (or hit-rate) is the proportion of items from the given class that are above thethreshold. Also known as true postive rate.Specificity at Maximum Informedness For the threshold and class selected by Maximum Informed-ness, the proportion of items that are not from the given class that are below the threshold. 
Alsoknown as true negative rate.False Alarm Rate at Maximum Informedness For the threshold and class selected by MaximumInformedness, the proportion of items that are not from the given class that are above the threshold.Network Dissection To assess the selectivity of a unit in the Network Dissection technique, Zhouet al. (2018a) compute the Intersection over Union (IoU) of an annotated input image Lc, for theset of all ‘concepts’ cand a spatial activation map, Mk, of where a unit kis. A unitkis taken as adetector for concept cif its IoUk;cexceeds a pre-defined threshold T. See Zhou et al. (2018a) formore details.3 R ESULTS3.1 C OMPARISON OF SELECTIVITY MEASURES IN ALEXNETThe results from the various of selectivity measures applied to the conv5 ,fc6, and fc7layers ofAlexNet are displayed in Fig. 2a–i. We did not plot the localist selectivity as there were no localist‘grandmother units’. The first point to note is that multiple units in the fc6andfc7layers had near100% precision scores and multiple units had CCMAS scores approaching 1. For example, in layerfc7, we found 14 units with a precision > 0.9, and 1487 units with a CCMAS > 0.9. The second pointis that other measures provided much reduced estimates of selectivity. For example, the unit with thehighest recall with a perfect precision score was only .08 (unit 255responding to images of Monarchbutterflies), and the unit with the top maximum informedness score (unit 3290 also responding toimages of Monarch butterflies with a score of 0.91) had a false alarm rate above its optimal threshold> 99% (indeed the minimum false alarm rate was 0.96).To illustrate the contrasting measures of selectivity consider unit fc61199depicted in Fig. 3 that has aprecision score of 98% and a CCMAS score of .92. By Zhou et al.’s criterion, this is a ‘MonarchButterfly’ detector (its precision score > 75%). By contrast, the scatter plot and signal-detection1https://github.com/Evolving-AI-Lab/ppgn4Under review as a conference paper at ICLR 2020a. Precision b. No. of classes in top 100 c. CCMASd. Recall (precision = 1) e. Recall (precision = 0.95) f. Max. informednessg. Specificity h. Recall i. False alarm proportion(at max. informedness) (at max. informedness) (at max. informedness)Figure 2: Different selectivity measures across the conv5 ,fc6, and fc7layers of AlexNet. Red-line:median of data, top and bottom of box edges is the 25thand 75thpercentile, whiskers extend toextreme edges of distribution not considered outliers and red crosses are outliers. Green points anddashed lines are the means of the distributions with standard errors. The high levels of selectivityobserved with the precision and CCMAS measures are in stark contrast with the low levels ofselectivity observed with the recall with perfect precision and high false-alarm rates at maximuminformedness.scores show this is a mischaracterisation of this unit given that the false alarm rate at maximuminformedness was greater than 99% and the modal response to Monarch butterflies was zero.What level of selectivity is required before a unit can be considered an ‘object detector’ for a givencategory? In the end, this is a terminological point. 
On an extreme view, one might limit the term to the 'grandmother units' that categorize objects with perfect recall and specificity; alternatively, it might seem reasonable to describe a unit as a detector for a specific object category if there is some threshold of activation that supports more hits than misses (the unit is strongly activated by the majority of images from a given category), and at the same time supports more hits than false alarms (the unit is strongly activated by items from the given category more often than by items from other categories). Or perhaps a lower standard could be defended, but in our view, the term "object detector" suggests a higher level of selectivity than 8% recall at perfect precision. That said, our results show that some units respond strongly to some (unknown) features that are weakly correlated with an object category. For instance, unit $fc6_{1199}$ is responding to features that occur more frequently in Monarch Butterflies than in other categories. This can also be seen in a recent ablation study in which removing the most selective units tended to impair the CNN's performance in identifying the corresponding object categories more than other categories (Zhou et al., 2018b). But again, the pattern of performance is not consistent with the units being labeled 'object detectors'.

[Figure 3: Data for unit $fc6_{1199}$. Left: activation jitterplot; black diamonds: Monarch butterfly images; grey circles: all other classes; white dashed line: threshold for the butterfly class at maximum informedness; blue solid line: threshold for the top 60 activations. Middle: histogram of activations for Monarch butterflies; red dashed line: threshold for the butterfly class at maximum informedness; black solid line: threshold for the top 60 activations. Inset: zoomed-in histogram of all activations across all ImageNet classes for unit $fc6_{1199}$ (N.B. this plot shows only the highest 121,586 activations; there are 596,734 activations at 0). There are Monarch butterfly images covering the whole range of values, with 72 images (5.8% of the total) having an activation of 0. Right: example ImageNet images with activations of 0 (top), the mean, 39.2 ± 0.6 (middle), and the maximum, 95 (bottom), of the range. Although the high precision score suggests that this unit is a butterfly detector, this is misleading given that there are butterfly images over the entire activation range (including 0).]

3.2 HUMAN INTERPRETATION OF ACTIVATION MAXIMIZATION IMAGES FOR ALEXNET UNITS

Activation Maximization is one of the most commonly used interpretability methods for explaining what a single unit has learned in many artificial CNNs and even biological neural networks (see Nguyen et al. (2019) for a survey). Our behavioral experiment provides the first quantitative assessment of AM images and compares AM interpretability to other selectivity measures.

Table 1: Human judgements of whether AM images look like familiar objects in layers conv5, fc6, and fc8 in AlexNet.

layer | % 'yes' responses (a) | % units with >=80% 'yes' responses (b) | % overlap between humans (c) | % overlap with most active object (d) | % overlap with CCMAS class (e)
conv5 | 21.7% (±1.1%) | 4.3% (±1.3%) | 89.5% (±5.7%) | 34.1% (±14.4%) | 0%
fc6   | 21.0% (±0.4%) | 3.1% (±0.4%) | 80.4% (±4.1%) | 23.3% (±5.9%) | 18.9% (±5.9%)
fc8   | 71.2% (±0.6%) | 59.3% (±1.6%) | 96.5% (±0.4%) | 95.4% (±0.6%) | 94.6% (±0.7%)

We generated 100 AM images for every unit in the layers conv5, fc6, and fc8 in AlexNet, as in Nguyen et al. (2017), and displayed them as 10×10-image panels.
A total of 3,299 image panels were used in the experiment (995 fc8, 256 conv5, and 2048 randomly selected fc6 image panels), divided into 64 counterbalanced lists for testing. To assess the interpretability of these units as object detectors, 333 paid volunteers were asked to look at the image panels and to say whether the images had an object, animal, or place in common. If the answer was 'yes', they were asked to write down a generic name for that object (e.g., "fish" rather than "goldfish"). Analyses of common responses were done for any units where over 80% of humans agreed there was an object present.

The results are summarized in Table 1. Not surprisingly, the AM images for output fc8 units are the most human-recognizable as objects across the AlexNet layers (71.2%; Table 1a). In addition, when they were given a consistent interpretation, they almost always (95.4%; Table 1d) matched the corresponding ImageNet category. By contrast, less than 5% of units in conv5 or fc6 were associated with consistently interpretable images (Table 1b), and the interpretations only weakly matched the category associated with the highest-activation images or CCMAS selectivity (Table 1d-e). Apart from showing that there are few interpretable units in the hidden layers of AlexNet, our findings show that the interpretability of images does not imply a high level of selectivity, given the signal-detection results (Fig. 2d-h). See Fig. 4 for examples of the types of images that participants rated as objects or non-objects.

[Figure 4: Example AM images that were either judged by all participants to contain objects (a-c) or to be uninterpretable as objects (d-f): (a) $conv5_{183}$; (b) $fc6_{319}$; (c) $fc8_{969}$; (d) $conv5_{65}$; (e) $fc6_{103}$; (f) $fc8_{865}$. The human label for unit $conv5_{183}$ (a) was 'dogs'; the most active image was of a 'flat-coated retriever'; the CCMAS class was 'monitor'. For $fc6_{319}$ (b), subjects reported 'green peppers' or 'apples' (all classified as the same broad class in our analysis); both the most active item and the CCMAS class were 'Granny Smith apples'. For $fc8_{969}$ (c), humans suggested 'beverage' or 'drink'; both the most active item and the CCMAS class were 'eggnog'.]

3.3 COMPARING SELECTIVITY MEASURES IN OTHER CNNS

[Figure 5: The units with the highest Network Dissection scores for the category 'bus'. (a) GoogLeNet on ImageNet, unit $inception4e_{494}$: precision 0.0, CCMAS 0.52, $\mu_A = 72.50$, $\mu_{\neg A} = 22.81$. (b) GoogLeNet on Places-365, unit $inception4e_{824}$: precision 0.27, CCMAS 0.55, $\mu_A = 40.99$, $\mu_{\neg A} = 11.78$. (c) VGG-16 on Places-365, unit $conv5\_3_{20}$: precision 0.53, CCMAS 0.82, $\mu_A = 157.6$, $\mu_{\neg A} = 15.2$. The scatter plots, precision, and CCMAS scores all indicate a low selectivity for this category. Blue squares: 'school bus'; red pentagons: 'trolleybus'; green stars: 'minibus'; grey circles: other classes.]

Thus far we have assessed the selectivity of hidden units in AlexNet and shown that no units can reasonably be characterized as object detectors despite the high precision and CCMAS scores of some units. This raises the question as to whether more recent CNNs learn object detector units. In order to address this, we display jitterplots for the three units that have the highest IoU scores according to Network Dissection for the category BUS in (a) GoogLeNet trained on ImageNet, (b) GoogLeNet trained on Places-365, and (c) VGG-16 trained on Places-365, respectively (Zhou et al., 2018a). Models trained on the Places-365 dataset learn to categorize images into scenes (e.g., bedrooms, kitchens, etc.) rather than into object categories, and nevertheless, Zhou et al.
(2018a) reported more object detectors in these scene-trained models. We illustrate the selectivity of the BUS category because it is an output category in ImageNet, so we can easily plot the jitterplots for these units.

As was the case with AlexNet, the jitterplots show that the most selective units show some degree of selectivity, with the BUS images more active on average than non-bus images, and the percentage of nonzero activations for BUS higher than for the non-BUS categories (see Tables A3-A5 in the appendix for a summary of more units). But the units are no more selective than the units we observed in AlexNet. Indeed, the precision measure of selectivity for the first unit is 0.0, with none of the units reaching the precision of .75 that was the criterion for object detectors in Zhou et al. (2015), and the CCMAS scores for the first two units were roughly similar to the mean CCMAS score for AlexNet units in conv5 (and much lower than the means in fc6 and fc7). The most selective VGG-16 unit trained on Places-365 has lower precision and CCMAS scores than the Monarch Butterfly unit depicted in Figure 3. So again, different measures of selectivity support different conclusions, and even the most selective units are far from the selective units observed in recurrent networks as reported in Figure 1a. See Tables A3-A5 in the appendix for more details about these three units.

4 DISCUSSIONS AND CONCLUSIONS

Our central finding is that different measures of single-unit selectivity for objects support very different conclusions when applied to the same units in AlexNet. In contrast with the precision (Zhou et al., 2015) and CCMAS (Morcos et al., 2018) measures that suggest some highly selective units for objects in layers conv5, fc6, and fc7, the recall with perfect precision and the false alarm rates at maximum informedness show low levels of selectivity. Indeed, the most selective units have a poor hit-rate or a high false-alarm rate (or both) for identifying an object class. The same outcome was observed with units in VGG-16 and GoogLeNet trained on either ImageNet or the Places-365 dataset.

Not only do the different measures provide very different assessments of selectivity; the precision, CCMAS, and Network Dissection measures provide highly misleading estimates of selectivity that have led to mistaken conclusions. For example, unit $fc6_{1199}$ in AlexNet trained on ImageNet is considered a Monarch Butterfly detector according to Zhou et al. (2015), with a precision score of 98% (and a CCMAS score of .93). But the jitterplot in Fig. 3 and the signal detection scores (e.g., the high false alarm rate at maximum informedness) show this is a mischaracterisation of this unit. In the same way, the Network Dissection method identified many object detectors in the VGG-16 and GoogLeNet CNNs, but the jitterplots in Fig. 5 (and the precision scores) show that this conclusion is unjustified. For additional problems with the CCMAS score see Figure 5 in Appendix C. Similarly, the images generated by Activation Maximization also provided a misleading estimate of selectivity, given that interpretable images were associated with very low selectivity scores. This has led to confusions that have delayed theoretical progress. For example, describing single units in CNNs as "object detectors" in response to high precision measures (Zhou et al., 2015) suggests similar types of representations are learned in CNNs and RNNs.
Indeed, we are not aware of anyone in the machine learning community who has even considered the hypothesis that selectivity is reduced in CNNs compared with RNNs. Our findings highlight these contrasting results.

What should be made of the finding that localist representations are sometimes learned in RNNs (units with perfect specificity and recall), but not in AlexNet and related CNNs? The failure to observe localist units in the hidden layers of these CNNs is consistent with Bowers et al. (2014)'s claim that these units emerge in order to support the co-activation of multiple items at the same time in short-term memory. That is, localist representations may be the solution to the superposition catastrophe, and these CNNs only have to identify one image at a time. The pressure to learn highly selective representations in response to the superposition constraint may help explain the reports of highly selective neurons in cortex, given that the cortex needs to co-activate multiple items at the same time in order to support short-term memory (Bowers et al., 2016).

Note, the RNNs that learned localist units were very small in scale compared to the CNNs we have studied here, and accordingly, it is possible that the contrasting results reflect the size of the networks rather than the superposition catastrophe per se. Relevant to this issue, a number of authors have reported the existence of selective units in larger RNNs with long short-term memory (LSTM) units (Karpathy et al., 2016; Radford et al., 2017; Lakretz et al., 2019; Na et al., 2019). Indeed, Lakretz et al. (2019) use the term 'grandmother cell' to describe the units they observed. It will be interesting to apply our measures of selectivity to these larger RNNs and see whether these units are indeed 'grandmother units'.

It should also be noted that there are recent reports of impressively selective representations in Generative Adversarial Networks (Bau et al., 2019) and Variational Autoencoders (Burgess et al., 2018) where the superposition catastrophe is not an issue. Again, it will be interesting to assess the selectivity of these units according to signal detection measures in order to see whether there are additional computational pressures to learn highly selective or even grandmother cells. We will be exploring these issues in future work.

### Review Title
Official Blind Review #2

### Review Text
The paper empirically studies the category selectivity of individual cells in hidden units of CNNs. It is a sort of "meta-study" and comparison of different metrics proposed to identify cells with a preference for a specific target category. The claimed finding is that there are no cells that are "sufficiently" selective to be called object detectors. The paper is seemingly motivated by the authors' perception of a contradiction: it is assumed that the power of neural networks is (among other things) due to their distributed representations, whereas the presence of object detectors would, in the extreme case, mean that the representation is disentangled into a separate unit per category. It may be a matter of terminology, but this is where my disagreement with the authors starts. I do not see a simplistic dichotomy where one could or should determine which of the two interpretations is "right" or "wrong". In my view, which I believe is the mainstream interpretation, a distributed representation does not contradict the presence of specialised units.
Some categories probably are easily identified by a few distinctive features, so there will be more detector-like units; others are complex and hence more diffusely spread through the network; and of course there is no guarantee that the learned "object detectors" are tuned to exactly the target categories. After all, it is the purpose of the network to gradually translate the data distribution into the label distribution; if the categories were directly apparent in the data, nearest-neighbour would be enough. So it is not only possible, but rather likely, that the learned "object detectors" are to some degree driven by the statistics of the data, not the labels. E.g., there could be a highly selective "bird" unit which nevertheless has a high false positive rate for any of the more specific bird species categories in the ImageNet nomenclature. And vice versa, there could be a highly specific "Ferrari" detector that is so specialised that it has low recall for the "sports car" class (this case includes, among others, the case of viewpoint-specific detectors for certain categories). In the words of the paper, the "selective units are sensitive to some feature that is frequently, but not exclusively, associated with the class"; I thought this was the standard majority view, not a surprising finding. In this context terminology matters: the study effectively tries to disprove that the network learns "near-perfect single output-category detectors", but who claims that it would do that?

I agree with the authors that there is by now a zoo of selectivity metrics that are not always highly correlated. But is that a problem? We have a zoo of quality metrics for many machine learning problems; that is not necessarily a weakness, but simply reflects the obvious fact that a single number is not enough to characterise performance in a complex cognitive task. It is the job of the researcher/user to choose the metric that is most suitable for their specific question, and to correctly interpret its numerical value.

Regarding the methodology, the paper did a lot of work to systematically crunch the numbers and analyse network units. It is a laudable effort that someone took on that job. A few technical decisions are unclear to me. Why analyse only some of the units? If one collects statistics over >2000 units of a fully connected layer, one might as well do the complete job and use all 4096 units. Similarly, why analyse only the correctly classified images? While it is clear that one must look at them separately, the activations on incorrectly classified ones could also provide valuable insights. E.g., do false positives of class X on average activate a certain "class X detector"? Why choose only the class with the highest mean activation for CCMAS? That might be unrepresentative; e.g., a neuron might, for that particular class, always have high activation due to some very common background context, and still not be selective at all.

Regarding the results, I find them much less clear-cut than the paper claims. For example, I find it quite remarkable that some unit has 8% recall at perfect precision. After all, only approximately 0.1% of the images are in the correct category, so a unit that flags 8% of them without making a mistake is a pretty good detector for (part of) the target class, cf. Fig. 3. Also regarding Fig. 2 / maximum informedness, the statistics actually do not look bad.
Of course the false alarm proportion remains high, but the chance level here is 99.9%, so even a 99% false alarm rate means that your unit can, on its own, reject 90% of the true negatives. I find the proposed "minimum condition" for an object detector (>50% recall at >50% precision) unrealistic: the top-1 accuracy of AlexNet is, to my knowledge, <63%. Even the complete network probably never reaches 50% recall for most classes. Especially the user study, which is again a commendable effort, in my view does not confirm the claims. According to that study, almost 60% of all fc8 units are "object detectors", with very high coherence between humans and selectivity metrics.

Overall, while it is an interesting study, it remains unclear to me what I should learn from it. I don't see why different measures provide "misleading conclusions" that need to be rectified. Conclusions are the responsibility of the researcher interpreting the numbers, not of the formula used to calculate some statistical performance metric. I am in a difficult situation here: the study is one of those things (like determining human performance on ImageNet, or re-coding some baseline where the original code is not available) where I find it valuable that someone did them in the community, but still I don't think they need a reviewed paper. A note on a blog, or on arXiv, is enough.

### Review Rating
3: Weak Reject

### Review Confidence
HygQro05KX
ICLR.cc/2019/Conference
2019
$A^*$ sampling with probability matching
["Yichi Zhou", "Jun Zhu"]
Probabilistic methods often need to draw samples from a nontrivial distribution. $A^*$ sampling is a nice algorithm built upon a top-down construction of a Gumbel process, where a large state space is divided into subsets and at each round $A^*$ sampling selects a subset to process. However, the selection rule depends on a bound function, which can be intractable. Moreover, we show that such a selection criterion can be inefficient. This paper aims to improve $A^*$ sampling by addressing these issues. To design a suitable selection rule, we apply \emph{Probability Matching}, a widely used method for decision making, to $A^*$ sampling. We provide insights into the relationship between $A^*$ sampling and probability matching by analyzing a nontrivial special case in which the state space is partitioned into two subsets. We show that in this case probability matching is optimal within a constant gap. Furthermore, as directly applying probability matching to $A^*$ sampling is time consuming, we design an approximate version based on Monte-Carlo estimators. We also present an efficient implementation by leveraging special properties of Gumbel distributions and well-designed balanced trees. Empirical results show that our method saves a significant amount of computational resources on suboptimal regions compared with $A^*$ sampling.
["probability", "subsets", "probabilistic methods", "samples", "nontrivial distribution", "sampling", "nice algorithm", "construction", "gumbel process", "large state space"]
ABSTRACT

Probabilistic methods often need to draw samples from a nontrivial distribution. $A^*$ sampling is a nice algorithm built upon a top-down construction of a Gumbel process, where a large state space is divided into subsets and at each round $A^*$ sampling selects a subset to process. However, the selection rule depends on a bound function, which can be intractable. Moreover, we show that such a selection criterion can be inefficient. This paper aims to improve $A^*$ sampling by addressing these issues. To design a suitable selection rule, we apply Probability Matching, a widely used method for decision making, to $A^*$ sampling. We provide insights into the relationship between $A^*$ sampling and probability matching by analyzing a nontrivial special case in which the state space is partitioned into two subsets. We show that in this case probability matching is optimal within a constant gap. Furthermore, as directly applying probability matching to $A^*$ sampling is time consuming, we design an approximate version based on Monte-Carlo estimators. We also present an efficient implementation by leveraging special properties of Gumbel distributions and well-designed balanced trees. Empirical results show that our method saves a significant amount of computational resources on suboptimal regions compared with $A^*$ sampling.

1 INTRODUCTION

Probabilistic methods provide an important family of tools in machine learning for modeling uncertainty of complex systems, performing probabilistic inference, revealing hidden factors (Ghahramani, 2015), and making decisions (Kocsis & Szepesvári, 2006). These methods usually involve a fundamental task of drawing samples from a nontrivial distribution.

There exists a lot of work approaching sampling problems, including rejection sampling (Gilks & Wild, 1992), MCMC (Propp & Wilson, 1996), etc. Recently, sampling algorithms based on the Gumbel process have received increasing attention (Malmberg, 2013; Hazan et al., 2013; Gane et al., 2014; Hazan & Jaakkola, 2012; Papandreou & Yuille, 2011; Tarlow et al., 2012; Kappes et al., 2015; Kim et al., 2016), since a Gumbel process can turn a sampling task into an optimization problem, so that we can use optimization tools to finish the original sampling task.

In this work, we focus on $A^*$ sampling (Maddison et al., 2014), which is one of the most famous Gumbel-process-based sampling algorithms. The major advantage of $A^*$ sampling is that it can be applied to large state spaces, e.g., a continuous sample space or a discrete space whose size is exponentially large. The reason is that $A^*$ sampling divides the state space into disjoint subsets and takes each subset as a whole, so that it can avoid initializing a large number of states, which is often encountered by other Gumbel-process-based algorithms (Papandreou & Yuille, 2011). Furthermore, $A^*$ sampling adaptively selects subsets to process, and the performance of $A^*$ sampling is highly dependent on the selection rule.

However, how to select subsets to process is very challenging. In each round, $A^*$ sampling processes the subset with maximum $D(S)$, which is an upper bound on the maximum Gumbel value within a subset $S$ (see Section 2 for more details of $D(S)$). But in general, it is difficult to compute $D(S)$ since it is an instance of non-convex optimization. Another challenge is that even if we are able to compute $D(S)$ efficiently, selecting a subset with the maximum $D(S)$ may not be a good choice. This is because our target is to process subsets with larger Gumbel values, but $D(S)$ only provides an upper bound.
So it is possible that the Gumbel value of $S$ is relatively small with high probability while $D(S)$ is very large. In this case, $A^*$ sampling will waste many computational resources on suboptimal regions. We'll discuss more on how this inaccuracy of $D(S)$ deteriorates the performance of $A^*$ sampling by analyzing a counter example in Section 3.

To address the above challenges, we improve the subset selection procedure of $A^*$ sampling with probability matching (PM), which has been proven efficient in many settings of decision making, including Bayesian bandits (Chapelle & Li, 2011), MDPs (Osband & Van Roy, 2016), economic decisions (Vulkan, 2000), etc.

Contributions: Intuitively, PM randomly selects an option according to its probability of being the optimal one, so that it won't select a suboptimal option for too many rounds. To provide more insights into the efficiency of applying PM to $A^*$ sampling, we first analyze a simple but nontrivial special case in which the state space is partitioned into two subsets. As we'll present in Section 4.1, in this case PM is optimal within a constant gap in terms of the stochastic regret (Guha & Munagala, 2014), which measures the number of rounds spent on suboptimal options. Furthermore, as directly applying PM to $A^*$ sampling is time consuming, we design a novel approximate algorithm based on Monte-Carlo estimators. The approximate algorithm is computationally efficient since it utilizes special properties of Gumbel distributions and well-designed balanced trees. We empirically compare our method with popular baselines of $A^*$ sampling and the Metropolis-Hastings algorithm. Experiments show that our algorithm works well.

2 PRELIMINARIES

In this section, we present some preliminary knowledge of the Gumbel process and $A^*$ sampling. Below, we first introduce basic definitions of probability distributions and Gumbel distributions.

Definition 1 (Probability distributions). In general, a distribution $P$ on a state space $\Omega$, provided its potential function $\phi_P: 2^{\Omega} \to \mathbb{R}$, is a sigma-finite measure such that $P(S) = \frac{1}{Z_P} \exp\{\phi_P(S)\}$, where $Z_P = \exp(\phi_P(\Omega))$ is the normalizing constant.

Definition 2 (Gumbel and Truncated Gumbel distributions (Malmberg, 2013)). Let $c$ denote the Euler constant. For convenience, define $e_{\alpha}(g) = \exp(-g + \alpha)$, $F_{\alpha}(g) = \exp(-\exp(-g + \alpha))$ and $f_{\alpha}(g) = e_{\alpha}(g) F_{\alpha}(g)$. Then (1) $G(\alpha)$: a Gumbel distribution with location $\alpha$ has PDF and CDF at state $g$: $f_{\alpha+c}(g)$, $F_{\alpha+c}(g)$. (2) $TG(\alpha, b)$: a Truncated Gumbel distribution with location $\alpha$ and truncation value $b$ has PDF and CDF at state $g < b$: $f_{\alpha+c}(g)/F_{\alpha+c}(b)$, $F_{\alpha+c}(g)/F_{\alpha+c}(b)$.

2.1 GUMBEL PROCESS

Now we are ready to introduce the Gumbel process.

Definition 3 (Gumbel process (Malmberg, 2013)). Let $P(S)$ be a sigma-finite measure on sample space $\Omega$, with $S \subseteq \Omega$ a measurable subset. Let $\phi_P(\cdot)$ denote the potential function of $P$, such that $\phi_P(S) = \log P(S) + \log Z_P$. Then $G_P = \{G_P(S) \mid S \subseteq \Omega\}$ is a Gumbel process induced from $P$ if:

- (marginal distributions) $G_P(S) \sim G(\phi_P(S))$.
- (independence of disjoint sets) $G_P(S) \perp G_P(S^c)$.
- (consistency constraints) for measurable $S_1, S_2$: $G_P(S_1 \cup S_2) = \max(G_P(S_1), G_P(S_2))$.

The Gumbel process is useful in sampling since $\arg\max_{x \in \Omega} G_P(x) \sim P$ (Malmberg, 2013). Therefore, we can draw a sample from $P$ by constructing a Gumbel process for distribution $P$, and then finding the maximum one with some optimization techniques.

In the sequel, we will use $P$ to denote the target distribution, and we call $G_P(S)$ the Gumbel value of subset $S$. According to (Malmberg, 2013), Defn. 3 is associated with a natural bottom-up construction: for any $x \in \Omega$, we first perturb it with an independent Gumbel noise, i.e., $g(x) \sim G(0)$. After that we simply set $G_P(x) = g(x) + \phi_P(dx)$ and compute $G_P(S) = \max_{x \in S} G_P(x)$ for all $S \subseteq \Omega$ according to the consistency constraints. However, when $\Omega$ is infinite, such a bottom-up construction is infeasible.
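On a finite state space this bottom-up construction is exactly the Gumbel-max trick. The following is a minimal sketch (our own illustration, not code from the paper) for drawing one sample from a discrete $P$ given its unnormalized log-masses; the Euler-constant shift in Definition 2 adds the same offset to every perturbed value, so it does not change the argmax and is omitted.

```python
import numpy as np

def gumbel_max_sample(phi, rng=None):
    """Bottom-up construction on a finite space: argmax_x (g(x) + phi_P(x)) ~ P,
    where phi holds the unnormalized log-masses phi_P(x)."""
    rng = rng or np.random.default_rng()
    g = rng.gumbel(loc=0.0, scale=1.0, size=len(phi))  # g(x) ~ G(0)
    return int(np.argmax(np.asarray(phi) + g))         # G_P(x) = g(x) + phi_P(x)
```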
Top-down construction: (Maddison et al., 2014) presents a top-down construction, which partitions the state space into regions and resolves the problem caused by infinite spaces by considering each region as a whole. Formally, the top-down procedure constructs a top-down tree, $tree(P)$, with each node corresponding to a subset of $\Omega$. $tree(P)$ is rooted in $\Omega$. Let $par(S)$ denote the parent of subset $S$. For each $S \in tree(P)$, its children form a disjoint partition of $S$, that is, $\cup_{S': par(S')=S}\, S' = S$. The top-down construction computes Gumbel values for subsets in the order from the top to the bottom of $tree(P)$. Formally, according to the consistency constraints and marginal distributions, we compute $G_P(S) \sim TG(\phi_P(S), L(S))$, where $L(S) := G_P(par(S))$. From an algorithmic point of view, the top-down construction maintains a collection of subsets of $\Omega$. Initially, the collection contains only $\Omega$. At each round, the algorithm selects an element $S$ from the collection and computes $G_P(S)$. After that, it divides $S$ into subsets $S_1, S_2$ and adds them into the collection.
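A truncated Gumbel can be drawn by inverse-CDF sampling, which is all this loop needs besides a partitioning rule. Below is a sketch under our own simplifying assumptions (subsets are finite lists of states; the selection rule is a FIFO placeholder; the Euler shift is again dropped); `trunc_gumbel` is reused in later sketches.

```python
import numpy as np

def trunc_gumbel(loc, bound, rng):
    """One draw from TG(loc, bound) by inverting F(g) = exp(-exp(-(g - loc)))."""
    u = rng.uniform() * np.exp(-np.exp(-(bound - loc)))  # uniform on (0, F(bound))
    return loc - np.log(-np.log(u))

def top_down(omega, phi_S, split, rounds, rng):
    """omega: list of states; phi_S(S) -> phi_P(S); split(S) -> (S1, S2)."""
    collection = [(omega, np.inf)]           # pairs (subset S, L(S) = parent's Gumbel)
    realized = []
    for _ in range(rounds):
        if not collection:
            break
        S, L = collection.pop(0)             # placeholder selection rule (FIFO)
        G = trunc_gumbel(phi_S(S), L, rng)   # G_P(S) ~ TG(phi_P(S), L(S))
        realized.append((S, G))
        if len(S) > 1:                       # split and enqueue the children
            S1, S2 = split(S)
            collection += [(S1, G), (S2, G)]
    return realized
```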
In this example, = (10:0;+10:0), the targetis a mixture distribution: P(x)/(1:0105)N(x;5:0;1:0) +1[jxj0:510405]10400andQ(x)/N(x; 5:0;1:0). The log likelihoods of PandQare shown in Fig. 1(a). We first empiricallyevaluateAsampling on this example. Fig. 1(b) shows the selected rounds on the optimal subsetsand Fig. 1(c) shows the maximum Gumbel value found by Asampling. Results are averaged over100runs. We can see that Asampling has a poor performance on this example. In this case, D(S)is large ifScovers points near x= 0. SoAsampling will allocate lots of computational resourcesinto such intervals, however, GP(S)being high for such Sis with probability only about 0:00001 .4ASAMPLING WITH PROBABILITY MATCHINGWe now present how to use PM to improve Asampling by addressing the above challenges. Wefirst present an intuitive example in Section 4.1. Then, we present a practical PM algorithm based onMonte-Carlo estimators of GP(S)in Section 4.2 and an efficient implementation with well-designedbalanced trees in Section 4.3.1In this paper, we use P(jS)to denote the distribution Pconditioned on state space S.3Under review as a conference paper at ICLR 2019(a) Log-likelihood (b) Selected rounds on intervalscontaining the optimal point(c) Maximum Gumbel valueFigure 1: The counter example with (a) the log likelihood of PandQ; (b) the selected number on theintervals containing the optimal point; (c) the maximum Gumbel value found by Asampling.4.1 P ROBABILITY MATCHING AND AN EXAMPLE WITH TWO SUBSETSAlgorithm 1 Probability matching with Monte-Carlo estimators.1:Input: the target distribution P, proposal dis-tributionQ, state space , time horizon T.2:Output:x: a sample from P.3:maxgumbel =1;x=None:4:Collect =fg;L() =1;8S;t= 1.5:whiletTdo6:t=t+ 1:7: SelectSaccording to Eq. (4).8: SplitSinto disjoint sets S1;S2.9:L(S1) =L(S2) =GQ(S).10: forS2S1;S2do11:x(S)Q(jS).12:G(S)TG (Q(S);L(S)).13: ~G(S) =G(S) +P(x(S))Q(x(S)).14: ifmaxgumbel< ~G(S)then15:maxgumbel =~Gm(S);x=x(S).16: end if17: end for18: Compute Monte-Carlo estimators for S1;S2and update balanced trees.19:Collect:insert (S1);Collect:insert (S2):20:end whileIn general, when making a choice among a setof options, PM selects an option randomly ac-cording to its probability of being the optimal.More specifically, in our problem, the optimaloption is the subset with the maximum Gumbelvalue. Formally, by definition, the maximumGumbel value within region Sis a random vari-ableGP(S) = maxx2STG(Q(dx);L(S)) +P(x)Q(x). Suppose the state space ispartitioned intofS1;;SKg. PM selects asubset according to the probability:p i= arg maxk2[K]GP(Sk)!: (1)Intuitively, PM has an excellent performancesince it allocates computational resources intothe options which are likely to have large out-comes. To provide more intuition into why PMsuitsAsampling, we analyze a simple but non-trivial case in which we divide into two sets.In order to get a clean theoretical result, weadditionally assume Asampling does not fur-ther split a subset after processing it. We fo-cus on the stochastic regret (Guha & Muna-gala, 2014) which is the expected number ofselections on the suboptimal subset. Formally,suppose is partitioned into S1;S2. Leti=arg maxi2f1;2gGP(Si)which is a random variable. Consider an algorithm Awhich selects a subsetSiA;tat time stept. The stochastic regret of Aat timeTis:RA(T) =EPTt=11[iA;t6=i]:Intuitively, the smaller RAis, the betterAis, sinceAwon’t waste many computational resources onthe suboptimal subset. 
Intuitively, PM has an excellent performance since it allocates computational resources to the options which are likely to have large outcomes. To provide more intuition into why PM suits $A^*$ sampling, we analyze a simple but nontrivial case in which we divide $\Omega$ into two sets. In order to get a clean theoretical result, we additionally assume $A^*$ sampling does not further split a subset after processing it. We focus on the stochastic regret (Guha & Munagala, 2014), which is the expected number of selections of the suboptimal subset. Formally, suppose $\Omega$ is partitioned into $S_1, S_2$. Let $i^* = \arg\max_{i \in \{1,2\}} G_P(S_i)$, which is a random variable. Consider an algorithm $\mathcal{A}$ which selects a subset $S_{i_{\mathcal{A},t}}$ at time step $t$. The stochastic regret of $\mathcal{A}$ at time $T$ is $R_{\mathcal{A}}(T) = \mathbb{E}\big[\sum_{t=1}^{T} 1[i_{\mathcal{A},t} \neq i^*]\big]$. Intuitively, the smaller $R_{\mathcal{A}}$ is, the better $\mathcal{A}$ is, since $\mathcal{A}$ won't waste many computational resources on the suboptimal subset. Moreover, we can prove that PM is optimal within a constant gap in terms of the stochastic regret:

Lemma 1. Let $opt(T)$ denote the algorithm which minimizes the stochastic regret. Then $R_{PM}(T) \leq 2\, R_{opt(T)}(T), \ \forall T$, where $R_{PM}(T)$ is the stochastic regret of PM.

The proof of Lemma 1 is adapted from the proof in (Guha & Munagala, 2014) for Bayesian bandits with two arms; we defer the details to Appendix A.

4.2 PROBABILITY MATCHING WITH A MONTE-CARLO ESTIMATOR

Unfortunately, drawing samples from the probability in Eq. (1) is intractable when $G_P(S)$ is complex. So in this section, we present an efficient PM algorithm based on a Monte-Carlo estimator of $G_P(S)$.

Consider a random variable $Y = \phi_P(x) - \phi_Q(x)$ with $x \sim Q(\cdot|S)$, whose expectation is a constant plus the KL-divergence between $Q$ and $P$ conditioned on the subset $S$. We can equally characterize $G_P(S)$ as

$\max_y TG\big(\log(Q(S)\, p(Y = y)),\, L(S)\big) + y$.   (2)

We present the proof of Eq. (2) in Appendix B. Eq. (2) suggests that we can get a Monte-Carlo estimator of $G_P(S)$ by estimating $Y$. More specifically, let $Y_1, \ldots, Y_m$ be a sequence of random variables and $w_1, \ldots, w_m$ be the corresponding weights such that $\sum_{i=1}^m w_i = 1$ and $w_i > 0$. Suppose the random variable $Y^m$ with $p(Y^m = Y_i) = w_i$ is an unbiased estimator of $Y$; then we can estimate $G_P(S)$ by:

$\hat{G}_P(S) = \max_{i \in [m]} TG\big(\log(w_i Q(S)), L(S)\big) + Y_i = \max_{i \in [m]} TG\big(\log(w_i Q(S)) + Y_i,\, L(S) + Y_i\big)$.   (3)

The second equality holds due to the linearity of the truncated Gumbel distribution (Maddison et al., 2014). According to Eq. (3), we can estimate $G_P(S)$ with existing Monte-Carlo estimators of $Y$, such as adaptive importance sampling (Gilks & Wild, 1992).

The corresponding PM with Monte-Carlo estimators is to draw samples from

$p\big(\hat{i} = \arg\max_{j \in [n]} \hat{G}_P(S_j)\big)$.   (4)

What remains is how to sample from the probability in Eq. (4) efficiently. The most popular execution of Eq. (4) is as in (Chapelle & Li, 2011): we draw $y_i \sim \hat{G}_P(S_i)$ and take $\hat{i} = \arg\max_i y_i$; it can then be shown that $\hat{i}$ is a sample from the probability in Eq. (4).

However, a direct implementation of the above ideas requires time complexity $O(m)$, since we need to draw samples from $m$ truncated Gumbel distributions, where $m = \sum_{i \in [n]} m_i$ is the total number of particles and $m_i$ is the number of particles in $S_i$. So our selection algorithm executed for $m$ rounds would require running time $O(m^2)$. This is relatively slow compared with the $O(m \log m)$ time complexity of $A^*$ sampling (Maddison et al., 2014).
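Reusing `trunc_gumbel` from the earlier sketch, Eq. (3) can be simulated directly. This is our own minimal illustration of the estimator, not the $O(\log m)$ routine of Section 4.3:

```python
import numpy as np

def estimate_gumbel(ys, ws, log_Q_S, L_S, rng):
    """One draw of hat{G}_P(S) = max_i TG(log(w_i Q(S)) + Y_i, L(S) + Y_i) (Eq. 3).
    ys: particles Y_i; ws: weights w_i; log_Q_S: log Q(S); L_S: truncation L(S)."""
    return max(trunc_gumbel(np.log(w) + log_Q_S + y, L_S + y, rng)
               for y, w in zip(ys, ws))
```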
4.3 AN EFFICIENT IMPLEMENTATION BY BALANCED TREES

We now present a novel algorithm that only requires $O(\log m)$ running time to sample from the distribution in Eq. (4). Our algorithm is based on the properties of the truncated Gumbel distribution, with the help of well-designed balanced trees.

We first decompose sampling from the distribution in Eq. (4) into two steps which can be done efficiently. The decomposition is an immediate consequence of Eq. (4):

$p\big(\hat{i} = \arg\max_{j \in [n]} \hat{G}_P(S_j)\big) = \int_x p\big(x = \max_{j \in [n]} \hat{G}_P(S_j)\big)\, p\big(\hat{i} = \arg\max_{j \in [n]} \hat{G}_P(S_j) \,\big|\, x = \max_{j \in [n]} \hat{G}_P(S_j)\big)\, dx$.

Thus, sampling from the distribution in Eq. (4) reduces to the following two sampling problems:

$x \sim \max_{i \in [n]} x_i, \ x_i \sim \hat{G}_P(S_i); \qquad \hat{i} \sim p\big(i = \arg\max_{j \in [n]} x_j \,\big|\, x = \max_j x_j\big)$

Recall that $\hat{G}_P(S)$ is the maximum of a set of truncated Gumbels. Thus, the above two sampling problems are essentially sampling the maximum, and the argument of the maximum, among a set of truncated Gumbels. So our target can be converted into the following problem:

Problem 1. Given a set of truncated Gumbel variables $\{v_i\}_{i=1}^m$ with parameters $(a_i, b_i)$, i.e., $v_i \sim TG(a_i, b_i)$, we define two sampling problems:

$v \sim \max_{i \in [m]} v_i$   (5)

$\hat{i} \sim p\big(i = \arg\max_{j \in [m]} v_j \,\big|\, v = \max_{j \in [m]} v_j\big)$   (6)

We use inverse transform sampling (Devroye, 1986) to sample $v$ in Eq. (5). In inverse transform sampling, for a random variable $X$ with CDF $U_X(x)$, we first draw a sample $s \sim \mathrm{uniform}(0, 1)$ and then compute $x$ such that $U_X(x) = s$; it can be shown that $x \sim X$. Thus, letting $U(g)$ denote the CDF of $v$, we only need an algorithm to compute $g$ such that $U(g) = s, s \in (0, 1)$. We now show how to compute such a $g$ efficiently with balanced trees.

For notational clarity, let $U_{a,b}(g)$ denote the CDF of a truncated Gumbel distribution $TG(a, b)$. According to Defn. 2, we have $U_{a,b}(g) = \exp(-\exp(-\min(g, b) + a)) / \exp(-\exp(-b + a))$. Recall $v = \max_i v_i$; then $v$ has CDF $U(g) = \prod_i U_{a_i,b_i}(g) = \prod_i \exp(-\exp(-\min(g, b_i) + a_i)) / \exp(-\exp(-b_i + a_i))$. Taking the logarithm on both sides, we get $\log U(g) = \sum_{i \in [m]} \big(-\exp(-\min(g, b_i) + a_i) + \exp(-b_i + a_i)\big)$.

Without loss of generality, sort the $b_i$'s in non-decreasing order, that is, $b_i \leq b_{i+1}$. Since $U(g)$ is a monotonically increasing function, for $g \in (b_i, b_{i+1}]$ we have:

$\log U(g) = \sum_{j > i} \big(-\exp(-g + a_j) + \exp(-b_j + a_j)\big) = -\exp(-g) \sum_{j > i} \exp(a_j) + \sum_{j > i} \exp(-b_j + a_j)$

Thus, given $U(g)$ and supposing $g \in (b_i, b_{i+1}]$, we can compute $g$ by:

$g = -\log\left(\frac{\sum_{j > i} \exp(a_j - b_j) - \log U(g)}{\sum_{j > i} \exp(a_j)}\right)$   (7)

Thus, when we get $s \sim \mathrm{uniform}(0, 1)$, we need to find $i$ such that $U(b_i) \leq s \leq U(b_{i+1})$, and then solve for $g$ according to Eq. (7) and inverse transform sampling. Both of the above steps can be done efficiently via a balanced tree.

Suppose we have a balanced tree such that each node in the tree corresponds to an index $i \in [m]$, and the key of the balanced tree is $b_i$; that is, for all $j$ in the right subtree of node $i$ we have $b_j \geq b_i$, and for all $j$ in the left subtree we have $b_j \leq b_i$. Suppose that from the balanced tree we can query in $O(1)$ time at each node $i$ the terms: (1) $\exp(-b_i) \sum_{j > i} \exp(a_j)$; (2) $\exp(a_i) \sum_{j > i} \exp(-b_j)$; and (3) $\sum_{j > i} \exp(a_j - b_j)$. We can query these terms efficiently in a balanced tree because they are all summations over an interval. And according to Defn. 2, we know that $\log U(b_i) = \sum_{j > i} \big(\exp(a_j - b_j) - \exp(a_j - b_i)\big)$, so we can check whether $\log s < \log U(b_i)$ in $O(1)$. Therefore, we can find the index $i$ such that $U(b_i) \leq s \leq U(b_{i+1})$ in running time $O(\log m)$. After that, we can compute $g$ with $U(g) = s$ via Eq. (7) in running time $O(1)$.

Now we turn to sampling $\hat{i}$ in Eq. (6). Without loss of generality, suppose $g \in (b_i, b_{i+1})$. Obviously, for $j \leq i$, $p(j = \arg\max_{j'} v_{j'} \mid g = \max_{j'} v_{j'}) = 0$. For $j > i$, by Defn. 2 and with simple calculations, we have:

$p(j = \arg\max_{j'} v_{j'} \mid g) \propto \frac{dU_{a_j,b_j}(g)}{dg} \prod_{j' \neq j,\, j' > i} U_{a_{j'},b_{j'}}(g) = \frac{\exp(-g + a_j)\exp(-\exp(-g + a_j))}{\exp(-\exp(-b_j + a_j))} \prod_{j' \neq j,\, j' > i} U_{a_{j'},b_{j'}}(g) = \exp(a_j)\exp(-g) \prod_{j' > i} U_{a_{j'},b_{j'}}(g) \propto \exp(a_j)$   (8)

According to Eq. (8), we can sample $\hat{i}$ in $O(\log m)$ running time with a balanced tree from which we can query $\sum_{j > i} \exp(a_j)$ efficiently. Putting the previous results together, we get the algorithm outlined in Alg. 1.
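As a correctness check on Eqs. (5)-(8), here is a plain reference version of the two samplers (our own sketch, with linear scans); the balanced tree replaces these scans and prefix sums with $O(\log m)$ queries.

```python
import numpy as np

def sample_max_and_argmax_tg(a, b, rng):
    """Problem 1: draw v = max_i v_i with v_i ~ TG(a_i, b_i), plus the argmax index."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    order = np.argsort(b)                   # sort so that b is non-decreasing
    a, b = a[order], b[order]
    log_s = np.log(rng.uniform())           # inverse transform: solve log U(g) = log s

    def log_U(g):                           # log CDF of the maximum (Euler shift dropped)
        return float(np.sum(-np.exp(-np.minimum(g, b) + a) + np.exp(-b + a)))

    k = 0                                   # locate the segment (b[k-1], b[k]] holding g
    while log_U(b[k]) < log_s:
        k += 1
    A = np.exp(a[k:]).sum()                 # sum_{j > i} exp(a_j)
    B = np.exp(a[k:] - b[k:]).sum()         # sum_{j > i} exp(a_j - b_j)
    g = -np.log((B - log_s) / A)            # Eq. (7)

    p = np.exp(a[k:] - np.max(a[k:]))       # Eq. (8): p(j | g) proportional to exp(a_j)
    j = k + rng.choice(len(p), p=p / p.sum())
    return g, int(order[j])
```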
5 EXPERIMENTS

In this section, we present our empirical results. We first check correctness on a simple toy experiment and on the counter example from Section 3. After that, we evaluate the efficiency of Alg. 1 on two Bayesian posterior inference tasks; results show that our algorithm outperforms vanilla $A^*$ sampling.

[Figure 2: Experiment results with (a) the toy experiment; (b) the counter example.]

[Figure 3: Experiment results on the clutter problem with 5 dimensions on the left, 15 dimensions in the middle, and 20 dimensions on the right.]

5.1 CORRECTNESS OF ALG. 1

We first verify the correctness of Alg. 1 on a greenhouse experiment. We consider sampling from a one-dimensional Gaussian mixture with potential function $\phi_P(x) = \log(N(x; -2.0, 1.0) + 2 N(x; 2.0, 1.0))$, which is a multi-mode target distribution. We set $Q = N(0, 2)$. We present our result in Fig. 2(a), which shows the ground truth and the empirical sample distributions of Alg. 1 and the baselines. From Fig. 2(a), Alg. 1 has a performance similar to $A^*$ sampling and outperforms $A^*$ sampling without a bound function.

5.2 THE COUNTER-EXAMPLE

We empirically compare the performance of the algorithms on the example from Section 3. The result is shown in Fig. 2(b). We can see that PM-$A^*$ outperforms the baselines significantly, since our algorithm wastes fewer resources on suboptimal subsets.

5.3 BAYESIAN POSTERIOR INFERENCE

In this section, we evaluate our algorithm on two Bayesian posterior tasks: the clutter problem and Bayesian logistic regression. More specifically, we focus on sampling tasks of the form $P(x) \propto p(x) \prod_{i=1}^n p(y_i | x)$, where $x$ is the variable we are going to sample, $p(\cdot)$ is the prior distribution over $x$, $\{y_i\}_{i=1}^n$ are observations, and $p(y_i | x)$ is the likelihood function. We simply set $Q(x) := p(x)$ in both $A^*$ sampling and Alg. 1. For vanilla $A^*$ sampling, we exploit the standard stochastic gradient descent (SGD) algorithm to calculate the bound function, $\max_{x \in S} \sum_i \log p(y_i | x)$. For the Monte-Carlo estimator in Alg. 1, we exploit importance sampling over the trajectory of the same SGD algorithm as in vanilla $A^*$ sampling.²

²We first tune the parameters of SGD for vanilla $A^*$ sampling, and then apply them to Alg. 1 without further tuning.

5.3.1 EVALUATION ON THE CLUTTER PROBLEM

We now evaluate Alg. 1 on the clutter problem proposed by (Minka, 2001). The clutter problem aims to infer the mean of an isotropic Gaussian when some of the data points are outliers. Consider a mixture distribution: $p(y|x) = (1 - w) N(y; x, I) + w N(y; 0, \sigma_1 I)$, $p(x) = N(x; 0, \sigma_2 I)$, where $w$ is the ratio of outliers, which is a known parameter, and $N(\cdot, \cdot)$ represents a Gaussian distribution. Our goal is to infer $x$ given data $\{y_i\}_{i=1}^n$. We run experiments on dimensions varying over 5, 15, and 20, with $n = 20$. We compare the Gumbel values found by these algorithms. We run 100 times and present the averaged results in Fig. 3. We can see that Alg. 1 outperforms $A^*$ sampling consistently.

5.3.2 EVALUATION ON BAYESIAN LOGISTIC REGRESSION

Our last experiment is on Bayesian logistic regression. We are given a dataset $\{x_i\}_{i=1}^n$ with labels $\{y_i\}_{i=1}^n$, where $y_i \in \{0, 1\}$. We follow the setting in (Gershman et al., 2012) and define the Bayesian logistic regression model: $p(\alpha) = \mathrm{Gamma}(\alpha; a, b)$, $p(w_k) = N(w_k; 0, \alpha^{-1})$, $p(y_i = 1 | x_i, w) = \mathrm{sigmoid}(w^T x_i)$. In this model, $\{w, \alpha\}$ are the hidden variables, where $w$ denotes the regression coefficients and $\alpha$ is a precision parameter. We set $a = b = 1$.
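For concreteness, the unnormalized log posterior that serves as $\phi_P$ in this task can be written as below; this is a sketch under the stated model, with names of our own choosing, and constants are dropped.

```python
import numpy as np

def log_joint(w, alpha, X, y, a=1.0, b=1.0):
    """phi_P(w, alpha) = log Gamma(alpha; a, b) + sum_k log N(w_k; 0, 1/alpha)
    + sum_i log p(y_i | x_i, w), up to additive constants; y is in {0, 1}."""
    lp = (a - 1.0) * np.log(alpha) - b * alpha                       # Gamma prior
    lp += 0.5 * len(w) * np.log(alpha) - 0.5 * alpha * np.dot(w, w)  # N(0, 1/alpha) prior
    z = X @ w
    lp += np.sum(y * z - np.logaddexp(0.0, z))                       # sigmoid likelihood
    return lp
```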
We run experiments on 13 binary classification datasets proposed by (Mika et al., 1999). The number of features of these datasets ranges from 2 to 60, and the number of points ranges from 24 to 7400 (see Appendix C for more statistics). We present our results in Fig. 4(a), where all results are averaged over 100 runs. Fig. 4(a) presents the summation over these datasets of the maximum likelihood found by each algorithm over time. From Fig. 4(a), we can see that PM-$A^*$ outperforms all baselines.

[Figure 4: Experiments on logistic regression with (a) averaged Gumbel values; (b) averaged log-likelihoods.]

Furthermore, we compare our algorithm with the standard Metropolis-Hastings algorithm (MH) and adaptive inference with exploration (AIE) (Rainforth et al., 2018), which also attempts to bridge the gap between sampling problems and decision-making techniques. For MH, the initial points are sampled from the prior. To make the comparison fair, we also evaluate Alg. 1 and AIE with the prior as the Monte-Carlo estimator instead of gradient-based methods. We compare the likelihoods in Fig. 4(b). We can see that Alg. 1 outperforms AIE even when they use the same Monte-Carlo estimator. This is because AIE attempts to use a UCB-like algorithm to make decisions, but UCB works only for those models in which concentration bounds hold, which is not always the case in sampling problems.

6 CONCLUSION AND FUTURE WORK

In this work, we focus on improving the subset selection procedure in $A^*$ sampling with PM. We proved that in the special case of two subsets, PM is optimal within a constant gap in terms of the stochastic regret. Moreover, we proposed a practical algorithm based on Monte-Carlo estimators and well-designed balanced trees. Empirical results show that our method saves a significant amount of computational resources on suboptimal regions compared with $A^*$ sampling.

There exist several challenges for future work. The first one concerns the analysis of PM. Though we proved PM is efficient in the case of two subsets, it is very challenging to prove its efficiency in general. The second one is that the performance of Alg. 1 relies on the accuracy of the Monte-Carlo estimator. However, it is time-consuming to compute an accurate Monte-Carlo estimator. So it is important to balance the accuracy of the Monte-Carlo estimator and the performance of PM. We hope our work is a starting point for addressing these problems.

ACKNOWLEDGMENTS
HyxV1doKhX
Strong contribution to the family of A* sampling algorithms, but lacks clarity.
5: Marginally below acceptance threshold
Summary: This paper introduces a probability matching approach for optimizing Gumbel processes, i.e., the extension of the Gumbel-Max trick to more general measure spaces. The basic idea is to use a more refined subset selection mechanism as compared to A* Sampling, but at the cost of being able to guarantee an exact sample. Instead, the authors study the algorithm's best Gumbel so far as a function of the time budget.

Quality: This is an interesting and natural idea, and the experiments support the authors' claim that it improves over A* Sampling in the regimes that they consider. The claims and proofs look correct to me.

Originality: This idea has not been explored in the literature, and is a sensible one to consider when having an exact sample is not necessary.

Clarity: The comparison to A* Sampling is a bit apples-to-oranges and the paper lacks some clarity overall. In particular, the authors should be much more clear that their termination condition necessarily differs from A* Sampling's. In A* Sampling the termination condition guarantees that the algorithm has found the true optimum, but in PM-A* the authors consider only a fixed budget of time, terminating whether or not the algorithm has found the true optimum. The difference is a necessary feature, because otherwise I'm fairly sure A* Sampling is optimal. (It has to push the upper bound of every subset below the true maximum, so if the termination condition is not currently satisfied, the subset with the current maximum upper bound must eventually be visited before termination. Since it is the only subset where that is necessarily true, taking it is the optimal choice.) More comments below.

Significance: I think this is interesting work, but I would recommend the authors consider some examples that clarify where these ideas might impact the problems that the ICLR audience would be interested in.

Overall: The paper's clarity problems get in the way of its contribution. I think it is an interesting direction, but the authors are not clear enough about the correctness of the algorithm (it is not guaranteed to return an exact sample) to recommend acceptance. If this and other issues are fixed, it would be a good paper.

Pros:
- Solid contribution to the literature on Gumbels.
- Beginnings of a regret analysis.
- Uniform performance gains in comparison to A* Sampling (in the regime considered).

Cons:
- Lacks clarity in comparisons to A* Sampling.
- Writing could clarify significance as well.

Specific questions / suggestions:
- One of the important omissions of the paper is the following question: what is the distribution of PM-A*'s output at time T for a fixed T? It won't be the target distribution in general, but it may be close. The applicability of this work would be greatly improved by that analysis.
- In general the figures and experiments need more details. In particular, (1) in figure 2a, why are there two sets of lines? Different values of T? (2) You need to report T throughout. (3) What did you take as Y? This is a critical part of the method, and you should report the estimator used.
- In the "A* sampling for two subsets without splitting" algorithm, shouldn't you be removing ALL previous Xi from S_i? I.e., S_i \ {Xi_1, Xi_2, ... Xi_{k-1}} or something like that.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title $A^*$ sampling with probability matching ### Paper Abstract Probabilistic methods often need to draw samples from a nontrivial distribution. $A^*$ sampling is a nice algorithm by building upon a top-down construction of a Gumbel process, where a large state space is divided into subsets and at each round $A^*$ sampling selects a subset to process. However, the selection rule depends on a bound function, which can be intractable. Moreover, we show that such a selection criterion can be inefficient. This paper aims to improve $A^*$ sampling by addressing these issues. To design a suitable selection rule, we apply \emph{Probability Matching}, a widely used method for decision making, to $A^*$ sampling. We provide insights into the relationship between $A^*$ sampling and probability matching by analyzing a nontrivial special case in which the state space is partitioned into two subsets. We show that in this case probability matching is optimal within a constant gap. Furthermore, as directly applying probability matching to $A^*$ sampling is time consuming, we design an approximate version based on Monte-Carlo estimators. We also present an efficient implementation by leveraging special properties of Gumbel distributions and well-designed balanced trees. Empirical results show that our method saves a significantly amount of computational resources on suboptimal regions compared with $A^*$ sampling. ### Paper Keywords ["probability", "subsets", "probabilistic methods", "samples", "nontrivial distribution", "sampling", "nice algorithm", "construction", "gumbel process", "large state space"] ### Paper Content ABSTRACTProbabilistic methods often need to draw samples from a nontrivial distribution.Asampling is a nice algorithm by building upon a top-down construction of aGumbel process, where a large state space is divided into subsets and at each roundAsampling selects a subset to process. However, the selection rule depends on abound function, which can be intractable. Moreover, we show that such a selectioncriterion can be inefficient. This paper aims to improve Asampling by addressingthese issues. To design a suitable selection rule, we apply Probability Matching ,a widely used method for decision making, to Asampling. We provide insightsinto the relationship between Asampling and probability matching by analyzinga nontrivial special case in which the state space is partitioned into two subsets.We show that in this case probability matching is optimal within a constant gap.Furthermore, as directly applying probability matching to Asampling is timeconsuming, we design an approximate version based on Monte-Carlo estimators.We also present an efficient implementation by leveraging special properties ofGumbel distributions and well-designed balanced trees. Empirical results show thatour method saves a significantly amount of computational resources on suboptimalregions compared with Asampling.1 I NTRODUCTIONProbabilistic methods provide an important family of tools in machine learning for modeling uncer-tainty of complex systems, performing probabilistic inference, revealing hidden factors (Ghahramani,2015), and making decisions Kocsis & Szepesvári (2006). 
These methods usually involve a funda-mental task of drawing samples from a nontrivial distribution.There exists a lot of work approaching the sampling problems, including rejection sampling (Gilks& Wild, 1992), MCMC (Propp & Wilson, 1996), etc. Recently, sampling algorithms based on theGumbel process have received increasing attentions (Malmberg, 2013; Hazan et al., 2013; Gane et al.,2014; Hazan & Jaakkola, 2012; Papandreou & Yuille, 2011; Tarlow et al., 2012; Kappes et al., 2015;Kim et al., 2016) since a Gumbel process can turn a sampling task to an optimization problem so thatwe can use optimization tools to finish the original sampling task.In this work, we focus on Asampling (Maddison et al., 2014) which is one of the most famousGumbel process based sampling algorithm. The major advantage of Asampling is that it canbe applied to large state spaces, e.g., a continuous sample space or a discrete space whose size isexponentially large. The reason is that Asampling divides the state space into disjoint subsets andtakes each subset as a whole, so that it can avoid initializing a large number of states, which is oftenencountered by other Gumbel process based algorithms (Papandreou & Yuille, 2011). Furthermore,Asampling adaptively selects subsets to process and the performance of Asampling is highlydependent on the selection rule.However, how to select subsets to process is very challenging. In each round, Asampling processesthe subset with maximum D(S)which is an upper bound of the maximum Gumbel value within asubsetS(see Section 2 for more details of D(S)). But in general, it is difficult to compute D(S)since it is an instance of non-convex optimization. Another challenge is that even if we are able tocomputeD(S)efficiently, selecting a subset with the maximum D(S)may not be a good choice.This is because our target is to process subsets with larger Gumbel values, but D(S)only provides anupper bound. So it is possible that the Gumbel value of Sis relatively small with high probabilitywhileD(S)is very large. In this case, Asampling will waste many computational resources on1Under review as a conference paper at ICLR 2019suboptimal regions. We’ll discuss more on how this inaccuracy of D(S)deteriorates the performanceofAsampling by analyzing a counter example in Section 3.To address the above challenges, we improve the subset selecting procedure of Asampling withprobability matching (PM) which has been proven efficient in many settings of making decisions,including Bayesian bandits (Chapelle & Li, 2011), MDP (Osband & Van Roy, 2016), economicdecisions (Vulkan, 2000), etc.Contributions: Intuitively, PM randomly selects an option according to its probability of being theoptimal, so that it won’t select a suboptimal option for too many rounds. To provide more insightsinto the efficiency of applying PM to Asampling, we first analyze a simple but nontrivial specialcase in which the state space is partitioned into two subsets. As we’ll present in Section 4.1, inthis case, PM is optimal within a constant gap in terms of the stochastic regret (Guha & Munagala,2014) which measures the number of selected rounds on suboptimal options. Furthermore, as directlyapplying PM to Asampling is time consuming, we design a novel approximate algorithm basedon Monte-Carlo estimators. The approximate algorithm is computationally efficient since it utilizesspecial properties of Gumbel distributions and well-designed balanced trees. 
We empirically compare our method with the popular baselines of $A^*$ sampling and the Metropolis-Hastings algorithm. Experiments show that our algorithm works well.

2 PRELIMINARIES

In this section, we present some preliminary knowledge of the Gumbel process and $A^*$ sampling. Below, we first introduce basic definitions of probability distributions and Gumbel distributions.

Definition 1 (Probability distributions). In general, a distribution $P$ on a state space $\Omega$, provided its potential function $\phi_P: 2^\Omega \to \mathbb{R}$, is a sigma-finite measure such that $P(S) = \frac{1}{Z_P}\exp\{\phi_P(S)\}$, where $Z_P = \exp(\phi_P(\Omega))$ is the normalizing constant.

Definition 2 (Gumbel and Truncated Gumbel distributions (Malmberg, 2013)). Let $c$ denote the Euler constant. For convenience, define $e_\mu(g) = \exp(-g + \mu)$, $F_\mu(g) = \exp(-\exp(-g + \mu))$ and $f_\mu(g) = e_\mu(g)F_\mu(g)$. Then (1) $G(\mu)$: a Gumbel distribution with location $\mu$ has PDF and CDF at state $g$: $f_{\mu+c}(g)$, $F_{\mu+c}(g)$; (2) $TG(\mu, b)$: a Truncated Gumbel distribution with location $\mu$ and truncation value $b$ has PDF and CDF at state $g < b$: $f_{\mu+c}(g)/F_{\mu+c}(b)$, $F_{\mu+c}(g)/F_{\mu+c}(b)$.

2.1 GUMBEL PROCESS

Now we are ready to introduce the Gumbel process.

Definition 3 (Gumbel process (Malmberg, 2013)). Let $P(S)$ be a sigma-finite measure on sample space $\Omega$ and $S \subseteq \Omega$ a measurable subset. Let $\phi_P(\cdot)$ denote the potential function of $P$ such that $\phi_P(S) = \log P(S) + \log Z_P$. Then $G_P = \{G_P(S) \mid S \subseteq \Omega\}$ is a Gumbel process induced from $P$ if:
• (marginal distributions) $G_P(S) \sim G(\phi_P(S))$.
• (independence of disjoint sets) $G_P(S) \perp G_P(S^c)$.
• (consistency constraints) for measurable $S_1, S_2 \subseteq \Omega$, $G_P(S_1 \cup S_2) = \max(G_P(S_1), G_P(S_2))$.

The Gumbel process is useful in sampling since $\arg\max_{x\in\Omega} G_P(x) \sim P$ (Malmberg, 2013). Therefore, we can draw a sample from $P$ by constructing a Gumbel process for distribution $P$ and then finding the maximum with some optimization techniques.

In the sequel, we will use $P$ to denote the target distribution, and we call $G_P(S)$ the Gumbel value of subset $S$. According to (Malmberg, 2013), Defn. 3 is associated with a natural bottom-up construction: for any $x \in \Omega$, we first perturb it with independent Gumbel noise, i.e., $g(x) \sim G(0)$. After that we simply set $G_P(x) = g(x) + \phi_P(dx)$ and compute $G_P(S) = \max_{x\in S} G_P(x)$ for all $S \subseteq \Omega$ according to the consistency constraints. However, when $\Omega$ is infinite, such a bottom-up construction is infeasible.

Top-down construction: (Maddison et al., 2014) presents a top-down construction, which partitions the state space $\Omega$ into regions and resolves the problem caused by infinite spaces by considering each region as a whole. Formally, the top-down procedure constructs a top-down tree, tree($P$), with each node corresponding to a subset of $\Omega$; tree($P$) is rooted in $\Omega$. Let par($S$) denote the parent of subset $S$. For each $S \in$ tree($P$), its children form a disjoint partition of $S$, that is, $\cup_{S':\,\mathrm{par}(S')=S}\, S' = S$. The top-down construction computes Gumbel values for subsets in order from the top to the bottom of tree($P$). Formally, according to the consistency constraints and marginal distributions, we compute $G_P(S) \sim TG(\phi_P(S), L(S))$ where $L(S) := G_P(\mathrm{par}(S))$. From the algorithmic point of view, the top-down construction maintains a collection of subsets of $\Omega$. Initially, the collection contains only $\Omega$. At each round, the algorithm selects an element $S$ from the collection and computes $G_P(S)$. After that it divides $S$ into subsets $S_1, S_2$ and adds them into the collection.
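To make the construction concrete, here is a minimal Python sketch of the top-down procedure for a measure with a tractable potential. The helper names are illustrative (not from the paper), the Euler-constant shift of Defn. 2 is dropped, and the FIFO selection rule is a placeholder that $A^*$ sampling and PM later replace:

```python
import math, random

def trunc_gumbel(mu, b):
    # Inverse-CDF draw from TG(mu, b); with b = +inf this is a plain
    # Gumbel(mu).  (Defn. 2's Euler-constant shift is omitted.)
    u = random.random()
    return mu - math.log(math.exp(-(b - mu)) - math.log(u))

def top_down(omega, phi, split, rounds):
    # phi(S) is the (tractable) potential of subset S; split(S) returns a
    # disjoint partition (S1, S2) of S.  Returns (subset, Gumbel value)
    # pairs in the order they were processed.
    collection = [(omega, math.inf)]       # (subset, L(S) = parent's Gumbel)
    done = []
    for _ in range(rounds):
        s, level = collection.pop(0)       # FIFO here; A*/PM refine this rule
        g = trunc_gumbel(phi(s), level)    # G(S) ~ TG(phi(S), L(S))
        done.append((s, g))
        for child in split(s):
            collection.append((child, g))  # children truncated at G(parent)
    return done

# Example: a uniform measure on (0, 1) with phi(S) = log length, bisection split.
phi_len = lambda s: math.log(s[1] - s[0])
bisect_ = lambda s: ((s[0], (s[0] + s[1]) / 2), ((s[0] + s[1]) / 2, s[1]))
print(top_down((0.0, 1.0), phi_len, bisect_, rounds=3))
```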
2.2 $A^*$ SAMPLING

Obviously, if $\phi_P(S)$ is hard to compute, the top-down construction for $P$ is computationally intractable. (Maddison et al., 2014) solves this problem by utilizing the linearity of the Gumbel distribution. More specifically, given a distribution $Q$, if $G_Q(x)$ induces a Gumbel process of $Q$, then $G_P(x) := G_Q(x) + \phi_P(x) - \phi_Q(x)$ induces a Gumbel process of distribution $P$. Based on this insight, (Maddison et al., 2014) proposes $A^*$ sampling, which relies on a tractable proposal distribution $Q$. Furthermore, since $G_Q(S) \perp \arg\max_{x\in S} G_Q(x)$ (Maddison et al., 2014), $A^*$ sampling executes the top-down construction for $Q$, and for each subset computes $\tilde{G}_P(S) = G_Q(S) + \phi_P(x_Q(S)) - \phi_Q(x_Q(S))$, where $x_Q(S) \sim Q(\cdot|S)$.[1] Suppose at some time point $A^*$ sampling has processed $n$ nodes in tree($Q$), denoted by $\mathrm{Done}_Q(n)$. It can be shown that there are $n+1$ nodes in the to-be-processed collection, denoted by $\mathrm{Collect}_Q(n)$. As introduced above, for each $A \in \mathrm{Done}_Q(n)$ we have a pair $(x_Q(A), \tilde{G}_P(A))$, and each $S \in \mathrm{Collect}_Q(n)$ is associated with a truncated Gumbel distribution $TG(\phi_Q(S), L(S))$.

The subset selection and termination in $A^*$ sampling rely on a bound function $B: 2^\Omega \to \mathbb{R}$ such that $B(S) \geq \sup_{x\in S} \phi_P(x) - \phi_Q(x)$. Let $D(S) := L(S) + B(S)$. If for some $n$, $\max_{S\in\mathrm{Done}(n)} \tilde{G}_P(S) \geq \max_{S\in\mathrm{Collect}(n)} D(S)$, $A^*$ terminates and outputs the element with maximum value among the processed nodes. At round $n$, $A^*$ sampling selects the node $S$ with maximum value of $D(S)$ from $\mathrm{Collect}(n)$.

[1] In this paper, we use $P(\cdot|S)$ to denote the distribution $P$ conditioned on state space $S$.
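A compressed sketch of this loop follows; it is not the paper's reference implementation, so the function names, the interval representation of subsets, and the user-supplied bound are all assumptions of the sketch:

```python
import heapq, math, random

def trunc_gumbel(mu, b):
    # Inverse-CDF draw from TG(mu, b); b = +inf gives a plain Gumbel(mu).
    u = random.random()
    return mu - math.log(math.exp(-(b - mu)) - math.log(u))

def a_star_sampling(omega, phi_p, phi_q, q_cond, q_pot, bound, split,
                    max_rounds=100_000):
    # phi_p, phi_q: log-densities; q_cond(S) draws x ~ Q(.|S);
    # q_pot(S) = phi_Q(S); bound(S) >= sup_{x in S} phi_p(x) - phi_q(x);
    # split(S, x) partitions S around the sampled point x.
    best_val, best_x = -math.inf, None
    heap = [(-math.inf, math.inf, omega)]       # entries: (-D(S), L(S), S)
    for _ in range(max_rounds):
        if not heap:
            break
        neg_d, level, s = heapq.heappop(heap)   # subset with maximum D(S)
        if best_val >= -neg_d:                  # termination rule of A* sampling
            break
        g_q = trunc_gumbel(q_pot(s), level)     # G_Q(S) ~ TG(phi_Q(S), L(S))
        x = q_cond(s)                           # x_Q(S) ~ Q(.|S)
        g_p = g_q + phi_p(x) - phi_q(x)         # tilde-G_P(S)
        if g_p > best_val:
            best_val, best_x = g_p, x
        for child in split(s, x):
            # children have L(child) = G_Q(S), hence D(child) = G_Q(S) + B(child)
            heapq.heappush(heap, (-(g_q + bound(child)), g_q, child))
    return best_x
```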
3 CHALLENGES OF $A^*$ SAMPLING

There are two challenges in $A^*$ sampling. The first concerns the function $D$ on which $A^*$ sampling relies. Computing this function for every $S$ can be intractable since it can be a non-convex optimization problem. If we simply remove the (possibly intractable) bound function or use a very loose bound, $A^*$ sampling degenerates to an algorithm which is not efficient (Maddison et al., 2014). We name the degenerated algorithm $A^*$ sampling without a bound (see Appendix D for details).

The second challenge is that selecting the subset with maximum $D(S)$ is not always a good choice. This is because $D(S)$ is just an upper bound of $G_P(S)$, and it is possible that $G_P(S)$ is relatively small with high probability while $D(S)$ is very large. We now present a simple counter example for $A^*$ sampling to intuitively explain the reason. In this example, $\Omega = (-10.0, +10.0)$, the target is a mixture distribution $P(x) \propto (1.0 - 10^{-5})\,N(x; 5.0, 1.0) + 1[|x| \leq 0.5\times 10^{-405}]\times 10^{400}$, and $Q(x) \propto N(x; 5.0, 1.0)$. The log likelihoods of $P$ and $Q$ are shown in Fig. 1(a). We first empirically evaluate $A^*$ sampling on this example. Fig. 1(b) shows the number of rounds spent on the optimal subsets and Fig. 1(c) shows the maximum Gumbel value found by $A^*$ sampling. Results are averaged over 100 runs. We can see that $A^*$ sampling has a poor performance on this example. In this case, $D(S)$ is large if $S$ covers points near $x = 0$, so $A^*$ sampling will allocate lots of computational resources to such intervals; however, $G_P(S)$ is high for such $S$ with probability only about 0.00001.

Figure 1: The counter example with (a) the log likelihood of $P$ and $Q$; (b) the number of rounds spent on the intervals containing the optimal point; (c) the maximum Gumbel value found by $A^*$ sampling.

4 $A^*$ SAMPLING WITH PROBABILITY MATCHING

We now present how to use PM to improve $A^*$ sampling by addressing the above challenges. We first present an intuitive example in Section 4.1. Then, we present a practical PM algorithm based on Monte-Carlo estimators of $G_P(S)$ in Section 4.2 and an efficient implementation with well-designed balanced trees in Section 4.3.

4.1 PROBABILITY MATCHING AND AN EXAMPLE WITH TWO SUBSETS

In general, when making a choice among a set of options, PM selects an option randomly according to its probability of being the optimal one. More specifically, in our problem, the optimal option is the subset with the maximum Gumbel value. Formally, by definition, the maximum Gumbel value within region $S$ is a random variable $G_P(S) = \max_{x\in S} TG(\phi_Q(dx), L(S)) + \phi_P(x) - \phi_Q(x)$. Suppose the state space $\Omega$ is partitioned into $\{S_1, \ldots, S_K\}$. PM selects a subset according to the probability:

$p\big(i = \arg\max_{k\in[K]} G_P(S_k)\big)$   (1)

Algorithm 1 Probability matching with Monte-Carlo estimators.
1: Input: the target distribution $P$, proposal distribution $Q$, state space $\Omega$, time horizon $T$.
2: Output: $x^*$: a sample from $P$.
3: $maxgumbel = -\infty$; $x^* = \mathrm{None}$.
4: $\mathrm{Collect} = \{\Omega\}$; $L(\Omega) = \infty$; $t = 1$.
5: while $t \leq T$ do
6:   $t = t + 1$.
7:   Select $S$ according to Eq. (4).
8:   Split $S$ into disjoint sets $S_1, S_2$.
9:   $L(S_1) = L(S_2) = G_Q(S)$.
10:  for $S' \in \{S_1, S_2\}$ do
11:    $x(S') \sim Q(\cdot|S')$.
12:    $G(S') \sim TG(\phi_Q(S'), L(S'))$.
13:    $\tilde{G}(S') = G(S') + \phi_P(x(S')) - \phi_Q(x(S'))$.
14:    if $maxgumbel < \tilde{G}(S')$ then
15:      $maxgumbel = \tilde{G}(S')$; $x^* = x(S')$.
16:    end if
17:  end for
18:  Compute Monte-Carlo estimators for $S_1, S_2$ and update the balanced trees.
19:  $\mathrm{Collect.insert}(S_1)$; $\mathrm{Collect.insert}(S_2)$.
20: end while

Intuitively, PM has an excellent performance since it allocates computational resources to the options which are likely to have large outcomes. To provide more intuition into why PM suits $A^*$ sampling, we analyze a simple but nontrivial case in which we divide $\Omega$ into two sets. In order to get a clean theoretical result, we additionally assume $A^*$ sampling does not further split a subset after processing it. We focus on the stochastic regret (Guha & Munagala, 2014), which is the expected number of selections of the suboptimal subset. Formally, suppose $\Omega$ is partitioned into $S_1, S_2$. Let $i^* = \arg\max_{i\in\{1,2\}} G_P(S_i)$, which is a random variable. Consider an algorithm $\mathcal{A}$ which selects a subset $S_{i_{\mathcal{A},t}}$ at time step $t$. The stochastic regret of $\mathcal{A}$ at time $T$ is $R_{\mathcal{A}}(T) = \mathbb{E}\big[\sum_{t=1}^T 1[i_{\mathcal{A},t} \neq i^*]\big]$. Intuitively, the smaller $R_{\mathcal{A}}$ is, the better $\mathcal{A}$ is, since $\mathcal{A}$ won't waste many computational resources on the suboptimal subset. Moreover, we can prove that PM is optimal within a constant gap in terms of the stochastic regret:

Lemma 1. Let $\mathrm{opt}(T)$ denote the algorithm which minimizes the stochastic regret. Then $R_{PM}(T) \leq 2R_{\mathrm{opt}(T)}(T)$ for all $T$, where $R_{PM}(T)$ is the stochastic regret of PM.

The proof of Lemma 1 is adapted from the proof in (Guha & Munagala, 2014) for Bayesian bandits with two arms; we defer the details to Appendix A.
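A small sketch of the PM selection rule in Eq. (1), executed in the standard Thompson-sampling style (draw one joint sample of the options' values and take the argmax); the sampler interface is an assumption of the sketch:

```python
import math, random

def gumbel(mu):
    # Plain Gumbel(mu) draw (Euler-constant shift omitted).
    return mu - math.log(-math.log(random.random()))

def pm_select(samplers):
    # samplers[k]() draws one value from the current belief over G_P(S_k);
    # picking the argmax of one joint draw selects subset k with
    # probability p(k = argmax_j G_P(S_j)), i.e., it realises Eq. (1).
    draws = [s() for s in samplers]
    return max(range(len(draws)), key=draws.__getitem__)

# Two-subset example with beliefs G(0.0) and G(0.5): PM favours S2 but
# still explores S1 with exactly the probability that S1 is optimal.
picks = [pm_select([lambda: gumbel(0.0), lambda: gumbel(0.5)])
         for _ in range(10_000)]
print(sum(p == 1 for p in picks) / len(picks))  # ~ e^0.5 / (1 + e^0.5) = 0.62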
4.2 PROBABILITY MATCHING WITH A MONTE-CARLO ESTIMATOR

Unfortunately, drawing samples from the probability in Eq. (1) is intractable when $G_P(S)$ is complex. So in this section, we present an efficient PM algorithm based on a Monte-Carlo estimator of $G_P(S)$.

Consider a random variable $Y = \phi_P(x) - \phi_Q(x)$, $x \sim Q(\cdot|S)$, whose expectation is a constant plus the KL-divergence between $Q$ and $P$ conditioned on the subset $S$. We can equally characterize $G_P(S)$ as

$G_P(S) = \max_y TG\big(\log(Q(S)\,p(Y=y)), L(S)\big) + y$   (2)

We present the proof of Eq. (2) in Appendix B. Eq. (2) suggests that we can get a Monte-Carlo estimator of $G_P(S)$ by estimating $Y$. More specifically, let $Y_1, \ldots, Y_m$ be a sequence of random variables and $w_1, \ldots, w_m$ the corresponding weights such that $\sum_{i=1}^m w_i = 1$, $w_i > 0$. Suppose the random variable $\bar{Y}_m$ with $p(\bar{Y}_m = Y_i) = w_i$ is an unbiased estimator of $Y$; then we can estimate $G_P(S)$ by:

$\hat{G}_P(S) = \max_{i\in[m]} TG\big(\log(w_i Q(S)), L(S)\big) + Y_i = \max_{i\in[m]} TG\big(\log(w_i Q(S)) + Y_i,\ L(S) + Y_i\big)$   (3)

The second equality holds due to the linearity of the truncated Gumbel distribution (Maddison et al., 2014). According to Eq. (3), we can estimate $G_P(S)$ with existing Monte-Carlo estimators of $Y$, such as adaptive importance sampling (Gilks & Wild, 1992).

The corresponding PM with Monte-Carlo estimators is to draw samples from

$p\big(\hat{i} = \arg\max_{j\in[n]} \hat{G}_P(S_j)\big)$   (4)

What remains is how to sample from the probability in Eq. (4) efficiently. The most popular execution of Eq. (4) is as in (Chapelle & Li, 2011): we draw $y_i \sim \hat{G}_P(S_i)$ and take $\hat{i} = \arg\max_i y_i$; then it can be shown that $\hat{i}$ is a sample from the probability in Eq. (4).

However, a direct implementation of the above ideas requires time complexity $O(m)$ per selection, since we need to draw samples from $m$ truncated Gumbel distributions, where $m = \sum_{i\in[n]} m_i$ is the number of particles in total and $m_i$ is the number of particles in $S_i$. So our selection algorithm executing $m$ rounds would require running time $O(m^2)$. This is relatively slow compared with the $O(m\log m)$ time complexity of $A^*$ sampling (Maddison et al., 2014).
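A direct transcription of the estimator in Eq. (3), assuming self-normalised weights; parameter names are illustrative:

```python
import math, random

def trunc_gumbel(mu, b):
    # Inverse-CDF draw from TG(mu, b) (Euler-constant shift omitted).
    u = random.random()
    return mu - math.log(math.exp(-(b - mu)) - math.log(u))

def estimate_gumbel(log_mass_q, level, ys, ws):
    # One draw of hat-G_P(S) per Eq. (3):
    #   ys[i] = Y_i = phi_P(x_i) - phi_Q(x_i) with x_i ~ Q(.|S)
    #   ws[i] = normalised weights (sum to 1), e.g. importance weights
    #   log_mass_q = log Q(S);  level = L(S), the truncation bound.
    return max(trunc_gumbel(math.log(w) + log_mass_q + y, level + y)
               for w, y in zip(ws, ys))
```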
4.3 AN EFFICIENT IMPLEMENTATION BY BALANCED TREES

We now present a novel algorithm that only requires $O(\log m)$ running time to sample from the distribution in Eq. (4). Our algorithm is based on the properties of the truncated Gumbel distribution and the help of well-designed balanced trees.

We first decompose sampling from the distribution in Eq. (4) into two steps which can be done efficiently. The decomposition is an immediate consequence of Eq. (4):

$p\big(\hat{i} = \arg\max_{j\in[n]} \hat{G}_P(S_j)\big) = \int_x p\big(x = \max_{j\in[n]} \hat{G}_P(S_j)\big)\, p\big(\hat{i} = \arg\max_{j\in[n]} \hat{G}_P(S_j) \,\big|\, x = \max_{j\in[n]} \hat{G}_P(S_j)\big)\, dx$

Thus, sampling from the distribution in Eq. (4) is equivalent to the following two sampling problems:

$x^* \sim \max_{i\in[n]} x_i,\ x_i \sim \hat{G}_P(S_i)$; and $\hat{i} \sim p\big(i = \arg\max_{j\in[n]} x_j \,\big|\, x^* = \max_j x_j\big)$

Recall that $\hat{G}_P(S)$ is the maximum of a set of truncated Gumbels. Thus, the above two sampling problems are essentially sampling the maximum, and the argument of the maximum, among a set of truncated Gumbels. So our target can be converted into the following problem:

Problem 1. Given a set of truncated Gumbel variables $\{v_i\}_{i=1}^m$ with parameters $(a_i, b_i)$, i.e., $v_i \sim TG(a_i, b_i)$, we define two sampling problems:

$v^* \sim \max_{i\in[m]} v_i$   (5)

$\hat{i} \sim p\big(i = \arg\max_{j\in[m]} v_j \,\big|\, v^* = \max_{j\in[m]} v_j\big)$   (6)

We use inverse transform sampling (Devroye, 1986) to sample $v^*$ in Eq. (5). In inverse transform sampling, for a random variable $X$ with CDF $U_X(x)$, we first draw $s \sim \mathrm{uniform}(0,1)$ and then compute $x$ such that $U_X(x) = s$; it can be shown that $x \sim X$. Thus, letting $U^*(g)$ denote the CDF of $v^*$, we only need an algorithm to compute $g$ such that $U^*(g) = s$, $s \in (0,1)$. We now show how to compute such $g$ efficiently with balanced trees.

For notational clarity, let $U_{a,b}(g)$ denote the CDF of a truncated Gumbel distribution $TG(a,b)$. According to Defn. 2, we have $U_{a,b}(g) = \frac{\exp(-\exp(-\min(g,b)+a))}{\exp(-\exp(-b+a))}$. Recall $v^* = \max_i v_i$; then $v^*$ has CDF $U^*(g) = \prod_i U_{a_i,b_i}(g) = \prod_i \frac{\exp(-\exp(-\min(g,b_i)+a_i))}{\exp(-\exp(-b_i+a_i))}$. Taking the logarithm on both sides, we get $\log U^*(g) = \sum_{i\in[m]} \big(-\exp(-\min(g,b_i)+a_i) + \exp(-b_i+a_i)\big)$.

Without loss of generality, we sort the $b_i$'s in non-decreasing order, that is, $b_i \leq b_{i+1}$. Since $U^*(g)$ is a monotonically increasing function, for $g \in (b_i, b_{i+1}]$ we have:

$\log U^*(g) = -\sum_{j>i}\exp(-g+a_j) + \sum_{j>i}\exp(-b_j+a_j) = -\exp(-g)\sum_{j>i}\exp(a_j) + \sum_{j>i}\exp(a_j-b_j)$

Thus, given $U^*(g)$, and supposing $g \in (b_i, b_{i+1}]$, we can compute $g$ by:

$g = -\log\left(\frac{\sum_{j>i}\exp(a_j-b_j) - \log U^*(g)}{\sum_{j>i}\exp(a_j)}\right)$   (7)

Thus, when we get $s \sim \mathrm{uniform}(0,1)$, we need to find $i$ such that $U^*(b_i) \leq s \leq U^*(b_{i+1})$, and then solve for $g$ according to Eq. (7) and inverse transform sampling. Both of the above steps can be done efficiently via a balanced tree.

Suppose we have a balanced tree such that each node in the tree corresponds to an index $i \in [m]$, and the key of the balanced tree is $b_i$; that is, for all $j$ in the right subtree of node $i$ we have $b_j \geq b_i$, and for all $j$ in the left subtree we have $b_j \leq b_i$. Suppose that from the balanced tree we can query, in $O(1)$ time at each node $i$, the terms (1) $\exp(-b_i)\sum_{j>i}\exp(a_j)$, (2) $\exp(a_i)\sum_{j>i}\exp(-b_j)$, and (3) $\sum_{j>i}\exp(a_j-b_j)$. We can query these terms efficiently in a balanced tree because they all are summations over an interval. And according to Defn. 2, we know that $\log U^*(b_i) = \sum_{j>i}\big(\exp(a_j-b_j) - \exp(a_j-b_i)\big)$, so we can check whether $\log s < \log U^*(b_i)$ in $O(1)$. Therefore, we can find the index $i$ such that $U^*(b_i) \leq s \leq U^*(b_{i+1})$ in running time $O(\log m)$. After that, we can solve $U^*(g) = s$ via Eq. (7) in running time $O(1)$.

Now we turn to sampling $\hat{i}$ in Eq. (6). Without loss of generality, suppose $g \in (b_i, b_{i+1})$. Obviously, for $j \leq i$, $p(j = \arg\max_{j'} v_{j'} \mid g = \max_{j'} v_{j'}) = 0$. For $j > i$, by Defn. 2 and with simple calculations, we have:

$p\big(j = \arg\max_{j'} v_{j'} \,\big|\, g\big) \propto \frac{dU_{a_j,b_j}(g)}{dg}\prod_{j'\neq j,\ j'>i} U_{a_{j'},b_{j'}}(g) = \frac{\exp(-g+a_j)\exp(-\exp(-g+a_j))}{\exp(-\exp(-b_j+a_j))}\prod_{j'\neq j,\ j'>i} U_{a_{j'},b_{j'}}(g) = \exp(a_j)\exp(-g)\prod_{j'>i} U_{a_{j'},b_{j'}}(g) \propto \exp(a_j)$   (8)

According to Eq. (8), we can sample $\hat{i}$ in $O(\log m)$ running time with a balanced tree from which we can query $\sum_{j>i}\exp(a_j)$ efficiently. Putting the previous results together, we get the algorithm outlined in Alg. 1.
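A naive $O(m)$ reference version of Eqs. (5)-(8) is easy to write and check; the balanced tree of the paper replaces the suffix sums below to reach $O(\log m)$, and a real implementation would also keep the arithmetic in log space. All names here are illustrative:

```python
import math, random

def sample_max_trunc_gumbels(a, b):
    # Problem 1: draw v* = max_i v_i, v_i ~ TG(a_i, b_i), by inverse
    # transform sampling (Eq. 7), then the argmax index, which is
    # proportional to exp(a_j) over the feasible j (Eq. 8).
    order = sorted(range(len(a)), key=lambda i: b[i])
    a = [a[i] for i in order]; b = [b[i] for i in order]
    m = len(a)
    # Suffix sums A_k = sum_{j>=k} e^{a_j} and B_k = sum_{j>=k} e^{a_j-b_j}.
    A = [0.0] * (m + 1); B = [0.0] * (m + 1)
    for k in range(m - 1, -1, -1):
        A[k] = A[k + 1] + math.exp(a[k])
        B[k] = B[k + 1] + math.exp(a[k] - b[k])
    log_s = math.log(random.random())            # we solve U*(g) = s
    # log U*(b_k) = -exp(-b_k) A_k + B_k; find the interval (b_{k-1}, b_k]
    # that contains g (the tree does this search in O(log m)).
    k = 0
    while k < m - 1 and -math.exp(-b[k]) * A[k] + B[k] < log_s:
        k += 1
    g = -math.log((B[k] - log_s) / A[k])         # Eq. (7)
    # Eq. (8): given g, the argmax is some j >= k with prob. prop. to e^{a_j}.
    r = random.random() * A[k]
    j = k
    while r > math.exp(a[j]) and j < m - 1:
        r -= math.exp(a[j]); j += 1
    return g, order[j]
```

As a sanity check, setting every $b_i = +\infty$ reduces this to the classic Gumbel-max trick: $g$ is then a Gumbel draw located at $\log\sum_j e^{a_j}$ and the index is drawn with probability proportional to $e^{a_j}$.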
5 EXPERIMENTS

In this section, we present our empirical results. We first check the correctness of our method on a simple toy experiment and on the counter example from Section 3. After that, we evaluate the efficiency of Alg. 1 on two Bayesian posterior inference tasks; results show that our algorithm outperforms vanilla $A^*$ sampling significantly.

Figure 2: Experiment results with (a) the toy experiment; (b) the counter example.

Figure 3: Experiment results on the clutter problem with 5 dimensions on the left, 15 dimensions in the middle and 20 dimensions on the right.

5.1 CORRECTNESS OF ALG. 1

We first verify the correctness of Alg. 1 on a greenhouse experiment. We consider sampling from a one-dimensional Gaussian mixture with potential function $\phi_P(x) = \log(N(x; -2.0, 1.0) + 2N(x; 2.0, 1.0))$, which is a multi-mode target distribution. We set $Q = N(0, 2)$. We present our result in Fig. 2(a), which shows the ground truth and the empirical sample distributions of Alg. 1 and the baselines. From Fig. 2(a), Alg. 1 has a similar performance to $A^*$ sampling and outperforms $A^*$ sampling without a bound function.

5.2 THE COUNTER-EXAMPLE

We empirically compare the performance of the algorithms on the example from Section 3. The result is shown in Fig. 2(b). We can see that PM-$A^*$ outperforms the baselines significantly, since our algorithm wastes fewer resources on suboptimal subsets.

5.3 BAYESIAN POSTERIOR INFERENCE

In this section, we evaluate our algorithm on two Bayesian posterior tasks: the clutter problem and Bayesian logistic regression. More specifically, we focus on sampling tasks of the form $P(x) \propto p(x)\prod_{i=1}^n p(y_i|x)$, where $x$ is the variable we are going to sample, $p(\cdot)$ is the prior distribution over $x$, $\{y_i\}_{i=1}^n$ are observations, and $p(y_i|x)$ is the likelihood function. We simply set $Q(x) := p(x)$ in both $A^*$ sampling and Alg. 1. For vanilla $A^*$ sampling, we exploit the standard stochastic gradient descent (SGD) algorithm to calculate the bound function, $\max_{x\in S}\sum_i \log p(y_i|x)$. For the Monte-Carlo estimator in Alg. 1, we exploit importance sampling over the trajectory of the same SGD algorithm as in vanilla $A^*$ sampling.[2]

[2] We first tune the parameters of SGD for vanilla $A^*$ sampling, and then apply them to Alg. 1 without further tuning.

5.3.1 EVALUATION ON THE CLUTTER PROBLEM

We now evaluate Alg. 1 on the clutter problem proposed by (Minka, 2001). The clutter problem aims to infer the mean of an isotropic Gaussian when some data points are outliers. Consider a mixture distribution $p(y|x) = (1-w)N(y; x, I) + wN(y; 0, \sigma_1 I)$ with prior $p(x) = N(x; 0, \sigma_2 I)$, where $w$ is the ratio of outliers, which is a known parameter, and $N(\cdot,\cdot)$ represents a Gaussian distribution. Our goal is to infer $x$ given data $\{y_i\}_{i=1}^n$. We run experiments with the dimension varying in 5, 15, 20, and $n = 20$. We compare the Gumbel values found by these algorithms. We run 100 trials and present the averaged results in Fig. 3. We can see that Alg. 1 outperforms $A^*$ sampling consistently.

5.3.2 EVALUATION ON BAYESIAN LOGISTIC REGRESSION

Our last experiment is on Bayesian logistic regression. Given a dataset $\{x_i\}_{i=1}^n$ with labels $\{y_i\}_{i=1}^n$, where $y_i \in \{0,1\}$, we follow the setting in (Gershman et al., 2012) and define the Bayesian logistic regression model: $p(\alpha) = \mathrm{Gamma}(\alpha; a, b)$; $p(w_k) = N(w_k; 0, \alpha^{-1})$; $p(y_i = 1 \mid x_i, w) = \mathrm{sigmoid}(w^T x_i)$. In this model, $\{w, \alpha\}$ are the hidden variables, where $w$ denotes the regression coefficients and $\alpha$ is a precision parameter. We set $a = b = 1$. We run experiments on 13 binary classification datasets proposed by (Mika et al., 1999). The number of features of these datasets ranges from 2 to 60, and the number of points ranges from 24 to 7400 (see Appendix C for more statistics). We present our results in Fig. 4(a), where all results are averaged over 100 runs. Fig. 4(a) presents the summation over these datasets of the maximum likelihood found by each algorithm over time. From Fig. 4(a), we can see that PM-$A^*$ outperforms all baselines.

Figure 4: Experiments on logistic regression with (a) averaged Gumbel values; (b) averaged log-likelihoods.

Furthermore, we compare our algorithm with the standard Metropolis-Hastings algorithm (MH) and adaptive inference with exploration (AIE) (Rainforth et al., 2018), which also attempts to bridge the gap between sampling problems and decision-making techniques. For MH, the initial points are sampled from the prior. To make the comparison fair, we also evaluate Alg. 1 and AIE with the prior as the Monte-Carlo estimator instead of gradient-based methods. We compare the likelihoods in Fig. 4(b). We can see that Alg. 1 outperforms AIE even when they use the same Monte-Carlo estimator. This is because AIE attempts to use a UCB-like algorithm to make decisions, but UCB works only for models in which concentration bounds hold, which is not always valid in sampling problems.
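To make the setup concrete, the unnormalised log posterior of the logistic regression model, usable as $\phi_P$ in the sketches above, can be written as follows (function names are illustrative and additive constants are dropped):

```python
import math

def log_sigmoid(z):
    # Numerically stable log(sigmoid(z)).
    return -math.log1p(math.exp(-z)) if z >= 0 else z - math.log1p(math.exp(z))

def phi_p(w, alpha, xs, ys, a=1.0, b=1.0):
    # Unnormalised log posterior of the Bayesian logistic regression:
    #   p(alpha) = Gamma(alpha; a, b),  p(w_k) = N(w_k; 0, 1/alpha),
    #   p(y = 1 | x, w) = sigmoid(w^T x),  with a = b = 1 as in the text.
    lp = (a - 1.0) * math.log(alpha) - b * alpha          # Gamma prior
    lp += 0.5 * len(w) * math.log(alpha)                  # N(0, 1/alpha) prior
    lp -= 0.5 * alpha * sum(wk * wk for wk in w)
    for x, y in zip(xs, ys):                              # Bernoulli likelihood
        z = sum(wk * xk for wk, xk in zip(w, x))
        lp += log_sigmoid(z) if y == 1 else log_sigmoid(-z)
    return lp
```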
6 CONCLUSION AND FUTURE WORK

In this work, we focused on improving the subset selection procedure in $A^*$ sampling with PM. We proved that in the special case of two subsets, PM is optimal within a constant gap in terms of the stochastic regret. Moreover, we proposed a practical algorithm based on Monte-Carlo estimators and well-designed balanced trees. Empirical results show that our method saves a significant amount of computational resources on suboptimal regions compared with $A^*$ sampling.

There exist several challenges for future work. The first is the analysis of PM: though we proved PM is efficient in the case of two subsets, it is very challenging to prove its efficiency in general. The second is that the performance of Alg. 1 relies on the accuracy of the Monte-Carlo estimator; however, it is time-consuming to compute an accurate Monte-Carlo estimator. So it is important to balance the accuracy of the Monte-Carlo estimator against the performance of PM. We hope our work is a starting point for addressing these problems.

ACKNOWLEDGMENTS<|im_end|> <|im_start|>assistant ### Review Title Strong contribution to family of A* sampling algorithms, but lacks clarity. ### Review Text Summary: This paper introduces a probability matching approach for optimizing Gumbel processes, i.e. the extension of the Gumbel-Max trick to more general measure spaces. The basic idea is to use a more refined subset selection mechanism as compared to A* Sampling, but at the cost of being able to guarantee an exact sample. Instead, the authors study the algorithm's best Gumbel so far as a function of the time budget.

Quality: This is an interesting and natural idea, and the experiments support the authors' claim that it improves over A* Sampling in the regimes that they consider. The claims and proofs look correct to me.

Originality: This idea has not been explored in the literature, and is a sensible one to consider when having an exact sample is not necessary.

Clarity: The comparison to A* Sampling is a bit apples-to-oranges and the paper lacks some clarity overall. In particular, the authors should be much more clear that their termination condition necessarily differs from A* Sampling's. In A* Sampling the termination condition guarantees that the algorithm has found the true optimum, but in PM-A* the authors consider only a fixed budget of time --- terminating whether or not the algorithm has found the true optimum. The difference is a necessary feature, because otherwise I'm fairly sure A* Sampling is optimal (it has to push the upper bound of every subset below the true maximum, so if the termination condition is not currently satisfied, the subset with the current max upper bound must eventually be visited before termination. Since it is the only subset where that is necessarily true, taking it is the optimal choice.) More comments below.

Significance: I think this is interesting work, but I would recommend the authors consider some examples that clarify where these ideas might impact the problems that the ICLR audience would be interested in.

Overall: The paper's clarity problems get in the way of its contribution. I think it is an interesting direction, but the authors are not clear enough about the correctness of the algorithm (it is not guaranteed to return an exact sample) to recommend acceptance. If this and other issues are fixed, it would be a good paper.

Pros:
- Solid contribution to the literature on Gumbels.
- Beginnings of a regret analysis.
- Uniform performance gains in comparison to A* Sampling (in the regime considered).

Cons:
- Lacks clarity in comparisons to A* Sampling.
- Writing could clarify significance as well.
Specific questions / suggestions:
- One of the important omissions of the paper is the following question: what is the distribution of PM-A*'s output at time T for a fixed T? It won't be the target distribution in general, but it may be close. The applicability of this work would be greatly improved by that analysis.
- In general the figures and experiments need more details. In particular, (1) in Figure 2a, why are there two sets of lines? Different values of T? (2) You need to report T throughout. (3) What did you take as Y? This is a critical part of the method, and you should report the estimator used.
- In the "A* Sampling for two subsets without splitting" algorithm, shouldn't you be removing ALL previous Xi from S_i? i.e. S_i \ {Xi_1, Xi_2, ... Xi_{k-1}} or something like that.
### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|>
0l3buSHSJj0
IEEE.org/2022/Workshop/altVIS
2022
Maybe, Maybe Not: A Survey on Uncertainty in Visualization
["Anonymous"]
Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization.
["Uncertainty", "Data Visualization"]
ABSTRACT

Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization.

Index Terms: Uncertainty, Data Visualization

1 INTRODUCTION

With a rise in the complexity and dimensionality of data, analyzing and modeling data becomes more challenging. When most of our decisions are data-driven, it becomes imperative that we know the nature of the data and the patterns it contains. As a result, analyzing the inherent uncertainty in the data is gaining more significance. In various fields, uncertainty can signify different things. For instance, data bias, random or systematic error, and statistical variance are all factors that contribute to data uncertainty. Without understanding the underlying uncertainty in our data, we cannot make accurate predictions. Similarly, to observe the true structure of our data and to identify patterns in it, we need to visualize it. Today, we can no longer undermine the significance of uncertainty nor ignore the importance of visualizations for data analysis.

As mentioned before, uncertainty is bound to exist whenever there is data, so the representation of uncertainty in data visualizations is crucial. Consider the example of hurricane path maps, as shown in Figure 1. The increase in the width of the predicted path with time is not due to an increase in the size of the hurricane. Instead, it is due to representing the inherent uncertainty in the data. In other words, the visualization indicates that, compared to Friday, Sunday's hurricane path is more difficult to predict with any degree of accuracy.

Figure 1: An example forecast chart for Hurricane Matthew showing its five-day forecast track [5]

Information tends to be withheld from the viewer when one does not portray uncertainty in the visualization, and the viewer might occasionally be ignorant of this exclusion. This breach of trust can have significant consequences for both the author and the viewer. Given this significance, it is reasonable to assume that visualizations frequently include uncertainty. But how often do we encounter charts that represent uncertainty? How frequently do we check for bias in graphs that represent public surveys? As it turns out, not frequently.

In a recent study [9], 121 journalism articles, social science surveys, and economic estimates were examined. Out of 449 visualizations created for inference, the study demonstrates that only 14 accurately depict uncertainty. "What's Going on in This Graph?" is a New York Times (NYT) initiative to increase graphical literacy, especially among students. Different categories of charts, such as maps, parts-to-whole, and associations, are published for students to explore and analyze.
When I looked into the distribution of these charts, I found that only 6 out of the 136 charts show uncertainty.

The question I ask is: do we actually examine uncertainty representations when we come across them in order to make decisions, or do we simply ignore them? Does uncertainty offer value, or does it just clutter these visualizations? I try to investigate these questions in this paper. Visualizations are an integral part of newspapers, government bills, and business earnings reports, to name a few. The public uses them to gain insights, spot trends, and make decisions.

Hence, when we visualize data, it becomes critical to support those visualizations with information about uncertainty. People frequently use visualizations to examine data and make observations. Lack of uncertainty representation could result in incorrect and erroneous interpretation. However, it can be challenging to visualize uncertainty. There are no standard guidelines or protocols authors can follow when they create such charts. Given these drawbacks, uncertainty visualization is considered one of the top research problems in data visualization [13]. With the help of a few uncertainty visualization examples and xkcd comic strips, this survey studies how uncertainty contributes to every phase of visualization. Most research in this area focuses on creating charts with uncertainty and how viewers may perceive them. However, uncertainty is also influential in the other parts of the data visualization process, such as during data collection and preprocessing.

The objectives of this paper are as follows:
• Provide an entry point for anyone who wants to learn about uncertainty visualization
• Delineate the significance of uncertainty visualizations
• Explore how uncertainty influences every phase of the data visualization process
• Understand the challenges authors and viewers face when interacting with it
• Discuss the open problems and future research directions in the field

Figure 2: Epistemic Uncertainty [16]

This work is divided into the following sections. Section 2 defines uncertainty and describes the relationship between uncertainty and visualization. In Section 3, I classify the data visualization pipeline into four phases, analyzing the involvement of uncertainty in each phase. This classification helps look at each phase individually, focusing on the challenges and bottlenecks authors and viewers face when working with uncertainty visualization. Finally, I study some state-of-the-art methods to visualize uncertainty and discuss future directions for research. I conclude the paper in Section 4.

2 UNCERTAINTY AND VISUALIZATION

Visualizations are incredibly important for examining, analyzing, and interpreting data in the era of big data. Visualizations are evidence that a picture really does say a thousand words. They aid viewers in seeing trends, background noise, and outliers. Asking the correct questions can be quite challenging when there is an abundance of data. Through visualizations, viewers can determine what questions the data can help answer. With improvements in hardware, software, and graphics theory, data visualizations are adopted more frequently and widely [29]. Viewers use visualizations to make decisions. However, making decisions and drawing observations by looking at visualizations can be complex due to the statistical variance and uncertainty present in these visualizations.

As mentioned previously, uncertainty can have different definitions based on different scenarios [3].
Broadly speaking, uncertainty is classified into two types, aleatory and epistemic. Aleatory uncertainty arises from random fluctuation and unknown outcomes when an experiment is run multiple times in a consistent environment. For example, in a drug trial, a participant's blood pressure can vary due to stress and anxiety. There might also be measurement errors in the sphygmomanometer. Aleatory uncertainty can be minimized by controlling individual factors and increasing the number of readings. Epistemic uncertainty, on the other hand, arises from a lack of knowledge, like predicting the outcome of the same experiment in a completely different, unknown environment. For example, predicting the effect of a drug on a new disease. Uncertainty can be measured, like risks, but can also be unquantified, like bias. While aleatory uncertainty is more widely represented in visualizations [28], both types can be represented with distribution graphs.

Uncertainty and visualizations are interwoven, and working with one often requires working with the other. In 1644, Michael Florent van Langren was one of the first researchers to use visualization for statistical analysis [28]. He used a 1D line graph to present the 12 known estimated longitudinal distances between Toledo and Rome, as shown in Figure 3. Instead of using a table to show this data, Langren used this graph to showcase the wide range of variation. Even though all the distances were over-estimated (the actual distance, in longitude, is shown using the arrow), the graph remains classic in demonstrating the power of visualization.

Figure 3: Langren's line graph is one of the first visualizations to present uncertainty

The popular Anscombe's quartet [1] is a perfect example of how data with similar statistics might have very different distributions, which is observed when visualized. The quartet consists of four datasets with 11 points having nearly the same mean, sample variance, correlation, linear regression, and coefficient of determination. The four datasets may appear very similar to viewers looking at the data and the descriptive statistics. However, when one visualizes them, the difference in their distribution is very evident, as shown in Figure 4. Looking at data in tabular form may hide insightful observations and can lead to erroneous conclusions. Today, researchers across all domains use extensive libraries such as [4, 11, 12, 22, 25] to analyze data uncertainty.

Figure 4: Anscombe's quartet consists of four datasets with similar statistics but very different distributions.
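The quartet is small enough to reproduce directly; a short matplotlib sketch using the standard published values makes the point at a glance:

```python
import matplotlib.pyplot as plt

# Anscombe's quartet: identical summary statistics, very different shapes.
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8] * 7 + [19] + [8] * 3,
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for ax, (x, y) in zip(axes.flat, quartet):
    ax.scatter(x, y)
    ax.set_title(f"mean(y) = {sum(y) / len(y):.2f}")  # near-identical per panel
plt.show()
```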
The study of why visualizing uncertainty is rare isstill in its early stages. In the section that follows, I go through eachof these issues in more detail and look at how uncertainty affectsevery stage of data visualization.3 U NCERTAINTY IN VISUALIZATIONFigure 6: The data visualization process divided into four stages toshow how uncertainty affects each stagePrevious works in the field have attempted to classify the datavisualization process differently. [14] considers sampling, modeling,visualization, and decision-making as the primary sources ofuncertainty. This paper follows a similar classification. I dividethe visualization pipeline into data collection, preprocessing,visualization and inference as shown in Figure 6 Pang etal. [21] classify the process into data collection, derivation, andvisualization and discuss how uncertainty is introduced in each stage.Under the data collection phase, the paper mainly discussesthe uncertainty added due to measurement errors. However, thereare other sources, such as bias and sampling error, that the paperfails to describe. I investigate these uncertainties in Section 3.3.1.The authors then discuss the change data undergoes when it ispreprocessed. These changes include converting one unit to another,rescaling, and resampling. However, they do not mention othervital issues such as missing data, approximation, and interpolationthat I examine in Section 3.3.2. Next, the authors highlight howuncertainty also influences the data visualization stage itself. Theymainly focus on radiosity and volume rendering, while our paperdelves more into 2D visualizations. Finally, I explore how viewersinfer these visualizations and the challenges they face while makinga decision from these charts.Uncertainty is presented at every phase of this classification. How-ever, understanding and evaluating uncertainty in each of thesephases is unique. Therefore, authors are required to approach theseuncertainties based on their type and complexity, understand theirabstraction, and then present them in visualizations in a way that iseasy to grasp.3.1 Data AcquisitionGiven the interdisciplinary nature of visualizations, the format,quantity, and type of data used to create them vary immensely.Different data implies different data collection processes anduncertainties. Uncertainty is intertwined with data acquisitionand can arise from random variables and modeling errors [14].Figure 7: Selection Bias [19]Pang et al. [21] explain how almost all acquired data has statisticalvariation. Collected data can have errors, bias, and variance. [26]study how bias can be introduced during the process of collectingdata. Datasets are prone to various biases that include but are notlimited to selection bias, volunteer bias, admission bias, survivorbias, and misclassification bias.It is imperative that datasets resemble the true population asclosely as possible. Data can also contain different types of errors,such as coverage error, sampling error, nonresponse error, andmeasurement error [7]. Missing data points is another commonchallenge researchers face during data collection.Figure 8: Free Speech, a graph by the New York Times based on anational poll including 1,507 U.S residents [20]Correcting these errors is not always possible, but they can bementioned in the visualization to inform the viewer. 
However, uncertainty is often ignored when authors create visualizations; other times, this uncertainty in the data is not communicated to the viewer [9]. For example, when I analyze a piece called "Free Speech" (shown in Figure 8), published in the What's Going On in This Graph section of the New York Times (NYT) [20], we can see how information about uncertainty from the data source is not mentioned directly in the graph. The bars of the graph do not sum to 100 percent since they are missing the no-response segment. The article mentions that the margin of error for the sample is +/- 3.1%, but the graph makes no mention of it.

Efforts are being made by researchers to improve the way uncertainty in the data collection phase is captured, processed, and communicated. Athawale et al. [2] propose using statistical summary maps to represent uncertainty in scalar field data caused by data acquisition.

3.2 Data Preprocessing

Figure 9: Flawed Data [18]

Raw data is imperfect and can contain noise and error. Once data is collected, it undergoes processing for accuracy and standardization. However, this phase can add uncertainties to the data that may not be immediately evident. For example, fundamental transformations such as rounding off values, converting data from one unit to another, rescaling, resampling, and quantizing can add uncertainty [1]. Even though this might seem minor, the impact can be significant. For example, based on whether we take the value of pi as 22/7 or 3.14159, the computed area of the Sun can vary by a difference of 237 x 10^6 sq. miles.

A significant setback that most datasets suffer from is missing data. Data can have missing values for many reasons, such as instrument malfunction, incomplete observations, and lost data. Missing values leave a gap in the dataset, which makes room for uncertainty. Working with such uncertainty requires the authors to take extra measures during preprocessing. Authors attempt to find close estimates of the missing values to provide the viewers with a complete picture. One way to tackle this problem is to delete the complete entry that has the missing value, but this leads to a loss of data and insights. Another option is to make an educated guess about the missing value; however, this is highly unreliable and often not recommended. Using interpolation, imputation, or other techniques can induce errors [3].

Sometimes, authors choose to encode these estimated values differently in their designs to inform the viewer about the gap in the dataset. However, how authors choose to visualize this encoding becomes very influential in how viewers perceive these graphs. Whether authors highlight, downplay, annotate, or remove the missing values determines how much confidence and credibility the viewer shows in the visualization [27].
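A small sketch of that design choice: imputed points are drawn hollow so the viewer can tell what is estimated. The series and the interpolation rule are invented purely for the illustration:

```python
import matplotlib.pyplot as plt

# Invented monthly series with two isolated gaps, filled by linear
# interpolation; imputed points are styled differently from observed ones.
months = list(range(12))
observed = [3.1, 3.4, None, 4.0, 4.4, None, 5.1, 5.0, 5.6, 5.9, 6.1, 6.4]

filled = observed[:]
for i, v in enumerate(filled):
    if v is None:  # average of the flanking observations (gaps are interior)
        filled[i] = (filled[i - 1] + observed[i + 1]) / 2

plt.plot(months, filled, color="gray", zorder=1)
obs = [i for i, v in enumerate(observed) if v is not None]
imp = [i for i, v in enumerate(observed) if v is None]
plt.scatter(obs, [filled[i] for i in obs], label="observed")
plt.scatter(imp, [filled[i] for i in imp], facecolors="none",
            edgecolors="tab:red", label="imputed (interpolated)")
plt.legend()
plt.show()
```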
3.3 Visualization Creation

Since uncertainty is ingrained in different parts of the data collection process, it is not easy to identify and control it. However, once the data is cleaned and processed, the authors face a new problem. Creating visualizations is a complicated task that requires authors to make various decisions on behalf of the viewer. Authors are expected to choose the type of visualization based on the data type, which may lead them to choose the scaling, sorting, ordering, and aesthetics [30]. Compelling visualizations are accurate and suggest an understanding and interpretation of data. Hence, it is the author's responsibility to analyze data correctly before creating any visualizations. Midway [15] describes ten design principles authors can follow to create charts. However, none of those principles discuss how uncertainty can be presented. Creating effective visualizations is hard, and when we add uncertainty representation, the task becomes much more complex. The data visualization community of researchers, designers, journalists, etc., has been reluctant to add uncertainty to their charts. Authors are aware of how significant uncertainty visualization is, yet they choose to exclude uncertainty when they design their charts, for the various reasons discussed below.

3.3.1 Uncertainty is hard to represent

Though data is replete with uncertainty, the difficulty lies in determining if it should be represented and how. If the uncertainty has no direct relationship to the goal of the visualization, then it may not be included in the visualization. But this is not a conclusion that authors can quickly draw. The rise in techniques for visualizing uncertainty can make it harder for authors to decide which one to choose. One of the biggest challenges in visualizing uncertainty is discovering and communicating the relationship and impact that the uncertainty has on the data. Data visualization is often a preferred choice for analysis due to its ability to present high-dimensional data. However, uncertainty also has dimensions, generally classified into scalar, vector, and tensor [23]. While scalar and vector fields of uncertainty are depicted in charts, tensor fields are often avoided. Mapping these dimensions of uncertainty along with the dimensions of data is challenging and often overlooked when creating charts. Instead, authors tend to simplify uncertainty to align with the dimensionality of the data.

3.3.2 Uncertainty is hard to calculate and verify

Figure 10: Error Bars [17]

Another reason why authors choose to exclude uncertainty from their charts is that calculating uncertainty is complex [9]. It is well known that even mathematicians and statisticians sometimes find it challenging to calculate the error or variance in a dataset. Verifying that the presented uncertainty is correct is also challenging. Moreover, if the authors make an error while designing their charts, they end up providing wrong information to the viewers and losing their trust.

3.3.3 Viewers may be overwhelmed

[9] explains why the inclusion of uncertainty in graphs is not widely adopted. Authors believe that uncertainty can be challenging for the viewers to perceive and understand. As a result, viewers may choose either to look at an alternative graph that does not contain any uncertainty representation or to overlook the uncertainty in their graph altogether.

3.3.4 Uncertainty can add clutter to the visualization

Authors can be unsure of how effective communicating uncertainty is. They also worry about adding more information to an already visually complex visualization. For many authors, the goal of a chart is to express a signal [9] that can be useful to their viewers. This signal tends to present a single point or a single source of truth. Uncertainty tends to challenge that notion by obfuscating the signal. Additionally, expressing the intricacy of uncertainty through a visual abstraction is challenging. The dimensionality of the data also plays a vital role in deciding whether uncertainty should be represented or not. An increase in the dimensionality of data makes it harder for the human visual system to perceive it effectively. Sometimes even two-dimensional charts can be overwhelming for the viewer. In such a case, representing uncertainty adds visual overload [23].

3.4 Visualization Inference

Uncertainty is hard to understand and analyze. When faced with perceiving an uncertain visualization, viewers can get confused or derive inaccurate information from it. One easy method viewers tend to use is to ignore the uncertainty in the graph altogether. Another is to substitute tricky calculations with easy ones or to use heuristics to make decisions. However, this may not always give a correct observation. The most common approach to showing uncertainty is using box plots and error bars. Though widely used, viewers may find them challenging to analyze [6]. Sometimes visualizing uncertainty as frequency instead of distribution provides a better understanding.

Currently, research is being done to create visualizations that help viewers understand uncertainty more intuitively. For example, hypothetical outcome plots (HOPs) represent uncertainty by animating a finite set of individual draws [10]. This approach expects no prior knowledge of the domain from the viewer. However, using HOPs in physical media might be challenging. Bubble treemaps [8] are another approach for visualizing uncertainty. These circular treemaps encode additional information about uncertainty by allocating additional space for visuals.
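A minimal HOPs-style sketch of the idea just described: instead of a static interval, each animation frame shows one joint draw from the estimates' distributions. The Normal uncertainty model and all names here are assumptions of the sketch, not taken from [10]:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Two uncertain estimates, modelled as Normals for the illustration.
rng = np.random.default_rng(0)
means, sds = [4.0, 5.0], [1.0, 1.5]

fig, ax = plt.subplots()
bars = ax.bar(["A", "B"], means)
ax.set_ylim(0, 10)

def frame(_):
    draws = rng.normal(means, sds)   # one hypothetical outcome per frame
    for bar, h in zip(bars, draws):
        bar.set_height(h)
    return bars

anim = FuncAnimation(fig, frame, frames=60, interval=120)  # keep a reference
plt.show()
```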
While uncertainty is still underrepresented in visualizations, more researchers are slowly adding it to their designs. One of the significant setbacks in uncertainty visualization for authors is calculating uncertainty, while for viewers it is graphical literacy. Efforts can be taken to gradually increase this literacy through different programs. Furthermore, work should be done to understand which visualization type best suits a given uncertainty type. This relationship can also depend on the type of data being represented and the target audience viewing the graph. For example, it is necessary for graphs published in newspapers and reports to be easily understandable by the public. Hence, studies focusing on visualizing uncertainty with no prior knowledge or information can be very insightful.

4 CONCLUSION

Uncertainty visualization is one of the most complex research areas in data visualization today. This work provided an overview of uncertainty visualization and the relationship between uncertainty and visualization. I divided the visualization pipeline into four phases and surveyed papers to study how uncertainty interacts with each phase of the process. The work also investigated why the representation of uncertainty is not widely practiced by the data visualization community and the challenges viewers face when inferring from such a graph. Lastly, I discussed a few state-of-the-art methods to design uncertainty visualizations and offered a glance at the interesting future research this field has to offer.
CUoynUKxtEj
This paper does a good job of outlining the field of uncertainty visualization and explaining its challenges. It doesn't, however, feel provocative or alternative in any manner. Its only "joke" value comes from the author using xkcd strips to illustrate the points.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Maybe, Maybe Not: A Survey on Uncertainty in Visualization ### Paper Abstract Understanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands inference, it is necessary that uncertainty is portrayed in it. This paper showcases the importance of representing uncertainty in visualizations. It provides an overview of uncertainty visualization and the challenges authors and viewers face when working with such charts. I divide the visualization pipeline into four parts, namely data collection, preprocessing, visualization, and inference, to evaluate how uncertainty impacts them. Next, I investigate the authors' methodologies to process and design uncertainty. Finally, I contribute by exploring future paths for uncertainty visualization. ### Paper Keywords ["Uncertainty", "Data Visualization"] ### Paper Content ABSTRACTUnderstanding and evaluating uncertainty play a key role in decision-making. When a viewer studies a visualization that demands infer-ence, it is necessary that uncertainty is portrayed in it. This papershowcases the importance of representing uncertainty in visualiza-tions. It provides an overview of uncertainty visualization and thechallenges authors and viewers face when working with such charts.I divide the visualization pipeline into four parts, namely data col-lection, preprocessing, visualization, and inference, to evaluate howuncertainty impacts them. Next, I investigate the authors’ method-ologies to process and design uncertainty. Finally, I contribute byexploring future paths for uncertainty visualization.Index Terms: Uncertainty, Data Visualization1 I NTRODUCTIONWith a rise in complexity and dimensionality of data, analyzingand modeling data becomes more challenging. When most of ourdecisions are data-driven, it becomes imperative that we know thenature of the data and the patterns it contains. As a result, analyzingthe inherent uncertainty in the data is gaining more significance. Invarious fields, uncertainty can signify different things. For instance,data bias, random or systematic error, and statistical variance are allfactors that contribute to data uncertainty. Without understandingthe underlying uncertainty in our data, we cannot make accuratepredictions. Similarly, to observe the true structure of our data andas well as identify patterns in it, we need to visualize it. Today, wecan no longer undermine the significance of uncertainty nor ignorethe importance of visualizations for data analysis.As mentioned before uncertainty is bound to exist wheneverthere is data. Therefore representation of uncertainty in datavisualizations is crucial. Consider the example of hurricane pathmaps, as shown in Figure 1. The increase in the width of thepredicted path with time is not due to an increase in the size of thehurricane. Instead, it is due to representing the inherent uncertaintyin the data. In other words, the visualization indicates that comparedto Friday, Sunday’s hurricane path is more difficult to predict withany degree of accuracy.Information tends to be withheld from the viewer when one doesnot portray uncertainty in the visualization. Therefore the viewermight occasionally be ignorant of this exclusion. This breach oftrust can have significant consequences for both the author andthe viewer. 
Given this significance, it is reasonable to assume thatvisualizations frequently include uncertainty. But how often do weencounter charts that represent uncertainty? How frequently do wecheck for bias in graphs that represent public surveys? As it turnsout, not frequently..In a recent study [9], 121 journalism articles, social sciencesurveys, and economic estimates were examined. Out of 449visualizations created for inference, the study demonstrates that only14 accurately depict uncertainty. “What’s Going on in This Graph?”*e-mail: krisha1204@gmail.comFigure 1: An example chart of a chart for Mattew showing its five-day forecast track [5]is a New York Times (NYT) initiative to increase graphical literacy,especially among students. Different categories of charts, such asmaps, parts-to-whole, and associations, are published for students toexplore and analyze. When I looked into the distribution of thesecharts, I found that only 6 out of the 136 charts show uncertainty.The question I ask is, do we actually examine uncertainty repre-sentations when we come across them in order to make decisions,or do we simply ignore them? Does uncertainty offer value or justclutter these visualizations? I try to investigate these questions in thispaper. Visualizations are an integral part of newspapers, governmentbills, and business earnings reports to name a few. The public usesthem to gain insights, spot trends, and make decisions.Hence, when we visualize data, it becomes critical to supportthose visualizations with information about uncertainty. Peoplefrequently use visualizations to examine data and make observations.Lack of uncertainty representation could result in incorrect anderroneous interpretation. However, it can be challenging tovisualize uncertainty. There are no standard guidelines or protocolsauthors can follow when they create such charts. Given thesedrawbacks, uncertainty visualization is considered one of the topresearch problems in data visualization [13]. With the help ofa few uncertainty visualization examples and xkcd comic strips,this survey studies how uncertainty contributes to every phase invisualization. Most research in this area focuses on creating chartswith uncertainty and how viewers may perceive them. However,uncertainty is also influential in the other parts of the data vi-sualization process, such as during data collection and preprocessing.The objectives of this paper are as follows:•Provide an entry point for anyone who wants to learn aboutuncertainty visualization• Delineate the significance of uncertainty visualizations•Explore how uncertainty influences every phase of the datavisualization process•Understand the challenges authors and viewers face wheninteracting with itFigure 2: Epistemic Uncertainty [16]•Discuss the open problems and future research directions inthe fieldThis work is divided into the following sections. Section 2 definesuncertainty and describes the relationship between uncertainty andvisualization. In Section 3, I classify the data visualization pipelineinto four phases, analyzing the involvement of uncertainty in eachphase. Our classification helps look at each phase individually,focusing on the challenges and bottlenecks authors and viewers facewhen working with uncertainty visualization. Finally, I study somestate-of-the-art methods to visualize uncertainty and discuss futuredirections for research. 
I conclude the paper in Section 4.2 U NCERTAINTY AND VISUALIZATIONVisualizations are incredibly important for examining, analyzing,and interpreting data in the era of big data. Visualizations are ev-idence that a picture really does say a thousand words. They aidviewers in seeing trends, background noise, and outliers. Asking thecorrect questions can be quite challenging when there is an abun-dance of data. Through visualizations, viewers can determine whatquestions the data can help answer. With improvements in hardware,software, and graphics theory, data visualizations are adopted morefrequently and widely [29]. Viewers use visualizations to makedecisions. However, making decisions and drawing observations bylooking at visualizations can be complex due to statistical varianceand uncertainty present in these visualizations.As mentioned previously, uncertainty can have different defini-tions based on different scenarios [3]. Broadly speaking, uncertaintyis classified into two types, aleatory and epistemic. Aleatory uncer-tainty rises from random fluctuation and unknown outcomes whenan experiment is run multiple times in a consistent environment. Forexample, in a drug trial, a participant’s blood pressure can vary dueto stress and anxiety. There might also be measurement errors inthe sphygmomanometer. Aleatory uncertainty can be minimizedby controlling individual factors and increasing the number of read-ings. Epistemic uncertainty, on the other hand, rises from a lack ofknowledge, like predicting the outcome of the same experiment ina completely different, unknown environment. For example, pre-dicting the effect of a drug on a new disease. Uncertainty can bemeasured, like risks but can also be unquantified, like bias. Whilealeatory uncertainty is more widely represented in the visualiza-tions [28], both types can be represented with distribution graphs.Uncertainty and visualizations are interweaved, and working withone often requires working with the other. In 1644, Michael Florentvan Langren was one of the first researchers to use visualization forstatistical analysis [28]. He used a 1D line graph to present the 12Figure 3: Langren’s line graph is one of the first visualizations topresent uncertaintyFigure 4: Anscombe’s quartet consists for four datasets with similarstatistics but very different distributions.known estimated longitudinal distances between Toledo and Rome,as shown in Figure 3. Instead of using a table to show this data,Langren used this graph to showcase the wide range of variation.Even though all the distances were over-estimated (actual distance,in longitude, is shown using the arrow), the graph remains classic indemonstrating the power of visualization.The popular Anscombe’s quartet [1] is a perfect exampleof how data with similar statistics might have a very differentdistribution which is observed when visualized. The quartet consistsof four datasets with 11 points having nearly the same mean,sample variance, correlation, linear regression, and coefficient ofdetermination. The four datasets may appear very similar to viewerslooking at the data and the descriptive statistics. However, whenone visualizes them, the difference in their distribution is veryevident, as shown in Figure 4. 
Looking at data in tabular form may hide insightful observations and can lead to erroneous conclusions. Today, researchers across all domains use extensive libraries such as [4, 11, 12, 22, 25] to analyze data uncertainty.

Figure 5: Priestley's Chart of Biography [24]

Using visualizations to represent and study uncertainty in data is widely adopted. However, uncertainty in visualizations is often not communicated [9]. One of the earliest instances of uncertainty being presented can be traced back to the 18th century. Joseph Priestley, a British scientist, created "A Chart of Biography" to present the lifespans of famous people, as shown in Figure 5. He used horizontal lines to portray the lifetimes of about 2,000 people and used dots before or after the lines to communicate uncertainty.

Visualizations of uncertainty, however, are not common. Numerous factors influence why authors decide against visualizing uncertainty. Since they do not know all the information about the dataset, viewers may draw inaccurate conclusions in the absence of uncertainty representation. Nevertheless, introducing more uncertainty could also make the audience feel too overwhelmed to pay attention to it. The study of why visualizing uncertainty is rare is still in its early stages. In the sections that follow, I go through each of these issues in more detail and look at how uncertainty affects every stage of data visualization.

3 UNCERTAINTY IN VISUALIZATION

Figure 6: The data visualization process divided into four stages to show how uncertainty affects each stage

Previous works in the field have attempted to classify the data visualization process differently. [14] considers sampling, modeling, visualization, and decision-making as the primary sources of uncertainty. This paper follows a similar classification: I divide the visualization pipeline into data collection, preprocessing, visualization, and inference, as shown in Figure 6. Pang et al. [21] classify the process into data collection, derivation, and visualization and discuss how uncertainty is introduced in each stage. Under the data collection phase, that paper mainly discusses the uncertainty added due to measurement errors. However, there are other sources, such as bias and sampling error, that the paper fails to describe; I investigate these uncertainties in Section 3.1. The authors then discuss the changes data undergoes when it is preprocessed. These changes include converting one unit to another, rescaling, and resampling. However, they do not mention other vital issues such as missing data, approximation, and interpolation, which I examine in Section 3.2. Next, the authors highlight how uncertainty also influences the data visualization stage itself. They mainly focus on radiosity and volume rendering, while our paper delves more into 2D visualizations. Finally, I explore how viewers infer from these visualizations and the challenges they face while making decisions from these charts.

Uncertainty is present at every phase of this classification. However, understanding and evaluating uncertainty in each of these phases is unique. Therefore, authors are required to approach these uncertainties based on their type and complexity, understand their abstraction, and then present them in visualizations in a way that is easy to grasp.

3.1 Data Acquisition

Given the interdisciplinary nature of visualizations, the format, quantity, and type of data used to create them vary immensely. Different data implies different data collection processes and uncertainties.
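Before going through those sources in detail, the following hedged Python sketch (my own toy simulation, not from the surveyed papers) illustrates the most elementary of them, sampling error: repeated polls of the same population disagree with each other, and the disagreement shrinks predictably with sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.62          # hypothetical true share of "agree" in the population

for n in [100, 1500, 10000]:           # survey sample sizes
    # 2000 hypothetical replications of the same poll
    estimates = rng.binomial(n, p_true, size=2000) / n
    # ~95% of estimates fall within +/- 1.96 * sqrt(p(1-p)/n)
    moe = 1.96 * np.sqrt(p_true * (1 - p_true) / n)
    print(f"n={n:>6}: empirical sd={estimates.std():.4f}, "
          f"95% margin of error=+/-{moe:.3f}")
# Even a perfectly unbiased poll of ~1500 people carries a margin of error
# of roughly +/-2.5 points; real polls add design effects on top of this.
```

This is precisely the kind of quantity that often exists in the source data yet never makes it into the published chart, as the example below shows.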
Uncertainty is intertwined with data acquisition and can arise from random variables and modeling errors [14].

Figure 7: Selection Bias [19]

Pang et al. [21] explain how almost all acquired data has statistical variation. Collected data can have errors, bias, and variance. [26] study how bias can be introduced during the process of collecting data. Datasets are prone to various biases that include but are not limited to selection bias, volunteer bias, admission bias, survivor bias, and misclassification bias. It is imperative that datasets resemble the true population as closely as possible. Data can also contain different types of errors, such as coverage error, sampling error, nonresponse error, and measurement error [7]. Missing data points are another common challenge researchers face during data collection.

Figure 8: Free Speech, a graph by the New York Times based on a national poll including 1,507 U.S. residents [20]

Correcting these errors is not always possible, but they can be mentioned in the visualization to inform the viewer. However, uncertainty is often ignored when authors create visualizations; other times, this uncertainty in the data is simply not communicated to them [9]. For example, when I analyze a piece called "Free Speech" (as shown in Figure 8), published in the What's Going On in This Graph section of the New York Times (NYT) [20], we can see how information about uncertainty from the data source is not mentioned directly in the graph. The bars of the graph do not sum to 100 percent since they are missing the no-response segment. The article mentions that the margin of error for the sample is +/- 3.1%, but the graph makes no mention of it.

Efforts are being made by researchers to improve the way uncertainty in the data collection phase is captured, processed, and communicated. Athawale et al. [2] propose using statistical summary maps to represent uncertainty in scalar field data caused by data acquisition.

3.2 Data Preprocessing

Figure 9: Flawed Data [18]

Raw data is imperfect and can contain noise and error. Once data is collected, it undergoes processing for accuracy and standardization. However, this phase can add uncertainties to the data that may not be immediately evident. For example, fundamental transformations such as rounding off values, converting data from one unit to another, rescaling, resampling, and quantizing can add uncertainty [1]. Even though this might seem minor, the impact can be significant: for example, depending on whether we take the value of pi as 22/7 or 3.14159, the computed area of the Sun can vary by a difference of 237 x 10^6 sq. miles.

A significant setback that most datasets suffer from is missing data. Data can have missing values for many reasons, such as instrument malfunction, incomplete observations, and lost data. Missing values leave a gap in the dataset, which makes room for uncertainty. Working with such uncertainty requires the authors to take extra measures during preprocessing. Authors attempt to find close estimates of the missing values to provide the viewers with a complete picture. One way to tackle this problem is to delete the complete entry that has the missing value, but this leads to a loss of data and insights. Another option is to make an educated guess about the missing value; however, this is highly unreliable and often not recommended. Using interpolation, imputation, or other techniques can induce errors [3]. Sometimes, authors choose to encode these estimated values differently in their designs to inform the viewer about the gap in the dataset; the short sketch below illustrates how such estimates are produced and why they carry uncertainty of their own.
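Here is a small hedged Python sketch (my own illustration, with made-up sensor readings) showing how two common gap-filling strategies, deletion and linear interpolation, produce different "complete" datasets from the same raw data:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly sensor readings; two measurements were lost
raw = pd.Series([21.0, 21.4, np.nan, np.nan, 23.1, 22.8])

dropped = raw.dropna()                    # option 1: delete incomplete rows
interpolated = raw.interpolate("linear")  # option 2: estimate the gap

print(interpolated.round(2).tolist())  # [21.0, 21.4, 21.97, 22.53, 23.1, 22.8]
# The two middle values are fabricated by a straight-line assumption;
# mean imputation, forward-fill, etc. would fabricate different ones.
print(f"mean after deletion: {dropped.mean():.3f}, "
      f"after interpolation: {interpolated.mean():.3f}")
```

Neither "complete" series is the truth; each is one plausible reconstruction, and a chart built on it inherits that hidden uncertainty.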
How authors choose to visualize such estimated values, however, strongly influences how viewers perceive these graphs. Whether authors highlight, downplay, annotate, or remove the missing values determines how much confidence and credibility the viewer places in the visualization [27].

3.3 Visualization Creation

Since uncertainty is ingrained in different parts of the data collection process, it is not easy to identify and control. However, once the data is cleaned and processed, the authors face a new problem. Creating visualizations is a complicated task that requires authors to make various decisions on behalf of the viewer. Authors are expected to choose the type of visualization based on the data type, which also leads them to choose the scaling, sorting, ordering, and aesthetics [30]. Compelling visualizations are accurate and suggest an understanding and interpretation of the data. Hence, it is the author's responsibility to analyze data correctly before creating any visualizations. Midway [15] describes ten design principles authors can follow to create charts; however, none of those principles discuss how uncertainty can be presented. Creating effective visualizations is hard, and adding uncertainty representation makes the task much more complex. The data visualization community of researchers, designers, journalists, etc., has been reluctant to add uncertainty to its charts. Authors are aware of how significant uncertainty visualization is, yet they often choose to exclude uncertainty when designing their charts, for the reasons discussed below.

3.3.1 Uncertainty is hard to represent

Though data is replete with uncertainty, the difficulty lies in determining whether it should be represented and how. If the uncertainty has no direct relationship to the goal of the visualization, it may not need to be included, but this is not a conclusion that authors can draw quickly. The growing number of techniques for visualizing uncertainty can also make it harder for authors to decide which one to choose. One of the biggest challenges in visualizing uncertainty is discovering and communicating the relationship and impact that the uncertainty has on the data. Data visualization is often a preferred choice for analysis due to its ability to present high-dimensional data. However, uncertainty also has dimensions, generally classified into scalar, vector, and tensor [23]. While scalar and vector fields of uncertainty are depicted in charts, tensor fields are often avoided. Mapping these dimensions of uncertainty along with the dimensions of data is challenging and often overlooked when creating charts. Instead, authors tend to simplify uncertainty to align with the dimensionality of the data.

3.3.2 Uncertainty is hard to calculate and verify

Figure 10: Error Bars [17]

Another reason why authors choose to exclude uncertainty from their charts is that calculating uncertainty is complex [9]. It is well known that even mathematicians and statisticians sometimes find it challenging to calculate the error or variance in a dataset, and verifying that the presented uncertainty is correct is equally challenging. Moreover, if the authors make an error while designing their charts, they end up providing wrong information to the viewers and losing their trust.

3.3.3 Viewers may be overwhelmed

[9] explains why the inclusion of uncertainty in graphs is not widely adopted. Authors believe that uncertainty can be challenging for the viewers to perceive and understand.
As a result, viewers may choose to either look at an alternative graph that does not contain any uncertainty representation or overlook the uncertainty in their graph altogether.

3.3.4 Uncertainty can add clutter to the visualization

Authors can be unsure of how effective communicating uncertainty is. They also worry about adding more information to an already visually complex visualization. For many authors, the goal of a chart is to express a signal [9] that can be useful to their viewers. This signal tends to present a single point or a single source of truth, and uncertainty challenges that notion by obfuscating the signal. Additionally, expressing the intricacy of uncertainty through a visual abstraction is challenging. The dimensionality of the data also plays a vital role in deciding whether uncertainty should be represented: an increase in the dimensionality of data makes it harder for the human visual system to perceive it effectively. Sometimes even two-dimensional charts can be overwhelming for the viewer; in such cases, representing uncertainty adds visual overload [23].

3.4 Visualization Inference

Uncertainty is hard to understand and analyze. When faced with perceiving an uncertain visualization, viewers can get confused or derive inaccurate information from it. One easy strategy viewers tend to use is to ignore the uncertainty in the graph altogether. Another is to substitute tricky calculations with easy ones or to use heuristics to make decisions; however, this may not always yield a correct observation. The most common approach to showing uncertainty is using box plots and error bars. Though widely used, viewers may find them challenging to analyze [6]. Sometimes visualizing uncertainty as frequency instead of distribution provides a better understanding.

Currently, research is being done to create visualizations that help viewers understand uncertainty more intuitively. For example, hypothetical outcome plots (HOPs) represent uncertainty by animating a finite set of individual draws [10]. This approach expects no prior knowledge of the domain from the viewer; however, using HOPs in physical media might be challenging. Bubble treemaps [8] are another approach for visualizing uncertainty. These circular treemaps encode additional information about uncertainty by allocating additional space in the visuals.

While uncertainty is still underrepresented in visualizations, more researchers are slowly adding it to their designs. One of the significant obstacles in uncertainty visualization is, for authors, calculating uncertainty and, for viewers, graphical literacy. Efforts can be taken to gradually increase this literacy through different programs. Furthermore, work should be done to understand which visualization type best suits a given uncertainty type. This relationship can also depend on the type of data being represented and the target audience viewing the graph. For example, it is necessary for graphs published in newspapers and reports to be easily understandable by the public. Hence, studies focusing on visualizing uncertainty that assume no prior knowledge or information can be very insightful.

4 CONCLUSION

Uncertainty visualization is one of the most complex research areas in data visualization today. This work provided an overview of uncertainty visualization and the relationship between uncertainty and visualization. I divided the visualization pipeline into four phases and surveyed papers to study how uncertainty interacts with each phase of the process.
The work also investigated why the representation of uncertainty is not widely practiced by the data visualization community and the challenges viewers face when inferring from such graphs. Lastly, I discussed a few state-of-the-art methods for designing uncertainty visualizations and offered a glance into the interesting future research this field has to offer.<|im_end|> <|im_start|>assistant ### Review Title ### Review Text This paper does a good job of outlining the field of uncertainty visualization and explaining the challenges of the field. It doesn't, however, feel provocative or alternative in any manner. Its only "joke" value comes from the author using xkcd strips to illustrate the points. ### Review Rating ### Review Confidence <|im_end|> <|im_end|>
MZPGZsuwqr
NeurIPS.cc/2021/Workshop/DLDE
2021
A neural multilevel method for high-dimensional parametric PDEs
["Cosmas Hei\u00df", "Ingo G\u00fchring", "Martin Eigel"]
In scientific machine learning, neural networks have recently become a popular tool for learning the solutions of differential equations. However, practical results often conflict with the existing theoretical predictions in that the observed convergence stagnates early. A substantial improvement can be achieved by the presented multilevel scheme, which decomposes the considered problem into easier-to-train sub-problems, resulting in a sequence of neural networks. The efficacy of the approach is demonstrated for high-dimensional parametric elliptic PDEs that are common benchmark problems in uncertainty quantification. Moreover, a theoretical analysis of the expressivity of the developed neural networks is devised.
["scientific machine learning", "deep learning", "partial differential equations", "multigrid methods", "uncertainty quantification"]
A neural multilevel method for high-dimensional parametric PDEs

Cosmas Heiß (Technical University of Berlin, cosmas.heiss@gmail.com), Ingo Gühring (Technical University of Berlin, guehring@math.tu-berlin.de), Martin Eigel (Weierstraß Institute, martin.eigel@wias-berlin.de). Cosmas Heiß and Ingo Gühring contributed equally (co-first authors).

Abstract

In scientific machine learning, neural networks have recently become a popular tool for learning the solutions of differential equations. However, practical results often conflict with the existing theoretical predictions in that the observed convergence stagnates early. A substantial improvement can be achieved by the presented multilevel scheme, which decomposes the considered problem into easier-to-train sub-problems, resulting in a sequence of neural networks. The efficacy of the approach is demonstrated for high-dimensional parametric elliptic PDEs that are common benchmark problems in uncertainty quantification. Moreover, a theoretical analysis of the expressivity of the developed neural networks is devised.

1 Introduction

The application of current machine learning methods to problems based on mathematical models has become one of the most promising rising research areas. In engineering and the natural sciences in particular, complicated mathematical models (usually formulated in terms of differential equations) form the basis for new insights and groundbreaking technologies. Since the computation of such models can be extremely time consuming, the use of machine learning tools was obvious and has become increasingly common in recent years (this is often coined scientific machine learning, SciML). Consequently, significant progress has been made on the one hand to understand the expressivity requirements for representing solutions of differential equations [17, 23, 5, 19, 14]. On the other hand, a plethora of architectures has been devised to implement differential equation solvers [4, 21, 24, 25]. Interestingly, while analytical results predict a superior capacity of deep neural networks, e.g. for the important cases of partial differential equations (PDEs) and stochastic differential equations (SDEs), observed results unfortunately do not reflect the theory [1]. Instead, the convergence of the error typically stagnates after some training iterations, as illustrated e.g. in [13, 16]. In particular, it often does not reach the accuracy of "classical" best-in-class methods (e.g. stochastic Galerkin or collocation, tensor regression, compressed sensing).

In the following, we are concerned with learning the solution map $v: \mathbb{R}^p \to H^1_0(D)$ of linear parametric PDEs of the form

$\mathcal{D}(v; y) := f - A(y)v(y) = 0, \quad (1)$

defined with a linear differential operator $A$, fixed right-hand side $f$, appropriate boundary conditions, and a $p$-dimensional parameter vector $y \in \mathbb{R}^p$ (determining the model data) on some domain $D \subset \mathbb{R}^d$. Such (possibly very high-dimensional) PDEs are common problems in the field of Uncertainty Quantification (UQ). They can be used as meaningful benchmark problems to assess the quality of novel SciML methods. Note that in contrast to many classical machine learning problems, a dataset can be generated on the fly with arbitrary precision using the Finite Element (FE) method.

We recollect the notion of multilevel methods, which are ubiquitous in numerical methods. These consist of a hierarchical construction of the approximation, relying on a set of successively refined discretizations and hence exploiting a decomposition of the problem complexity.
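Since the paper notes that training data can be generated on the fly, here is a hedged Python sketch (my own illustration, not the authors' code) of what such a generator can look like for a 1D finite-difference stand-in for the FE solver, with a hypothetical affine coefficient $a(x, y) = 1 + 0.5\sum_m y_m \sin(\pi m x)/m^2$ chosen so that it stays uniformly positive:

```python
import numpy as np

def solve_diffusion_1d(y, n=256, p=10):
    """Solve -(a(x,y) u')' = 1 on (0,1), u(0)=u(1)=0, by finite differences.

    y : parameter vector of length p;
    a(x,y) = 1 + 0.5 * sum_m y_m * sin(pi*m*x) / m^2  (hypothetical coefficient
    mimicking decaying planar Fourier modes; 0.5*sum 1/m^2 < 1 keeps a > 0).
    """
    x = np.linspace(0, 1, n + 1)
    xm = 0.5 * (x[:-1] + x[1:])                 # coefficient at cell midpoints
    m = np.arange(1, p + 1)
    a = 1.0 + 0.5 * (np.sin(np.pi * np.outer(xm, m)) / m**2) @ y
    h = 1.0 / n
    # tridiagonal stiffness matrix of the standard conservative FD scheme
    main = (a[:-1] + a[1:]) / h**2              # diagonal entries, unknowns 1..n-1
    off = -a[1:-1] / h**2                       # off-diagonal entries
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u_inner = np.linalg.solve(A, np.ones(n - 1))  # right-hand side f = 1
    return np.concatenate(([0.0], u_inner, [0.0]))  # reattach boundary values

rng = np.random.default_rng(0)
ys = rng.uniform(-1, 1, size=(100, 10))            # "uniform case" parameter samples
snapshots = np.stack([solve_diffusion_1d(y) for y in ys])  # training data on the fly
```

The real experiments use 2D FE discretizations, but the structure is the same: each parameter draw yields one solve, and each solve yields one training snapshot.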
Notably, for our method this means that in each approximation step, only the correction to the next finer level has to be represented in the respective sub-network.

The method that comes closest to our work was recently published in [18]. The three main differences are: a) We train the networks on each grid level to also correct the errors of the previous predictions (which turns out to be a significant design choice). b) We learn the solution $v(y)$ of the parametric PDE instead of some quantity of interest; for this to be feasible we make use of a compression of the FE basis. c) We examine the required theoretical network complexity and give an evaluation on standard UQ problems. Our numerical examples illustrate that the multilevel construction leads to a stable convergence of the parametric solutions which is comparable to best-in-class methods such as the ones in [9, 11, 12, 10, 6, 2].

2 A neural multilevel method

To construct the multilevel scheme for the considered PDE, we denote by $V_1 \subset \dots \subset V_L \subset H^1_0(D)$ a nested sequence of FE spaces resulting from $L$ successive (uniform) refinements of a coarse grid on the spatial domain $D$, and let $v_\ell: \mathbb{R}^p \to V_\ell$ be the function which maps a (stochastic) parameter $y \in \mathbb{R}^p$ to the FE solution $v_\ell(y)$ of (1). Note that we can express the solution on the finest mesh $v_L$ as a sum of corrections

$v_L(y) = v_1(y) + \sum_{\ell=2}^{L} \big(v_\ell(y) - v_{\ell-1}(y)\big) =: \sum_{\ell=1}^{L} \tilde{v}_\ell(y),$

where each $\tilde{v}_\ell(y) \in V_\ell$. Since $\dim V_\ell$ grows exponentially with $\ell$, we make use of a proper orthogonal decomposition (POD) [3, 22, 20] to yield spaces $U_\ell = \mathrm{span}\{u^1_\ell, \dots, u^{\dim U_\ell}_\ell\}$ of lower dimension which allow an efficient approximation of the corrections $\tilde{v}_\ell(y)$.

In our neural multilevel method, we use neural networks $\mathrm{NN}_\ell: \mathbb{R}^p \to \mathbb{R}^{\dim U_\ell}$ for $\ell = 1, \dots, L$ to learn the basis coefficients $c^i_\ell(y)$ of the approximate subspace corrections

$v_L(y) = \sum_{\ell=1}^{L} \tilde{v}_\ell(y) \approx \sum_{\ell=1}^{L} \sum_{i=1}^{\dim U_\ell} c^i_\ell(y)\, u^i_\ell \approx \sum_{\ell=1}^{L} \sum_{i=1}^{\dim U_\ell} [\mathrm{NN}_\ell(y)]_i\, u^i_\ell.$

To avoid an accumulation of the individual errors of $\mathrm{NN}_\ell$ in the above sum, we train $\mathrm{NN}_\ell$ to also correct the errors of the output of the preceding levels (see Step 2 below for a precise description). For this, in addition to the parameter vector $y$, we also feed the output coefficients of all previous networks as input. We want to point out that correcting preceding errors provides an essential improvement over the method in [18] (see Section 3). For simplicity, in the following we just write $\mathrm{NN}_\ell(y)$ for the output vector of the $\ell$-th network (instead of $\mathrm{NN}_\ell(y; \mathrm{NN}_{\ell-1}, \dots, \mathrm{NN}_1)$).

To generate the training data, we draw $N \in \mathbb{N}$ samples $y_1, \dots, y_N$ and compute solutions $v_\ell(y_1), \dots, v_\ell(y_N)$ for each $\ell = 1, \dots, L$ using the FE method.
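Before stating the algorithm, here is a hedged Python sketch (my own, not the authors' implementation) of the POD step used throughout: given a snapshot matrix of solutions or corrections, a truncated SVD yields the reduced basis $u^i_\ell$ and the coefficient targets for the networks. Note the simplification flagged in the comments: this uses the plain Euclidean inner product, whereas the paper's $H^1_0$ norms would require a weighted SVD.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-7):
    """POD of a snapshot matrix (dof x N) via truncated SVD.

    Returns the reduced basis U_r (dof x r) and the coefficient targets
    C = U_r^T snapshots (r x N) used as regression targets for NN_l.
    NOTE: plain l2 inner product for brevity; the paper's H^1_0 setting
    would call for a stiffness-matrix-weighted SVD instead.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    # keep modes until the retained singular-value energy exceeds 1 - tol
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol) + 1)
    U_r = U[:, :r]
    return U_r, U_r.T @ snapshots

# usage with the snapshot generator sketched above (columns = solutions v_1(y_k)):
# basis, coeffs = pod_basis(snapshots.T)
```

In the multilevel loop below, the same routine is applied first to the level-1 solutions and then, level by level, to the residual corrections $v_\ell(y_k) - w_{\ell-1}(y_k)$.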
The iterative training procedure of our neural multilevel method now reads as follows:

Algorithm 2.1.
Step 1: (i) Compute the POD basis functions $u^1_1, \dots, u^{\dim U_1}_1$ from the snapshot matrix of FE solutions $[v_1(y_1), \dots, v_1(y_N)]$ subject to a desired accuracy.
(ii) Train the first network $\mathrm{NN}_1$ to minimize
$\sum_{k=1}^{N} \Big\| \sum_{i=1}^{\dim U_1} [\mathrm{NN}_1(y_k)]_i\, u^i_1 - v_1(y_k) \Big\|^2_{H^1_0(D)}.$
Step 2: For $\ell = 2, \dots, L$:
(i) Successively update the current approximation
$w_{\ell-1}(y_k) := \sum_{j=1}^{\ell-1} \sum_{i=1}^{\dim U_j} [\mathrm{NN}_j(y_k)]_i\, u^i_j.$
(ii) Compute the POD basis functions $u^1_\ell, \dots, u^{\dim U_\ell}_\ell$ from the snapshot matrix of corrections $[v_\ell(y_1) - w_{\ell-1}(y_1), \dots, v_\ell(y_N) - w_{\ell-1}(y_N)]$.
(iii) Train the network $\mathrm{NN}_\ell$ to approximate the correction by minimizing
$\sum_{k=1}^{N} \Big\| \sum_{i=1}^{\dim U_\ell} [\mathrm{NN}_\ell(y_k)]_i\, u^i_\ell - \big(v_\ell(y_k) - w_{\ell-1}(y_k)\big) \Big\|^2_{H^1_0(D)}.$

3 Numerical examples

To assess the performance of Algorithm 2.1, we consider a common high-dimensional PDE and compare the solution approximation of our neural multilevel scheme with two related NN models. More specifically, we use the stationary parametric diffusion problem $-\mathrm{div}(T(a(y))\nabla u(y)) = 1$ with homogeneous Dirichlet boundary conditions on the unit square. The conductivity is given by $a(y) = \bar{a} + \sum_{m=1}^{p} a_m y_m$, where the $a_m \sim m^{-2}$ are planar Fourier modes, cf. [9]. We consider the "uniform case" with $T = \mathrm{id}$, $\bar{a} = 1$, $p = 50$ and $y$ drawn from $\mathcal{U}([-1, 1]^p)$, as well as the severely more involved "log-normal case" with $T = \exp$, $\bar{a} = 1$, $p = 50$ and $y$ drawn from $\mathcal{N}(0, I)$. Algorithm 2.1 is employed with $L = 7$ feed-forward ReLU neural networks $\mathrm{NN}_1, \dots, \mathrm{NN}_L$, each consisting of two hidden layers with 512 nodes per layer.

We analyse the approximation quality of our neural multilevel scheme by comparing it with the closely related architecture of [18] (denoted "Lye et al.") on an independent test set consisting of 1024 randomly drawn samples. (We point out that the original method from [18] aims at predicting quantities of interest; as it is the closest method to ours found in the literature, we adapted their approach to the setting considered in this paper.) Moreover, we compare the results to a single larger feed-forward ReLU model (denoted "Single Net") with roughly the same parameter count as all the multilevel networks of one architecture combined. "Single Net" is trained to directly predict an approximation on the finest mesh. For all three approaches, a POD is utilized with a prescribed tolerance of $10^{-7}$. The network training is based on $10^5$ samples that are randomly drawn according to the distribution of the respective test case. On the finest level 7, the mesh consists of about $2 \times 10^5$ triangles with first-order Lagrange elements.

In Figure 1, the $H^1_0$-error with respect to a high-fidelity reference solution of our method and "Lye et al." is displayed for an increasing number of levels (solid line). It is noteworthy that our "ML Net" essentially keeps its convergence rate throughout, while "Lye et al." flattens out, starting at level 4 or 5, respectively. We also reach a relative error that is much better than what is shown in other NN methods such as [13]. The reason for this may be found in the subtle difference between the two methods: "Lye et al." predicts the level corrections independently of the neural network approximations of the previous level (and their errors), leading to a successive error accumulation from level to level. In contrast, on each level, "ML Net" is aware of the previous errors and trained to correct them. The $H^1_0$-error with respect to a high-fidelity reference solution projected onto the corresponding grid (dotted line) supports this hypothesis.
Itsperformance is comparable to “Lye et al”.The results suggest that it is advantageous to learn the corrections iteratively with respect to thepreceding approximation to mitigate error accumulation instead of learning the grid correctionsindependently. Throughout the experiments, our method is the only one whose approximation erroris not significantly larger than the FE grid approximation error.3We point out, that the original method from [ 18] aims at predicting quantities of interest. As it is the closestmethod to ours found in the literature, we adapted their approach to the setting considered in this paper.3Figure 1: Relative energy errors evaluated on an independent test set for the uniform case with p= 50(left) and the log-normal case with p= 50 (right).4 Theoretical analysisIn this section, we analyse the complexity (in terms of the number of weights) of the neural networksin our multilevel method4. The proof is based on the well-established ability of neural networks toapproximate polynomials efficiently [ 15,8,26] and will be published in our upcoming paper. Ourresult is in line with the literature [ 7, Section 3.9] and shows that for fixed stochastic dimension p, toreach the accuracy of the FE solution on the finest grid (L)it suffices to have a polynomiallygrowing number of weights assuming a suitable dimensionality reduction. As expected, this ratedeteriorates with increasing stochastic parameter dimension. A related analysis without using amultilevel scheme is presented in [23, 17].Theorem 4.1. For eachL2Nthere exist neural networks NN 1;:::;NNLwhere the total numberof weights is bounded byLX`=1#NN`.L2p+2+LpLX`=1dimU`;such that the multilevel Algorithm 2.1 together with a POD of accuracy "PODachieves the precisionon the order of the FE solution on the finest grid (up to a multiplication of L)LX`=1dimU`Xi=1[NN`(y)]iui`v(y)H10(D)2LL+"PODkfk:Analysing Single Net in a similar way, we can show that its required complexity grows asymptoticallyfaster than the bound in Theorem 4.1. This crucial observation shows the efficiency of the multilevelansatz in terms of its approximative power.5 ConclusionWe describe and numerically demonstrate a neural multilevel scheme which is comparable to best-in-class performance for the solution approximation of high-dimensional parametric PDEs as commonlyencountered in UQ. As such, the proposed scheme outperforms other neural network approachesfrom the recent literature. Regarding analytical aspects, we provide complexity bounds which behavefavourably with respect to the expected theoretical convergence for this type of problem. Moreover,asymptotic estimates also show the advantage of the proposed multilevel structure. For detailedtheoretical results, we refer to our upcoming paper.4For the analysis to be tractable, we ignore the dependence of the corrections on the output of the previouslevels (see Algorithm 2.1 Step 2).4Nevertheless, despite the favourable observed performance, there are still several open questions. Inparticular, a theoretical non-asymptotic multilevel advantage as in other UQ methods has not beenuncovered yet. Moreover, the required POD reduces the scalability of the approach and preventsan application when regularity of the parameter to solution map is low. In future work, one mightconsider different architectures to lessen the dependence on dimensionality reduction methods usinge.g. CNNs.References[1]Ben Adcock and Nick Dexter. The gap between theory and practice in function approximationwith deep neural networks. 
SIAM Journal on Mathematics of Data Science , 3(2):624–655,2021.[2]Ivo Babuška, Fabio Nobile, and Raul Tempone. A stochastic collocation method for ellipticpartial differential equations with random input data. SIAM Journal on Numerical Analysis ,45(3):1005–1034, 2007.[3]G Berkooz, P Holmes, and J L Lumley. The proper orthogonal decomposition in the analysis ofturbulent flows. Annual Review of Fluid Mechanics , 25(1):539–575, 1993.[4]Julius Berner, Markus Dablander, and Philipp Grohs. Numerically solving parametric familiesof high-dimensional kolmogorov partial differential equations via deep learning. Advances inNeural Information Processing Systems , 33, 2020.[5]Julius Berner, Philipp Grohs, and Arnulf Jentzen. Analysis of the generalization error: Empiricalrisk minimization over deep artificial neural networks overcomes the curse of dimensionality inthe numerical approximation of black–scholes partial differential equations. SIAM Journal onMathematics of Data Science , 2(3):631–657, 2020.[6]Jean-Luc Bouchot, Holger Rauhut, and Christoph Schwab. Multi-level compressed sens-ing petrov-galerkin discretization of high-dimensional parametric pdes. arXiv preprintarXiv:1701.01671 , 2017.[7]Albert Cohen and Ronald DeV ore. Approximation of high-dimensional parametric PDEs. ActaNumerica , 24:1–159, 2015.[8]Tim De Ryck, Samuel Lanthaler, and Siddhartha Mishra. On the approximation of functions bytanh neural networks. Neural Networks , 143:732–750, 2021.[9]Martin Eigel, Claude Jeffrey Gittelson, Christoph Schwab, and Elmar Zander. Adaptivestochastic Galerkin FEM. Computer Methods in Applied Mechanics and Engineering , 270:247–269, 2014.[10] Martin Eigel, Manuel Marschall, Max Pfeffer, and Reinhold Schneider. Adaptive stochasticGalerkin FEM for lognormal coefficients in hierarchical tensor representations. NumerischeMathematik , 145(3):655–692, 2020.[11] Martin Eigel, Max Pfeffer, and Reinhold Schneider. Adaptive stochastic Galerkin FEM withhierarchical tensor representations. Numerische Mathematik , 136(3):765–803, 2017.[12] Martin Eigel, Reinhold Schneider, Philipp Trunschke, and Sebastian Wolf. Variational MonteCarlo – bridging concepts of machine learning and high-dimensional partial differential equa-tions. Advances in Computational Mathematics , 45(5-6):2503–2532, 2019.[13] Moritz Geist, Philipp Petersen, Mones Raslan, Reinhold Schneider, and Gitta Kutyniok. Nu-merical solution of the parametric diffusion equation by deep neural networks. arXiv preprintarXiv:2004.12131 , 2020.[14] Philipp Grohs, Fabian Hornung, Arnulf Jentzen, and Philippe V on Wurstemberger. A proof thatartificial neural networks overcome the curse of dimensionality in the numerical approximationof black-scholes partial differential equations. arXiv preprint arXiv:1809.02362 , 2018.[15] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks areuniversal approximators. Neural Networks , 2(5):359–366, 1989.5[16] Ehsan Kharazmi, Zhongqiang Zhang, and George Em Karniadakis. Variational physics-informedneural networks for solving partial differential equations. arXiv preprint arXiv:1912.00873 ,2019.[17] Gitta Kutyniok, Philipp Petersen, Mones Raslan, and Reinhold Schneider. A theoretical analysisof deep neural networks and parametric PDEs. arXiv preprint arXiv:1904.00377 , 2019.[18] Kjetil O. Lye, Siddhartha Mishra, and Roberto Molinaro. A multi-level procedure for enhancingaccuracy of machine learning algorithms, 2020.[19] Siddhartha Mishra and Roberto Molinaro. 
Estimates on the generalization error of physicsinformed neural networks (PINNS) for approximating PDEs. arXiv preprint arXiv:2006.16144 ,2020.[20] A. Quarteroni, A. Manzoni, and F. Negri. Reduced Basis Methods for Partial DifferentialEquations: An Introduction . UNITEXT. Springer International Publishing, 2015.[21] Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks:A deep learning framework for solving forward and inverse problems involving nonlinear partialdifferential equations. Journal of Computational Physics , 378:686–707, 2019.[22] Gianluigi Rozza, David Phuong, and Anthony Patera. Reduced basis approximation and aposteriori error estimation for affinely parametrized elliptic coercive partial differential equations.Archives of Computational Methods in Engineering , 15, 09 2008.[23] Christoph Schwab and Jakob Zech. Deep learning in high dimension: Neural network expressionrates for generalized polynomial chaos expansions in UQ. Analysis and Applications , 17(01):19–55, 2019.[24] Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solvingpartial differential equations. Journal of computational physics , 375:1339–1364, 2018.[25] E Weinan, Jiequn Han, and Arnulf Jentzen. Deep learning-based numerical methods forhigh-dimensional parabolic partial differential equations and backward stochastic differentialequations. Communications in Mathematics and Statistics , 5(4):349–380, 2017.[26] Dmitry Yarotsky. Error bounds for approximations with deep ReLU networks. Neural Networks ,94:103–114, 2017.6
Xd187ZLOMc
Solid works
I think this article is written well. It contains both an empirical study and a theoretical analysis, which are quite solid. The proposed method uses a neural network to learn the basis coefficients in the multilevel scheme, which works well for a stationary parametric diffusion problem. To further demonstrate the feasibility of the proposed algorithm, I suggest that more extensive experiments, including other forms of PDEs, be done in the future. By the way, I am not an expert on multilevel methods in numerical analysis, so I was confused when I encountered the jargon "FE basis"; I hope the authors could give its full name to ease the confusion of readers. Some grammar problems are listed as follows: Line 39: We also train the networks to correct.. Line 135: "such as" cannot be used together with "e.g."
3: The reviewer is fairly confident that the evaluation is correct
Byl1e1BFPH
ICLR.cc/2020/Conference
2020
Dual Sequential Monte Carlo: Tunneling Filtering and Planning in Continuous POMDPs
["Yunbo Wang", "Bo Liu", "Jiajun Wu", "Yuke Zhu", "Simon Shaolei Du", "Li Fei-Fei", "Joshua B. Tenenbaum"]
We present the DualSMC network that solves continuous POMDPs by learning belief representations and then leveraging them for planning. It is based on the fact that filtering, i.e. state estimation, and planning can be viewed as two related sequential Monte Carlo processes, with one in the belief space and the other in the space of future planning trajectories. In particular, we first introduce a novel particle filter network that makes better use of the adversarial relationship between the proposer model and the observation model. We then introduce a new planning algorithm over the belief representations, which learns uncertainty-dependent policies. We allow these two parts to be trained jointly with each other. We demonstrate the effectiveness of our approach on three continuous control and planning tasks: floor positioning, 3D light-dark navigation, and a modified Reacher task.
["planning", "tunneling filtering", "continuous pomdps", "belief representations", "dualsmc network", "fact", "state estimation"]
ABSTRACT

We present the DualSMC network that solves continuous POMDPs by learning belief representations and then leveraging them for planning. It is based on the fact that filtering, i.e. state estimation, and planning can be viewed as two related sequential Monte Carlo processes, with one in the belief space and the other in the space of future planning trajectories. In particular, we first introduce a novel particle filter network that makes better use of the adversarial relationship between the proposer model and the observation model. We then introduce a new planning algorithm over the belief representations, which learns uncertainty-dependent policies. We allow these two parts to be trained jointly with each other. We demonstrate the effectiveness of our approach on three continuous control and planning tasks: floor positioning, 3D light-dark navigation, and a modified Reacher task.

1 INTRODUCTION

Partially observable Markov decision processes (POMDPs) formulate reinforcement learning problems where the robot's instant observation is insufficient for optimal decision making (Kaelbling et al., 1998). POMDPs can easily become intractable in moderately large discrete spaces, let alone in continuous domains (Papadimitriou & Tsitsiklis, 1987). As a result, sampling-based methods are typically employed. For example, Monte Carlo tree search (MCTS) methods have shown success in relatively large POMDPs by constructing a search tree over histories based on rollout simulations (Silver & Veness, 2010; Seiler et al., 2015; Sunberg & Kochenderfer, 2018). Cross-entropy methods (CEM) (De Boer et al., 2005) update a distribution over policies based on sampled trajectories and have achieved promising results in Decentralized POMDPs (Oliehoek et al., 2008; Omidshafiei et al., 2016). Despite their effectiveness, MCTS methods often require a black-box simulator to generate planning trajectories, thus limiting their success to environments with known dynamics. CEM usually assumes all distributions are Gaussian and is therefore restricted to unimodal planning.

On the other hand, in the context of deep reinforcement learning, approximate solutions for solving POMDPs often directly encode past history with a recurrent neural network (RNN) and perform model-free planning on the latent representations of the RNN (Hausknecht & Stone, 2015; Zhu et al., 2018). Recent work further reduces the burden on the RNN by training an internal generative model for approximate inference of a latent belief (Igl et al., 2018). By doing end-to-end training on the neural architecture, the resulting models can solve complex POMDPs with visual inputs. Despite being simple and generic, these methods lose the ability to incorporate useful prior knowledge, like the state formulation, as the planning is completely based on the latent states of the RNN. Moreover, whenever these methods fail to perform well, it is difficult to analyze which part causes the failure.

In this work, we present the dual sequential Monte Carlo (DualSMC) model that aims to solve continuous POMDPs with complex unknown dynamics and high-dimensional observations, while preserving interpretability. In particular, DualSMC consists of two coupled inference processes: one for belief estimation over states, and the other for multi-modal density estimation over the optimal future trajectories. To connect the two parts, we feed the top particles and the weighted mean estimate from the first SMC, i.e. an adversarial particle filter, as the belief representation to the second one. From there, the second SMC explicitly takes uncertainty into consideration and does multi-modal planning. Note that the learned dynamics is efficiently shared between these two parts. The overall pipeline of our algorithm is summarized in Figure 1.

We evaluate our model on three continuous POMDP tasks: the floor-positioning task for explanatory purposes, the 3D light-dark navigation task simulated by DeepMind Lab (Beattie et al., 2016) with rich visual inputs, and a control task in a modified Mujoco (Todorov et al., 2012) environment. Our method consistently outperforms the baseline methods.

Figure 1: The pipeline of DualSMC. The planner and filter are linked via belief representations. (Filtering steps: weight particles based on the observation; resample old / propose new particles; transit all particles based on the MPC action from SMC planning. Planning step: SMC planning based on the top-M particles.)
From there, the second SMC explicitly takes uncertainty into consideration and performs multi-modal planning. Note that the learned dynamics model is efficiently shared between the two parts. The overall pipeline of our algorithm is summarized in Figure 1.

We evaluate our model on three continuous POMDP tasks: a floor-positioning task for explanatory purposes, a 3D light-dark navigation task simulated by DeepMind Lab (Beattie et al., 2016) with rich visual inputs, and a control task in a modified MuJoCo (Todorov et al., 2012) environment. Our method consistently outperforms the baseline methods.

[Figure 1: The pipeline of DualSMC. The planner and filter are linked via belief representations. Filtering steps: weight particles based on the observation; resample old / propose new particles; transit all particles based on the MPC action from SMC planning. Planning step: SMC planning based on the top-M particles.]

2 PRELIMINARIES
In this section, we provide a brief review of the key concepts behind this work.

Continuous POMDPs. A continuous POMDP can be specified as a 7-tuple $(\mathcal{S}, \mathcal{A}, \mathcal{T}, \mathcal{R}, \Omega, \mathcal{Z}, \gamma)$, where $\mathcal{S}$, $\mathcal{A}$ and $\Omega$ are the underlying continuous state, action and observation spaces. We denote $s_t \in \mathcal{S}$ as the underlying state at time $t$. When the robot takes an action $a_t \in \mathcal{A}$ according to a policy $\pi(a_t \mid o_{\le t}, a_{<t})$, the state changes to $s_{t+1}$ with probability $\mathcal{T}(s_{t+1} \mid s_t, a_t)$. The robot then receives a new observation $o_{t+1} \sim \mathcal{Z}(o_{t+1} \mid s_{t+1})$ and a reward $r_t \sim \mathcal{R}(s_t, a_t)$. Assuming episodes of fixed length $L$, the robot's objective is to maximize the expected cumulative future reward $G = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=1}^{L} \gamma^{t-1} r_t\right]$, where $\tau = (s_1, a_1, \ldots, a_L, s_{L+1})$ are trajectories induced by $\pi$, and $0 < \gamma \le 1$ is the discount factor. In general, the optimal policy has to take the entire history into consideration, which grows with the number of time steps. Instead of remembering the entire history, classical methods often maintain a belief over possible states and gradually filter out the true state. Denoting $bel(s_t) \triangleq p(s_t \mid o_{\le t}, a_{<t})$, the belief is updated according to $bel(s_{t+1}) = \eta \int bel(s_t)\, \mathcal{Z}(o_{t+1} \mid s_{t+1})\, \mathcal{T}(s_{t+1} \mid s_t, a_t)\, ds_t$, where $\eta$ is a normalization factor.

Particle filter (PF). A particle filter uses a set of particles $\{(s_t^{(k)}, w_t^{(k)})\}_{k=1}^{K}$ with $\sum_{k=1}^{K} w_t^{(k)} = 1$ to approximate the belief distribution $bel(s_t)$. With this approximation scheme, the belief update reduces to updating individual particles: $s_{t+1}^{(k)} \sim \mathcal{T}(\cdot \mid s_t^{(k)}, a_t)$ and $w_{t+1}^{(k)} \propto \mathcal{Z}(o_{t+1} \mid s_{t+1}^{(k)})\, w_t^{(k)}$. Particle filters benefit from their flexibility to approximate any distribution. In practice, when the true dynamics $\mathcal{T}$ and $\mathcal{Z}$ are not known a priori, we can use parametrized functions $\mathcal{T}_\psi(\cdot)$ and $\mathcal{Z}_\theta(\cdot)$ to approximate them. Despite its simplicity and efficiency, a particle filter can suffer from the particle degeneracy problem, in which most of the probability mass concentrates on a few particles. A well-known solution is particle resampling, which bootstraps new particles of equal weights from the old ones. Recent advances in particle filters (Karkus et al., 2018b; Jonschkowski et al., 2018) adopt neural networks as the parametrized transition and observation functions and make the filtering process differentiable. By doing so, particle filters can be applied more easily to problems with complex, high-dimensional observations such as images. While previous work focuses on belief-space filtering, we take one step further and propose a planning method.
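To make the recursion above concrete, the following is a minimal NumPy sketch of one bootstrap particle-filter step (predict, reweight, resample). The `transition` and `obs_likelihood` callables, and the toy Gaussian models in the usage lines, are stand-ins of our own choosing rather than the learned $\mathcal{T}_\psi$ and $\mathcal{Z}_\theta$ described later.

```python
import numpy as np

def pf_step(particles, weights, action, obs, transition, obs_likelihood, rng):
    """One bootstrap particle-filter update: predict, reweight, resample.
    `transition` and `obs_likelihood` stand in for the (learned) models
    T(s' | s, a) and Z(o | s')."""
    particles = transition(particles, action, rng)        # predict
    weights = weights * obs_likelihood(obs, particles)    # reweight
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy 1-D usage with Gaussian dynamics and observation noise (illustrative only).
rng = np.random.default_rng(0)
transition = lambda s, a, rng: s + a + 0.1 * rng.standard_normal(s.shape)
obs_likelihood = lambda o, s: np.exp(-0.5 * ((o - s[:, 0]) / 0.2) ** 2)
parts = rng.standard_normal((100, 1))
w = np.full(100, 1.0 / 100)
parts, w = pf_step(parts, w, action=0.5, obs=0.7,
                   transition=transition, obs_likelihood=obs_likelihood, rng=rng)
```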
Sequential Monte Carlo planning (SMCP). The task of planning can also be regarded as an inference problem, provided that the likelihood of any future trajectory is proportional to its expected cumulative reward. This idea is connected to the control-as-inference framework (Todorov, 2008; Toussaint, 2009; Kappen et al., 2012), where control problems are solved by forming a probabilistic graphical model. Specifically, if we denote $O_t$ as the optimality variable and define $p(O_t = 1) \propto \exp(\mathcal{R}(s_t, a_t))$ as the probability of time step $t$ being optimal (rewards are assumed to be scaled so that this defines a valid probability), the optimal plan corresponds to the maximum a posteriori estimate conditioned on the optimality of all future steps. We can solve this inference problem again with an SMC. For a complete derivation, please refer to Levine (2018) and Piche et al. (2018). Notably, Piche et al. (2018) first used sequential Monte Carlo for planning (SMCP) in Markov decision processes. We extend the idea to partially observable domains.

3 METHOD
In this section, we present DualSMC, which consists of two interrelated sequential Monte Carlo processes. We first introduce a more sampling-efficient adversarial particle filter for learning the belief representation, and then an uncertainty-aware planning algorithm based on these representations.

3.1 ADVERSARIAL PARTICLE FILTERING
Similar to the architecture in Jonschkowski et al. (2018), our differentiable PF contains three neural modules: an observation model $\mathcal{Z}_\theta(o_t \mid s_t^{(k)})$ that weights each particle given the current observation, a proposer $\mathcal{P}_\phi(s_{new}^{(k)} \mid o_t, \epsilon_P)$ that suggests new probable particles, and a stochastic transition model $\mathcal{T}_\psi(s_t \mid s_{t-1}, a_{t-1}, \epsilon_T)$ that mimics the environment dynamics. Here, $\epsilon_P$ and $\epsilon_T$ are Gaussian noise variables for stochasticity. At time $t$, we re-sample $K'$ particles $\{s_{old}^{(k)}\}_{k=1}^{K'}$ transited from the previous step based on the updated weights, and combine them with $(K - K')$ newly proposed particles $\{s_{new}^{(k)}\}_{k=K'+1}^{K}$. (Depending on the task, we can keep $K'$ constant or let $(K - K')$ follow an exponential decay.) Naturally, $\mathcal{P}_\phi$ can be trained by regressing the proposed state to the true state.

For successful POMDP planning, the state estimation particle filter had better be really sampling efficient and provide a belief representation based on the most plausible states for the downstream planner. We observe that $\mathcal{P}_\phi$ and $\mathcal{Z}_\theta$ are opposed to yet dependent on each other. Following the intuition that leveraging this adversarial relationship would enhance both parts and thus help narrow down the belief space of interest, we propose the adversarial particle filter. In particular, we train $\mathcal{Z}_\theta$ to differentiate the true state from all particle states, and train $\mathcal{P}_\phi$ to fool $\mathcal{Z}_\theta$. Formally, denoting $p_{real}$ as the real joint distribution over $(s, o)$, $\mathcal{Z}_\theta$ and $\mathcal{P}_\phi$ play the following two-player minimax game with value function $F(\mathcal{Z}_\theta, \mathcal{P}_\phi)$:

$$\min_{\phi}\max_{\theta} F(\mathcal{Z}_\theta, \mathcal{P}_\phi) = \mathbb{E}_{s,o \sim p_{real}(\cdot)}\Big[ \log \mathcal{Z}_\theta(o \mid s) + \mathbb{E}_{s' \sim \{s_{old}^{(k)}\}}\big[\log(1 - \mathcal{Z}_\theta(o \mid s'))\big] + \mathbb{E}_{\epsilon_P \sim \mathcal{N}(0,I)}\big[\log(1 - \mathcal{Z}_\theta(o \mid \mathcal{P}_\phi(o, \epsilon_P)))\big] \Big]. \quad (1)$$
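As an illustration of the loss structure in Equation 1, here is a PyTorch sketch. The MLP architectures for $\mathcal{Z}_\theta$ and $\mathcal{P}_\phi$ are placeholders we invented (the paper's actual networks are described in its Appendix C), and the alternating-update details are simplified.

```python
import torch
import torch.nn as nn

STATE_DIM, OBS_DIM, NOISE_DIM = 2, 4, 8

# Placeholder architectures (our assumption): Z scores a (state, observation)
# pair, P proposes a state from an observation plus Gaussian noise.
Z = nn.Sequential(nn.Linear(STATE_DIM + OBS_DIM, 64), nn.ReLU(),
                  nn.Linear(64, 1), nn.Sigmoid())
P = nn.Sequential(nn.Linear(OBS_DIM + NOISE_DIM, 64), nn.ReLU(),
                  nn.Linear(64, STATE_DIM))

def adversarial_losses(s_real, o_real, s_old):
    """Two-player objective of Eq. (1): Z scores the true state high and all
    particle states low; P tries to fool Z with its proposals."""
    eps = torch.randn(o_real.shape[0], NOISE_DIM)
    s_prop = P(torch.cat([o_real, eps], dim=-1))
    d_real = Z(torch.cat([s_real, o_real], dim=-1))
    d_old = Z(torch.cat([s_old, o_real], dim=-1))
    d_prop = Z(torch.cat([s_prop, o_real], dim=-1))
    # Discriminator-style loss for Z; the proposal branch is detached so that
    # only P's alternating update moves the proposer.
    loss_Z = -(torch.log(d_real + 1e-8)
               + torch.log(1 - d_old + 1e-8)
               + torch.log(1 - d_prop.detach() + 1e-8)).mean()
    # P minimizes log(1 - Z(o | P(o, eps))), the third term of F; in practice,
    # step only P's optimizer on this loss.
    loss_P = torch.log(1 - d_prop + 1e-8).mean()
    return loss_Z, loss_P

batch = 32
lz, lp = adversarial_losses(torch.randn(batch, STATE_DIM),
                            torch.randn(batch, OBS_DIM),
                            torch.randn(batch, STATE_DIM))
```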
3.2 DUALSMC PLANNING WITH BELIEF REPRESENTATIONS
A straightforward solution to POMDP planning is to train the planning module separately from the filtering module. At inference time, plans are made based on sampled individual particles from the state filter. We call this planning algorithm particle-independent SMC planning (PI-SMCP) and use it as a baseline method; more details can be found in Appendix A. Nevertheless, PI-SMCP does not perceive the state uncertainty. By contrast, the proposed DualSMC explicitly considers the belief distribution by planning directly on an approximated belief representation, i.e. a combination of the top candidates from the state filter as well as the weighted mean estimate.

Algorithm 1: Overall DualSMC filtering and planning method
1: $\{s_1^{(k)} \sim \mathrm{Prior}(s_1)\}_{k=1}^{K}$, $\{w_1^{(k)} = 1\}_{k=1}^{K}$ ▷ define initial filtering particles
2: for $t = 1 : L$ do ▷ at each control time step
3:   $\{w_t^{(k)} \propto w_{t-1}^{(k)} \mathcal{Z}_\theta(s_t^{(k)}, o_t)\}_{k=1}^{K}$ ▷ update particle weights
4:   $bel_t = \sum_k w_t^{(k)} s_t^{(k)}$ ▷ compute mean belief state
5:   $\{\tilde{s}_t^{(m)}, \tilde{w}_t^{(m)}\}_{m=1}^{M} = \text{Top-}M(\{s_t^{(k)}, w_t^{(k)}\}_{k=1}^{K})$ w.r.t. $\{w_t^{(k)}\}_{k=1}^{K}$ ▷ take top-M particles
6:   $a_t = \text{DualSMC-P}(bel_t, \{\tilde{s}_t^{(m)}, \tilde{w}_t^{(m)}\}_{m=1}^{M}, \pi, Q_\omega)$ ▷ perform planning (Alg. 2)
7:   $o_{t+1}, r_t \sim p_{env}(a_t)$ ▷ step the environment
8:   $\{s_t^{(k)}\}_{k=1}^{K'} \sim \mathrm{Multinomial}(\{s_t^{(k)}\}_{k=1}^{K})$ w.r.t. $\{w_t^{(k)}\}_{k=1}^{K}$ ▷ resample particle states
9:   $\{s_t^{(k)} \sim \mathcal{P}_\phi(o_t)\}_{k=K'+1}^{K}$, $\{w_t^{(k)} = 1\}_{k=1}^{K}$ ▷ propose new particles
10:  $\{s_{t+1}^{(k)} \sim \mathcal{T}_\psi(s_t^{(k)}, a_t)\}_{k=1}^{K}$ ▷ predict next-step particles
11:  add $(s_t, s_{t+1}, a_t, r_t, o_t, bel_t, \{\tilde{s}_t^{(m)}, \tilde{w}_t^{(m)}\}_{m=1}^{M})$ to a training buffer
12:  sample a batch from the buffer and update the parameters of all modules ($\mathcal{Z}_\theta$, $\mathcal{P}_\phi$, $\mathcal{T}_\psi$, $\pi$, $Q_\omega$) ▷ train all modules
13: end for

We present the details of DualSMC in Algorithm 1. At time step $t$, when a new observation arrives, we first use $\mathcal{Z}_\theta$ to update the particle weights (line 3, Alg. 1) and then perform the planning procedure in Algorithm 2. We duplicate the top-$M$ particles and the mean belief state $N$ times as the root states of $N$ planning trajectories (lines 1-2, Alg. 2). Different from the previous SMCP method (Piche et al., 2018) under full observations, the policy network $\pi$ perceives the belief representation and predicts an action based on the top-$M$ particle states as well as the mean belief state (line 4, Alg. 2).

Algorithm 2: DualSMC planning with filtered belief representations
Input: mean belief state $bel_t$, top-$M$ filtering particles $\{\tilde{s}_t^{(m)}, \tilde{w}_t^{(m)}\}_{m=1}^{M}$, policy network $\pi$, value network $Q_\omega$, planning start time $t$, planning horizon $H$
1: $\{\tilde{w}_t^{(m)}\}_{m=1}^{M} = \mathrm{Normalize}(\{\tilde{w}_t^{(m)}\}_{m=1}^{M})$ ▷ normalize top-M filtering particle weights
2: $\{\hat{s}_t^{(m)(n)} = \tilde{s}_t^{(m)}\}_{m=1}^{M}$, $\hat{w}_{t-1}^{(n)} = 1$, $bel_t^{(n)} = bel_t$ for $n = 1{:}N$ ▷ duplicate the initial state set N times
3: for $i = t : t+H$ do ▷ at each planning time step
4:   $a_i^{(n)} \sim \pi(\cdot \mid \{\hat{s}_i^{(m)(n)}\}_{m=1}^{M}, bel_i^{(n)})$ ▷ sample N actions
5:   $\hat{s}_{i+1}^{(m)(n)}, r_i^{(m)(n)} \sim \mathcal{T}_\psi(\hat{s}_i^{(m)(n)}, a_i^{(n)})$ ▷ predict next-step states and rewards
6:   $bel_{i+1}^{(n)} \sim \mathcal{T}_\psi(bel_i^{(n)}, a_i^{(n)})$ ▷ predict next-step mean belief state
7:   $\hat{w}_i^{(n)} \propto \hat{w}_{i-1}^{(n)} \exp\big(\sum_m \tilde{w}_t^{(m)} A^{(m)(n)}\big)$ ▷ update weights with Equation 2
8:   $x_i^{(n)} = (\{\hat{s}_{i+1}^{(m)(n)}, \hat{s}_i^{(m)(n)}\}_{m=1}^{M}, bel_{i+1}^{(n)}, a_i^{(n)})$ ▷ assemble planning trajectories
9:   $\{x_{t:i}^{(n)}\}_{n=1}^{N} \sim \mathrm{Multinomial}(\{x_{t:i}^{(n)}\}_{n=1}^{N})$ w.r.t. $\{\hat{w}_i^{(n)}\}_{n=1}^{N}$ ▷ resample trajectories
10:  $\hat{w}_i^{(n)} = 1$
11: end for
12: select $a_t$ = first action of $x_{t:t+H}^{(n)}$, where $n \sim \mathrm{Uniform}(1, \ldots, N)$ ▷ return an action

We then apply the $N$ actions to the $M \times N$ states and use $\mathcal{T}_\psi$ to predict the next states and rewards (line 5, Alg. 2). Since future observations $o_{>t}$ are not available at the current time step, we approximate $bel_{i>t}^{(n)}$ by re-using $\mathcal{T}_\psi$ (line 6, Alg. 2); like QMDP (Littman et al., 1995), DualSMC assumes the uncertainty disappears on the next time step and performs model-based planning. We update the planning weight of each trajectory by summarizing the advantages of its states using the initial $M$ belief weights (line 7, Alg. 2). Here, we introduce an alternative advantage formulation that is equivalent to the one used in Piche et al. (2018):

$$A^{(m)(n)} = \mathrm{TD}_{i-1}^{(m)(n)} - \log \pi\big(a_i^{(n)} \mid \{\hat{s}_i^{(m)(n)}\}_{m=1}^{M}, bel_i^{(n)}\big), \quad (2)$$

where $\mathrm{TD}_{i-1}^{(m)(n)} = Q_\omega(\hat{s}_i^{(m)(n)}, a_i^{(n)}) - Q_\omega(\hat{s}_{i-1}^{(m)(n)}, a_{i-1}^{(n)}) + r_{i-1}^{(m)(n)}$. At time $t$, $Q_\omega(\hat{s}_{i-1}^{(m)(n)}, a_{i-1}^{(n)})$ and $r_{i-1}^{(m)(n)}$ are set to 0. We emphasize that our formulation is much simpler: it only requires a learned $Q$ function and, more importantly, it avoids estimating the expectation of the value function $V$. We leave the full derivation to Appendix B.2.
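The core of Algorithm 2 is sequential importance resampling over whole action trajectories. The sketch below strips the algorithm down to a single root state per trajectory (instead of the $M$ particles plus mean belief used above), with `policy`, `dynamics`, and `advantage` as stand-in callables; it illustrates the control flow, not the paper's exact implementation.

```python
import numpy as np

def smc_plan(belief, policy, dynamics, advantage, horizon, n_traj, rng):
    """Stripped-down Algorithm 2: sequential importance resampling over N
    candidate trajectories, weighted by exponentiated advantages."""
    states = np.repeat(belief[None], n_traj, axis=0)       # duplicate root N times
    first_actions, log_w = None, np.zeros(n_traj)
    for _ in range(horizon):
        actions = policy(states, rng)                      # line 4: sample N actions
        states, rewards = dynamics(states, actions, rng)   # lines 5-6: predict
        log_w += advantage(states, actions, rewards)       # line 7: reweight
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_traj, size=n_traj, p=w)         # line 9: resample (SIR)
        states = states[idx]
        first_actions = actions[idx] if first_actions is None else first_actions[idx]
        log_w[:] = 0.0                                     # line 10: reset weights
    return first_actions[rng.integers(n_traj)]             # line 12: MPC action
```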
At the end of each planning time step, we apply sequential importance resampling (SIR) over the $N$ planning trajectories (lines 9-10, Alg. 2). When the planning horizon is reached, i.e. $i = t + H$, we sample one planning trajectory and feed its first action to the environment (line 7, Alg. 1). We then go back to the filtering part and update the belief representation by resampling, proposing, and predicting the next-step particle states (lines 8-10, Alg. 1). Lastly, we train all modules of DualSMC, including the policy network, the critic network, and the adversarial particle filtering networks (lines 11-12, Alg. 1). We give details of each network component in Appendix C.
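Lines 3-5 of Algorithm 1, which package the filter output into the belief representation consumed by the planner, reduce to a few array operations. A sketch, assuming `obs_model` returns a per-particle likelihood score:

```python
import numpy as np

def belief_representation(particles, weights, obs, obs_model, m_top):
    """Algorithm 1, lines 3-5: reweight with the observation model, then
    expose the weighted mean plus the top-M particles to the planner."""
    weights = weights * obs_model(obs, particles)              # line 3
    weights = weights / weights.sum()
    mean_belief = (weights[:, None] * particles).sum(axis=0)   # line 4
    top = np.argsort(weights)[-m_top:]                         # line 5: top-M indices
    return mean_belief, particles[top], weights[top]
```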
4 EXPERIMENT

4.1 FLOOR POSITIONING
A robot localizes itself and approaches a target region on a 2D plane. It knows the distances to the nearest walls, $o_t = (d_{x-}, d_{x+}, d_{y-}, d_{y+})_t$, but does not know its world coordinates $(s_x, s_y)$. It starts from a random location and must head to different target regions depending on which floor it is on. To reach the correct target, the robot has to learn to reduce its uncertainty about $s_y$. The action is defined as $a = (\Delta s_x, \Delta s_y)$ with a maximum magnitude of 0.05. Only at training time, a reward of 100 is given at the end of an episode if the robot reaches the correct target region. Implementation details of DualSMC can be found in Appendix D.1.

Qualitative results. Figures 2(a) and 2(b) show results from applying the standard SMCP (Piche et al., 2018) to the top-1 particle state, using a regressive PF for the first baseline and an adversarial PF for the second. (The term "regressive" here indicates that the proposer of the particle filter is trained with the mean squared error between the proposed states and the true states; kernel density estimation can serve as an alternative objective.) As shown by the blue crosses in Figure 2(a), training the proposer with the mean squared loss amounts to regressing the proposed particles to the mean values of the multi-modal state distributions under partial observations; thus, the robot cannot make reasonable decisions. By contrast, as shown in Figure 2(b), training the PF adversarially makes the proposed particles more akin to plausible states. The robot learns an interesting policy: moving right wherever the starting point is, and then bouncing back at the right wall. However, this policy is clearly not optimal, as it does not consider the uncertainty of the belief state. Figure 2(c) shows the results of DualSMC. We have three findings. First, the robot learns a policy that localizes itself quickly and then approaches the target area with converged belief states, so it can finish the task successfully in fewer steps. Second, the robot learns the multi-modal distributions of the planning trajectories and can move up or down with different probabilities and advantage values. Third, the observation model works well: once the robot steps across the blue line, the belief states quickly converge to the actual values.

[Figure 2: (a) regressive particle filter + SMCP; (b) adversarial particle filter + SMCP; (c) DualSMC planner with adversarial particle filter. The left column shows the actual trajectories of the robot; the other columns show its planning trajectories at different times. The robot can better localize itself after stepping across the dashed blue lines.]

Table 1: Floor positioning results averaged over 1,000 test episodes, including the success rate and the mean and standard deviation of the number of steps.

Method | Success rate | # Steps | Std
LSTM filter + SMCP | 23.5% | 149.1 | 81.2
DVRL (Igl et al., 2018) | 38.3% | 162.4 | 19.9
Regressive PF (top-1) + SMCP | 25.0% | 107.9 | 69.8
Adversarial PF (top-1) + SMCP | 95.0% | 73.3 | 44.0
DualSMC w/o proposer | 78.9% | 64.9 | 62.2
DualSMC with regressive PF (MSE loss) | 45.7% | 121.6 | 69.2
DualSMC with regressive PF (density loss) | 54.0% | 109.2 | 81.8
DualSMC with adversarial PF | 99.4% | 36.8 | 15.1

Quantitative comparisons. Table 1 shows that, first, the DualSMC planning algorithm achieves the highest success rate using the fewest steps. Second, the adversarial PF outperforms other forms of PFs, as well as a deterministic LSTM model (LSTMs were previously used as strong baselines by Karkus et al. (2018b) and Jonschkowski et al. (2018)). DualSMC models with regressive proposers are even worse than one without any proposer, illustrating that the quality of the belief representation matters; erroneous state estimations harm the final planning results.

Is the DualSMC filter better than the traditional PF? Given partial observations, an ideal filter should derive a complete distribution over possible states instead of a point estimate. Figure 3(a) compares the averaged RMSE between the filtered results of different models and the true states. The adversarial PF performs best, while the PF with the regressive proposer performs even worse than the one without a proposer. A natural question arises: since the filtering error also depends on the different trajectories the models follow, can we eliminate this interference? For Figure 3(b), we train the different filters without a planner, with all filters following the same expert trajectories. The adversarial PF still achieves the best performance. Figure 3(c) explores the effect of the number of filtering particles; using too few particles deteriorates the filtering performance.

[Figure 3: Averaged filtering error (RMSE) as a function of the number of robot steps for floor positioning. (a) Using different models; (b) tracking an expert robot; (c) changing the number of particles.]

Is the DualSMC planner aware of state uncertainty? In fully observable scenarios, we suppress the filtering part of DualSMC and assume DualSMC plans upon a converged belief on the true state $(s_x, s_y)$. The DualSMC planner then makes different decisions and walks straight to the target region (see Figure 4). It performs on par with the standard SMCP, with an average of 21.3 steps (vs. 20.7 steps for SMCP) and a 100.0% success rate. We may conclude that DualSMC does not produce policies by memorizing them, but by perceiving the distribution of the belief state. We may also conclude that DualSMC trained under POMDPs generalizes well to similar tasks with less uncertainty.
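The filtering-error curves of Figure 3 correspond to a simple per-step metric. A sketch, assuming the error is measured between the weighted-mean particle estimate and the true state (the paper does not spell out the estimator, so treat this as one plausible reading):

```python
import numpy as np

def filtering_rmse(particle_hist, weight_hist, true_states):
    """Per-step RMSE between the weighted-mean particle estimate and the
    true state, as in Figure 3 (the choice of estimator is our assumption)."""
    errs = []
    for parts, w, s_true in zip(particle_hist, weight_hist, true_states):
        estimate = (w[:, None] * parts).sum(axis=0)
        errs.append(np.sqrt(np.mean((estimate - s_true) ** 2)))
    return np.array(errs)
```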
[Figure 4: The DualSMC planner trained under POMDPs generates different policies according to the uncertainty of the belief state. (a) DualSMC with partial observations; (b) DualSMC with full observations. For (b), we assign the true state values to the top-M particles before planning.]

4.2 3D LIGHT-DARK NAVIGATION
We extend the 2D light-dark navigation task (Platt Jr et al., 2010) to a visually rich environment simulated by DeepMind Lab (Beattie et al., 2016). In each episode, the robot is placed uniformly at random on one of the 4 platforms at the bottom. The robot's goal is to navigate toward the central cave (marked in orange) while avoiding any of the 4 traps (marked by crosses). The maze is divided into upper and lower parts. Within the lower part, the robot travels in darkness, receives noisy visual input of a limited range (up to a fixed depth), and therefore suffers from high state uncertainty. When the robot gets to the upper part (the blue area), it has a clear view of the entire maze. We place decals as visual hints on the top walls of the maze to help the robot figure out its location; however, it has to be very close to the upper walls to see clearly what these decals are. The robot receives a positive reward of 100 when it reaches the goal and a negative reward of -100 when it falls into a trap. At each time step, the robot's observation includes a 64x64 RGB image, its current velocity, and its orientation. Since the movement command in DeepMind Lab is binary (forward/backward), we force the robot to move forward and only let it control its orientation, which we make continuous.

[Figure 5: DualSMC learns to go up to the wall of decals first before turning back to the goal (particle distributions and planning trajectories shown at t = 1, 8, 12, 30, 60, 88). Note that the robot figures out its location at t = 30 and where to start turning back at t = 60.]

The experiment results are summarized in Table 2. By considering the uncertainty, DualSMC methods outperform the other baselines and are the only methods that learned to go up and determine their location first before heading directly toward the goal (see Figure 5). However, we notice that DualSMC with the adversarial proposer performs slightly worse than with a proposer trained with the density loss. This might be caused by the instability of adversarial training. For simplicity, we use the naive adversarial training method of the original generative adversarial networks (Goodfellow et al., 2014), which can easily suffer from mode collapse, leading to particle degeneracy in our case. One may potentially improve DualSMC with modern techniques, e.g. the Wasserstein distance (Arjovsky et al., 2017), the gradient penalty (Gulrajani et al., 2017), or spectral normalization (Miyato et al., 2018), to stabilize training or guarantee the existence of a unique Nash equilibrium.
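Of the stabilizers listed above, spectral normalization is the most mechanical to adopt, since it wraps individual layers. A hypothetical spectrally normalized variant of the observation model (the layer sizes are our assumption, not the paper's architecture):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Hypothetical spectrally normalized observation model (Miyato et al., 2018);
# each linear layer's spectral norm is constrained during training.
Z_stable = nn.Sequential(
    spectral_norm(nn.Linear(6, 64)), nn.ReLU(),
    spectral_norm(nn.Linear(64, 1)), nn.Sigmoid(),
)
```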
Table 2: Experimental results on the 3D navigation task, averaged over 100 episodes after 2,000 episodes of training. Methods with a filter are trained (tested) with 100 (400) particles.

Method | Success rate | # Steps
LSTM + SMCP | 59% | 85.40
Adversarial PF (top-1) + SMCP | 58% | 56.11
Adversarial PF (top-3) + PI-SMCP | 64% | 64.37
DualSMC with regressive PF (MSE loss) | 93% | 64.32
DualSMC with regressive PF (density loss) | 97% | 73.86
DualSMC with adversarial PF | 95% | 65.13

4.3 MODIFIED REACHER
We further validate our model on a continuous control task with partial observation: a modified Reacher environment from OpenAI Gym (Brockman et al., 2016). The original Reacher observation is an 11-D vector $(\cos\theta_1, \cos\theta_2, \sin\theta_1, \sin\theta_2, g_x, g_y, \omega_1, \omega_2, r_x, r_y, r_z)$, where the first 4 dimensions are the cosines and sines of the two joint angles $\theta_1, \theta_2$; $(g_x, g_y)$ is the goal position; $\omega_1, \omega_2$ are the angular velocities; and $(r_x, r_y, r_z)$ is the relative distance from the end-effector to the goal. We remove $g_x, g_y, r_x, r_y, r_z$ from the original observation and include a single scalar $r = \|(r_x, r_y, r_z)\|_2 + \epsilon_r$, where $\epsilon_r \sim \mathcal{N}(0, 0.01)$ is a small noise term (note that $r$ is usually on the scale of 0.1). The resulting observation is therefore a 7-D vector. The robot has to simultaneously locate the goal and reach it.

[Figure 6: The modified Reacher environment and the goal estimation of DualSMC over time (t = 0, 6, 16, 48), showing the goal position, proposed particles, and resampled particles.]

We provide a visualization of one sample run under DualSMC with the adversarial filter in Figure 6. As expected, the proposed particles initially lie roughly on a half-circle and, as time goes on, gradually concentrate around the true goal. Since the final performance of the various methods is similar after long enough training, we provide the smoothed training curves in Figure 7 to showcase the advantage of DualSMC. We truncate the results at 5,000 episodes, since no obvious change in performance is observed from thereon. As we can see, DualSMC methods not only achieve asymptotic performance similar to the SMCP method with full observation but also learn to solve the task faster than the baseline methods.

[Figure 7: Smoothed training curves (episodic reward vs. training episode) on modified Reacher for DualSMC (adv/density/mse), LSTM, PF (top-3) + PI-SMCP, PF (top-1) + SMCP, and SMCP on the full state. See experiment details in Appendix D.3.]
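The observation modification of Section 4.3 can be expressed as a gym-style wrapper. The index layout below follows the 11-D ordering described above and should be verified against the actual Reacher environment; whether $\mathcal{N}(0, 0.01)$ denotes a variance or a standard deviation is also our guess.

```python
import numpy as np
import gym

class PartialReacher(gym.ObservationWrapper):
    """Drop (g_x, g_y) and (r_x, r_y, r_z) from Reacher's 11-D observation and
    append the noisy distance r = ||(r_x, r_y, r_z)||_2 + eps_r. Index layout
    follows the ordering described above; verify it against your gym version,
    and note we read N(0, 0.01) as a standard deviation."""
    def observation(self, obs):
        rel = obs[8:11]                                   # (r_x, r_y, r_z)
        r = np.linalg.norm(rel) + np.random.normal(0.0, 0.01)
        kept = np.concatenate([obs[0:4], obs[6:8]])       # joint trig + angular vel.
        return np.concatenate([kept, [r]]).astype(np.float32)

# env = PartialReacher(gym.make("Reacher-v2"))  # requires MuJoCo
```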
5 RELATED WORK
Embedding useful algorithmic priors into a reinforcement learning model has been a recent trend, because pure model-free approaches can suffer from unstable training and slow convergence, problems that are exaggerated in partially observable domains. QMDP-Net (Karkus et al., 2017) extends the idea of the value iteration network (VIN) (Tamar et al., 2016) by representing the Bellman update of value iteration as convolution and pooling operations in a neural network. A similar idea has been extended to more visually rich domains (Karkus et al., 2018a; Shankar et al., 2016). However, the underlying state and action spaces remain simple because of the explicit value iteration procedure during the planning step. The idea of connecting a particle filter with a policy has been addressed by Coquelin et al. (2009); however, their policy is evaluated only on the mean particle state and is updated with a finite-difference method. Igl et al. (2018) introduced a variational lower bound to improve the original black-box learning of RNNs while maintaining a set of particles to address uncertainty, but in their work the latent particle states are not interpretable and known prior knowledge cannot be applied. Recently, Karkus et al. (2018b) and Jonschkowski et al. (2018) independently discovered methods to make the conventional particle filter differentiable in terms of neural networks, both showing that end-to-end training improves the filtering performance. Our work extends this idea to planning under uncertainty, and we introduce an alternative adversarial proposing strategy to further improve the differentiable filter.

Control-as-inference methods (Toussaint & Storkey, 2006; Rawlik et al., 2010; Ziebart, 2010; Levine & Koltun, 2013) regard policy search as a probabilistic inference problem by interpreting rewards as the log-likelihood of task fulfillment. It is therefore possible to adopt useful statistical tools to solve the control and planning problem, by maximizing the likelihood of high-reward future trajectories or by estimating the posterior distribution of actions conditioned on the optimality of future trajectories. Levine (2018) provides a comprehensive review of these methods. While previous work mainly focuses on simplified dynamics and simple state/action spaces, Piche et al. (2018) extended the idea to more complicated task domains by adopting a sequential Monte Carlo approach to select trajectories based on how promising they are, while learning the model-based knowledge necessary for planning. All of the above algorithms assume perfect observability and plan in the state space, whereas our work generalizes the idea to belief-space planning.

6 CONCLUSION
In this paper, we presented a new method named DualSMC for solving continuous POMDPs, which has three advantages. First, it learns plausible belief representations for high-dimensional POMDPs with an adversarial particle filter. Second, it plans future actions by considering the distributions of the learned belief representations; the filter module and the planning module are inter-dependent and jointly trained. Third, DualSMC combines the richness of neural networks with the interpretability of classical sequential Monte Carlo methods. We empirically validated the effectiveness of DualSMC on different tasks including visual navigation and control.
H1gXKDdpKS
Official Blind Review #3
1: Reject
The contribution of the paper is a method for filtering and planning as joint probabilistic inference. For that, the duality of inference and control is used, such that an adversarial particle filtering approach can be employed. Several experiments that support the claim are presented.

I recommend rejecting the paper. The main reasons are the sub-standard presentation of the idea as well as the lack of justification for design decisions.

I find the paper confusing. While the high-level idea is simple, the authors chose not to start the presentation from a reasonable point (i.e., the graphical model) and then introduce the necessary approximations to make the problem tractable; instead, several things are introduced ad hoc. E.g., what is the motivation to choose the mean state and the top-M particles? Why was that representation chosen? Were other representations tried? Under what conditions is this representation reasonable, and when might it fail? The answers given are not sufficient.

The text and the algorithm boxes do not serve complementary purposes. Instead, the text merely reiterates what is in the algorithm boxes.

Further points for improvement:
- Language. There are grammatical mistakes in the paper, e.g. missing pronouns; some sentences "do not compile"; the language is sloppy ("[...] had better be really sampling efficient [...]").
- Eq. 1 has two expectations over s. Where does the first s come from?
- "Only requires a learned Q function" does not seem very "only" to me. These are typically hard to obtain! Also, why is this better than using the true model (which can equally be required to be available) for Monte Carlo approximations?
- When I started reading the paper, I was under the impression that it is about continuous-time POMDPs. This could be clearer.
- For p(O_t = 1) \propto exp(R), the assumption that R <= 0 is required.
- Page 2, paragraph 1: it reads as if the history grew exponentially. I assume the authors were referring to the policy, but that is an unnatural assessment, as policies in continuous spaces are typically continuous as well, making a cardinality argument cumbersome.
- The literature on variational particle filtering is not respected. Starting points are [1, 2, 3].
- Regression does not imply root mean squared error. Counter-example: logistic regression.

References
[1] Gu, Shixiang Shane, Zoubin Ghahramani, and Richard E. Turner. "Neural adaptive sequential Monte Carlo." Advances in Neural Information Processing Systems. 2015.
[2] Naesseth, Christian A., et al. "Variational sequential Monte Carlo." arXiv preprint arXiv:1705.11140 (2017).
[3] Maddison, Chris J., et al. "Filtering variational objectives." Advances in Neural Information Processing Systems. 2017.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Dual Sequential Monte Carlo: Tunneling Filtering and Planning in Continuous POMDPs ### Paper Abstract We present the DualSMC network that solves continuous POMDPs by learning belief representations and then leveraging them for planning. It is based on the fact that filtering, i.e. state estimation, and planning can be viewed as two related sequential Monte Carlo processes, with one in the belief space and the other in the future planning trajectory space. In particular, we first introduce a novel particle filter network that makes better use of the adversarial relationship between the proposer model and the observation model. We then introduce a new planning algorithm over the belief representations, which learns uncertainty-dependent policies. We allow these two parts to be trained jointly with each other. We testify the effectiveness of our approach on three continuous control and planning tasks: the floor positioning, the 3D light-dark navigation, and a modified Reacher task. ### Paper Keywords ["planning", "tunneling filtering", "continuous pomdps", "belief representations", "dualsmc network", "fact", "state estimation"] ### Paper Content ABSTRACTWe present the DualSMC network that solves continuous POMDPs by learningbelief representations and then leveraging them for planning. It is based on thefact that filtering, i.e. state estimation, and planning can be viewed as two relatedsequential Monte Carlo processes, with one in the belief space and the other in thefuture planning trajectory space. In particular, we first introduce a novel particlefilter network that makes better use of the adversarial relationship between theproposer model and the observation model. We then introduce a new planningalgorithm over the belief representations, which learns uncertainty-dependentpolicies. We allow these two parts to be trained jointly with each other. We testifythe effectiveness of our approach on three continuous control and planning tasks:the floor positioning, the 3D light-dark navigation, and a modified Reacher task.1 I NTRODUCTIONPartially observable Markov Decision Processes (POMDPs) formulate reinforcement learning prob-lems where the robot’s instant observation is insufficient for optimal decision making (Kaelblinget al., 1998). POMDPs can easily become intractable in moderately large discrete space, let alone incontinuous domains (Papadimitriou & Tsitsiklis, 1987). As a result, sampling-based methods aretypically employed. For example, Monte Carlo tree search (MCTS) methods have shown success inrelatively large POMDPs by constructing a search tree of history based on rollout simulations (Sil-ver & Veness, 2010; Seiler et al., 2015; Sunberg & Kochenderfer, 2018). Cross-entropy methods(CEM) (De Boer et al., 2005) update a distribution over policies based on sampled trajectories andhave achieved promising results in Decentralized POMDPs (Oliehoek et al., 2008; Omidshafiei et al.,2016). Despite their effectiveness, MCTS methods often require a black-box simulator to generateplanning trajectory and thus limiting their success to environments with known dynamics. 
CEMusually assumes all distributions are Gaussian and therefore is restricted to unimodal planning.On the other hand, in the context of deep reinforcement learning, approximate solutions for solvingPOMDPs often directly encode past history with a recurrent neural network (RNN) and performsmodel-free planning on the latent representations of the RNN (Hausknecht & Stone, 2015; Zhu et al.,2018). Recent work further reduces the burden on the RNN by training an internal generative modelfor approximate inference of a latent belief (Igl et al., 2018). By doing end-to-end training on theneural architecture, the resulting models can solve complex POMDPs with visual inputs. Despitebeing simple and generic, these methods lose the ability to incorporate useful prior knowledge likethe state formulation as both the planning is completely based on the latent states of RNN. Moreover,whenever these methods fail to perform well, it is difficult to analyze which part causes the failure.In this work, we present the dual sequential Monte Carlo (DualSMC) model that aims to solvecontinuous POMDPs with complex unknown dynamics and high dimensional observations, whilepreserving the interpretability. In particular, DualSMC consists of two coupled inference processes:one for belief estimation over states, and the other for multi-modal density estimation over the optimalfuture trajectories. To connect the two parts, we feed top particles and the weighted mean estimatefrom the first SMC, i.e. an adversarial particle filter, as the belief representation to the second one.From there, the second SMC explicitly takes uncertainty into consideration and does multi-modalplanning. Note that the learned dynamics is efficiently shared between these two parts. The overallpipeline of our algorithm is summarized in Figure 1.We evaluate our model on three continuous POMDP tasks: the floor-positioning task for explanatorypurpose, the 3D light-dark navigation task simulated by DeepMind Lab (Beattie et al., 2016) withrich visual inputs, and a control task in a modified Mujoco (Todorov et al., 2012) environment. Ourmethod consistently outperforms the baseline methods.1Under review as a conference paper at ICLR 2020Weight particlesbased on observationSMC planning based on top-M (blue) particlesResample old / propose new particlesTransit all particles based on the MPC actionfrom SMC planningFiltering stepsPlanning stepTrue stateGoalFigure 1: The pipeline of DualSMC. The planner and filter are linked via belief representatives.2 P RELIMINARIESIn this section, we provide a brief review of the key concepts for the background of this work.Continuous POMDPs. A continuous POMDP can be specified as a 7-tuple (S;A;T;R;;Z;),whereS,Aandare underlying continuous state, action and observation spaces. We denotest2S as the underlying state at time t. When the robot takes an action at2A according to apolicy(atjot;a<t), the state changes to st+1with probabilityT(st+1jst;at). The robot willthen receive a new observation ot+1Z(ot+1jst+1)and a reward rtR(st;at). Assuming theepisodes are of fixed length L, the robot’s objective is then to maximize the expected cumulativefuture reward G=EPLt=1t1rt, where= (s1;a1;:::;aL;sL+1)are trajectories inducedby, and 0 < 1is the discount factor. In general, the optimal policy has to take the entirehistory into consideration, which can grow exponentially in time steps. Instead of rememberingthe entire history, classical methods often maintain a belief over possible states and graduallyfilter out the true state. 
Denote bel(st),p(stjot;a<t), then the belief is updated according tobel(st+1) =Rbel(st)Z(ot+1jst+1)T(st+1jst;at)dst, whereis a normalization factor.Particle filter (PF). A particle filter uses a set of particles f(s(k)t;w(k)t)gKk=1withPKk=1w(k)t= 1,to approximate the belief distribution bel(st). With this approximation scheme, belief update reducesto updating individual particles, s(k)t+1T(js(k)t;at)andw(k)t+1/Z(ot+1js(k)t+1)w(k)t. Particle filterbenefits from its flexibility to approximate any distribution. In practice, when the true dynamics TandZare not known a priori, we can use parametrized functions T ()andZ()to approximatetheir corresponding counterparts. Despite its simplicity and efficiency, particle filter can suffer fromthe particle degeneracy problem that most of the probability mass concentrates on a few particles. Awell-known solution is particle resampling, which bootstraps new particles of equal weights fromthe old ones. Recent advances in particle filters (Karkus et al., 2018b; Jonschkowski et al., 2018)adopt neural networks as the parametrized transition and observation functions and make the filteringprocess differentiable. By doing so, particle filters can be more easily applied to problems that havecomplex and high-dimensional observations like images. While previous work focuses on beliefspace filtering, we take one step further and propose a planning method.Sequential Monte Carlo planning (SMCP). The task of planning can also be regarded as aninference problem, provided that the likelihood of any future trajectory is proportional to its expectedcumulative rewards. This idea is connected to the control as inference framework (Todorov, 2008;Toussaint, 2009; Kappen et al., 2012), where control problems are solved by forming a probabilisticgraphical model. Specifically, if we denote Otas the optimality variable and define p(Ot= 1)/exp(R(st;at))as the probability of time step tbeing optimal, the optimal plan then corresponds tothe maximum a posterior estimate conditioned on the optimality of all future steps. We can solvethis inference problem again with an SMC. For a complete derivation, please refer to Levine (2018)and Piche et al. (2018). Notably, Piche et al. (2018) first uses the sequential Monte Carlo for planning(SMCP) in Markov decision processes. We extend the idea to partially observable domains.3 M ETHODIn this section, we present DualSMC, which consists of two interrelated sequential Monte Carloprocesses. We first introduce a more sampling-efficient adversarial particle filter for learning thebelief representation, then an uncertainty-aware planning algorithm based on these representations.2Under review as a conference paper at ICLR 20203.1 A DVERSARIAL PARTICLE FILTERINGSimilar to the architecture in Jonschkowski et al. (2018), our differentiable PF contains three neuralmodules, an observation model Z(otjs(k)t)that weights each particle given the current observation,a proposerP(s(k)newjot;P)for suggesting new probable particles, and a stochastic transition modelT (stjst1;at1;T)that mimics the environment dynamics. Here, PandTare Gaussian noise forstochasticity. 
At time t, we re-sample K0particles (fs(k)oldgK0k=1) transited from the previous step basedon the updated weight and combine them with (KK0)newly proposed particles ( fs(k)newgKk=K0+1).Naturally,Pis trained by regressing the proposed state to the true state.For successful POMDPs planning, the state estimation particle filter had better be really samplingefficient and provide a belief representation based on the most plausible states for the downstreamplanner. We observe that PandZare opposite yet dependent on each other. Following the intuitionthat leveraging this adversarial relationship would enhance both parts and thus help to narrow downthe belief space of interest, we propose the adversarial particle filter. In particular, we train Ztodifferentiate the true state from all particle states and train Pto foolZ. Formally, denote prealasthe real joint distribution over s;o,ZandPplay the following two-player minimax game withfunctionF(Z;P):minmaxF(Z;P) =Es;opreal()[ logZ(ojs) +Ess(k)oldlog(1Z(ojs))+EPN (0;I)log(1Z(ojP(o;P))) ]:(1)3.2 D UALSMC P LANNING WITH BELIEF REPRESENTATIONSA straightforward solution to POMDP planning is to train the planning module separately from thefiltering module. At inference time, plans are made based on sampled individual particles from thestate filter. We call this planning algorithm particle-independent SMC planning (PI-SMCP) and useit as a baseline method. More details about PI-SMCP can be found in Appendix A. Nevertheless,PI-SMCP does not perceive the state uncertainty. By contrast, the proposed DualSMC explicitlyconsiders the belief distribution by planning directly on an approximated belief representation, i.e. acombination of the top candidates from the state filter as well as the weighted mean estimate.Algorithm 1 Overall DualSMC filtering and planning method1:fs(k)1Priori (s1)gKk=1,fw(k)1= 1gKk=1 .Define initial filtering particles2:fort= 1 :Ldo .At each control time step3:fw(k)t/w(k)t1Z(s(k)t;ot)gKk=1 .Update particle weights4: belt=Pkw(k)ts(k)t .Compute mean belief state5:f~s(m)t;~w(m)tgMm=1=Top-M (fs(k)t;w(k)tgKk=1);w.r.t.fw(k)tgKk=1.Take top-Mparticles6:at=DualSMC-P (belt;f~s(m)t;~w(m)tgMm=1;;Q!) .Perform planning ( Alg 2)7:ot+1;rtpenv(at) .Update the environment8:fs(k)tgK0k=1Multinomial (fs(k)tgKk=1);w.r.t.fw(k)tgKk=1 .Resample particle states9:fs(k)tP(ot)gKk=K0+1;fw(k)t= 1gKk=1 .Propose new particles10:fs(k)t+1T (s(k)t;at)gKk=1 .Predict next-step particles11: Add (st;st+1;at;rt;ot;belt;f~s(m)t;~w(m)tgMm=1) to a training buffer12: Sample a batch from the buffer and update parameters ( ;!;; ; ).Train all modules13:end forWe present the details of DualSMC in Algorithm 1. At time step t, when a new observation comes,we first useZto update the particle weights ( line 3 Alg 1 ), and then perform the planning algorithmin Algorithm 2. We duplicate the top- Mparticles and the mean belief state Ntimes as the rootstates ofNplanning trajectories ( line 1-2 Alg 2 ). 
Different from the previous SMCP (Piche et al.,2018) method under full observations, the policy network perceives the belief representationsand predicts an action based on the top- Mparticle states as well as the mean belief state ( line 4 AlgDepending on the task, we can keep K0constant or make (KK0)follow an exponential decay.3Under review as a conference paper at ICLR 2020Algorithm 2 DualSMC planning with filtered belief representationsInput: mean belief state belt, top-Mfiltering particlesf~s(m)t;~w(m)tgMm=1, policy network ,value network Q!, planning start time t, planning horizon H1:f~w(m)tgMm=1=Normalize (f~w(m)tgMm=1).Normalize top- Mfiltering particle weights2:f^s(m)(n)t = ~s(m)tgMm=1;^w(n)t1= 1;bel(n)t=beltNn=1.Duplicate initial states set Ntimes3:fori=t:t+Hdo .At each planning time step4:a(n)i(f^s(m)(n)igMm=1;bel(n)i)Nn=1.SampleNactions5:^s(m)(n)i+1;r(m)(n)iT (^s(m)(n)i;a(n)i)M;Nm=1;n=1.Predict next-step states and rewards6:bel(n)i+1T (bel(n)i;a(n)i)Nn=1.Predict next-step mean belief state7:^w(n)i/^w(n)i1expPm~w(m)tA(m)(n)Nn=1.Update weights with Equation 28:x(n)i= (f^s(m)(n)i+1;^s(m)(n)igMm=1;bel(n)i+1;a(n)i)Nn=1.Assemble planning trajectories9:x(n)t:igNn=1Multinomial (fx(n)t:igNn=1);w.r.t.f^w(n)iNn=1.Resample trajectories10:^w(n)i= 1Nn=111:end for12:Selectat=first action of x(n)t:t+H, wherenUniform (1;:::;N ).Return an action2). We then perform Nactions toMNstates and use T to predict the next states and rewards(line 5 Alg 2 ). Since future observations o>tare not available at current time step, we approximatebel(n)i>tby re-usingT y(line 6 Alg 2 ). We update the planning weight of each planning trajectory bysummarizing the advantages of each state using the initial Mbelief weights ( line 7 Alg 2 ). Here, weintroduce an alternative advantage formulation that is equivalent to the one used in Piche et al. (2018)A(m)(n)=TD(m)(n)i1log(a(n)ijf^s(m)(n)igMm=1;bel(n)i); (2)where TD(m)(n)i1=Q!(^s(m)(n)i;a(n)i)Q!(^s(m)(n)i1;a(n)i1) +r(m)(n)i1. At timet,Q!(^s(m)(n)i1;a(n)i1)andr(m)(n)i1are set to 0. We emphasize that our formulation is much simpler. Because our formulationonly requires a learned Qfunction and more importantly, it prevents us from estimating the expectationof the value function V. We leave the full derivation in Appendix B.2.At the end of each planning time step, we apply the sequential importance resampling (SIR) overNplanning trajectories ( line 9-10 Alg 2 ). When the planning horizon is reached, where i=t+H,we sample one planning trajectory and feed its first action to the environment ( line 7 Alg 1 ). Wethen go back to the filtering part and update the belief representations by resampling, proposing, andpredicting the next-step particle states ( line 8-10 Alg 1 ). Lastly, we train all modules of DualSMC,including the policy network, the critic network as well as the adversarial particle filtering networks(line 11-12 Alg 1 ). We add details of each network component to Appendix C.4 E XPERIMENT4.1 F LOOR POSITIONINGA robot localizes itself and approaches a target region on a 2D plane. It knows the distances to thenearest walls: ot= (dx;dx+;dy;dy+)t, but does not know the world coordinates (sx;sy). Itstarts from a random location and is headed to different regions according to different floors. To reachthe correct target, the robot has to learn to reduce the uncertainty about sy. The action is defined asa= (sx;sy)with a maximum magnitude of 0:05. Only at training time, a reward of 100is givenat the end of each episode if the robot reaches the correct target region. 
Implementations details ofDualSMC can be found in Appendix D.1.Qualitative results. Figure 2(a) and 2(b) are results by applying the standard SMCP (Piche et al.,2018) to the top- 1particle state. We use different particle filters (PF) for these two baseline models.yLike QMDP (Littman et al., 1995), DualSMC assumes the uncertainty disappears on the next time step andperforms model-based planning.4Under review as a conference paper at ICLR 2020TrajectoriesW W Goal (unobservable)Planning trajectoryRobot’s initial states (unobservable)Resampled particle statesTrue states (unobservable)Proposed particle statesMean belief state(a) Regressive particle filter + SMCPTrajectoriesW t=36(b) Adversarial particle filter + SMCPTrajectoriest=3t=12(c) DualSMC planner with adversarial particle filterFigure 2: The left column shows the actual trajectories of the robot, and other columns are its planningtrajectories at different times. The robot can better localize itself stepping across dashed blue lines.Table 1: Floor positioning results averaged over 1;000test episodes, including the success rate, themean value and the standard deviation of numbers of steps.Method Success rate # Steps StdLSTM filter + SMCP 23.5% 149.1 81.2DVRL (Igl et al., 2018) 38.3% 162.4 19.9Regressive PF (top-1) + SMCP 25.0% 107.9 69.8Adversarial PF (top-1) + SMCP 95.0% 73.3 44.0DualSMC w/o proposer 78.9% 64.9 62.2DualSMC with regressive PF (MSE loss) 45.7% 121.6 69.2DualSMC with regressive PF (density loss) 54.0% 109.2 81.8DualSMC with adversarial PF 99.4% 36.8 15.1We use a regressive PFzfor the first one and an adversarial PF for the second one. As shown bythe blue crosses in Figure 2(a), training the proposer with the mean squared loss is equivalent toregressing the proposed particles to the mean values of the multi-modal state distributions underpartial observations. Thus, the robot cannot make reasonable decisions. By contrast, as shownin Figure 2(b), training the PF adversarially leads the proposed particles more akin to plausiblestates. The robot learns an interesting policy: moving right wherever the starting point is, and thenbouncing back at the right wall. However, this policy is clearly not optimal, as it does not considerthe uncertainty of the belief state. Figure 2(c) shows the results by DualSMC. We have three findings.First, the robot learns a policy to localize itself quickly and then approach the target area withconverged belief states, so that the robot can finish the task successfully with fewer steps. Second,the robot learns the multi-modal distributions of the planning trajectories and can move up or downwith different probabilities and advantage values. Third, the observation model works well. Once therobot steps across the blue line, the belief states quickly converge to the actual values.zThe term regressive here indicates that the proposer of the particle filter takes the mean square error betweenthe proposed states and the true states as the training objective function. We may also use the kernel densityestimation as an alternative objective function.5Under review as a conference paper at ICLR 2020Quantitative comparisons. Table 1 shows that, first, the DualSMC planning algorithm achievesthe highest success rate using the least number of steps. Second, the adversarial PF outperformsother forms of PFs, as well as a deterministic LSTM model (LSTMs are previously used as strongbaselines by Karkus et al. (2018b) and Jonschkowski et al. (2018)). 
DualSMC models with regressiveproposers are even worse than one without any proposers, illustrating that the quality of the beliefrepresentations is important; erroneous state estimations will harm the final planning results.Is the DualSMC filter better than the traditional PF? Given partial observations, an ideal filtershould derive a complete distribution of possible states instead of point estimation. Figure 3(a)compares the averaged RMSE between the filtered results of different models and the true states. Theadversarial PF performs best, while the PF with the regressive proposer performs even worse thanthat without a proposer. A natural question arises: as the filtering error is also related to differentmoving trajectories of different models, can we eliminate this interference? For Figure 3(b), we traindifferent filters without a planner. All filters follow the same expert trajectories. We can see thatthe adversarial PF still achieves the best performance. Figure 3(c) explores the effect of differentnumbers of filtering particles. Using too few particles will deteriorate the filtering performance.step20 40 60 80 100RMSE00.050.10.150.20.250.3Reg PF + SMCPAdv PF + SMCPDualSMC w/o proposerDualSMC with Reg PFDualSMC with Adv PF(a) Using different modelsstep10 20 30 40 50 60RMSE00.050.10.150.20.250.3PF w/o proposerRegressive PFAdversarial PF (b) Tracking an expert robotstep20 40 60 80 100RMSE00.050.10.150.20.250.3DualSMC, NumPar=30DualSMC, NumPar=300DualSMC, NumPar=3000 (c) Changing number of particlesFigure 3: Averaged filtering error as a function of the number of robot steps for floor positioning.Is the DualSMC planner aware of state uncertainty? In fully observable scenarios, we suppressthe filtering part of DualSMC and assume DualSMC plans upon a converged belief on the true state(sx;sy). The DualSMC planner makes different decisions to walk straight to the target region (seeFigure 4). It performs equally well to the standard SMCP, with an average of 21:3steps (v.s. 20:7steps for SMCP) and a 100:0%success rate. We may conclude that DualSMC does not providepolicies by remembering them, but by perceiving the distribution of the belief state. We may alsoconclude that DualSMC trained under POMDPs generalizes well to similar tasks with less uncertainty.(a) DualSMC with partial observations (b) DualSMC with full observationsFigure 4: The DualSMC planner trained under POMDPs generates different polices according to theuncertainty of belief state. For (b), we assign true state values to the top- Mparticles before planning.4.2 3D L IGHT -DARK NAVIGATIONWe extend the 2D light-dark navigation task (Platt Jr et al., 2010) to a visually rich environmentsimulated by DeepMind Lab (Beattie et al., 2016). At each episode, the robot is placed randomly anduniformly on one of the 4platforms at the bottom. Then, the robot’s goal is to navigate toward thecentral cave (marked in orange) while avoiding any of the 4traps (marked by crosses). The maze isdivided into upper and lower parts. Within the lower part, the robot travels in darkness, receives noisyvisual input of a limited range (up to a fixed depth), and therefore suffers from high state uncertainty.When the robot gets to the upper part (the blue area), it has a clear view of the entire maze. We placedecals as visual hints on the top walls of the maze to help the robot figure out its location. However,it has to be very close to the upper walls to see clearly what these decals are. 
The robot receives apositive reward of 100when it reaches the goal and a negative reward of 100when in a trap. Ateach time step, the robot’s observation includes a 6464RGB image, its current velocity, and itsorientation. Since the moving command in DeepMind Lab is binary (forward/backward), we forcethe robot to move forward and only let it control its orientation, which we make continuous.6Under review as a conference paper at ICLR 2020ObservationCurrent state (unknown)GoalEnd state (unknown)TrapResampled particlesPlanning trajectoryArea w/ true observationInitial state (unknown)Time stepParticle distribution and planning trajectory at time tt = 01t = 08t = 12t = 30t = 88t = 60Figure 5: DualSMC learns to go up to the wall of decals first before turning back to the goal. Notethat the robot figures out its location at t= 30 and where to start turning back at t= 60 .The experiment results are summarized in Table 2. By considering the uncertainty, DualSMC methodsoutperform other baselines and are the only methods that learned to go up and figure out its locationfirst before going directly towards the goal (see Figure 5). However, we notice that DualSMC withthe adversarial proposer performs slightly worse than with a proposer trained with density loss.This might be caused by the unstability of the adversarial training. For simplicity, we use the na ̈ıveadversarial training method as in the original generative adversarial networks (Goodfellow et al.,2014), which may easily suffer from mode collapse, leading to particle degeneracy in our case. Onemay potentially improve DualSMC with modern techniques, i.e. Wasserstein distance (Arjovskyet al., 2017), the gradient penalty (Gulrajani et al., 2017), and the spectral normalization (Miyatoet al., 2018), to stabilize training or guarantee the existence of a unique Nash equilibrium.Table 2: Experimental results on the 3D navigation task. Results are averaged over 100episodes after2;000episodes of training. Methods with a filter are trained (tested) with 100(400) particles.LSTM + SMCP 59% 85.40Adversarial PF (top-1) + SMCP 58% 56.11Adversarial PF (top-3) + PI-SMCP 64% 64.37DualSMC with regressive PF (MSE loss) 93% 64.32DualSMC with regressive PF (density loss) 97% 73.86DualSMC with adversarial PF 95% 65.134.3 M ODIFIED REACHERWe further validate our model on a continuous control task with partial observation, i.e. a modifiedReacher environment from OpenAI Gym (Brockman et al., 2016). The original observation ofReacher is a 11-D vector including (cos1;cos2;sin1sin2;gx;gy;!1;!2;rx;ry;rz), where thefirst 4 dimensions are cos/sin values of the two joint angles 1,2,gx,gythe goal position, !1;!2theangular velocities and rx;ry;rzthe relative distance from the end-effector to the goal. We removegx;gy;rx;ry;rzfrom the original observation and include a single scalar r=jj(rx;ry;rz)jj2+r,Goal positionProposed particlesResampled particlest = 0t = 6t = 16t = 48Figure 6: The modified Reacher environment and the goal estimation of DualSMC v.s. time.7Under review as a conference paper at ICLR 2020whererN(0;0:01)is a small noise (notice that ris usually on the scale of 0:1). The resultingobservation is therefore a 7-D vector. The robot has to simultaneously locate the goal and reach it.0 1000 2000 3000 4000 5000Training e isode−30−25−20−15−10E isodic rewardDualSMC (adv)DualSMC (density)DualSMC (mse)LSTMPF (top3) + PI-SMCPPF (top1) + SMCPSMCP on full stateFigure 7: Smoothed training curves on modifiedReacher. 
See experiment details in Appendix D.3.We provide a visualization of one sample rununder DualSMC with the adversarial filter in Fig-ure 6. As expected, initially the proposed parti-cles roughly are in a half-cycle and as time goeson, the particles gradually concentrate aroundthe true goal. Since the final performance of vari-ous methods is similar after long enough time oftraining, we provide the smoothed training curveof these methods in Figure 7 to showcase theadvantage of DualSMC. We truncate the resultsup to 5;000episodes since no obvious changein performance is observed from thereon. Aswe can see, DualSMC methods not only achievesimilar asymptotic performance as the SMCPmethod with full observation but also learn fasterto solve the task than baseline methods.5 R ELATED WORKEmbedding useful algorithmic prior to a reinforcement learning model has been a recent trend becausepure model-free approaches can suffer from unstable training and long convergence issues, whichare exaggerated in partially observable domains. QMDP-Net (Karkus et al., 2017) extends the ideafrom value iteration network (VIN) (Tamar et al., 2016) by representing the bellman update in valueiteration as convolution and pooling operations in a neural network. A similar idea has been extendedto more visually rich domain (Karkus et al., 2018a; Shankar et al., 2016). However, the underlyingstate and action space still remain simple because of the explicit value iteration procedure during theplanning step. The idea of connecting a particle filter with a policy has been addressed in Coquelinet al. (2009). The performance of their policy is only evaluated on the mean particle state and theyuse finite-difference method to update the policy. Igl et al. (2018) introduced a variational lowerbound to improve the original black-box learning of RNN while maintaining a set of particles toaddress uncertainty. But in their work, the latent particle states are not interpretable and knownprior knowledge could not apply. Recently, Karkus et al. (2018b) and Jonschkowski et al. (2018)independently discovered methods to make the conventional particle filter differentiable in terms ofneural networks, both showing that end-to-end training improves the filtering performance. Our workextends this idea to planning under uncertainty and we introduce an alternative adversarial proposingstrategy to further improve the differentiable filter.Control as inference methods (Toussaint & Storkey, 2006; Rawlik et al., 2010; Ziebart, 2010; Levine& Koltun, 2013) regard policy search as a probabilistic inference problem by interpreting rewards asthe log-likelihood of task fulfillment. It is, therefore, possible to adopt useful statistical tools to solvethe control and planning problem by maximizing the likelihood of high reward future trajectory, orestimating the posterior distribution of actions conditioned on the optimality of future trajectories.Levine (2018) provided a comprehensive review of these methods. While previous work mainlyfocuses on simplified dynamics and simple state/action space, Piche et al. (2018) extended the idea tomore complicated task domains by adopting the sequential Monte Carlo approach to select trajectoriesbased on how promising they are, and learning the model-based knowledge necessary for planning inthe meantime. 
All of the above algorithms assume perfect observability and plan in the state space, while our work generalizes the idea to belief-space planning.
6 CONCLUSION
In this paper, we provided a new method named DualSMC to solve continuous POMDPs, which has three advantages. First, it learns plausible belief representations for high-dimensional POMDPs with an adversarial particle filter. Second, it plans future actions by considering the distributions of the learned belief representations. The filter module and the planning module are inter-dependent and jointly trained. Third, DualSMC combines the richness of neural networks with the interpretability of classical sequential Monte Carlo methods. We empirically validated the effectiveness of DualSMC on different tasks including visual navigation and control.
### Review Title
Official Blind Review #3
### Review Text
The contribution of the paper is a method of filtering and planning as joint probabilistic inference. For that, the duality of inference and control is used, such that an adversarial particle filtering approach can be employed. Several experiments that support the claim are presented. I recommend to reject the paper from publication. The main reasons are the sub-standard presentation of the idea as well as unjustified design decisions.
I find the paper confusing. While the high-level idea is simple, the authors chose not to start the presentation from a reasonable point (i.e., the graphical model) and subsequently introduce the necessary approximations to make the problem tractable; several things are introduced ad hoc. E.g., what is the motivation to choose the mean state and the top-M particles? Why was that representation chosen? Were other representations tried? Under what conditions is this representation reasonable, and when might it fail? The answers given are not sufficient. The text and the algorithm boxes do not serve complementary purposes. Instead, the text merely reiterates what is in the algorithm box.
Further points for improvement:
- Language. There are grammatical mistakes in the paper, e.g. missing pronouns, some sentences "do not compile", and the language is sloppy ("[...] had better be really sampling efficient [...]").
- Eq 1 has two expectations over s. Where does the first s come from?
- "Only requires a learned Q function" does not seem very "only" to me. These are typically hard to obtain! Also, why is this better than using the true model (which can equally be required to be available) for Monte Carlo approximations?
- When I started reading the paper, I was under the impression that it is about continuous-time POMDPs. Could be clearer.
- For p(O_t = 1) \propto exp(R), the assumption that R <= 0 is required.
- Page 2, paragraph 1: it reads as if the history grew exponentially. I assume the authors were referring to the policy, but that is an unnatural assessment, as policies in continuous spaces are typically continuous as well, making a cardinality argument cumbersome.
- The literature on variational particle filtering is not respected. Starting points are [1, 2, 3].
- Regression does not imply root mean squared error. Counter example: logistic regression.
References
[1] Gu, Shixiang Shane, Zoubin Ghahramani, and Richard E. Turner. "Neural adaptive sequential Monte Carlo." Advances in Neural Information Processing Systems. 2015.
[2] Naesseth, Christian A., et al. "Variational sequential Monte Carlo." arXiv preprint arXiv:1705.11140 (2017).
[3] Maddison, Chris J., et al. "Filtering variational objectives." Advances in Neural Information Processing Systems. 2017.
### Review Rating
1: Reject
### Review Confidence
BTzgpgtNaGq
graphicsinterface.org/Graphics_Interface/2022/Conference
2022
Comparison of a VR Stylus with a Controller, Hand Tracking and a Mouse for Object Manipulation and Medical Marking Tasks in Virtual Reality
["Anonymous"]
For medical surgery planning, virtual reality (VR) provides a new kind of user experience, where 3D images of the operation area can be utilized. Using VR, it is possible to view the 3D models in a more realistic 3D environment, which would reduce the perception problems and increase spatial understanding. In the present experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks conducted in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The stylus and controller combination was the most preferred interaction method. In subjective results, it was considered as the most appropriate, while in objective results, the mouse interaction method was the most accurate.
["3D visualization", "virtual reality", "hand tracking", "controller interaction", "mouse interaction", "3D imaging"]
ABSTRACT
For medical surgery planning, virtual reality (VR) provides a new kind of user experience, where 3D images of the operation area can be utilized. Using VR, it is possible to view the 3D models in a more realistic 3D environment, which would reduce the perception problems and increase spatial understanding. In the present experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks conducted in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The stylus and controller combination was the most preferred interaction method. In subjective results, it was considered as the most appropriate, while in objective results, the mouse interaction method was the most accurate.
Index Terms: Human-centered computing—Human computer interaction (HCI)—Interaction devices—Pointing devices; Human-centered computing—Human computer interaction (HCI)—Empirical studies in HCI; Human-centered computing—Human computer interaction (HCI)—Interaction paradigms—Virtual reality
1 INTRODUCTION
Virtual reality makes it possible to create computer-generated environments that replace the real world. For example, the user can interact with virtual object models more flexibly using various interaction methods than with real objects in the real environment. VR has become a standard technology in research, but it has not been fully exploited in professional use even if its potential has been demonstrated.
In the field of medicine, x-ray imaging is routinely used to diagnose diseases and anatomical changes as well as for scientific surveys [31]. In many cases 2D medical images are satisfactory, but they can be complemented with 3D images for more complex operations where a detailed understanding of the 3D structures is needed.
When planning surgeries, medical doctors, surgeons, and radiologists study 3D images. Viewing the 3D images on 2D displays can present issues in controlling object position, orientation, and scaling. Using VR devices, like head mounted displays (HMD), 3D images can be more easily perceived when viewed and interacted with in a 3D environment than with a 2D display. For the medical professionals to be able to do the same tasks in VR as they do in 2D, the interaction methods need to be studied properly. The interaction method needs to be accurate, reasonable, and suitable for the medical tasks. Because we are talking about medical work, accuracy is crucial to avoid as many mistakes as possible. König et al. [21] studied adaptive pointing for the accuracy problems caused by hand tremor when pointing at distant objects. The interaction method also needs to be natural so that the doctors would use it in their daily work and so that they can still focus on their primary tasks without paying too much attention to the interaction method. One typical task for the doctors is marking anatomical structures and areas on the surface of the 3D model. The marked points create the operative area, or they can be used for training.
For 2D content, a mouse is one of the best options for interaction due to its capability to point at small targets with high accuracy and the fact that many users are already very experienced with this device [27]. A mouse cursor can be used for 3D pointing with ray-casting [34], which allows pointing at distant objects as well. The familiarity and accuracy make the mouse a worthy input method in VR, even though it is not a 3D input device.
In addition, controllers have been identified as an accurate interaction method [13, 17] and they are typically used in VR environments [22]. Controllers enable direct manipulation, and the reach of distant objects is different than with the mouse with ray-casting. Other devices, like styluses, have been studied in pointing tasks previously [27, 40]. Therefore, we aimed to investigate the performance of a stylus together with a controller in selected tasks.
The cameras and sensors on HMD devices also allow hand tracking without hand-held input devices. Pointing at objects with a finger is a natural way of acting for humans, so hand interaction can be expected to be received well. Hand interaction was selected as one of the conditions based on interviews of medical professionals and their expectations for the supporting technology.
We decided to use a marking task to assess the three interaction conditions. The conditions were a standard mouse, bare hands, and a handheld controller with a VR stylus. All methods were used in a VR environment to minimise additional variation between methods and to focus the comparison on interaction techniques. The use of the HMD also allowed the participants to easily study the target from different directions by moving their head. In the medical marking task the doctor will observe the anatomical structures by turning and moving the 3D object while looking for the best location for the mark. The time spent on the manipulation is not easily separated from the time spent on the final marking. The doctor decides during the manipulation from which angle and how the marking will be done, which will affect the marking time. This made application of Fitts' law [11] not possible in our study, as it requires that a participant cannot influence target locations.
We had 12 participants who were asked to do simplified medical surgery marking tasks. To study the accuracy of the interaction methods, we created an experiment where the 3D model contained a predefined target that was marked (pointed+selected). In the real medical case, the doctor would define the target, but then the accuracy cannot be easily measured. This study focused mainly on subjective evaluations of interaction methods, but also included objective measurements.
The paper is organized as follows: First, we go through the background of object manipulation and marking, interaction methods in 3D environments, and jaw osteotomy surgery planning (Section 2). Then, we introduce the compared interaction methods and the used measurements (Section 3), as well as go through the experiment (Section 4) including apparatus, participants, and study task. In the end the results are presented (Section 5) and discussed (Section 6).
2 BACKGROUND
2.1 Object manipulation and marking
Object manipulation, i.e. rotating and translating the object in 3D space, and object marking, i.e. putting a small mark on the surface of an object, have been used as separate tasks when different VR interaction methods have been studied. Sun et al. [32] had a 3D positioning task that involved object manipulation. When a mouse and a controller were compared for precise 3D positioning, the mouse was found to be the more precise input device. Object marking has been studied without manipulation in [27]. Argelaguet and Andujar [1] studied 3D object selection techniques in VR and Dang et al. [9] have studied 3D pointing techniques. As there are no clear standard techniques for 3D object selection or 3D pointing, Argelaguet and Andujar and Dang et al.
attempt to bring practices into studying new techniques in 3D UIs.
In earlier work using bimanual techniques, Balakrishnan and Kurtenbach [5] presented a study where the dominant and non-dominant hands had their own tasks in a virtual 3D scene. The bimanual technique was found to be faster and preferable. People typically use both hands to cooperatively perform the most skilled tasks [5, 12], where the dominant hand is used for the more accurate functions and the non-dominant hand sets the context, such as holding a canvas while the dominant hand is used to draw. The result is optimal when bimanual techniques are designed by utilizing the strengths of both the dominant and non-dominant hands.
2.2 Input devices for object manipulation and marking
2.2.1 Mouse
A mouse is a common, familiar, and accurate device for 2D content to point at small targets with high accuracy [27]. The mouse is also a common device for medical surgery planning [22]. Many studies have used a mouse cursor for 3D pointing with ray-casting [6, 8, 22, 27, 34]. The ray-casting technique is easily understood, and it is a solution for reaching objects at a distance [25].
Compared to other interaction methods in VR, the issue of the discrepancy between the 2D mouse and a 3D environment has been reported [1], and manipulation in 3D requires a way to switch between dimensions [4]. Balakrishnan et al. presented the Rockin'Mouse for selection in a 3D environment while avoiding hand fatigue. Kim and Choi [20] mentioned that the discrepancy creates low user immersion. In addition, use of a mouse usually forces the user to sit down next to a table instead of standing. The user can rest their arms on the table while interacting with the mouse, which decreases hand fatigue. Johnson et al. [18] stated that fatigue with mouse interaction will appear only after 3 hours.
Bachmann et al. [3] found that the Leap Motion controller has a higher error rate and higher movement time than the mouse. Kim and Choi [20] showed in their study that the 2D mouse has high performance in working time, accuracy, ease of learning, and ease of use in VR. Both Bachmann et al. and Kim and Choi found the mouse to be accurate, but on the other hand Li et al. [22] pointed out that with difficult marking tasks a small displacement of a physical mouse would lead to a large displacement on the 3D model in the 3D environment.
2.2.2 Hands
Hand interaction is a common VR interaction method. Voigt-Antons et al. [39] compared free hand interaction and controller interaction with different visualizations. Huang et al. [17] compared different interaction combinations between free hands and controllers. Both found that hand interaction has lower precision than controller interaction. With alternative solutions like a Leap Motion controller [28, 41] or wearable gloves [42], hand interaction can be done more accurately. Physical hand movements create a natural and realistic experience of interaction [10, 17], and therefore hand interaction is still an area of interest.
2.2.3 Controllers
Controllers are the leading control inputs for VR [17]. When using controllers as the interaction method, marking and selecting are usually done with one of the triggers or buttons on the controller. Handheld controllers are described as stable and accurate devices [13, 17]. However, holding extra devices in the hands may become inconvenient if the hands are needed for other tasks between different actions. When interacting with hands or controllers in VR, fatigue in the arms is one of the main issues [1, 15].
Holding up the arms and carrying the devices also increase arm fatigue.
2.2.4 VR stylus
A VR stylus is a pen-like handheld device that is used in a VR environment as a controller. The physical appearance of the Logitech VR Ink stylus [23] is close to a regular pen, except it has buttons which enable different interactions, e.g., selecting, in VR. Batmaz et al. [7] have studied the Logitech VR Ink stylus as a selection method in virtual reality. They found that when using a precision grip there are no statistical differences in the marking when the distance to the target changes. Wacker et al. [40] presented as one of their designs a VR stylus for mid-air pointing, where selection happened by pressing a button. For object selection, users preferred a 3D pen over a controller in VR [27].
2.3 Jaw osteotomy surgery planning
Cone Beam Computed Tomography (CBCT) is a medical imaging technique that produces 3D images that can be used in virtual surgery planning. Compared to previous techniques that were used in medical surgery planning, like cast models, virtual planning with CBCT images has extra costs and time requirements [14]. However, the technique offers several advantages for planning accuracy and reliability [31]. CBCT images can be used as 3D objects in VR for surgery planning with an excellent match to real objects [14]. Ayoub and Pulijala [2] reviewed different studies about virtual and augmented reality applications in oral and maxillofacial surgeries. In virtual surgery planning, the procedures for surgery are implemented and planned beforehand. The real surgery is done based on the virtual plan. Common tasks in dental planning are specifying the location of impacted teeth, preventing nerve injuries, or preparing guiding flanges [31]. In VR this can be done by marking critical areas or drawing cutting lines onto the models. Virtual planning can be used in student education as well, where the procedures can be realistically practiced. Reymus et al. [29] found that students understood the mouth anatomy better after studying 3D models in a VR environment than from regular 2D images. The objects can be closer and bigger, and they can move in the depth direction in a 3D environment compared to a 2D environment [19].
Tasks like understanding the 3D object and marking critical areas on it need to be done in medical surgery planning. However, working with 3D objects in a 2D environment makes the task more difficult. Hinckley et al. [16] studied issues in developing effective free-space 3D user interfaces. Appropriate interaction and marking methods help to understand 3D objects and perform the required tasks in VR. In this study, we evaluated three methods for VR object manipulation and marking and examined their performance in simplified medical surgery planning tasks.
3 METHOD
3.1 Mouse
In the first interaction method a regular mouse was used inside a VR environment (Figure 1). In the VR environment there was a visualized mouse model that the participant was able to move by manipulating the physical mouse, and with which the participant controlled the direction of a ray starting from the model. The ray was always visible in the Mouse interaction.
Mouse was used one-handed, while the other two methods were two-handed. Mouse was used to perform two functions, manipulation and marking, while in the other methods these functions had been separated into different hands. In addition, Mouse used ray-casting (a ray from the mouse), while the two other methods did not; they used direct mid-air object manipulation.
Figure 1: Mouse interaction method outside VR (left).
Mouse marking method inside VR and the study task (right).
The participant could rotate the object in 3 dimensions using mouse movements with the right button pressed. For the 3D translations the participant used the scroll button. Using the scroll wheel, the user can zoom in and out (translate in Z), and when the user presses the scroll button and moves the mouse, the user can translate up-down and sideways (translate in X and Y). Markings were made by pointing at the target with the ray and pressing the left button.
For the real-world mouse to be visible inside VR, pass-through is not really required, even though the mouse was visible in our study. After wearing the headset, the user could see the virtual mouse positioned where the physical mouse is located, to be able to find and reach the device. When the user moved the physical mouse sideways, the movement was converted to a horizontal rotation of the beam from the virtual mouse, and when the mouse was moved back and forth, the movement was converted to a vertical rotation of the beam. This way the user can cover a large space, similar to using a mouse with 2D displays. To improve ergonomics, the user could configure the desk and chair for their comfort.
3.2 Hands
As the second interaction method, the participant used bare hands. The left hand was for object manipulation and the right hand for object marking. The participant could pick up the 3D object with a pinch gesture of their left hand to rotate and move the object. Marking was done with a virtual pen. In the VR environment the participant had the virtual pen attached to their right palm, near the index finger (Figure 2 right). As the palm was moved, the pen moved accordingly. When the virtual pen tip was close to the target, the tip changed its color to green to show that the pen was touching the surface of the object. The mark was put on the surface by bending the index finger and pressing the pen's virtual button. The participant had to keep their palm steady when pressing the button to prevent the pen from moving.
3.3 Controller and VR stylus
The third interaction method was based on having a controller in the participant's left hand for the object manipulation and a VR stylus in the right hand for the marking (Figure 3). The participant grabbed the 3D object with a grab gesture around the controller to rotate and move the object. The markings were made with the physical VR stylus. The VR stylus was visualized in VR as was the mouse, so the participant knew where the device was located. The participant pointed at the target with the stylus and pressed its physical button to make the mark. The act of pressing was identical to the virtual pen press in the Hands method. There was passive haptic feedback when touching the physical VR stylus, which did not happen with the virtual pen.
There have been some supporting results for using a mouse in VR [3, 20, 22, 25], but a 2D mouse is not fully compatible with the 3D environment [20]. We studied the ray method with Mouse to compare it against Hands and Controller+Stylus for 3D object marking. We also compared Hands without any devices to a method with a device in one or two hands. The marking gesture was designed to be similar in the Hands and Controller+Stylus methods to be able to compare the effect of the devices.
3.4 Measurements and the pilot study
The participant was asked to make a marking as close to the target location as possible. We used Euclidean distance to measure the distance between the target and the participant's marking. The task completion times were measured.
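Concretely, this accuracy measure is the straight-line distance between two 3D points. A minimal sketch follows; the function name and the metre-to-millimetre conversion (mentioned in Section 3.5) are ours, not code from the study software.

```python
import numpy as np

def marking_error_mm(target_m, mark_m):
    """Euclidean distance between the target centre and the participant's
    marking, both given as 3D positions in metres, returned in millimetres
    (the unit used in the analysis)."""
    target_m, mark_m = np.asarray(target_m), np.asarray(mark_m)
    return float(np.linalg.norm(mark_m - target_m) * 1000.0)
```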
The participant was able to remark the target if s/he was dissatisfied with the current marking. We counted how many remarkings were made to see if any of the interaction methods required more remarking than the other methods. We measured accuracy in these two ways, as a distance from the target and as the number of dissatisfied markings.
A satisfaction questionnaire was filled in after each interaction method trial. There were a question and seven satisfaction statements that were evaluated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The statements were grouped so that the question and the first statement were about the overall feeling, and the rest of the statements were about object manipulation and marking separately. The statements were:
• Would you think to use this method daily?
• Your hands are NOT tired.
• It was natural to perform the given tasks with this interaction method.
• It was easy to handle the 3D objects with this interaction method.
• The interaction method was accurate.
• The marking method was natural.
• It was easy to make the marking with this marking method.
• The marking method was accurate.
Figure 2: Hands interaction method outside VR (left). Hands marking method inside VR and the study task (right).
Figure 3: Controller interaction method outside VR (left). Stylus marking method inside VR and the study task (right).
The statements were designed to measure fatigue, naturalness, and accuracy, as they have been measured in earlier studies [1, 10, 17] as well. Accuracy was also measured from the data to see if the objective and subjective results are consistent. With these statements, it was possible to measure the easiness and the ability to use the method daily, unlike from the objective data.
In the questionnaire there were also open-ended questions about positive and negative aspects of the interaction method. In the end the participant was asked to rank the interaction methods in order from the most liked to the least liked.
A pilot study was arranged to ensure that the tasks and the study procedure were feasible. Based on the findings in the pilot study, we modified the introduction to be more specific and added a mention of the measured features. We also added the ability to rotate the 3D object even after the mouse ray moved off the object. The speed of the mouse ray in the VR environment was increased to better match the movements of the real mouse.
3.5 Statistical measures
We used two different statistical tests to analyze possible statistically significant differences between different parameter sets. For objective data (completion times, number of markings, and accuracy) we used the paired t-test. For data from the evaluation questionnaires (fatigue, daily use, naturalness, easiness, and subjective accuracy) we first used the Friedman test to see if any statistically significant differences appeared, and then we used the Wilcoxon signed-rank test, as it does not assume the numbers to be on a ratio scale or to have a normal distribution.
The study software saved times with a resolution of milliseconds and distances with a resolution of meters. To clarify the analysis, we converted these to seconds and millimeters.
4 EXPERIMENT
4.1 Participants
We recruited 12 participants for the study. The number of participants was decided based on a power analysis for the paired t-test and the Wilcoxon signed-rank test, assuming a large effect size, a power level of 0.8, and an alpha level of 0.05.
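The tests and the sample-size calculation described above can be outlined in Python as follows. This is an illustrative sketch, not the authors' analysis code: the data arrays are placeholders, and the exact effect size assumed for the power analysis (Cohen's d = 0.8 here) is our reading of "large effect size".

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Sample-size estimate for a paired t-test (power 0.8, alpha 0.05,
# assuming a "large" effect size of d = 0.8).
n_required = TTestPower().solve_power(effect_size=0.8, power=0.8, alpha=0.05)

# Placeholder data: one value per participant (N = 12) per condition.
rng = np.random.default_rng(0)
acc_mouse, acc_hands = rng.random(12), rng.random(12)
q_mouse, q_hands, q_stylus = (rng.integers(1, 6, 12) for _ in range(3))

# Objective data: paired t-test, with a Bonferroni-corrected limit for
# the three pairwise method comparisons (0.05 / 3, roughly 0.017).
t_stat, p_val = stats.ttest_rel(acc_mouse, acc_hands)
significant = p_val < 0.05 / 3

# Cohen's d for paired samples: mean difference over SD of differences.
diff = acc_mouse - acc_hands
cohens_d = diff.mean() / diff.std(ddof=1)

# Ordinal questionnaire data: omnibus Friedman test first, then
# pairwise Wilcoxon signed-rank tests.
chi2, p_friedman = stats.friedmanchisquare(q_mouse, q_hands, q_stylus)
w_stat, p_wilcoxon = stats.wilcoxon(q_mouse, q_stylus)
```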
The post hoc calculated effect sizes (Cohen's d or R value, for the paired t-test or the Wilcoxon signed-rank test, respectively) are reported together with the p-values in the Results (Section 5) for comparison to the assumption of a large effect size. 10 of the participants were university students and two were full-time employees, in fields not related to medicine or dentistry. The ages varied from 21 to 30 years, and the mean age was 25 years. There were 6 female participants and 6 male participants. Earlier VR experience was asked about on a scale from 0 to 5, and the mean was 1.75. Two participants did not have any earlier experience. One participant was left-handed but was used to using the mouse with the right hand. The other participants were right-handed.
4.2 Apparatus
4.2.1 Software, hardware, and hand tracking
The experiment software was built using the Unity software [35]. With all methods we used a Varjo VR2 Pro headset [37], which has an integrated vision-based hand tracking system that was used for the Hands interaction. Hands were tracked by an Ultraleap Stereo IR 170 sensor mounted on the Varjo VR2 Pro. For Controller+Stylus, we used a Valve Index controller [36] together with a Logitech VR Ink stylus [23]. These were tracked by SteamVR 2.0 base stations [38] around the experiment area.
4.2.2 Object manipulation and object marking
The study task combined two phases: an object manipulation phase, where the object was rotated and translated in 3D space, and an object marking phase, where a small mark was put on the surface of the object. In the object manipulation phase the participant either selected the 3D object with the mouse ray or pinched or grabbed the 3D object with a hand gesture. The 3D objects did not have any physics and lay in mid-air. By rotating and translating the object the participant can view the object from different angles. The participant can also use head movements to change their point of view.
Instead of only pointing at the target, the marking needs to be confirmed. This allows us to measure the marking accuracy and whether the user understood the 3D target's location relative to the pointing device. The participant could either release the 3D object in mid-air or hold it in their hand when Hands or Controller+Stylus was used in the object marking task. The marking was done either by pointing with the mouse ray and clicking the left button, by touching the target with the virtual pen and marking with a hand gesture, or by touching and marking with the VR stylus.
4.3 Procedure
First, the participant was introduced to the study, s/he was asked to read and sign a consent form, and to fill in a background information form. For all conditions the facilitator would demonstrate the system functions and the controls him/herself. Each participant had an opportunity to practice before every condition. The practice task was to move and rotate a cube having several target spheres, and to mark those targets as many times as needed to get to know both the interaction and the marking methods. After the participant felt confident with the used method, s/he was asked to press the Done button, and the real study task appeared.
The participant was asked to find and mark a hidden target on the surface of each 3D object model. The target was visible all the time, whereas the participant's marking was created by the participant. When the target was found it was first pointed at and then marked. The aim was to place the participant's mark (a yellow sphere) inside the target sphere (red) (see Figures 1 right, 2 right, and 3 right). Each 3D object had one target on it and the task was repeated five times per condition.
The order of the 3D objects was the same for all participants: lower jaw, heart, skull, tooth, and skull. The order of the interaction methods was counter-balanced between the participants using balanced Latin squares. This was done to compensate for possible learning effects. The target locations on the 3D objects were predefined and presented in the same order for the participants.
The task used required both object manipulation (rotating and translating) and marking (pointing and selecting). By combining the manipulation and marking tasks together, we wanted to create a task that simulates what medical professionals would do during virtual surgery planning. Both object manipulation and marking are needed by the medical professionals. The marking is relevant when selecting specific locations and areas of a 3D model, and it requires accuracy to make the marks in the relevant locations. This medical marking task does not differ from regular marking tasks in other contexts as such, but the accuracy requirements are higher. By manipulating the 3D model, the professional has the option to look at the pointed area from different angles to verify its specific location in the 3D environment.
A satisfaction questionnaire was filled in after each interaction method trial, and after all three trials, a questionnaire was used to rank the conditions.
5 RESULTS
In this section, we report the findings of the study. First, we present the objective results from the data collected during the experiment, and then the subjective results from the questionnaires.
5.1 Objective results
The task completion times (Figure 4, top left) include both object manipulation and marking, and they had some variation, but the distributions of the median values for each interaction method were similar and there were no significant differences. The completion time varied slightly depending on how much VR experience the participant had before, but there were no statistically significant differences.
The number of markings done before task completion varied between the interaction methods (Figure 4, top right). The median values for the Mouse, Hands, and Controller+Stylus conditions were 6.5, 12, and 7 markings, respectively. However, there were no statistically significant differences. Some participants did many markings at a fast pace (2-3 markings per second), leading to a high number of total markings.
There were some clear differences in final marking accuracy between the interaction methods (Figure 4, bottom). The median values for the Mouse, Hands, and Controller+Stylus methods were 3.2, 5.9, and 4.2 millimeters, respectively. The variability between participants was highest with the Hands method. We found a statistically significant difference between the Mouse and Hands methods (p-value 0.004, Cohen's d 1.178 [1]) using a paired t-test and a Bonferroni-corrected p-value limit of 0.017 (= 0.05 / 3). There were no statistically significant differences between the Mouse and Controller+Stylus methods or the Hands and Controller+Stylus methods.
5.2 Subjective data
Friedman tests showed statistically significant differences in daily use (p-value 0.002), interaction naturalness (p-value 0.000), interaction easiness (p-value 0.001), interaction accuracy (p-value 0.007), marking easiness (p-value 0.039), and ranking (p-value 0.000). There were no significant differences in marking naturalness or marking accuracy. In the evaluations of tiredness there were no significant differences (Figure 5, left).
Most participants did not feel tired using any of the methods, but the experiment was rather short.
In pairwise tests of everyday use using the Wilcoxon signed-rank test we found significant differences (Figure 5, right). We found statistically significant differences between the Mouse and Controller+Stylus methods (p-value 0.015, R 0.773 [2]) and between the Hands and Controller+Stylus methods (p-value 0.003, R 1.000). There was no statistically significant difference between the Hands and Mouse methods.
We asked the participants to evaluate both object manipulation and marking separately. In the object manipulation evaluation, there were statistically significant differences in naturalness between Controller+Stylus and Mouse (p-value 0.003, R 1.000) and Controller+Stylus and Hands (p-value 0.009, R 0.879). There was no statistically significant difference between Mouse and Hands. In object manipulation easiness, Controller+Stylus had a statistically significant difference from both Mouse and Hands (p-values 0.003, R 1.000 in both comparisons), see Figure 6; there was no statistically significant difference between Mouse and Hands. In the manipulation accuracy evaluation we found a statistically significant difference between the Controller+Stylus method and the Hands method (p-value 0.003, R 1.000). There were no statistically significant differences between Mouse and Controller+Stylus or Hands and Mouse. In the object marking evaluation (Figure 7), the only significant difference was measured between the Controller+Stylus method and the Mouse method in easiness (p-value 0.009, R 1.000). There were no statistically significant differences between Hands and Controller+Stylus or Hands and Mouse.
[1] Cohen's d ≥ 0.8 is considered a large effect size.
[2] An R value ≥ 0.5 is considered a large effect size.
Figure 4: The task completion times for different conditions (top left). The median values for each participant are rather similar between the methods. There were two outlier values (by the same participant, for the Mouse and Hands conditions) that are removed from the visualization. The number of markings per five targets (top right). There were some differences between the interaction methods (the median value for Hands was higher than for the other methods), but no significant differences. The marking accuracy (bottom). There were some clear differences between the interaction methods in the final marking accuracy.
Figure 5: The evaluation of fatigue (left). None of the methods were found to be particularly tiring. The evaluation of possible daily use (right). Controller+Stylus was significantly more usable for daily use than the other methods.
Multiple participants commented that the controller interaction felt stable and that it was easy to move and rotate the 3D model with the controller. The participants also commented that holding a physical device in the hand so that its weight could be felt increased the feeling of naturalness. Not all comments agreed: one participant felt the VR stylus was accurate, while another participant said it felt clumsy. When asked, 11 out of 12 participants ranked Controller+Stylus as the most liked method. The distribution of the ranking values is shown in Table 1. The ranking values of the Controller+Stylus method were statistically significantly different to Mouse (p-value 0.008, R 0.885) and Hands (p-value 0.003, R 1.000).
There was no statistically significant difference between Mouse and Hands.
Figure 6: The evaluation of interaction method naturalness (left), easiness (middle), and accuracy (right). Controller+Stylus was the most liked method in these features.
Figure 7: The evaluation of marking method naturalness (left), easiness (middle), and accuracy (right). Median values in these features are rather similar, and a significant difference was found only in marking easiness.
Table 1: The number of mentions of different rankings of the interaction methods when asked for the most liked (1st), the second most liked (2nd), and the least liked (3rd) method.
Condition | 1st | 2nd | 3rd
Mouse | 1 | 7 | 4
Hands | 0 | 4 | 8
Controller+Stylus | 11 | 1 | 0
6 DISCUSSION
In this study, we were looking for the most feasible interaction method in VR for object manipulation and marking in a medical context. The Controller+Stylus method was overall the most suitable for a task that requires both object manipulation and marking. The Controller+Stylus method was the most liked in all subjective features, while the Mouse and Hands conditions were evaluated very similarly. The smallest number of markings was done with Controller+Stylus, but no significant differences were found. There were statistically significant differences between the methods in daily use, interaction naturalness, and easiness. Controller+Stylus was statistically significantly more accurate in object manipulation than Hands (p-value 0.003), and easier to use than Mouse (p-value 0.003). Without earlier experience with the VR stylus, the participants had difficulties in finding the correct button when marking with the stylus. The physical stylus device cannot be seen when wearing the VR headset, and the button could not be felt clearly. Even though the Controller+Stylus combination was evaluated as natural and the most liked method in this study, hand-held devices may feel inconvenient [17]. In our study, some participants liked the physical feel of the devices. However, our result was based on the subjective opinions of the participants, and that might change depending on the use case or devices.
There are many possible reasons for the low hand tracking accuracy. Hand inaccuracy can be seen in the large number of markings and the large spread in task completion times with Hands, as the participants were not satisfied with their first markings. Hands was the only method where only one participant succeeded with the minimum of 5 markings, while with the other methods several participants succeeded in the task with 5 markings. One explanatory factor can be the lack of hand tracking fidelity, which has also been noticed in other studies [17, 42]. In addition, inaccuracy in the human motor system leads to the inaccuracy of hands [15]. The vision-based hand tracking system that uses a camera on the HMD does not recognize the hand gestures well enough, and as a result, the participant must repeat the same gesture or movement multiple times to succeed. This extra work also increases fatigue with Hands. Even though fatigue was low with all interaction methods, this study did not measure the fatigue of long-term activity. These are clear indications that Hands interaction needs further development before it can be used in tasks that need high marking accuracy. Several earlier studies have reported the inaccuracy of hands compared to controllers [15, 17, 42].
Passive haptics were available with Mouse and when marking with the VR stylus. With Hands there was only visual feedback.
The lack of any haptic feedback might have affected the marking accuracy as well, because the accuracy was much better with the physical stylus. Li et al. [22] found that with low marking difficulty, the mouse with a 2D display was faster than the kinesthetic force feedback device in VR. For high marking difficulty, the other VR interface, which used a VR controller with vibrotactile feedback, was better than the 2D interface. They found that a mouse with a 2D display has fast pointing capability, but in our study the task completion times did not vary between Mouse and the other methods. Li et al. described how manipulating the viewing angle is more flexible when wearing an HMD than with a mouse and a 2D display. In VR interfaces the participant can rotate the 3D object while changing the viewing angle by moving their head. In our study, all methods used the HMD, so the change of viewing angle was equally flexible.
Mouse was a statistically significantly more accurate marking method than Hands. Mouse was not affected by some of the issues that were noticed with Hands or Controller+Stylus. With Mouse it was not felt to be problematic that the device cannot be seen during use. There were no sensor fidelity issues with Mouse, and Mouse was a familiar device to all participants. Only the ray that replaced the cursor was an unfamiliar feature and caused some problems. We found that the ray worked well with simple 3D models, but there were a lot of difficulties with complex models, where the viewing angle needed to be exactly right to reach the target. If any part of the 3D model blocked the ray, the target could not be marked. When the target was easy to mark, the accuracy using Mouse was high. It can be stated that Mouse was an accurate method in VR, but for all other measured properties Controller+Stylus was measured to be better.
Both the target and the marking were spheres in the 3D environment. During the study, it was noticed that when a participant made their marking in the same location as the target, the marking sphere disappeared inside the target sphere. This caused uncertainty as to whether the marking was lost or in the center of the target. This may have affected the results when the participants needed to remark in order to see their marking, which was then no longer in the center of the target. In future studies the marking sphere should be designed to be bigger than the target and transparent, so that the participant can be sure about the location of both spheres.
Our focus was on comparing three different interaction and marking methods and their suitability for the medical marking task. To simplify the experimental setup, the experiment was conducted with simplified medical images, which may have led to optimistic results for the viability of the methods. Even then, there were some problems with the Mouse interaction method. To further confirm that the results are similar also for more realistic content, a similar study should be conducted in future work with authentic material utilizing, for example, original CBCT images in VR instead of the simplified ones.
Future research may investigate multimodal interaction methods to support even more natural alternatives. Speech is the primary mode of human communication [30]. Suresh et al. [33] used three voice commands to control gestures of a robotic arm in VR. Voice is a well-suited input method in cases where the hands and eyes are continuously busy [15]. Pfeuffer et al.
[26] studied gaze as an interaction method together with hand gestures, but found that both hand and gaze tracking still lack tracking fidelity. More work is still needed, as Nukarinen et al. [24] stated that human factor issues made gaze the least preferred input method in an object selection task in VR.
7 CONCLUSION
3D medical images can be viewed in VR environments to plan surgeries with the expected results. During the planning process one needs to interact with the 3D models and be able to make markings of high accuracy on them. In this study, we evaluated the feasibility of three different VR interaction methods, Mouse, Hands, and the Controller+Stylus combination, in virtual reality. Based on the results, we can state that the Valve Index controller and Logitech VR Ink stylus combination was the most feasible for tasks that require both 3D object manipulation and high marking accuracy in VR. This combination did not have issues with complex 3D models, and its sensor fidelity was better than with the Hands interaction. Statistically significant differences were found between the controller combination and the other methods.
Hand-based interaction was the least feasible for this kind of use according to the collected data. The Hands and Mouse methods were evaluated as almost equal in feasibility by the participants. With the current technology, free-hands usage cannot be proposed for accurate marking tasks. Mouse interaction was more accurate than Controller+Stylus. In detailed tasks, Mouse could replace the free-hands interaction. The discrepancy between the 2D mouse and the 3D environment needs to be solved before Mouse could be considered a viable interaction method in VR.
hC-4PGvv9Oq
some weaknesses with the study
5: Marginally below acceptance threshold
I reviewed an earlier version of this submission. It has been improved by adding a video, which is very helpful to understand the interaction techniques, and also by citing more related work. There is some limited justification given in section 1 (paragraphs 4 and 5) for the choice of interaction methods. I don't find the justification strong, but it is a justification nevertheless.
There are still two weaknesses with the study that make me not in favor of accepting the work at GI:
First, if I understand correctly, the users could click (or "mark") as many times as they wanted on or near the target before confirming that they had completed the task, and figure 4 (upper right) shows that often users marked a target 10 or even 15 times before confirming that they had finished the task. Section 6, 2nd paragraph, confirms that many users did more than 5 marks per target. Section 6, paragraph 5, then tells us that users were sometimes confused when they had marked inside the target because the marking sphere would disappear inside the target sphere, and this "may have affected the results when the participants needed to [perform] remarking to be able to see their marking that was not in the center of the target anymore". This kind of confusion should be eliminated during warm-up tasks or practice tasks, and the study did indeed have practice tasks (section 4.3, paragraph 1) but it seems that users had control over when they could proceed to the real tasks. A better design for the experiment could force users to perform a minimum fixed number of practice tasks. It is unclear to me how much the results in time and accuracy were affected by this confusion.
Second, and more significantly, the fact that users often had to mark (click) 5 times or more also tells me that a better user interface would allow the user to mark close to the target, and then do something to make minor corrections, like hitting a button to switch to a lower gain, or hitting a button to switch to relative displacement, or hitting some arrow keys on a keyboard to make very small adjustments to the marked location. Such a user interface might require much less time to complete the task.
Additional, minor comments:
- Section 4.1 states 12 participants were chosen to achieve power of 0.8 assuming a large effect size, but footnote 1 on the next page states "Cohen's d >= 0.8 is considered a large effect size", and if I use https://statulator.com/SampleSize/ss2PM.html to find the sample size for an effect size of 0.8, it says we need a sample size of 16. On the other hand, for effect size 1.0, we only need a sample size of 11. So the submission should clarify what is meant in section 4.1 by "large effect size".
- p values are not normally reported as "p-value 0.002", but rather as "p < 0.002" or "p = 0.002". A p value of "0.000" should not be reported, as p cannot be zero, it can only be bound above (p < ...).
- Figure 4, top left: are these the times for each user and for each task and for each method, or are these times the total time over the 5 tasks for each user and for each method? Should we divide the times by 5 to obtain the time for each user and each task and each method? What about the time for each marking?
- In the results figures, what do the error bars show? Do they show quartiles, standard deviation, standard error, confidence intervals (for how much percent?), or something else?
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Comparison of a VR Stylus with a Controller, Hand Tracking and a Mouse for Object Manipulation and Medical Marking Tasks in Virtual Reality ### Paper Abstract For medical surgery planning, virtual reality (VR) provides a new kind of user experience, where 3D images of the operation area can be utilized. Using VR, it is possible to view the 3D models in a more realistic 3D environment, which would reduce the perception problems and increase spatial understanding. In the present experiment, we compared a mouse, hand tracking, and a combination of a VR stylus and a VR controller as interaction methods in VR. The purpose was to study the viability of the methods for tasks conducted in medical surgery planning in VR. The tasks required interaction with 3D objects and high marking accuracy. The stylus and controller combination was the most preferred interaction method. In subjective results, it was considered as the most appropriate, while in objective results, the mouse interaction method was the most accurate. ### Paper Keywords ["3D visualization", "virtual reality", "hand tracking", "controller interaction", "mouse interaction", "3D imaging"] ### Paper Content ABSTRACTFor medical surgery planning, virtual reality (VR) provides a newkind of user experience, where 3D images of the operation area canbe utilized. Using VR, it is possible to view the 3D models in amore realistic 3D environment, which would reduce the perceptionproblems and increase spatial understanding. In the present experi-ment, We compared a mouse, hand tracking, and a combination of aVR stylus and a VR controller as interaction methods in VR. Thepurpose was to study the viability of the methods for tasks conductedin medical surgery planning in VR. The tasks required interactionwith 3D objects and high marking accuracy. The stylus and con-troller combination was the most preferred interaction method. Insubjective results, it was considered as the most appropriate, whilein objective results, the mouse interaction method was the mostaccurate.Index Terms: Human-centered computing—Human com-puter interaction (HCI)—Interaction devices—Pointing devices;Human-centered computing—Human computer interaction (HCI)—Empirical studies in HCI; Human-centered computing—Humancomputer interaction (HCI)—Interaction paradigms—Virtual reality1 I NTRODUCTIONVirtual reality makes it possible to create computer-generated en-vironments that replace the real world. For example, the user caninteract with virtual object models more flexibly using various in-teraction methods than with real objects in the real environment.VR has become a standard technology in research, but it has notbeen fully exploited in professional use even if its potential has beendemonstrated.In the field of medicine, x-ray imaging is routinely used to di-agnose diseases and anatomical changes as well as for scientificsurveys [31]. In many cases 2D medical images are satisfactory,but they can be complemented with 3D images for more complexoperations where detailed understanding of the 3D structures isneeded.When planning surgeries, medical doctors, surgeons, and radiolo-gists study 3D images. Viewing the 3D images in 2D displays canpresent issues to control object position, orientation, and scaling. 
Us-ing VR devices, like head mounted displays (HMD), 3D images canbe more easily perceived when viewed and interacted with in a 3Denvironment than with a 2D display. For the medical professionalsto be able to do the same tasks in VR as they do in 2D, the interac-tion methods need to be studied properly. The interaction methodneeds to be accurate, reasonable, and suitable for the medical tasks.Because we talk about medical work, the accuracy is crucial to avoidas many mistakes as possible. K ̈onig et al. [21] studied an adaptivepointing for the accuracy problems caused by hand tremor whenpointing distant objects. The used interaction method needs also tobe natural so that the doctors would use it in their daily work and sothat they still can focus on their primary tasks without paying toomuch attention to the interaction method. One typical task for thedoctors is marking anatomical structures and areas on the surface ofthe 3D model. The marked points create the operative area, or theycan be used for training.For 2D content, a mouse is one of the best options for interactiondue to its capability to point at small targets with high accuracyand the fact that many users are already very experienced with thisdevice [27]. Mouse cursor can be used for 3D pointing with ray-casting [34] which allows pointing of the distant objects as well. Thefamiliarity and accuracy make the mouse a worthy input method inVR, even though it is not a 3D input device. In addition, controllershave been identified as an accurate interaction method [13, 17] andthey are typically used in VR environments [22]. Controllers enabledirect manipulation, and the reach of distant objects is differentthan with the mouse with ray-casting. Other devices, like styluseshave been studied in pointing tasks previously [27, 40]. Thereforewe aimed to investigate performance of a stylus together with acontroller in selected tasks.The cameras and sensors on HMD devices also allow hand track-ing without hand-held input devices. Pointing at objects with afinger is natural way of acting for humans, so hand interaction canbe expected to be received well. Hand interaction was selected asone of the conditions based on interviews of medical professionalsand their expectations for the supporting technology.We decided to use a marking task to assess the three interactionconditions. The conditions were a standard mouse, bare hands, and ahandheld controller with VR stylus. All methods were used in a VRenvironment to minimise additional variation between methods andto focus the comparison on interaction techniques. The use of theHMD also allowed the participants to easily study the target fromdifferent directions by moving their head. In the medical markingtask the doctor will observe the anatomical structures by turningand moving the 3D object and at the same time looking for thebest location for the mark. The time spent for the manipulation isnot easily separated from the time spent in the final marking. Thedoctor decides during the manipulation from which angle and howthe marking will be done, which will affect the marking time. Thismade application of Fitts’ law [11] not possible in our study, as itrequires that a participant cannot influence target locations.We had 12 participants who were asked to do simplified medicalsurgery marking tasks. To study the accuracy of the interactionmethods, we created an experiment where in the 3D model therewas a predefined target that was marked (pointed+selected). 
In thereal medical case, the doctor would define the target, but then theaccuracy cannot be easily measured. This study focused mainlyon subjective evaluations of interaction methods, but also includedobjective measurements.The paper is organized as follows: First, we go through back-ground of object manipulation and marking, interaction methods in3D environment, and jaw osteotomy surgery planning (Section 2).Then, we introduce the compared interaction methods and usedmeasurements (Section 3), as well as go through the experiment(Section 4) including apparatus, participants, and study task. In theend the results are presented (Section 5) and discussed (Section 6).2 B ACKGROUND2.1 Object manipulation and markingObject manipulation, i.e. rotating and translating the object in 3Dspace, and object marking, i.e. putting a small mark on the surfaceof an object, have been used as separate task when different VRinteraction methods have been studied. Sun et al. [32] had 3D posi-tioning task that involved object manipulation. When a mouse and acontroller were compared for precise 3D positioning the mouse wasfound as the more precise input device. Object marking has beenstudied without manipulation in [27]. Argelaguet and Andujar [1]studied 3D object selection techniques in VR and Dang et al. [9]have studied 3D pointing techniques. As there are no clear standardtechniques for 3D object selection nor 3D pointing technique, Arge-laguet and Andujar and Dang et al. attempt to bring practices instudying new techniques in 3D UIs.In earlier work using bimanual techniques, Balakrishnan andKurtenbach [5] presented a study where dominant and non-dominanthand had their own tasks in a virtual 3D scene. The bimanualtechnique was found as faster and preferable. People typically usetheir both hands to cooperatively perform the most skilled tasks [5,12] where the dominant hand is used for the more accurate functions,and the non-dominant hand sets the context such as holding a canvaswhen dominant hand is used to draw. The result is optimal whenbimanual techniques are designed by utilizing the strengths of bothdominant and non-dominant hands.2.2 Input devices for object manipulation and marking2.2.1 MouseA mouse is a common, familiar, and accurate device for 2D contentto point at small targets with high accuracy [27]. The mouse isalso a common device to do medical surgery planning [22]. Manystudies have used a mouse cursor for 3D pointing with ray-casting[6, 8, 22, 27, 34]. Ray-casting technique is easily understood, and itis a solution for reaching objects at a distance [25].Compared to other interaction methods in VR, the issue of thediscrepancy between the 2D mouse and a 3D environment has beenreported [1], and manipulation in 3D requires a way to switch be-tween dimensions [4]. Balakrishnan et al. presented Rocking’Mouseto select in 3D environment while avoiding a hand fatigue. Kimand Choi [20] mentioned that the discrepancy creates a low userimmersion. In addition, use of a mouse usually forces the user to sitdown next to a table instead of standing. The user can rest their armson the table while interacting with the mouse which decrease handfatigue. Johnson et al. [18] stated that fatigue with mouse interactionwill appear only after 3 hours.Bachmann et al. [3] found that Leap Motion controller has ahigher error rate and higher movement time than the mouse. Kimand Choi [20] showed in their study that 2D mouse have high per-formance in working time, accuracy, ease of learning, and ease ofuse in VR. 
Both Bachmann et al. and Kim and Choi found the mouse to be accurate, but on the other hand, Li et al. [22] pointed out that in difficult marking tasks a small displacement of a physical mouse would lead to a large displacement on the 3D model in the 3D environment.

2.2.2 Hands
Hand interaction is a common VR interaction method. Voigt-Antons et al. [39] compared free-hand interaction and controller interaction with different visualizations. Huang et al. [17] compared different interaction combinations between free hands and controllers. Both found that hand interaction has lower precision than controller interaction. With alternative solutions like a Leap Motion controller [28, 41] or wearable gloves [42], hand interaction can be done more accurately. Physical hand movements create a natural and realistic experience of interaction [10, 17], and therefore hand interaction is still an area of interest.

2.2.3 Controllers
Controllers are the leading control inputs for VR [17]. When using controllers as the interaction method, marking and selecting are usually done with one of the triggers or buttons on the controller. Handheld controllers are described as stable and accurate devices [13, 17]. However, holding extra devices in the hands may become inconvenient if the hands are needed for other tasks between different actions. When interacting with hands or controllers in VR, arm fatigue is one of the main issues [1, 15]. Holding up the arms and carrying the devices also increases arm fatigue.

2.2.4 VR stylus
A VR stylus is a pen-like handheld device that is used in a VR environment as a controller. The physical appearance of the Logitech VR Ink stylus [23] is close to a regular pen, except that it has buttons that enable different interactions in VR, e.g., selecting. Batmaz et al. [7] studied the Logitech VR Ink stylus as a selection method in virtual reality. They found that, when using a precision grip, there are no statistical differences in marking when the distance to the target changes. Wacker et al. [40] presented, as one of their designs, a VR stylus for mid-air pointing where selection happened by pressing a button. For object selection, users preferred a 3D pen over a controller in VR [27].

2.3 Jaw osteotomy surgery planning
Cone Beam Computed Tomography (CBCT) is a medical imaging technique that produces 3D images that can be used in virtual surgery planning. Compared to previous techniques that were used in medical surgery planning, like cast models, virtual planning with CBCT images has extra costs and time requirements [14]. However, the technique offers several advantages for planning accuracy and reliability [31]. CBCT images can be used as 3D objects in VR for surgery planning with an excellent match to the real objects [14]. Ayoub and Pulijala [2] reviewed different studies about virtual and augmented reality applications in oral and maxillofacial surgeries. In virtual surgery planning, the procedures for the surgery are implemented and planned beforehand. The real surgery is done based on the virtual plan. Common tasks in dental planning are specifying the location of impacted teeth, preventing nerve injuries, or preparing guiding flanges [31]. In VR this can be done by marking critical areas or drawing cutting lines onto the models. Virtual planning can be used in student education as well, where the procedures can be realistically practiced. Reymus et al. [29] found that students understood the mouth anatomy better after studying 3D models in a VR environment than from a regular 2D image.
In a 3D environment, the objects can be closer and bigger, and they can move in the depth direction, compared to a 2D environment [19]. Tasks like understanding the 3D object and marking critical areas on it need to be done in medical surgery planning. However, working with 3D objects in a 2D environment makes the task more difficult. Hinckley et al. [16] studied issues in developing effective free-space 3D user interfaces. Appropriate interaction and marking methods help to understand 3D objects and perform the required tasks in VR. In this study, we evaluated three methods for VR object manipulation and marking and examined their performance in simplified medical surgery planning tasks.

3 METHOD
3.1 Mouse
In the first interaction method, a regular mouse was used inside a VR environment (Figure 1). In the VR environment there was a visualized mouse model that the participant was able to move by manipulating the physical mouse, and to control the direction of a ray starting from the model. The ray was always visible in the Mouse interaction.

Mouse was used one-handed, whereas the other two methods were two-handed. Mouse was used to perform two functions, manipulation and marking, while in the other methods these functions were separated into different hands. In addition, Mouse used ray-casting (a ray from the mouse), while the two other methods did not use it. The other methods used direct mid-air object manipulation.

Figure 1: Mouse interaction method outside VR (left). Mouse marking method inside VR and the study task (right).

The participant could rotate the object in 3 dimensions using mouse movements with a right click. For the 3D translations, the participant used the scroll button. Using the scroll wheel, the user can zoom in and out (translate in Z), and when the user presses the scroll button and moves the mouse, the user can translate up-down and sideways (translate in X and Y). Markings were made by pointing at the target with the ray and pressing the left button.

Video pass-through is not strictly required to make the real-world mouse usable inside VR, even though the mouse was visible in our study. After wearing the headset, the user could see a virtual mouse positioned where the physical mouse is located, to be able to find and reach the device. When the user moved the physical mouse sideways, the movement was converted to a horizontal rotation of the beam from the virtual mouse, and when the mouse was moved back and forth, the movement was converted to a vertical rotation of the beam. This way, the user can cover a large space, similar to using a mouse with a 2D display. To improve ergonomics, the user could configure the desk and chair for their comfort.

3.2 Hands
As the second interaction method, the participant used bare hands. The left hand was for object manipulation and the right hand for object marking. The participant could pick up the 3D object with a pinch gesture of the left hand to rotate and move the object. Marking was done with a virtual pen. In the VR environment, the participant had the virtual pen attached to their right palm, near the index finger (Figure 2, right). As the palm was moved, the pen moved accordingly. When the virtual pen tip was close to the target, the tip changed its color to green to show that the pen was touching the surface of the object.
The mark was put on the surface by bending the index finger and pressing the pen's virtual button. The participant had to keep their palm steady when pressing the button to prevent the pen from moving.

3.3 Controller and VR stylus
The third interaction method was based on having a controller in the participant's left hand for object manipulation and a VR stylus in the right hand for marking (Figure 3). The participant grabbed the 3D object with a hand-grab gesture around the controller to rotate and move the object. The markings were made with the physical VR stylus. The VR stylus was visualized in VR, as was the mouse, so the participant knew where the device was located. The participant pointed at the target with the stylus and pressed its physical button to make the mark. The act of pressing was identical to the virtual pen press in the Hands method. There was passive haptic feedback when touching the physical VR stylus, which did not happen with the virtual pen.

There have been some supporting results for using a mouse in VR [3, 20, 22, 25], but a 2D mouse is not fully compatible with the 3D environment [20]. We studied the ray method with Mouse to compare it against Hands and Controller+Stylus for 3D object marking. We also compared Hands, without any devices, to methods with a device in one or two hands. The marking gesture was designed to be similar in the Hands and Controller+Stylus methods to be able to compare the effect of the devices.

3.4 Measurements and the pilot study
The participant was asked to make a marking as close to the target location as possible. We used the Euclidean distance to measure the distance between the target and the participant's marking (a computational sketch is given at the end of this section). The task completion times were measured. The participant was able to re-mark the target if they were dissatisfied with the current marking. We counted how many re-markings were made, to see if any of the interaction methods required more re-marking than the others. We thus measured accuracy in two ways: as a distance from the target and as the number of dissatisfied markings.

A satisfaction questionnaire was filled in after each interaction method trial. There were a question and seven satisfaction statements that were evaluated on a Likert scale from 1 (strongly disagree) to 5 (strongly agree). The statements were grouped so that the question and the first statement were about the overall feeling, and the rest of the statements were about object manipulation and marking separately. The statements were:

• Would you think to use this method daily?
• Your hands are NOT tired.
• It was natural to perform the given tasks with this interaction method.
• It was easy to handle the 3D objects with this interaction method.
• The interaction method was accurate.
• The marking method was natural.
• It was easy to make the marking with this marking method.
• The marking method was accurate.

Figure 2: Hands interaction method outside VR (left). Hands marking method inside VR and the study task (right).
Figure 3: Controller interaction method outside VR (left). Stylus marking method inside VR and the study task (right).

The statements were designed to measure fatigue, naturalness, and accuracy, as they have been measured in earlier studies [1, 10, 17] as well. Accuracy was also measured from the logged data, to see if the objective and subjective results are consistent. With these statements, it was possible to measure easiness and the willingness to use the method daily, which cannot be obtained from objective data. In the questionnaire there were also open-ended questions about positive and negative aspects of the interaction method.
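To make the distance-based accuracy measure of Section 3.4 concrete, the following is a minimal sketch of how a marking error could be computed from logged positions. The function and variable names, and the example coordinates, are our own illustrative assumptions rather than the study software's actual code.

```python
import numpy as np

def marking_error_mm(target_xyz, marking_xyz):
    """Euclidean distance between the target and the final marking.

    Both positions are 3D coordinates in meters (the resolution used by
    the study software); the result is converted to millimeters.
    """
    target = np.asarray(target_xyz, dtype=float)
    marking = np.asarray(marking_xyz, dtype=float)
    return float(np.linalg.norm(marking - target) * 1000.0)

# Hypothetical logged positions (in meters) for one trial.
target = (0.120, 1.350, 0.480)
final_mark = (0.123, 1.352, 0.484)
print(f"Marking error: {marking_error_mm(target, final_mark):.1f} mm")
```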
At the end, the participant was asked to rank the interaction methods in order from the most liked to the least liked.

A pilot study was arranged to ensure that the tasks and the study procedure were feasible. Based on the findings of the pilot study, we modified the introduction to be more specific and added a mention of the measured features. We also added the ability to rotate the 3D object even after the mouse ray moved off the object. The speed of the mouse ray in the VR environment was increased to better match the movements of the real mouse.

3.5 Statistical measures
We used two statistical testing procedures to analyze possible statistically significant differences between the parameter sets. For objective data (completion times, number of markings, and accuracy) we used the paired t-test. For data from the evaluation questionnaires (fatigue, daily use, naturalness, easiness, and subjective accuracy) we first used the Friedman test to see if any statistically significant differences appeared, and then the Wilcoxon signed-rank test, as it does not assume the numbers to be on a ratio scale or to have a normal distribution.

The study software saved times with millisecond resolution and distances in meters. To clarify the analysis, we converted these to seconds and millimeters.

4 EXPERIMENT
4.1 Participants
We recruited 12 participants for the study. The number of participants was decided based on a power analysis for the paired t-test and the Wilcoxon signed-rank test, assuming a large effect size, a power level of 0.8, and an alpha level of 0.05. The post-hoc calculated effect sizes (Cohen's d or R value, for the paired t-test or Wilcoxon signed-rank test, respectively) are reported together with the p-values in the Results (Section 5) for comparison to the assumption of a large effect size. Ten of the participants were university students and two were full-time employees in fields not related to medicine or dentistry. The ages varied from 21 to 30 years; the mean age was 25 years. There were 6 female participants and 6 male participants. Earlier VR experience was asked on a scale from 0 to 5, and the mean was 1.75. Two participants did not have any earlier experience. One participant was left-handed but was used to using the mouse with the right hand. The other participants were right-handed.

4.2 Apparatus
4.2.1 Software, hardware, and hand tracking
The experiment software was built using the Unity software [35]. With all methods we used the Varjo VR2 Pro headset [37], which has an integrated vision-based hand tracking system that was used for the Hands interaction. Hands were tracked by an Ultraleap Stereo IR 170 sensor mounted on the Varjo VR2 Pro. For the Controller+Stylus, we used a Valve Index controller [36] together with a Logitech VR Ink stylus [23]. These were tracked by SteamVR 2.0 base stations [38] around the experiment area.

4.2.2 Object manipulation and object marking
The study task combined two phases: an object manipulation phase, where the object was rotated and translated in 3D space, and an object marking phase, where a small mark was put on the surface of the object. In the object manipulation phase, the participant either selected the 3D object with the mouse ray, or pinched or grabbed the 3D object with a hand gesture. The 3D objects did not have any physics and lay in mid-air. By rotating and translating the object, the participant could view the object from different angles. The participant could also use head movements to change their point of view.

Instead of only pointing at the target, the marking needs to be confirmed.
This allows us to measure the marking accuracy and whether the user understood the 3D target's location relative to the pointing device. The participant could either release the 3D object in mid-air or hold it in their hand when Hands or Controller+Stylus was used in the object marking task. The marking was done either by pointing with the mouse ray and clicking with the left button, by touching the target with the virtual pen and marking with a hand gesture, or by touching and marking with the VR stylus.

4.3 Procedure
First, the participant was introduced to the study, asked to read and sign a consent form, and asked to fill in a background information form. For all conditions, the facilitator demonstrated the system functions and the controls. Each participant had an opportunity to practice before every condition. The practice task was to move and rotate a cube with several target spheres, and to mark those targets as many times as needed to get to know both the interaction and the marking methods. After the participant felt confident with the method, they were asked to press the Done button, and the real study task appeared.

The participant was asked to find and mark a hidden target on the surface of each 3D object model. The target was visible all the time, whereas the participant's marking was created by the participant. When the target was found, it was first pointed at and then marked. The aim was to place the participant's mark (a yellow sphere) inside the target sphere (red) (see Figures 1 right, 2 right, and 3 right). Each 3D object had one target on it, and the task was repeated five times per condition. The order of the 3D objects was the same for all participants: lower jaw, heart, skull, tooth, and skull. The order of the interaction methods was counterbalanced between the participants using balanced Latin squares. This was done to compensate for possible learning effects. The target locations on the 3D objects were predefined and presented in the same order to all participants.

The task required both object manipulation (rotating and translating) and marking (pointing and selecting). By combining the manipulation and marking tasks, we wanted to create a task that simulates what medical professionals would do during virtual surgery planning. Both object manipulation and marking are needed by medical professionals. The marking is relevant when selecting specific locations and areas of a 3D model, and it requires accuracy to place the marks in the relevant locations. This medical marking task does not differ from regular marking tasks in other contexts as such, but the accuracy requirements are higher. By manipulating the 3D model, the professional has the option to look at the pointed area from different angles to verify its specific location in the 3D environment.

A satisfaction questionnaire was filled in after each interaction method trial, and after all three trials, a questionnaire was used to rank the conditions.

5 RESULTS
In this section, we report the findings of the study. First, we present the objective results from the data collected during the experiment, and then the subjective results from the questionnaires.

5.1 Objective results
The task completion times (Figure 4, top left) include both object manipulation and marking. They showed some variation, but the distributions of the median values for each interaction method were similar, and there were no significant differences.
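For concreteness, the following is a minimal sketch of the significance-testing procedure described in Section 3.5, using SciPy. The data arrays are randomly generated stand-ins for the per-participant measurements, not the actual study data.

```python
import numpy as np
from scipy import stats

# Stand-in per-participant values (e.g., accuracy in mm), paired across
# the three methods; n = 12 participants as in the study.
rng = np.random.default_rng(0)
mouse = rng.normal(3.2, 1.0, 12)
hands = rng.normal(5.9, 1.5, 12)
stylus = rng.normal(4.2, 1.2, 12)

pairs = {"Mouse-Hands": (mouse, hands),
         "Mouse-Stylus": (mouse, stylus),
         "Hands-Stylus": (hands, stylus)}

# Objective data: pairwise paired t-tests, with the Bonferroni-corrected
# significance limit of 0.05 / 3 (about 0.017) for three comparisons.
for name, (a, b) in pairs.items():
    t, p = stats.ttest_rel(a, b)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}, significant = {p < 0.05 / 3}")

# Questionnaire (ordinal) data: an omnibus Friedman test first, then
# pairwise Wilcoxon signed-rank tests only if the omnibus test rejects.
chi2, p_friedman = stats.friedmanchisquare(mouse, hands, stylus)
if p_friedman < 0.05:
    for name, (a, b) in pairs.items():
        w, p_w = stats.wilcoxon(a, b)
        print(f"Wilcoxon {name}: W = {w:.1f}, p = {p_w:.3f}")
```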
The completion time varied slightly depending on how much VR experience the participant had beforehand, but there were no statistically significant differences.

The number of markings made before task completion varied between the interaction methods (Figure 4, top right). The median values for the Mouse, Hands, and Controller+Stylus conditions were 6.5, 12, and 7 markings, respectively. However, there were no statistically significant differences. Some participants made many markings at a fast pace (2-3 markings per second), leading to a high total number of markings.

There were some clear differences in final marking accuracy between the interaction methods (Figure 4, bottom). The median values for the Mouse, Hands, and Controller+Stylus methods were 3.2, 5.9, and 4.2 millimeters, respectively. The variability between participants was highest with the Hands method. We found a statistically significant difference between the Mouse and Hands methods (p-value 0.004, Cohen's d 1.178¹) using a paired t-test and a Bonferroni-corrected p-value limit of 0.017 (= 0.05 / 3). There were no statistically significant differences between the Mouse and Controller+Stylus methods or between the Hands and Controller+Stylus methods.

5.2 Subjective data
Friedman tests showed statistically significant differences in daily use (p-value 0.002), interaction naturalness (p-value 0.000), interaction easiness (p-value 0.001), interaction accuracy (p-value 0.007), marking easiness (p-value 0.039), and ranking (p-value 0.000). There were no significant differences in marking naturalness or marking accuracy. In the evaluations of tiredness there were no significant differences (Figure 5, left). Most participants did not feel tired using any of the methods, but the experiment was rather short.

In pairwise tests of everyday use using the Wilcoxon signed-rank test we found significant differences (Figure 5, right): between the Mouse and Controller+Stylus methods (p-value 0.015, R 0.773²) and between the Hands and Controller+Stylus methods (p-value 0.003, R 1.000). There was no statistically significant difference between the Hands and Mouse methods.

We asked the participants to evaluate both object manipulation and marking separately. In the object manipulation evaluation, there were statistically significant differences in naturalness between Controller+Stylus and Mouse (p-value 0.003, R 1.000) and between Controller+Stylus and Hands (p-value 0.009, R 0.879). There was no statistically significant difference between Mouse and Hands. In object manipulation easiness, Controller+Stylus had a statistically significant difference to both Mouse and Hands (p-values 0.003, R 1.000 for both comparisons), see Figure 6. There was no statistically significant difference between Mouse and Hands.

¹ Cohen's d ≥ 0.8 is considered a large effect size.
² An R value ≥ 0.5 is considered a large effect size.

Figure 4: The task completion times for different conditions (top left). The median values for each participant are rather similar between the methods. There were two outlier values (by the same participant, for the Mouse and Hands conditions) that are removed from the visualization. The number of markings per five targets (top right). There were some differences between the interaction methods (the median value for Hands was higher than for the other methods), but no significant differences. The marking accuracy (bottom). There were some clear differences between the interaction methods in the final marking accuracy.
Figure 5: The evaluation of fatigue (left). None of the methods were found to be particularly tiring. The evaluation of possible daily use (right). Controller+Stylus was significantly more usable for daily use than the other methods.

In the manipulation accuracy evaluation, we found a statistically significant difference between the Controller+Stylus method and the Hands method (p-value 0.003, R 1.000). There were no statistically significant differences between Mouse and Controller+Stylus or between Hands and Mouse. In the object marking evaluation (Figure 7), the only significant difference was measured between the Controller+Stylus method and the Mouse method in easiness (p-value 0.009, R 1.000). There were no statistically significant differences between Hands and Controller+Stylus or between Hands and Mouse.

Multiple participants commented that the controller interaction felt stable and that it was easy to move and rotate the 3D model with the controller. The participants also commented that holding a physical device in the hand, so that its weight could be felt, increased the feeling of naturalness. Not all comments agreed: one participant felt the VR stylus was accurate, while another participant said it felt clumsy.

When asked, 11 out of 12 participants ranked Controller+Stylus as the most liked method. The distribution of ranking values is shown in Table 1. The ranking values of the Controller+Stylus method were statistically significantly different from Mouse (p-value 0.008, R 0.885) and Hands (p-value 0.003, R 1.000). There was no statistically significant difference between Mouse and Hands.

Figure 6: The evaluation of interaction method naturalness (left), easiness (middle), and accuracy (right). Controller+Stylus was the most liked method in these features.
Figure 7: The evaluation of marking method naturalness (left), easiness (middle), and accuracy (right). Median values in these features are rather similar, and a significant difference was found only in marking easiness.

Table 1: The number of mentions of different rankings of the interaction methods when asked for the most liked (1st), the second most liked (2nd), and the least liked (3rd) method.

| Condition | 1st | 2nd | 3rd |
| Mouse | 1 | 7 | 4 |
| Hands | 0 | 4 | 8 |
| Controller+Stylus | 11 | 1 | 0 |

6 DISCUSSION
In this study, we were looking for the most feasible interaction method in VR for object manipulation and marking in a medical context. The Controller+Stylus method was overall the most suitable for a task that requires both object manipulation and marking. Controller+Stylus was the most liked in all subjective features, while the Mouse and Hands conditions were evaluated very similarly. The smallest number of markings was made with Controller+Stylus, but no significant differences were found. There were statistically significant differences between the methods in daily use, interaction naturalness, and easiness. Controller+Stylus was statistically significantly more accurate in object manipulation than Hands (p-value 0.003), and easier to use than Mouse (p-value 0.003). Without earlier experience with the VR stylus, the participants had difficulties in finding the correct button when marking with the stylus. The physical stylus device cannot be seen when wearing the VR headset, and the button could not be felt clearly. Even though the Controller+Stylus combination was evaluated as natural and the most liked method in this study, hand-held devices may feel inconvenient [17].
In our study, some participants liked the physical feel of the devices. However, our result was based on the subjective opinions of the participants, and that might change depending on the use case or the devices.

There are many possible reasons for the low hand tracking accuracy. Hand inaccuracy can be seen in the large number of markings and the large distribution of task completion times with Hands, as the participants were not satisfied with their first marking. Hands was the only method where only one participant succeeded with the minimum of 5 markings, whereas with the other methods several participants succeeded in the task with 5 markings. One explanatory factor can be the lack of hand tracking fidelity, which has also been noticed in other studies [17, 42]. In addition, inaccuracy in the human motor system leads to the inaccuracy of hands [15]. The vision-based hand tracking system that uses a camera on the HMD does not recognize the hand gesture well enough, and as a result, the participant must repeat the same gesture or movement multiple times to succeed. This extra work also increases fatigue with Hands. Even though fatigue was low with all interaction methods, this study did not measure the fatigue of long-term activity. These are clear indications that Hands interaction needs further development before it can be used in tasks that need high marking accuracy. Several earlier studies have reported the inaccuracy of hands compared to controllers [15, 17, 42].

Passive haptics were available with Mouse and when marking with the VR stylus. With Hands, there was only visual feedback. The lack of any haptic feedback might have affected the marking accuracy as well, because the accuracy was much better with the physical stylus. Li et al. [22] found that at low marking difficulty, the mouse with a 2D display was faster than the kinesthetic force-feedback device in VR. For high marking difficulty, the other VR interface, which used a VR controller with vibrotactile feedback, was better than the 2D interface. They found that a mouse with a 2D display has fast pointing capability, but in our study the task completion times did not vary between Mouse and the other methods. Li et al. described that manipulating the viewing angle is more flexible when wearing an HMD than with a mouse and a 2D display. In VR interfaces, the participant can rotate the 3D object while changing the viewing angle by moving their head. In our study, all methods used the HMD, so the change of viewing angle was equally flexible.

Mouse was a statistically significantly more accurate marking method than Hands. Mouse was not affected by some of the issues that were noticed with Hands or Controller+Stylus. With Mouse, it was not felt to be problematic that the device cannot be seen during use. There were no sensor fidelity issues with Mouse, and Mouse was a familiar device to all participants. Only the ray that replaced the cursor was an unfamiliar feature and caused some problems. We found that the ray worked well with simple 3D models, but there were a lot of difficulties with complex models, where the viewing angle needed to be exactly right to reach the target. If any part of the 3D model blocked the ray, the target could not be marked. When the target was easy to mark, the accuracy using Mouse was high.
It can be stated that Mouse was an accurate method in VR, but all the other measured properties were better for Controller+Stylus.

Both the target and the marking were spheres in the 3D environment. During the study, it was noticed that when a participant made their marking in the same location as the target, the marking sphere disappeared inside the target sphere. This caused uncertainty about whether the marking was lost or whether it was in the center of the target. This may have affected the results, as the participants needed to re-mark to be able to see their marking, which was then no longer in the center of the target. In future studies, the marking sphere should be designed to be bigger than the target and transparent, so that the participant can be sure about the location of both spheres.

Our focus was on comparing three different interaction and marking methods and their suitability for the medical marking task. To simplify the experimental setup, the experiment was conducted with simplified medical images, which may have led to optimistic results for the viability of the methods. Even then, there were some problems with the Mouse interaction method. To further confirm that the results are similar for more realistic content, a similar study should be conducted in future work with authentic material utilizing, for example, original CBCT images in VR instead of the simplified ones.

Future research may investigate multimodal interaction methods to support even more natural alternatives. Speech is the primary mode of human communication [30]. Suresh et al. [33] used three voice commands to control the gestures of a robotic arm in VR. Voice is a well-suited input method in cases where the hands and eyes are continuously busy [15]. Pfeuffer et al. [26] studied gaze as an interaction method together with hand gestures, but found that both hand and gaze tracking still lack tracking fidelity. More work is still needed, as Nukarinen et al. [24] stated that human-factor issues made gaze the least preferred input method in an object selection task in VR.

7 CONCLUSION
3D medical images can be viewed in VR environments to plan surgeries with expected results. During the planning process, one needs to interact with the 3D models and be able to make markings of high accuracy on them. In this study, we evaluated the feasibility of three different VR interaction methods: Mouse, Hands, and the Controller+Stylus combination. Based on the results, we can state that the Valve Index controller and Logitech VR Ink stylus combination was the most feasible for tasks that require both 3D object manipulation and high marking accuracy in VR. This combination did not have issues with complex 3D models, and its sensor fidelity was better than with the Hands interaction. Statistically significant differences were found between the controller combination and the other methods.

Hand-based interaction was the least feasible for this kind of use according to the collected data. The Hands and Mouse methods were evaluated as almost equal in feasibility by the participants. With the current technology, free-hand usage cannot be proposed for accurate marking tasks. Mouse interaction was more accurate than Controller+Stylus. In detailed tasks, Mouse could replace the free-hands interaction.
The discrepancy between the 2D mouse and the 3D environment needs to be solved before Mouse could be considered a viable interaction method in VR.

### Review Title
some weaknesses with the study

### Review Text
I reviewed an earlier version of this submission. It has been improved by adding a video, which is very helpful to understand the interaction techniques, and also by citing more related work. There is some limited justification given in Section 1 (paragraphs 4 and 5) for the choice of interaction methods. I don't find the justification strong, but it is a justification nevertheless.

There are still two weaknesses with the study that make me not in favor of accepting the work at GI:

First, if I understand correctly, the users could click (or "mark") as many times as they wanted on or near the target before confirming that they had completed the task, and Figure 4 (upper right) shows that often users marked a target 10 or even 15 times before confirming that they had finished the task. Section 6, 2nd paragraph, confirms that many users did more than 5 marks per target. Section 6, paragraph 5, then tells us that users were sometimes confused when they had marked inside the target, because the marking sphere would disappear inside the target sphere, and this "may have affected the results when the participants needed to [perform] remarking to be able to see their marking that was not in the center of the target anymore". This kind of confusion should be eliminated during warm-up tasks or practice tasks, and the study did indeed have practice tasks (Section 4.3, paragraph 1), but it seems that users had control over when they could proceed to the real tasks. A better design for the experiment could force users to perform a minimum fixed number of practice tasks. It is unclear to me how much the results in time and accuracy were affected by this confusion.

Second, and more significantly, the fact that users often had to mark (click) 5 times or more also tells me that a better user interface would allow the user to mark close to the target, and then do something to make minor corrections, like hitting a button to switch to a lower gain, or hitting a button to switch to relative displacement, or hitting some arrow keys on a keyboard to make very small adjustments to the marked location. Such a user interface might require much less time to complete the task.

Additional, minor comments:

- Section 4.1 states 12 participants were chosen to achieve power of 0.8 assuming a large effect size, but footnote 1 on the next page states "Cohen's d >= 0.8 is considered a large effect size", and if I use https://statulator.com/SampleSize/ss2PM.html to find the sample size for an effect size of 0.8, it says we need a sample size of 16. On the other hand, for effect size 1.0, we only need a sample size of 11. So the submission should clarify what is meant in Section 4.1 by "large effect size".

- p values are not normally reported as "p-value 0.002", but rather as "p < 0.002" or "p = 0.002". A p value of "0.000" should not be reported, as p cannot be zero, it can only be bounded above (p < ...).

- Figure 4, top left: are these the times for each user and for each task and for each method, or are these times the total time over the 5 tasks for each user and for each method? Should we divide the times by 5 to obtain the time for each user and each task and each method? What about the time for each marking?
- In the results figures, what do the error bars show? Do they show quartiles, standard deviation, standard error, confidence intervals (for how much percent?), or something else?

### Review Rating
5: Marginally below acceptance threshold

### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct
TOzBiasXxyX
eswc-conferences.org/ESWC/2021/Conference/Resources_Track
2021
CSKG: The CommonSense Knowledge Graph
["Filip Ilievski", "Pedro Szekely", "Bin Zhang"]
Sources of commonsense knowledge aim to support applications in natural language understanding, computer vision, and knowledge graphs. These sources contain complementary knowledge to each other, which makes their integration desired. Yet, such integration is not trivial because of their different foci, modeling approaches, and sparse overlap. In this paper, we propose to consolidate commonsense knowledge by following five principles. We apply these principles to combine seven key sources into a first integrated CommonSense Knowledge Graph (CSKG). We perform analysis of CSKG and its various text and graph embeddings, showing that CSKG is a well-connected graph and that its embeddings provide a useful entry point to the graph. Moreover, we show the impact of CSKG as a source for reasoning evidence retrieval, and for pre-training language models for generalizable downstream reasoning. CSKG and all its embeddings are made publicly available to support further research on commonsense knowledge integration and reasoning.
["Commonsense Knowledge", "Knowledge graphs", "Embeddings"]
CSKG: The CommonSense Knowledge Graph
Filip Ilievski, Pedro Szekely, and Bin Zhang
Information Sciences Institute, University of Southern California
{ilievski,pszekely,binzhang}@isi.edu

Abstract. Sources of commonsense knowledge support applications in natural language understanding, computer vision, and knowledge graphs. Given their complementarity, their integration is desired. Yet, their different foci, modeling approaches, and sparse overlap make integration difficult. In this paper, we consolidate commonsense knowledge by following five principles, which we apply to combine seven key sources into a first integrated CommonSense Knowledge Graph (CSKG). We analyze CSKG and its various text and graph embeddings, showing that CSKG is well-connected and that its embeddings provide a useful entry point to the graph. We demonstrate how CSKG can provide evidence for generalizable downstream reasoning and for pre-training of language models. CSKG and all its embeddings are made publicly available to support further research on commonsense knowledge integration and reasoning.

Resource type: Knowledge graph
License: CC BY-SA 4.0
DOI: https://doi.org/10.5281/zenodo.4331372
Repository: https://github.com/usc-isi-i2/cskg
Keywords: commonsense knowledge · knowledge graph · embeddings

1 Introduction
Recent commonsense reasoning benchmarks [27,3] and neural advancements [17,16] shed a new light on the longstanding task of capturing, representing, and reasoning over commonsense knowledge. While state-of-the-art language models [8,17] capture linguistic patterns that allow them to perform well on commonsense reasoning tasks after fine-tuning, their robustness and explainability could benefit from integration with structured knowledge, as shown by KagNet [16] and HyKAS [18]. Let us consider an example task question from the SWAG dataset [38],¹ which describes a woman that takes a seat at the piano:

Q: On stage, a woman takes a seat at the piano. She:
1. sits on a bench as her sister plays with the doll.
2. smiles with someone as the music plays.
3. is in the crowd, watching the dancers.
-> 4. nervously sets her fingers on the keys.

¹ The multiple-choice task of choosing an intuitive follow-up scene is customarily called question answering [19,38], despite the absence of a formal question.

Answering this question requires knowledge that humans possess and apply, but machines cannot distill directly in communication. Luckily, graphs of (commonsense) knowledge contain such knowledge. ConceptNet's [29] triples state that pianos have keys and are used to perform music, which supports the correct option and discourages answer 2. WordNet [21] states specifically, though in natural language, that pianos are played by pressing keys. According to an image description in Visual Genome, a person could play piano while sitting and having their hands on the keyboard. In natural language, ATOMIC [26] indicates that before a person plays piano, they need to sit at it, be on stage, and reach for the keys. ATOMIC also lists strong feelings associated with playing piano. FrameNet's [1] frame of a performance contains two separate roles for the performer and the audience, meaning that these two are distinct entities, which can be seen as evidence against answer 3.

While these sources clearly provide complementary knowledge that can help commonsense reasoning, their different foci, representation formats, and sparse overlap make integration difficult.
Taxonomies, like WordNet, organize conceptual knowledge into a hierarchy of classes. An independent ontology, coupled with rich instance-level knowledge, is provided by Wikidata [34], a structured counterpart to Wikipedia. FrameNet, on the other hand, defines an orthogonal structure of frames and roles, each of which can be filled with a WordNet/Wikidata class or instance. Sources like ConceptNet or WebChild [31] provide more 'episodic' commonsense knowledge, whereas ATOMIC captures pre- and post-situations for an event. Image description datasets, like Visual Genome [14], contain visual commonsense knowledge. While links between these sources exist (mostly through WordNet synsets), the majority of their nodes and edges are disjoint.

In this paper, we propose an approach for integrating these (and more) sources into a single Common Sense Knowledge Graph (CSKG). We survey existing sources of commonsense knowledge to understand their particularities and we summarize the key challenges on the road to their integration (section 2). Next, we devise five principles and a representation model for a consolidated CSKG (section 3). We apply our approach to build the first version of CSKG, by combining seven complementary, yet disjoint, sources. We compute several graph and text embeddings to facilitate reasoning over the graph. In section 4, we analyze the content of the graph and the generated embeddings. We provide insights into the utility of CSKG for downstream reasoning on commonsense Question Answering (QA) tasks in section 5. In section 6 we reflect on the lessons learned and list the next steps for CSKG. We conclude in section 7.

2 Problem statement
2.1 Sources of Common Sense Knowledge
Table 1 summarizes the content, creation method, size, external mappings, and example resources for representative public commonsense sources: ConceptNet [29], WebChild [31], ATOMIC [26], Wikidata [34], WordNet [21], Roget [13], VerbNet [28], FrameNet [1], Visual Genome [14], and ImageNet [7].
Table 1. Survey of existing sources of commonsense knowledge.

| Source | Describes | Creation | Size | Mappings | Examples |
| ConceptNet | everyday objects, actions, states, relations (multilingual) | crowdsourcing | 36 relations, 8M nodes, 21M edges | WordNet, DBpedia, OpenCyc, Wiktionary | /c/en/piano, /c/en/piano/n, /c/en/piano/n/wn, /r/relatedTo |
| WebChild | everyday objects, actions, states, relations | curated automatic extraction | 4 relation groups, 2M nodes, 18M edges | WordNet | hasTaste, fasterThan |
| ATOMIC | event pre/post-conditions | crowdsourcing | 9 relations, 300k nodes, 877k edges | ConceptNet, Cyc | wanted-to, impressed |
| Wikidata | instances, concepts, relations | crowdsourcing | 1.2k relations, 75M objects, 900M edges | various | wd:Q1234, wdt:P31 |
| WordNet | words, concepts, relations | manual | 10 relations, 155k words, 176k synsets | - | dog.n.01, hypernymy |
| Roget | words, relations | manual | 2 relations, 72k words, 1.4M edges | - | truncate, antonym |
| VerbNet | verbs, relations | manual | 273 top classes, 23 roles, 5.3k senses | FrameNet, WordNet | perform-v, performance-26.7-1 |
| FrameNet | frames, roles, relations | manual | 1.9k edges, 1.2k frames, 12k roles, 13k lexical units | - | Activity, Change of leadership, New leader |
| Visual Genome | image objects, relations, attributes | crowdsourcing | 42k relations, 3.8M nodes, 2.3M edges, 2.8M attributes | WordNet | fire hydrant, white dog |
| ImageNet | image objects | crowdsourcing | 14M images, 22k synsets | WordNet | dog.n.01 |

Primarily, we observe that commonsense knowledge is spread over a number of sources with different foci: commonsense knowledge graphs (e.g., ConceptNet), general-domain knowledge graphs (e.g., Wikidata), lexical resources (e.g., WordNet, FrameNet), taxonomies (e.g., Wikidata, WordNet), and visual datasets (e.g., Visual Genome) [11]. Together, these sources cover a rich spectrum of knowledge, ranging from everyday knowledge, through event-centric knowledge and taxonomies, to visual knowledge. While the taxonomies have been created manually by experts, most of the commonsense and visual sources have been created by crowdsourcing or curated automatic extraction. Commonsense and common knowledge graphs (KGs) tend to be relatively large, with millions of nodes and edges, whereas the taxonomies and the lexical sources are notably smaller. Despite the diverse nature of these sources, we note that many contain mappings to WordNet, as well as to a number of other sources. These mappings might be incomplete, e.g., only a small portion of ATOMIC can be mapped to ConceptNet. Nevertheless, these high-quality mappings provide an opening for consolidation of commonsense knowledge, a goal we pursue in this paper.

2.2 Challenges
Combining these sources in a single KG faces three key challenges:
1. The sources follow different knowledge modeling approaches. One such difference concerns the relation set: there are very few relations in ConceptNet and WordNet, but (tens of) thousands of them in Wikidata and Visual Genome. Consolidation requires a global decision on how to model the relations. The granularity of knowledge is another factor of variance. While regular RDF triples fit some sources (e.g., ConceptNet), representing entire frames (e.g., in FrameNet), event conditions (e.g., in ATOMIC), or compositional image data (e.g., Visual Genome) might benefit from a more open format. An ideal representation would support the entire granularity spectrum.
2. As a number of these sources have been created to support natural language applications, they often contain imprecise descriptions.
Natural language phrases are often the main node types in the provided knowledge sources, which provides the benefit of easier access for natural language algorithms, but introduces ambiguity that might be undesired from a formal semantics perspective. An ideal representation would harmonize various phrasings of a concept, while retaining easy and efficient linguistic access to these concepts via their labels.
3. Although these sources contain links to existing ones, we observe sparse overlap. As these external links are typically to WordNet, and vary in terms of their version (3.0 or 3.1) or target (lemma or synset), the sources are still disjoint and establishing (identity) connections is difficult. Bridging these gaps, through optimally leveraging existing links, or extending them with additional ones automatically, is a modeling and integration challenge.

2.3 Prior consolidation efforts
Prior efforts that combine pairs or small sets of (mostly lexical) commonsense sources exist. A unidirectional manual mapping from VerbNet classes to WordNet and FrameNet is provided by the Unified Verb Index [33]. The Predicate Matrix [6] has a fully automatic mapping between lexical resources, including FrameNet, WordNet, and VerbNet. PreMOn [5] formalizes these in RDF. In [20], the authors produce partial mappings between WordNet and Wikipedia/DBpedia. Zareian et al. [37] combine edges from Visual Genome, WordNet, and ConceptNet to improve scene graph generation from an image. None of these efforts aspires to build a consolidated KG of commonsense knowledge.

Most similar to our effort, BabelNet [22] integrates many sources, covers a wide range of 284 languages, and primarily focuses on lexical and general-purpose resources, like WordNet, VerbNet, and Wiktionary. While we share the goal of integrating valuable sources for downstream reasoning, and some of these sources (e.g., WordNet) overlap with BabelNet, our ambition is to support commonsense reasoning applications. For this reason, we focus on commonsense knowledge graphs, like ConceptNet and ATOMIC, and even visual sources, like Visual Genome, none of which are found in BabelNet.

3 The Common Sense Knowledge Graph
3.1 Principles
Question answering and natural language inference tasks require knowledge from heterogeneous sources (section 2). To enable their joint usage, the sources need to be harmonized in a way that allows straightforward access by linguistic tools [18,16], easy splitting into arbitrary subsets, and computation of common operations, like (graph and word) embeddings or KG paths. For this purpose, we devise five principles for the consolidation of sources into a single commonsense KG (CSKG), driven by the pragmatic goals of simplicity, modularity, and utility:

P1. Embrace heterogeneity of nodes. One should preserve the natural node diversity inherent to the variety of sources considered, which entails blurring the distinction between objects (such as those in Visual Genome or Wikidata), classes (such as those in WordNet or ConceptNet), words (in Roget), actions (in ATOMIC or ConceptNet), frames (in FrameNet), and states (as in ATOMIC). It also allows formal nodes, describing unique objects, to co-exist with fuzzy nodes describing ambiguous lexical expressions.

P2. Reuse edge types across resources. To support reasoning algorithms like KagNet [16], the set of edge types should be kept to a minimum and reused across resources wherever possible.
For instance, the ConceptNet edge type /r/LocatedNear could be reused to express spatial proximity in Visual Genome.

P3. Leverage external links. The individual graphs are mostly disjoint according to their formal knowledge. However, high-quality links may exist or may be easily inferred, in order to connect these KGs and enable path finding. For instance, while ConceptNet and Visual Genome do not have direct connections, they can be partially aligned, as both have links to WordNet synsets.

P4. Generate high-quality probabilistic links. Inclusion of additional probabilistic links, either with off-the-shelf link prediction algorithms or with specialized algorithms (e.g., see section 3.3), would improve the connectedness of CSKG and help path finding algorithms reason over it. Given the heterogeneity of nodes (cf. P1), a 'one-method-fits-all' node resolution might not be suitable.

P5. Enable access to labels. The CSKG format should support easy and efficient natural language access. Labels and aliases associated with KG nodes provide application-friendly and human-readable access to the CSKG, and can help us unify descriptions of the same or similar concepts across sources.

3.2 Representation
We model CSKG as a hyper-relational graph, describing edges in a tabular KGTK [10] format. We opted for this representation rather than the traditional RDF/OWL2 because it allows us to fulfill our goals (of simplicity and utility) and follow our principles more directly, without compromising on the format. For instance, natural language access (principle P5) to RDF/OWL2 nodes requires graph traversal over its rdfs:label relations. Including both reliable and probabilistic nodes (P3 and P4) would require a mechanism to easily indicate edge weights, which in RDF/OWL2 entails the inclusion of blank nodes and a number of additional edges. Moreover, the simplicity of our tabular format allows us to use standard off-the-shelf functionalities and mature tooling, like the pandas² and graph-tool³ libraries in Python, or graph embedding tools like [15], which have been conveniently wrapped by the KGTK [10] toolkit.⁴

The edges in CSKG are described by ten columns. Following KGTK, the primary information about an edge consists of its id, node1, relation, and node2. Next, we include four "lifted" edge columns, using KGTK's abbreviated way of representing triples about the primary elements, such as node1;label or relation;label (the label of node1 and of relation). Each edge is completed by two qualifiers: source, which specifies the source(s) of the edge (e.g., "CN" for ConceptNet), and sentence, containing the linguistic lexicalization of a triple, if given by the original source. Auxiliary KGTK files can be added to describe additional knowledge about some edges, such as their weight, through the corresponding edge ids. We provide further documentation at: https://cskg.readthedocs.io/.

² https://pandas.pydata.org/
³ https://graph-tool.skewed.de/
⁴ CSKG can be transformed to RDF with kgtk generate-wikidata-triples.
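To illustrate the tabular edge format, the following sketch builds a tiny KGTK-style table and accesses it by label with pandas. The concrete rows and the column subset shown here are illustrative assumptions; real CSKG files contain all ten columns.

```python
import io
import pandas as pd

# A hypothetical two-edge, tab-separated KGTK snippet; only a subset of
# the ten CSKG columns is shown for readability.
kgtk_tsv = (
    "id\tnode1\trelation\tnode2\tnode1;label\trelation;label\tsource\tsentence\n"
    "e1\t/c/en/piano\t/r/UsedFor\t/c/en/music\tpiano\tused for\tCN\t"
    "[[a piano]] is for [[music]]\n"
    "e2\t/c/en/piano\t/r/LocatedNear\t/c/en/seat\tpiano\tlocated near\tVG\t\n"
)

edges = pd.read_csv(io.StringIO(kgtk_tsv), sep="\t")
# Label-based access (principle P5): all edges whose node1 label is "piano".
print(edges.loc[edges["node1;label"] == "piano", ["relation", "node2"]])
```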
3.3 Consolidation
Currently, CSKG integrates seven sources, selected based on their popularity in existing QA work: a commonsense knowledge graph (ConceptNet), a visual commonsense source (Visual Genome), a procedural source (ATOMIC), a general-domain source (Wikidata), and three lexical sources (WordNet, Roget, and FrameNet). Here, we briefly present our design decisions per source, the mappings that facilitate their integration, and further refinements on CSKG.

3.3.1 Individual sources. We keep the original edges of ConceptNet 5.7, expressed with 47 relations in total. We also include the entire ATOMIC KG, preserving the original nodes and its nine relations. To enhance lexical matching between ATOMIC and other sources, we add normalized labels to its nodes, e.g., adding a second label "accepts invitation" to the original one "personX accepts personY's invitation". We import four node types from FrameNet: frames, frame elements (FEs), lexical units (LUs), and semantic types (STs), and we reuse 5 categories of FrameNet edges: frame-frame (13 edge types), frame-FE (1 edge type), frame-LU (1 edge type), FE-ST (1 edge type), and ST-ST (3 edge types). Following principle P2 on edge type reuse, we map these 19 edge types to 9 relations in ConceptNet, e.g., is_causative_of is converted to /r/Causes. For Roget, we include all synonyms and antonyms between words, reusing the ConceptNet relations /r/Synonym and /r/Antonym (P2). We represent Visual Genome as a KG, by representing its image objects as WordNet synsets (e.g., wn:shoe.n.01). We express relationships between objects via ConceptNet's /r/LocatedNear edge type. Object attributes are represented by different edge types, conditioned on their part of speech: we reuse ConceptNet's /r/CapableOf for verbs, while we introduce a new relation, mw:MayHaveProperty, for adjective attributes. We include the Wikidata-CS subset of Wikidata, extracted in [12]. Its 101k statements have been manually mapped to 15 ConceptNet relations. We include four relations from WordNet v3.0 by mapping them to three ConceptNet relations: hypernymy (using /r/IsA), part and member holonymy (through /r/PartOf), and substance meronymy (with /r/MadeOf).

3.3.2 Mappings. We perform node resolution by applying existing identity mappings (P3) and by generating probabilistic mappings automatically (P4). We introduce a dedicated relation, mw:SameAs, to indicate identity between two nodes.

WordNet-WordNet: The WordNet v3.1 identifiers in ConceptNet and the WordNet v3.0 synsets from Visual Genome are aligned by leveraging the WordNet InterLingual Index (ILI),⁵ which generates 117,097 mw:SameAs mappings.

⁵ https://github.com/globalwordnet/ili

WordNet-Wikidata: We generate links between WordNet synsets and Wikidata nodes as follows. For each synset, we retrieve 50 candidate nodes from a customized index of Wikidata. Then, we compute sentence embeddings of the descriptions of the synset and of each Wikidata candidate by using a pre-trained XLNet model [36]. We create a mw:SameAs edge between the synset and the Wikidata candidate with the highest cosine similarity of their embeddings. Each mapping is validated by one student. In total, 17 students took part in this validation. Out of the 112k edges produced by the algorithm, the manual validation marked 57,145 as correct. We keep these in CSKG and discard the rest.

FrameNet-ConceptNet: We link FrameNet nodes to ConceptNet in two ways. FrameNet LUs are mapped to ConceptNet nodes through the Predicate Matrix [6], yielding 3,016 mw:SameAs edges. Then, we use 200k hand-labeled sentences from the FrameNet corpus, each annotated with a target frame, a set of FEs, and their associated words. We treat these words as LUs of the corresponding FE, and ground them to ConceptNet with the rule-based method of [16].

Lexical matching: We establish 74,259 mw:SameAs links between nodes in ATOMIC, ConceptNet, and Roget by exact lexical match of their labels. We restrict this matching to lexical nodes (e.g., /c/en/cat and not /c/en/cat/n/wn/animal).
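The WordNet-Wikidata linking step described above can be sketched as follows. We use the sentence-transformers library as an illustrative stand-in for the pre-trained XLNet encoder used in the paper, and the candidate descriptions are made up for the example.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative stand-in encoder; the paper encodes descriptions with a
# pre-trained XLNet model instead.
model = SentenceTransformer("all-MiniLM-L6-v2")

synset_gloss = "piano: a keyboard instrument played by depressing keys"
# Hypothetical candidates retrieved from a customized Wikidata index.
candidates = {
    "wd:Q5994": "piano; musical instrument with a keyboard",
    "wd:Q52954": "keyboard; set of keys on a piano or typewriter",
}

synset_vec = model.encode(synset_gloss)
cand_ids = list(candidates)
cand_vecs = model.encode([candidates[c] for c in cand_ids])

# Cosine similarity between the synset gloss and each candidate description.
sims = cand_vecs @ synset_vec / (
    np.linalg.norm(cand_vecs, axis=1) * np.linalg.norm(synset_vec))

# The best-scoring candidate becomes a mw:SameAs edge, pending validation.
print(cand_ids[int(np.argmax(sims))])
```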
3.3.3 Refinement. We consolidate the seven sources and their interlinks as follows. After transforming them to the representation described in the past two sections, we concatenate them in a single graph. We deduplicate this graph and append all mappings, resulting in CSKG*. Finally, we apply the mappings to merge identical nodes (connected with mw:SameAs) and perform a final deduplication of the edges, resulting in our consolidated CSKG graph. The entire procedure of importing the individual sources and consolidating them into CSKG is implemented with KGTK operations [10], and can be found on our GitHub.⁶

⁶ https://github.com/usc-isi-i2/cskg/blob/master/consolidation/create_cskg.sh

Fig. 1. Snippet of CSKG for the example task of section 1. CSKG combines: 1) lexical nodes (piano, keys, music; in blue), 2) synsets like piano (artifact), seat (dramaturgy) (in green), and 3) frames (fn:noise_makers) and frame elements (fn:fe:use) (in purple). The link between piano and piano (artifact) is missing, but trivial to infer.

3.4 Embeddings
Embeddings provide a convenient entry point to KGs and enable reasoning on both intrinsic and downstream tasks. For instance, many reasoning applications (cf. [18,16]) of ConceptNet leverage its NumberBatch embeddings [29]. Motivated by these observations, we aspire to produce high-quality embeddings of the CSKG graph. We experiment with two families of embedding algorithms. On the one hand, we produce variants of popular graph embeddings: TransE [4], DistMult [35], ComplEx [32], and RESCAL [24]. On the other hand, we produce various text (Transformer-based) embeddings based on BERT-large [8]. For BERT, we first create a sentence for each node, based on a template that encompasses its neighborhood, which is then encoded with BERT's sentence transformer model. All embeddings are computed with the KGTK operations graph-embeddings and text-embeddings. We analyze them in section 4.2. The CSKG embeddings are publicly available at http://shorturl.at/pAGX8.
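As a toy illustration of the graph-embedding family, the snippet below shows the TransE scoring intuition: an edge (h, r, t) is plausible when the vector of h plus the vector of r lands near the vector of t. The random vectors are placeholders, since real CSKG embeddings are trained, e.g., with the KGTK graph-embeddings operation.

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 100  # the dimension used for the CSKG graph embeddings

# Placeholder vectors; trained TransE embeddings would be loaded instead.
emb = {name: rng.normal(size=dim) for name in
       ["/c/en/piano", "/r/UsedFor", "/c/en/music", "/c/en/cat"]}

def transe_score(h, r, t):
    # Higher (less negative) score means a more plausible edge.
    return -float(np.linalg.norm(emb[h] + emb[r] - emb[t]))

print(transe_score("/c/en/piano", "/r/UsedFor", "/c/en/music"))
print(transe_score("/c/en/piano", "/r/UsedFor", "/c/en/cat"))
```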
4 Analysis

Figure 1 shows a snippet of CSKG that corresponds to the task in section 1. Following P1, CSKG combines: 1) lexical nodes (piano, keys, music), 2) synsets like piano (artifact) and seat (dramaturgy), and 3) frames (fn:noise_makers) and frame elements (fn:fe:use). According to P2, we reuse edge types where applicable: for instance, we use ConceptNet's LocatedNear relation to formalize Visual Genome's proximity information between a woman and a piano. We leverage external links to WordNet to consolidate synsets across sources (P3). We generate further links (P4) to connect FrameNet frames and frame elements to ConceptNet nodes, and to consolidate the representation of piano (artifact) between Wikidata and WordNet. In the remainder of this section, we perform qualitative analysis of CSKG and its embeddings.

Table 2. CSKG statistics. Abbreviations: CN=ConceptNet, VG=Visual Genome, WN=WordNet, RG=Roget, WD=Wikidata, FN=FrameNet, AT=ATOMIC. Relation numbers in brackets are before consolidating to ConceptNet.

                 AT         CN       FN         RG         VG       WD       WN      CSKG*      CSKG
#nodes        304,909  1,787,373   15,652     71,804     11,264   91,294   71,243  2,414,813  2,160,968
#edges        732,723  3,423,004   29,873  1,403,955  2,587,623  111,276  101,771  6,349,731  6,001,531
#relations          9         47   9 (23)          2    3 (42k)        3  15 (45)         59         58
avg degree       4.81       3.83     3.82       39.1     459.45     2.44     2.86       5.26       5.55
std degree       0.07       0.02     0.13       0.34      35.81     0.02     0.05       0.02       0.03

Table 3. Nodes with highest centrality score according to PageRank and HITS.

PageRank                                HITS hubs       HITS authorities
/c/en/chromatic/a/wn                    /c/en/red       /c/en/blue
/c/en/organic_compound                  /c/en/yellow    /c/en/red
/c/en/chemical_compound/n               /c/en/green     /c/en/silver
/c/en/change/n/wn/artifact              /c/en/silver    /c/en/green
/c/en/natural_science/n/wn/cognition    /c/en/blue      /c/en/gold

4.1 Statistics

Basic statistics of CSKG are shown in Table 2. In total, our mappings produce 251,517 mw:SameAs links and 45,659 fn:HasLexicalUnit links. After refinement, i.e., removal of the duplicates and merging of the identical nodes, CSKG consists of 2.2 million nodes and 6 million edges. In terms of edges, its largest subgraph is ConceptNet (3.4 million), whereas ATOMIC comes second with 733 thousand edges. These two graphs also contribute the largest number of nodes to CSKG. The three most common relations in CSKG are /r/RelatedTo (1.7 million), /r/Synonym (1.2 million), and /r/Antonym (401 thousand edges).

Connectivity and centrality. The mean degree of CSKG grows by 5.5% (from 5.26 to 5.55) after merging identical nodes. Compared to ConceptNet, its degree is 45% higher, due to its increased number of edges while keeping the number of nodes nearly constant. The best connected subgraphs are Visual Genome and Roget. CSKG's high connectivity is owed largely to these two sources and our mappings, as the other five sources have degrees below that of CSKG. The abnormally large node degrees and variance of Visual Genome are due to its annotation guidelines, which dictate that all concept-to-concept information be annotated, and to our modeling choice to represent its nodes through their synsets. We report that the in-degree and out-degree distributions of CSKG have Zipfian shapes, a notable difference being that the maximal in-degree is nearly double the maximal out-degree (11k vs 6.4k). To understand better the central nodes in CSKG, we compute PageRank and HITS metrics. The top-5 results are shown in Table 3. We observe that the node with the highest PageRank has the label "chromatic", while all dominant HITS hubs and authorities are colors, revealing that knowledge on colors of real-world objects is common in CSKG. PageRank also reveals that knowledge on natural and chemical processes is well-represented in CSKG. Finally, we note that the top-centrality nodes are generally described by multiple subgraphs, e.g., /c/en/natural_science/n/wn/cognition is found in ConceptNet and WordNet, whereas the color nodes (e.g., /c/en/red) are shared between Roget and ConceptNet.
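The centrality computation can be reproduced with standard graph tooling. A minimal sketch with networkx follows; it is illustrative only, as the paper's statistics are computed with graph-tool/KGTK, and the toy edges stand in for the full CSKG edge file.

```python
import networkx as nx

# Toy graph standing in for the full CSKG edge file (cskg.tsv).
G = nx.DiGraph()
G.add_edges_from([
    ("/c/en/piano", "/c/en/music"),
    ("/c/en/piano", "/c/en/keys"),
    ("/c/en/keys", "/c/en/music"),
])

pagerank = nx.pagerank(G)        # node -> PageRank score
hubs, authorities = nx.hits(G)   # node -> hub score, node -> authority score

top5 = sorted(pagerank, key=pagerank.get, reverse=True)[:5]
print(top5)
```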
4.2 Analysis of the CSKG embeddings

Fig. 2. UMAP visualization of 5,000 randomly sampled nodes from CSKG, represented by TransE (left) and BERT (right) embeddings. Colors signify node sources.

We randomly sample 5,000 nodes from CSKG and visualize their embeddings computed with an algorithm from each family: TransE and BERT. The results are shown in Figure 2. We observe that graph embeddings group nodes from the same source together. This is because graph embeddings tend to focus on the graph structure, and because most links in CSKG are still within sources. We observe that the sources are more intertwined in the case of the BERT embeddings, because of the emphasis on lexical over structural similarity. Moreover, in both plots Roget is dispersed around the ConceptNet nodes, which is likely due to its broad coverage of concepts that maps both structurally and lexically to ConceptNet. At the same time, while ATOMIC overlaps with a subset of ConceptNet [26], the two sources mostly cover different areas of the space.

Table 4. Top-5 most similar nodes for /c/en/turtle/n/wn/animal (E1) and /c/en/happy (E2) according to TransE and BERT.

E1  TransE                                      BERT
    /c/en/chelonian/n/wn/animal                 /c/en/glyptemys/n
    /c/en/mud_turtle/n/wn/animal                /c/en/pelocomastes/n
    /c/en/cooter/n/wn/animal                    /c/en/staurotypus/n
    /c/en/common_snapping_turtle/n/wn/animal    /c/en/parahydraspis/n
    /c/en/sea_turtle/n/wn/animal                /c/en/trachemys/n

E2  TransE              BERT
    /c/en/excited       /c/en/bring_happiness
    /c/en/satisfied     /c/en/new_happiness
    /c/en/smile_mood    at:like a party is a good way to ...
    /c/en/pleased       /c/en/encouraging_person's_talent
    /c/en/joyful        at:happy that they went to the party

Table 4 shows the top-5 most similar neighbors for /c/en/turtle/n/wn/animal and /c/en/happy according to TransE and BERT. We note that while graph embeddings favor nodes that are structurally similar (e.g., /c/en/turtle/n/wn/animal and /c/en/chelonian/n/wn/animal are both animals in WordNet), text embeddings give much higher importance to lexical similarity of nodes or their neighbors, even when the nodes are disconnected in CSKG (e.g., /c/en/happy and at:happy that they went to the party). These results are expected considering the approach behind each algorithm.

Word association with embeddings. To quantify the utility of different embeddings, we evaluate them on the USF-FAN [23] benchmark, which contains crowdsourced commonsense associations for 5,019 "stimulus" concepts in English. For instance, the associations provided for day are: night, light, sun, time, week, and break. The associations are ordered descendingly based on their frequency. With each algorithm, we produce a top-K most similar neighbors list based on the embedding of the stimulus concept. Here, K is the number of associations for a concept, which varies across stimuli. If CSKG has multiple nodes for the stimulus label, we average their embeddings. For the graph embeddings, we use a logistic loss function with a dot comparator, a learning rate of 0.1, and dimension 100. The BERT text embeddings have dimension 1024, which is the native dimension of this language model. As the text embedding models often favor surface-form similarity (e.g., associations like daily for day), we devise variants of this method that exclude associations with Levenshtein similarity higher than a threshold t.

We evaluate by comparing the embedding-based list to the benchmark one, through customary ranking metrics, like Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG). Our investigations show that TransE is the best-performing algorithm overall, with MAP of 0.207 and NDCG of 0.530. The optimal BERT variant uses a threshold of t = 0.9, scoring with MAP of 0.209 and NDCG of 0.268. The obtained MAP scores indicate that the embeddings capture relevant signals, yet a principled solution to USF-FAN requires a more sophisticated embedding search method that can capture various forms of both relatedness and similarity. In the future, we aim to investigate embedding techniques that integrate structural and content information, like RDF2Vec [25], and to evaluate on popular word similarity datasets like WordSim-353 [9].
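A minimal sketch of this evaluation loop is shown below. It is an approximation of the described setup: the vectors dictionary stands for precomputed node embeddings, and difflib's SequenceMatcher ratio stands in for the Levenshtein similarity used as the surface-form filter.

```python
from difflib import SequenceMatcher

import numpy as np

def top_k_associations(stimulus, vectors, k, t=0.9):
    """Rank nodes by cosine similarity to the stimulus embedding, dropping
    near-duplicate surface forms above the similarity threshold t."""
    q = vectors[stimulus] / np.linalg.norm(vectors[stimulus])
    scored = []
    for node, v in vectors.items():
        if node == stimulus:
            continue
        if SequenceMatcher(None, stimulus, node).ratio() > t:
            continue  # filters e.g. "daily" as an association for "day"
        scored.append((float(q @ (v / np.linalg.norm(v))), node))
    return [node for _, node in sorted(scored, reverse=True)[:k]]

def average_precision(predicted, gold):
    """MAP building block: average precision of one ranked list."""
    hits, total = 0, 0.0
    for rank, item in enumerate(predicted, start=1):
        if item in gold:
            hits += 1
            total += hits / rank
    return total / max(len(gold), 1)
```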
5 Applications

As the creation of CSKG is largely driven by downstream reasoning needs, we now investigate its relevance for commonsense question answering: 1) we measure its ability to contribute novel evidence to support reasoning, and 2) we measure its role in pre-training language models for zero-shot downstream reasoning.

Table 5. Number of triples retrieved with ConceptNet and CSKG on different datasets.

           train                                 dev
       #Questions  ConceptNet     CSKG     #Questions  ConceptNet    CSKG
CSQA        9,741      78,729   125,552         1,221       9,758   15,662
SIQA       33,410     126,596   266,937         1,954       7,850   16,149
PIQA       16,113      18,549    59,684         1,838       2,170    6,840
aNLI      169,654     257,163   638,841         1,532       5,603   13,582

5.1 Retrieving evidence from CSKG

We measure the relevance of CSKG for commonsense question answering tasks by comparing the number of retrieved triples that connect keywords in the question and in the answers. For this purpose, we adapt the lexical grounding in HyKAS [18] to retrieve triples from CSKG instead of its default knowledge source, ConceptNet. We expect that CSKG can provide much more evidence than ConceptNet, both in terms of the number of triples and their diversity. We experiment with four commonsense datasets: CommonSense QA (CSQA) [30], Social IQA (SIQA) [27], Physical IQA (PIQA) [3], and abductive NLI (aNLI) [2]. As shown in Table 5, CSKG significantly increases the number of evidence triples that connect terms in questions with terms in answers, in comparison to ConceptNet. We note that the increase is on average 2-3 times, the expected exception being CSQA, which was inferred from ConceptNet.

We inspect a sample of questions to gain insight into whether the additional triples are relevant and could benefit reasoning. For instance, let us consider the CSQA question "Bob the lizard lives in a warm place with lots of water. Where does he probably live?", whose correct answer is "tropical rainforest". In addition to the ConceptNet triple /c/en/lizard /r/AtLocation /c/en/tropical_rainforest, CSKG provides two additional triples, stating that tropical is an instance of place and that water may have the property tropical. The first additional edge stems from our mappings from FrameNet to ConceptNet, whereas the second comes from Visual Genome. We note that, while CSKG increases the coverage with respect to available commonsense knowledge, it is also incomplete: in the above example, useful information such as warm temperatures being typical for tropical rainforests is still absent.
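A simplified version of this lexical grounding can be expressed directly over the KGTK edge table. The sketch below is an approximation: the file name cskg.tsv stands for the released edge file, the column names follow the KGTK edge format of section 3.2, and the actual HyKAS grounding additionally lemmatizes terms and matches multi-word spans.

```python
import pandas as pd

# Keep CSKG edges whose endpoint labels match a question keyword on one side
# and an answer keyword on the other side.
edges = pd.read_csv("cskg.tsv", sep="\t",
                    usecols=["node1;label", "relation", "node2;label"])

def retrieve(question_terms, answer_terms):
    n1 = edges["node1;label"].str.lower()
    n2 = edges["node2;label"].str.lower()
    mask = (n1.isin(question_terms) & n2.isin(answer_terms)) | \
           (n1.isin(answer_terms) & n2.isin(question_terms))
    return edges[mask]

evidence = retrieve({"lizard", "warm", "water"}, {"tropical rainforest"})
```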
5.2 Pre-training language models with CSKG

We have studied the role of various subsets of CSKG for downstream QA reasoning extensively in [19]. There, CSKG or its subsets were transformed into artificial commonsense question answering tasks. These tasks were then used instead of training data to pre-train language models, like RoBERTa and GPT-2. Such a CSKG-based pre-trained language model was then 'frozen' and evaluated in a zero-shot manner across a wide variety of commonsense tasks, ranging from question answering through pronoun resolution to natural language inference.

Table 6. Zero-shot evaluation results with different combinations of models and knowledge sources, across five commonsense tasks, as reported in [19]. CWWV combines ConceptNet, Wikidata, WordNet, and Visual Genome. CSKG is a union of ATOMIC and CWWV. We report mean accuracy over three runs, with 95% confidence intervals.

Model      KG      aNLI         CSQA         PIQA         SIQA         WG
GPT2-L     ATOMIC  59.2 (±0.3)  48.0 (±0.9)  67.5 (±0.7)  53.5 (±0.4)  54.7 (±0.6)
GPT2-L     CWWV    58.3 (±0.4)  46.2 (±1.0)  68.6 (±0.7)  48.0 (±0.7)  52.8 (±0.9)
GPT2-L     CSKG    59.0 (±0.5)  48.6 (±1.0)  68.6 (±0.9)  53.3 (±0.5)  54.1 (±0.5)
RoBERTa-L  ATOMIC  70.8 (±1.2)  64.2 (±0.7)  72.1 (±0.5)  63.1 (±1.5)  59.6 (±0.3)
RoBERTa-L  CWWV    70.0 (±0.3)  67.9 (±0.8)  72.0 (±0.7)  54.8 (±1.2)  59.4 (±0.5)
RoBERTa-L  CSKG    70.5 (±0.2)  67.4 (±0.8)  72.4 (±0.4)  63.2 (±0.7)  60.9 (±0.8)
Human      -       91.4         88.9         94.9         86.9         94.1

We select key results from these experiments in Table 6. The results demonstrate that no single knowledge source suffices for all benchmarks and that using CSKG is overall beneficial compared to using its subgraphs, thus directly showing the benefit of commonsense knowledge consolidation. In a follow-up study [11], we further exploit the consolidation in CSKG to pre-train the language models with one dimension (knowledge type) at a time, noting that certain dimensions of knowledge (e.g., temporal knowledge) are much more useful for reasoning than others, like lexical knowledge. In both cases, the kind of knowledge that benefits each task is ultimately conditioned on the alignment between this knowledge and the targeted task, indicating that subsequent work should further investigate how to dynamically align knowledge with the task at hand.

6 Discussion

Our analysis in section 4 revealed that the connectivity in CSKG is higher than a mere concatenation of the individual sources, due to our mappings across sources and the merging of identical nodes. Its KGTK format allowed us to seamlessly compute and evaluate a series of embeddings, observing that TransE and BERT with additional filtering are the two best-performing and complementary algorithms. The novel evidence brought by CSKG on downstream QA tasks (section 5) is a signal that can be exploited by reasoning systems to enhance their performance and robustness, as shown in [19]. Yet, the quest for a rich, high-coverage CSKG is far from completed. We briefly discuss two key challenges; a broader discussion can be found in [11].

Node resolution. As a large part of CSKG consists of lexical nodes, it suffers from the standard challenges of linguistic ambiguity and variance. For instance, there are 18 nodes in CSKG that have the label 'scene', which include WordNet or OpenCyc synsets, Wikidata Qnodes, frame elements, and a lexical node. Variance is another challenge, as /c/en/caffeine, /c/en/caffine, and /c/en/the_active_ingredient_caffeine are all separate nodes in ConceptNet (and in CSKG). We are currently investigating techniques for node resolution applicable to the heterogeneity of commonsense knowledge in CSKG.

Semantic enrichment. We have normalized the edge types across sources to a single, ConceptNet-centric set of 58 relations. In [11], we classify all CSKG's relations into 13 dimensions, enabling us to consolidate the edge types further. At the same time, some of these relations hide fine-grained distinctions; for example, WebChild [31] defines 19 specific property relations, including temperature, shape, and color, all of which correspond to ConceptNet's /r/HasProperty. A novel future direction is to produce a hierarchy for each of the relations, and to refine existing triples by using a more specific relation (e.g., use the predicate 'temperature' instead of 'property' when the object of the triple is 'cold').
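The refinement direction just described can be prototyped in a few lines. Below is a minimal sketch under stated assumptions: the dimension lexicon and the specialized relation names (e.g., mw:HasTemperature) are hypothetical placeholders introduced here for illustration, not existing CSKG relations.

```python
import pandas as pd

# Specialize /r/HasProperty edges when the object signals a known
# fine-grained dimension; leave all other edges untouched.
DIMENSIONS = {
    "cold": "mw:HasTemperature", "hot": "mw:HasTemperature",   # hypothetical
    "round": "mw:HasShape", "red": "mw:HasColor",              # hypothetical
}

edges = pd.DataFrame([
    {"node1": "/c/en/ice", "relation": "/r/HasProperty", "node2": "/c/en/cold"},
    {"node1": "/c/en/ball", "relation": "/r/HasProperty", "node2": "/c/en/round"},
])

is_prop = edges["relation"] == "/r/HasProperty"
objects = edges["node2"].str.split("/").str[3]   # e.g. 'cold' from /c/en/cold
edges.loc[is_prop, "relation"] = objects[is_prop].map(DIMENSIONS).fillna("/r/HasProperty")
```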
7 Conclusions and Future Work

While current commonsense knowledge sources contain complementary knowledge that would be beneficial as a whole for downstream tasks, such usage is prevented by different modeling approaches, foci, and sparsity of available mappings. Optimizing for simplicity, modularity, and utility, we proposed a hyper-relational graph representation that describes many nodes with a few edge types, maximizes the high-quality links across subgraphs, and enables natural language access. We applied this representation approach to consolidate a commonsense knowledge graph (CSKG) from seven very diverse and disjoint sources: a text-based commonsense knowledge graph ConceptNet, a general-purpose taxonomy Wikidata, an image description dataset Visual Genome, a procedural knowledge source ATOMIC, and three lexical sources: WordNet, Roget, and FrameNet. CSKG describes 2.2 million nodes with 6 million statements. Our analysis showed that CSKG is a well-connected graph and more than 'a simple sum of its parts'. Together with CSKG, we also publicly release a series of graph and text embeddings of the CSKG nodes, to facilitate future usage of the graph. Our analysis showed that graph and text embeddings of CSKG have complementary notions of similarity: the former focus on structural patterns, while the latter focus on lexical features of the node's label and of its neighborhood. Applying CSKG on downstream commonsense reasoning tasks, like QA, showed an increased recall as well as an advantage when pre-training a language model to reason across datasets in a zero-shot fashion. Key standing challenges for CSKG include semantic consolidation of its nodes and refinement of its property hierarchy. Notebooks for analyzing these resources can be found on our public GitHub page: https://github.com/usc-isi-i2/cskg/tree/master/ESWC2021.

Acknowledgements

This work is sponsored by the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research, and by the Air Force Research Laboratory under agreement number FA8750-20-2-10002.

References

1. Baker, C.F., Fillmore, C.J., Lowe, J.B.: The Berkeley FrameNet project. In: Proceedings of the 17th International Conference on Computational Linguistics (1998)
2. Bhagavatula, C., Bras, R.L., Malaviya, C., Sakaguchi, K., Holtzman, A., Rashkin, H., Downey, D., Yih, S.W.t., Choi, Y.: Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739 (2019)
3. Bisk, Y., Zellers, R., Bras, R.L., Gao, J., Choi, Y.: PIQA: Reasoning about physical commonsense in natural language. arXiv preprint arXiv:1911.11641 (2019)
4. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems 26, 2787–2795 (2013)
5. Corcoglioniti, F., Rospocher, M., Aprosio, A.P., Tonelli, S.: PreMOn: a lemon extension for exposing predicate models as linked data. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16) (2016)
6. De Lacalle, M.L., Laparra, E., Aldabe, I., Rigau, G.: Predicate Matrix: automatically extending the semantic interoperability between predicate resources. Language Resources and Evaluation 50(2), 263–289 (2016)
7. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 248–255. IEEE (2009)
8. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
9. Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., Ruppin, E.: Placing search in context: The concept revisited. In: Proceedings of the 10th International Conference on World Wide Web. pp. 406–414 (2001)
10. Ilievski, F., Garijo, D., Chalupsky, H., Divvala, N.T., Yao, Y., Rogers, C., Li, R., Liu, J., Singh, A., Schwabe, D., Szekely, P.: KGTK: A toolkit for large knowledge graph manipulation and analysis. ISWC (2020)
11. Ilievski, F., Oltramari, A., Ma, K., Zhang, B., McGuinness, D.L., Szekely, P.: Dimensions of commonsense knowledge. arXiv preprint arXiv:2101.04640 (2021)
12. Ilievski, F., Szekely, P., Schwabe, D.: Commonsense knowledge in Wikidata. Proceedings of the Wikidata Workshop, ISWC (2020)
13. Kipfer, B.: Roget's 21st Century Thesaurus in Dictionary Form (ed. 3) (2005)
14. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123(1), 32–73 (2017)
15. Lerer, A., Wu, L., Shen, J., Lacroix, T., Wehrstedt, L., Bose, A., Peysakhovich, A.: PyTorch-BigGraph: A large-scale graph embedding system. arXiv preprint arXiv:1903.12287 (2019)
16. Lin, B.Y., Chen, X., Chen, J., Ren, X.: KagNet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151 (2019)
17. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
18. Ma, K., Francis, J., Lu, Q., Nyberg, E., Oltramari, A.: Towards generalizable neuro-symbolic systems for commonsense question answering. EMNLP-COIN (2019)
19. Ma, K., Ilievski, F., Francis, J., Bisk, Y., Nyberg, E., Oltramari, A.: Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In: 35th AAAI Conference on Artificial Intelligence (2021)
20. McCrae, J.P.: Mapping WordNet instances to Wikipedia. In: Proceedings of the 9th Global WordNet Conference (GWC 2018). pp. 62–69 (2018)
21. Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
22. Navigli, R., Ponzetto, S.P.: BabelNet: Building a very large multilingual semantic network. In: Proceedings of ACL (2010)
23. Nelson, D.L., McEvoy, C.L., Schreiber, T.A.: The University of South Florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers 36(3), 402–407 (2004)
24. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: ICML. vol. 11, pp. 809–816 (2011)
25. Ristoski, P., Paulheim, H.: RDF2Vec: RDF graph embeddings for data mining. In: International Semantic Web Conference. pp. 498–514. Springer (2016)
26. Sap, M., Le Bras, R., Allaway, E., Bhagavatula, C., Lourie, N., Rashkin, H., Roof, B., Smith, N.A., Choi, Y.: ATOMIC: An atlas of machine commonsense for if-then reasoning. In: Proceedings of the AAAI Conference on Artificial Intelligence (2019)
27. Sap, M., Rashkin, H., Chen, D., LeBras, R., Choi, Y.: SocialIQA: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728 (2019)
28. Schuler, K.K.: VerbNet: A broad-coverage, comprehensive verb lexicon (2005)
29. Speer, R., Chin, J., Havasi, C.: ConceptNet 5.5: An open multilingual graph of general knowledge. In: Thirty-First AAAI Conference on Artificial Intelligence (2017)
30. Talmor, A., Herzig, J., Lourie, N., Berant, J.: CommonsenseQA: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937 (2018)
31. Tandon, N., De Melo, G., Weikum, G.: WebChild 2.0: Fine-grained commonsense knowledge distillation. In: ACL 2017, System Demonstrations (2017)
32. Trouillon, T., Welbl, J., Riedel, S., Gaussier, É., Bouchard, G.: Complex embeddings for simple link prediction. ICML (2016)
33. Trumbo, D.: Increasing the usability of research lexica. Ph.D. thesis, University of Colorado at Boulder (2006)
34. Vrandečić, D., Krötzsch, M.: Wikidata: a free collaborative knowledgebase. Communications of the ACM 57(10), 78–85 (2014)
35. Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 (2014)
36. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems. pp. 5754–5764 (2019)
37. Zareian, A., Karaman, S., Chang, S.F.: Bridging knowledge graphs to generate scene graphs. arXiv preprint arXiv:2001.02314 (2020)
38. Zellers, R., Bisk, Y., Schwartz, R., Choi, Y.: SWAG: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326 (2018)
vHZTInipu39
Paper 37 review - Updated after rebuttal
1: Weak Accept
The paper describes CSKG, the first integrated knowledge graph that connects several commonsense knowledge graphs. I think the paper takes an interesting research direction and the work has the potential to benefit the community. However, the paper has several weaknesses and does not fully articulate the benefits of the reported work.

First and foremost, I don't think the introduction does a good job motivating this research. I am not sure I fully understand the example here, which does not look like a question, and it was unclear what the machine is asked to do. The authors should articulate how the different commonsense KGs complement each other, perhaps with alternative examples, and explain what kinds of tasks can benefit from such an integrated KG and how. For example, what is ConceptNet primarily used for in research/applications, and what about ATOMIC? How exactly could the merge of the two create new usage scenarios?

Second, the related work section does not do a good job explaining the limitations of the state of the art. The last paragraph mentions a few projects of a similar nature, but there is no discussion of what their limitations are. This is problematic because it is difficult for readers to understand how this work differs and what additional value it brings.

Third, regarding the actual integration, the authors did not explain 1) how these commonsense KGs were selected, or 2) how the pairs of KGs were selected for mapping. For 1), there needs to be a strong justification for choosing these seven KGs. If the choice is not random, then what is the intuition behind it? Is it based on maximising the potential benefits of CSKG, and if so, how? For 2), the question is fundamentally how you chose which KG to map to which others. For example, the authors mention 3 pairs of KGs for structured mapping; why these? Why could it not simply be a pair-wise mapping between every two KGs? The choice of such mapping processes needs to follow a systematic approach, which needs to be explained and justified.

Finally, the results do not convey a clear message. I am not sure how to interpret the embedding results on page 11. How do we know if the results are good or bad? What can we compare them against? The results on QA are also not without anomalies. There are a few occasions where CSKG underperformed ATOMIC. What does this mean? Does that indicate problems in the integrated KG? If so, is the problem due to mapping errors, or something else?

======== Update after rebuttal ========

The authors gave a thorough reply addressing most of my questions. I am willing to revise my score in line with other reviewers who are more knowledgeable in this area than me. I would like to ask the authors to consider revising their paper to explicitly address some of my questions, at least: the fact that there is no one-size-fits-all solution when integrating multiple KGs and how this justifies their design, and some reflection on the message we should take from the CSKG results, i.e., the fact that it is not beneficial in all cases (your point 6).
2: The reviewer is willing to defend the evaluation but not sufficiently familiar with the state of the art or the specific topic of the paper
nervously sets her fingers on the keys.1The multiple-choice task of choosing an intuitive follow-up scene is customarycalled question answering [19,38], despite the absence of a formal question.2 Filip Ilievski, Pedro Szekely, and Bin ZhangAnswering this question requires knowledge that humans possess and apply,but machines cannot distill directly in communication. Luckily, graphs of (com-monsense) knowledge contain such knowledge. ConceptNet’s [29] triples statethat pianos have keys and are used to perform music, which supports the cor-rect option and discourages answer 2. WordNet [21] states specifically, thoughin natural language, that pianos are played by pressing keys. According to animage description in Visual Genome, a person could play piano while sitting andhaving their hands on the keyboard. In natural language, ATOMIC [26] indi-cates that before a person plays piano, they need to sit at it, be on stage, andreach for the keys. ATOMIC also lists strong feelings associated with playingpiano. FrameNet’s [1] frame of a performance contains two separate roles for theperformer and the audience, meaning that these two are distinct entities, whichcan be seen as evidence against answer 3.While these sources clearly provide complementary knowledge that can helpcommonsense reasoning, their different foci, representation formats, and sparseoverlap makes integration difficult. Taxonomies, like WordNet , organize concep-tual knowledge into a hierarchy of classes. An independent ontology, coupled withrich instance-level knowledge, is provided by Wikidata [34], a structured counter-part to Wikipedia. FrameNet, on the other hand, defines an orthogonal structureof frames and roles; each of which can be filled with a WordNet/Wikidata classor instance. Sources like ConceptNet or WebChild [31], provide more ‘episodic’commonsense knowledge, whereas ATOMIC captures pre- and post-situationsfor an event. Image description datasets, like Visual Genome [14], contain vi-sual commonsense knowledge. While links between these sources exist (mostlythrough WordNet synsets), the majority of their nodes and edges are disjoint.In this paper, we propose an approach for integrating these (and more sources)into a single Common Sense Knowledge Graph (CSKG). We suvey existingsources of commonsense knowledge to understand their particularities and wesummarize the key challenges on the road to their integration (section 2). Next,we devise five principles and a representation model for a consolidated CSKG(section 3). We apply our approach to build the first version of CSKG, by com-bining seven complementary, yet disjoint, sources. We compute several graphand text embeddings to facilitate reasoning over the graph. In section 4, we ana-lyze the content of the graph and the generated embeddings. We provide insightsinto the utility of CSKG for downstream reasoning on commonsense QuestionAnswering (QA) tasks in section 5. In section 6 we reflect on the learned lessonsand list the next steps for CSKG. We conclude in section 7.2 Problem statement2.1 Sources of Common Sense KnowledgeTable 1 summarizes the content, creation method, size, external mappings, andexample resources for representative public commonsense sources: ConceptNet [29],WebChild [31], ATOMIC [26], Wikidata [34], WordNet [21], Roget [13], Verb-Net [28], FrameNet [1], Visual Genome [14], and ImageNet [7]. Primarily, weCSKG: The CommonSense Knowledge Graph 3Table 1. 
Survey of existing sources of commonsense knowledge.describes creation size mappings examplesConceptNeteveryday ob-jects, actions,states, relations(multilingual)crowd-sourcing36 relations, 8Mnodes, 21M edgesWordNet,DBpedia,OpenCyc,Wiktionary/c/en/piano/c/en/piano/n/c/en/piano/n/wn/r/relatedToWebChildeveryday ob-jects, actions,states, relationscuratedautomaticextraction4 relation groups, 2Mnodes, 18M edgesWordNet hasTastefasterThanATOMIC event pre/post-conditionscrowd-sourcing9 relations, 300knodes, 877k edgesConceptNet,Cycwanted-toimpressedWikidata instances, con-cepts, relationscrowd-sourcing1.2k relations, 75Mobjects, 900M edgesvarious wd:Q1234 wdt:P31WordNet words, concepts,relationsmanual 10 relations, 155kwords, 176k synsetsdog.n.01hypernymyRoget words, relations manual 2 relations, 72kwords, 1.4M edgestruncateantonymVerbNet verbs, rela-tionsmanual 273 top classes 23roles, 5.3k sensesFrameNet,WordNetperform-vperformance-26.7-1FrameNet frames, roles, re-lationsmanual 1.9k edges, 1.2kframes, 12k roles,13k lexical unitsActivityChange ofleadershipNewleaderVisualGenomeimage objects,relations, at-tributescrowd-sourcing42k relations, 3.8Mnodes, 2.3M edges,2.8M attributesWordNet fire hydrantwhite dogImageNet image objects crowd-sourcing14M images, 22ksynsetsWordNet dog.n.01observe that the commonsense knowledge is spread over a number of sourceswith different focus: commonsense knowledge graphs (e.g., ConceptNet), general-domain knowledge graphs (e.g., Wikidata), lexical resources (e.g., WordNet,FrameNet), taxonomies (e.g., Wikidata, WordNet), and visual datasets (e.g.,Visual Genome) [11]. Therefore, these sources together cover a rich spectrum ofknowledge, ranging from everyday knowledge, through event-centric knowledgeand taxonomies, to visual knowledge. While the taxonomies have been createdmanually by experts, most of the commonsense and visual sources have beencreated by crowdsourcing or curated automatic extraction. Commonsense andcommon knowledge graphs (KGs) tend to be relatively large, with millions ofnodes and edges; whereas the taxonomies and the lexical sources are notablysmaller. Despite the diverse nature of these sources, we note that many containmappings to WordNet, as well as a number of other sources. These mappingsmight be incomplete, e.g., only a small portion of ATOMIC can be mapped toConceptNet. Nevertheless, these high-quality mappings provide an opening forconsolidation of commonsense knowledge, a goal we pursue in this paper.4 Filip Ilievski, Pedro Szekely, and Bin Zhang2.2 ChallengesCombining these sources in a single KG faces three key challenges:1. The sources follow different knowledge modeling approaches . One suchdifference concerns the relation set: there are very few relations in ConceptNetand WordNet, but (tens of) thousands of them in Wikidata and Visual Genome.Consolidation requires a global decision on how to model the relations. The gran-ularity of knowledge is another factor of variance. While regular RDF triples fitsome sources (e.g., ConceptNet), representing entire frames (e.g., in FrameNet),event conditions (e.g., in ATOMIC), or compositional image data (e.g., VisualGenome) might benefit from a more open format. An ideal representation wouldsupport the entire granularity spectrum.2. As a number of these sources have been created to support natural languageapplications, they often contain imprecise descriptions . 
Natural languagephrases are often the main node types in the provided knowledge sources, whichprovides the benefit of easier access for natural language algorithms, but it intro-duces ambiguity which might be undesired from a formal semantics perspective.An ideal representation would harmonize various phrasings of a concept, whileretaining easy and efficient linguistic access to these concepts via their labels.3. Although these sources contain links to existing ones, we observe sparseoverlap . As these external links are typically to WordNet, and vary in termsof their version (3.0 or 3.1) or target (lemma or synset), the sources are stilldisjoint and establishing (identity) connections is difficult. Bridging these gaps,through optimally leveraging existing links, or extending them with additionalones automatically, is a modeling and integration challenge.2.3 Prior consolidation effortsPrior efforts that combine pairs or small sets of (mostly lexical) commonsensesources exist. A unidirectional manual mapping from VerbNet classes to Word-Net and FrameNet is provided by the Unified Verb Index [33]. The PredicateMatrix [6] has a full automatic mapping between lexical resources, includingFrameNet, WordNet, and VerbNet. PreMOn [5] formalizes these in RDF. In [20],the authors produce partial mappings between WordNet and Wikipedia/DBpedia.Zareian et al. [37] combine edges from Visual Genome, WordNet, and Concept-Net to improve scene graph generation from an image. None of these effortsaspires to build a consolidated KG of commonsense knowledge.Most similar to our effort, BabelNet [22] integrates many sources, covers awide range of 284 languages, and primarily focuses on lexical and general-purposeresources, like WordNet, VerbNet, and Wiktionary. While we share the goal of in-tegrating valuable sources for downstream reasoning, and some of these sources(e.g., WordNet) overlap with BabelNet, our ambition is to support common-sense reasoning applications. For this reason, we focus on commonsense knowl-edge graphs, like ConceptNet and ATOMIC, or even visual sources, like VisualGenome, none of which are found in BabelNet.CSKG: The CommonSense Knowledge Graph 53 The Common Sense Knowledge Graph3.1 PrinciplesQuestion answering and natural language inference tasks require knowledge fromheterogeneous sources (section 2). To enable their joint usage, the sources needto be harmonized in a way that will allow straightforward access by linguistictools [18,16], easy splitting into arbitrary subsets, and computation of commonoperations, like (graph and word) embeddings or KG paths. For this purpose,we devise five principles for consolidatation of sources into a single commonsenseKG (CSKG), driven by pragmatic goals of simplicity, modularity, and utility:P1. Embrace heterogeneity of nodes One should preserve the natural nodediversity inherent to the variety of sources considered, which entails blurringthe distinction between objects (such as those in Visual Genome or Wikidata),classes (such as those in WordNet or ConceptNet), words (in Roget), actions (inATOMIC or ConceptNet), frames (in FrameNet), and states (as in ATOMIC). Italso allows formal nodes, describing unique objects, to co-exist with fuzzy nodesdescribing ambiguous lexical expressions.P2. Reuse edge types across resources To support reasoning algorithmslike KagNet [16], the set of edge types should be kept to minimum and reusedacross resources wherever possible. 
For instance, the ConceptNet edge type/r/LocatedNear could be reused to express spatial proximity in Visual Genome.P3. Leverage external links The individual graphs are mostly disjoint ac-cording to their formal knowledge. However, high-quality links may exist or maybe easily inferred, in order to connect these KGs and enable path finding. Forinstance, while ConceptNet and Visual Genome do not have direct connections,they can be partially aligned, as both have links to WordNet synsets.P4. Generate high-quality probabilistic links Inclusion of additional prob-abilistic links, either with off-the-shelf link prediction algorithms or with spe-cialized algorithms (e.g., see section 3.3), would improve the connectedness ofCSKG and help path finding algorithms reason over it. Given the heterogeneityof nodes (cf. P1), a ‘one-method-fits-all’ node resolution might not be suitable.P5. Enable access to labels The CSKG format should support easy andefficient natural language access. Labels and aliases associated with KG nodesprovide application-friendly and human-readable access to the CSKG, and canhelp us unify descriptions of the same/similar concept across sources.3.2 RepresentationWe model CSKG as a hyper-relational graph , describing edges in a tabularKGTK [10] format. We opted for this representation rather than the traditionalRDF/OWL2 because it allows us to fulfill our goals (of simplicity and utility)and follow our principles more directly, without compromising on the format. Forinstance, natural language access (principle P5) to RDF/OWL2 nodes requiresgraph traversal over its rdfs:label relations. Including both reliable and prob-abilistic nodes (P3 and P4) would require a mechanism to easily indicate edge6 Filip Ilievski, Pedro Szekely, and Bin Zhangweights, which in RDF/OWL2 entails inclusion of blank nodes, and a numberof additional edges. Moreover, the simplicity of our tabular format allows us touse standard off-the-shelf functionalities and mature tooling, like the pandas2andgraph-tool3libraries in Python, or graph embedding tools like [15], whichhave been conveniently wrapped by the KGTK [10] toolkit.4The edges in CSKG are described by ten columns. Following KGTK, theprimary information about an edge consists of its id,node1 ,relation , andnode2 . Next, we include four “lifted” edge columns, using KGTK’s abbreviatedway of representing triples about the primary elements, such as node1;label orrelation;label (label of node1 and of relation ). Each edge is completed bytwo qualifiers: source , which specifies the source(s) of the edge (e.g., “CN”for ConceptNet), and sentence , containing the linguistic lexicalization of atriple, if given by the original source. Auxiliary KGTK files can be added todescribe additional knowledge about some edges, such as their weight, throughthe corresponding edge ids. We provide further documentation at: https://cskg.readthedocs.io/ .3.3 ConsolidationCurrently, CSKG integrates seven sources, selected based on their popularityin existing QA work: a commonsense knowledge graph ConceptNet, a visualcommonsense source Visual Genome, a procedural source ATOMIC, a general-domain source Wikidata, and three lexical sources, WordNet, Roget, and FrameNet.Here, we briefly present our design decisions per source, the mappings that fa-cilitate their integration, and further refinements on CSKG.3.3.1 Individual sources We keep the original edges of ConceptNet 5.7expressed with 47 relations in total. 
We also include the entire ATOMIC KG,preserving the original nodes and its nine relations. To enhance lexical match-ing between ATOMIC and other sources, we add normalized labels of its nodes,e.g., adding a second label “accepts invitation” to the original one “personX ac-cepts personY’s invitation”. We import four node types from FrameNet : frames,frame elements (FEs), lexical units (LUs), and semantic types (STs), and wereuse 5 categories of FrameNet edges: frame-frame (13 edge types), frame-FE (1edge type), frame-LU (1 edge type), FE-ST (1 edge type), and ST-ST (3 edgetypes). Following principle P2 on edge type reuse, we map these 19 edge typesto 9 relations in ConceptNet, e.g., iscausative ofis converted to /r/Causes .Roget We include all synonyms and antonyms between words in Roget, byreusing the ConceptNet relations /r/Synonym and /r/Antonym (P2). We rep-resent Visual Genome as a KG, by representing its image objects as Word-Net synsets (e.g., wn:shoe.n.01 ). We express relationships between objects viaConceptNet’s /r/LocatedNear edge type. Object attributes are represented by2https://pandas.pydata.org/3https://graph-tool.skewed.de/4CSKG can be transformed to RDF with kgtk generate-wikidata-triples .CSKG: The CommonSense Knowledge Graph 7different edge types, conditioned on their part-of-speech: we reuse ConceptNet’s/r/CapableOf for verbs, while we introduce a new relation mw:MayHavePropertyfor adjective attributes. We include the Wikidata-CS subset of Wikidata , ex-tracted in [12]. Its 101k statements have been manually mapped to 15 Concept-Net relations. We include four relations from WordNet v3.0 by mapping themto three ConceptNet relations: hypernymy (using /r/IsA ), part and memberholonymy (through /r/PartOf ), and substance meronymy (with /r/MadeOf ).3.3.2 Mappings We perform node resolution by applying existing identitymappings (P3) and generating probabilistic mappings automatically (P4). We in-troduce a dedicated relation, mw:SameAs , to indicate identity between two nodes.WordNet-WordNet The WordNet v3.1 identifiers in ConceptNet and theWordNet v3.0 synsets from Visual Genome are aligned by leveraging ILI: theWordNet InterLingual Index,5which generates 117,097 mw:SameAs mappings.WordNet-Wikidata We generate links between WordNet synsets and Wiki-data nodes as follows. For each synset, we retrieve 50 candidate nodes from acustomized index of Wikidata. Then, we compute sentence embeddings of thedescriptions of the synset and each of the Wikidata candidates by using a pre-trained XLNet model [36]. We create a mw:SameAs edge between the synset andthe Wikidata candidate with highest cosine similarity of their embeddings. Eachmapping is validated by one student. In total, 17 students took part in this vali-dation. Out of the 112k edges produced by the algorithm, the manual validationmarked 57,145 as correct. We keep these in CSKG and discard the rest.FrameNet-ConceptNet We link FrameNet nodes to ConceptNet in two ways.FrameNet LUs are mapped to ConceptNet nodes through the Predicate Ma-trix [6] with 3 ,016mw:SameAs edges. Then, we use 200k hand-labeled sentencesfrom the FrameNet corpus, each annotated with a target frame, a set of FEs,and their associated words. We treat these words as LUs of the correspondingFE, and ground them to ConceptNet with the rule-based method of [16].Lexical matching We establish 74,259 mw:SameAs links between nodes in ATOMIC,ConceptNet, and Roget by exact lexical match of their labels. 
We restrict thismatching to lexical nodes (e.g., /c/en/cat and not /c/en/cat/n/wn/animal ).3.3.3 Refinement We consolidate the seven sources and their interlinks asfollows. After transforming them to the representation described in the past twosections, we concatenate them in a single graph. We deduplicate this graph andappend all mappings, resulting in CSKG* . Finally, we apply the mappings tomerge identical nodes (connected with mw:SameAs ) and perform a final dedupli-cation of the edges, resulting in our consolidated CSKG graph. The entire pro-cedure of importing the individual sources and consolidating them into CSKGis implemented with KGTK operations [10], and can be found on our GitHub.65https://github.com/globalwordnet/ili6https://github.com/usc-isi-i2/cskg/blob/master/consolidation/create_cskg.sh8 Filip Ilievski, Pedro Szekely, and Bin ZhangFig. 1. Snippet of CSKG for the example task of section 1. CSKG combines: 1) lexicalnodes (piano, keys, music; in blue), 2) synsets like piano (artifact), seat (dramaturgy)(in green), and 3) frames ( fn:noise makers ) and frame elements ( fn:fe:use ) (in pur-ple). The link between piano and piano (artifact) is missing, but trivial to infer.3.4 EmbeddingsEmbeddings provide a convenient entry point to KGs and enable reasoning onboth intrinsic and downstream tasks. For instance, many reasoning applications(cf. [18,16]) of ConceptNet leverage their NumberBatch embeddings [29].Motivatedby these observations, we aspire to produce high-quality embeddings of theCSKG graph. We experiment with two families of embedding algorithms. Onthe one hand, we produce variants of popular graph embeddings: TransE [4],DistMult [35], ComplEx [32], and RESCAL [24]. On the other hand, we pro-duce various text (Transformer-based) embeddings based on BERT-large [8].For BERT, we first create a sentence for each node, based on a template thatencompasses its neighborhood, which is then encoded with BERT’s sentencetransformer model. All embeddings are computed with the KGTK operationsgraph-embeddings andtext-embeddings . We analyze them in section 4.2.The CSKG embeddings are publicly available at http://shorturl.at/pAGX8 .4 AnalysisFigure 1 shows a snippet of CSKG that corresponds to the task in section 1. Fol-lowing P1, CSKG combines: 1) lexical nodes (piano, keys, music), 2) synsets likepiano (artifact), seat (dramaturgy) (in green), and 3) frames ( fn:noise makers )and frame elements ( fn:fe:use ). According to P2, we reuse edge types whereapplicable: for instance, we use ConceptNet’s LocatedNear relation to formal-ize Visual Genome’s proximity information between a woman and a piano. Weleverage external links to WordNet to consolidate synsets across sources (P3). Wegenerate further links (P4) to connect FrameNet frames and frame elements toConceptNet nodes, and to consolidate the representation of piano (artifact)between Wikidata and WordNet. In the remainder of this section, we performqualitative analysis of CSKG and its embeddings.CSKG: The CommonSense Knowledge Graph 9Table 2. CSKG statistics. Abbreviations: CN=ConceptNet, VG=Visual Genome,WN=WordNet, RG=Roget, WD=Wikidata, FN=FrameNet, AT=ATOMIC. 
Relationnumbers in brackets are before consolidating to ConceptNet.AT CN FN RG VG WD WN CSKG* CSKG#nodes 304,909 1,787,373 15,652 71,804 11,264 91,294 71,243 2,414,813 2,160,968#edges 732,723 3,423,004 29,873 1,403,955 2,587,623 111,276 101,771 6,349,731 6,001,531#relations 9 47 9 (23) 2 3 (42k) 3 15 (45) 59 58avg degree 4.81 3.83 3.82 39.1 459.45 2.44 2.86 5.26 5.55std degree 0.07 0.02 0.13 0.34 35.81 0.02 0.05 0.02 0.03Table 3. Nodes with highest centrality score according to PageRank and HITS. Nodelabels indicated in bold.PageRank HITS hubs HITS authorities/c/en/ chromatic /a/wn /c/en/ red /c/en/ blue/c/en/ organic compound /c/en/ yellow /c/en/ red/c/en/ chemical compound /n /c/en/ green /c/en/ silver/c/en/ change /n/wn/artifact /c/en/ silver /c/en/ green/c/en/ natural science /n/wn/cognition /c/en/ blue /c/en/ gold4.1 StatisticsBasic statistics of CSKG are shown in Table 2. In total, our mappings produce251,517 mw:SameAs links and 45,659 fn:HasLexicalUnit links. After refinement,i.e., removal of the duplicates and merging of the identical nodes, CSKG consistsof 2.2 million nodes and 6 million edges. In terms of edges, its largest subgraphis ConceptNet (3.4 million), whereas ATOMIC comes second with 733 thousandedges. These two graphs also contribute the largest number of nodes to CSKG.The three most common relations in CSKG are: /r/RelatedTo (1.7 million),/r/Synonym (1.2 million), and /r/Antonym (401 thousand edges).Connectivity and centrality The mean degree of CSKG grows by 5.5%(from 5.26 to 5.55) after merging identical nodes. Compared to ConceptNet,its degree is 45% higher, due to its increased number of edges while keepingthe number of nodes nearly constant. The best connected subgraphs are Vi-sual Genome and Roget. CSKG’s high connectivity is owed largely to these twosources and our mappings, as the other five sources have degrees below that ofCSKG. The abnormally large node degrees and variance of Visual Genome aredue to its annotation guidelines that dictate all concept-to-concept informationto be annotated, and our modeling choice to represent its nodes through theirsynsets. We report that the in-degree and out-degree distributions of CSKG haveZipfian shapes, a notable difference being that the maximal in degree is nearlydouble compared to its maximal out degree (11k vs 6.4k). To understand betterthe central nodes in CSKG, we compute PageRank and HITS metrics. The top-5results are shown in Table 3. We observe that the node with highest PageRank10 Filip Ilievski, Pedro Szekely, and Bin ZhangTable 4. Top-5 most similar nodes for /c/en/turtle/n/wn/animal (E1) and/c/en/happy (E2) according to TransE and BERT.TransE BERTE1 /c/en/chelonian/n/wn/animal /c/en/glyptemys/n/c/en/mud turtle/n/wn/animal /c/en/pelocomastes/n/c/en/cooter/n/wn/animal /c/en/staurotypus/n/c/en/common snapping turtle/n/wn/animal /c/en/parahydraspis/n/c/en/sea turtle/n/wn/animal /c/en/trachemys/nE2 /c/en/excited /c/en/bring happiness/c/en/satisfied /c/en/new happiness/c/en/smile mood at:like aparty isagood wayto.../c/en/pleased /c/en/encouraging person’s talent/c/en/joyful at:happy that they went tothepartyFig. 2. UMAP visualization of 5,000 randomly sampled nodes from CSKG, representedby TransE (left) and BERT (right) embeddings. Colors signify node sources.has label “chromatic”, while all dominant HITS hubs and authorities are colors,revealing that knowledge on colors of real-world object is common in CSKG.PageRank also reveals that knowledge on natural and chemical processes is well-represented in CSKG. 
Finally, we note that the top-centrality nodes are generallydescribed by multiple subgraphs, e.g., c/en/natural science/n/wn/cognitionis found in ConceptNet and WordNet, whereas the color nodes (e.g., /c/en/red )are shared between Roget and ConceptNet.4.2 Analysis of the CSKG embeddingsWe randomly sample 5,000 nodes from CSKG and visualize their embeddingscomputed with an algorithm from each family: TransE and BERT. The resultsare shown in Figure 2. We observe that graph embeddings group nodes from thesame source together. This is because graph embeddings tend to focus on thegraph structure, and because most links in CSKG are still within sources. Weobserve that the sources are more intertwined in the case of the BERT embed-CSKG: The CommonSense Knowledge Graph 11dings, because of the emphasis on lexical over structural similarity. Moreover,in both plots Roget is dispersed around the ConceptNet nodes, which is likelydue to its broad coverage of concepts, that maps both structurally and lexicallyto ConceptNet. At the same time, while ATOMIC overlaps with a subset ofConceptNet [26], the two sources mostly cover different areas of the space.Table 4 shows the top-5 most similar neighbors for /c/en/turtle/n/wn/animaland/c/en/happy according to TransE and BERT. We note that while graph em-beddings favor nodes that are structurally similar (e.g., /c/en/turtle/n/wn/animaland /c/en/chelonian/n/wn/animal are both animals in WordNet), text em-beddings give much higher importance to lexical similarity of nodes or theirneighbors, even when the nodes are disconnected in CSKG (e.g., /c/en/happyandat:happy that they went totheparty ). These results are expected con-sidering the approach behind each algorithm.Word association with embeddings To quantify the utility of differentembeddings, we evaluate them on the USF-FAN [23] benchmark, which con-tains crowdsourced common sense associations for 5,019 “stimulus” concepts inEnglish. For instance, the associations provided for dayare:night ,light ,sun,time ,week , and break . The associations are ordered descendingly based on theirfrequency. With each algorithm, we produce a top-K most similar neighbors listbased on the embedding of the stimulus concept. Here, Kis the number of asso-ciations for a concept, which varies across stimuli. If CSKG has multiple nodesfor the stimulus label, we average their embeddings. For the graph embeddings,we use logistic loss function, using a dot comparator, a learning rate of 0.1, anddimension 100. The BERT text embeddings have dimension 1024, which is thenative dimension of this language model. As the text embedding models oftenfavor surface form similarity (e.g., associations like daily forday), we devisevariants of this method that excludes associations with Levenshtein similarityhigher than a threshold t.We evaluate by comparing the embedding-based list to the benchmark one,through customary ranking metrics, like Mean Average Precision (MAP) andNormalized Discounted Cumulative Gain (NDCG). Our investigations show thatTransE is the best-performing algorithm overall, with MAP of 0.207 and NDCGof 0.530. The optimal BERT variant uses threshold of t= 0.9, scoring with MAPof 0.209 and NDCG of 0.268. The obtained MAP scores indicate that the embed-dings capture relevant signals, yet, a principled solution to USF-FAN requires amore sophisticated embedding search method that can capture various forms ofboth relatedness and similarity. 
In the future, we aim to investigate embeddingtechniques that integrate structural and content information like RDF2Vec [25],and evaluate on popular word similarity datasets like WordSim-353 [9].5 ApplicationsAs the creation of CSKG is largely driven by downstream reasoning needs, wenow investigate its relevance for commonsense question answering: 1) we measure12 Filip Ilievski, Pedro Szekely, and Bin ZhangTable 5. Number of triples retrieved with ConceptNet and CSKG on different datasets.train dev#Questions ConceptNet CSKG #Questions ConceptNet CSKGCSQA 9,741 78,729 125,552 1,221 9,758 15,662SIQA 33,410 126,596 266,937 1,954 7,850 16,149PIQA 16,113 18,549 59,684 1,838 2,170 6,840aNLI 169,654 257,163 638,841 1,532 5,603 13,582its ability to contribute novel evidence to support reasoning, and 2) we measureits role in pre-training language models for zero-shot downstream reasoning.5.1 Retrieving evidence from CSKGWe measure the relevance of CSKG for commonsense question answering tasks,by comparing the number of retrieved triples that connect keywords in the ques-tion and in the answers. For this purpose, we adapt the lexical grounding inHyKAS [18] to retrieve triples from CSKG instead of its default knowledgesource, ConceptNet. We expect that CSKG can provide much more evidencethan ConceptNet, both in terms of number of triples and their diversity. Weexperiment with four commonsense datasets: CommonSense QA (CSQA) [30],Social IQA (SIQA) [27], Physical IQA (PIQA) [3], and abductive NLI (aNLI) [2].As shown in Table 5, CSKG significantly increases the number of evidence triplesthat connect terms in questions with terms in answers, in comparison to Concept-Net. We note that the increase is on average 2-3 times, the expected exceptionbeing CSQA, which was inferred from ConceptNet.We inspect a sample of questions to gain insight into whether the addi-tional triples are relevant and could benefit reasoning. For instance, let us con-sider the CSQA question “Bob the lizard lives in a warm place with lots ofwater. Where does he probably live?”, whose correct answer is “tropical rainfor-est” . In addition to the ConceptNet triple /c/en/lizard /c/en/AtLocation/c/en/tropical rainforest , CSKG provides two additional triples, statingthat tropical is an instance of place and that water may have property trop-ical. The first additional edge stems from our mappings from FrameNet to Con-ceptNet, whereas the second comes from Visual Genome. We note that, whileCSKG increases the coverage with respect to available commonsense knowledge,it is also incomplete: in the above example, useful information such as warmtemperatures being typical for tropical rainforests is still absent.5.2 Pre-training language models with CSKGWe have studied the role of various subsets of CSKG for downstream QA reason-ing extensively in [19]. Here, CSKG or its subsets were transformed into artificialcommonsense question answering tasks. These tasks were then used instead oftraining data to pre-train language models, like RoBERTa and GPT-2. SuchCSKG: The CommonSense Knowledge Graph 13Table 6. Zero-shot evaluation results with different combinations of models and knowl-edge sources, across five commonsense tasks, as reported in [19]. CWWV combines Con-ceptNet, Wikidata, WordNet, and Visual Genome. 
5.2 Pre-training language models with CSKG

We have studied the role of various subsets of CSKG for downstream QA reasoning extensively in [19]. Here, CSKG or its subsets were transformed into artificial commonsense question answering tasks. These tasks were then used instead of training data to pre-train language models, like RoBERTa and GPT-2. Such a CSKG-based pre-trained language model was then 'frozen' and evaluated in a zero-shot manner across a wide variety of commonsense tasks, ranging from question answering through pronoun resolution and natural language inference. We select key results from these experiments in Table 6.

Table 6. Zero-shot evaluation results with different combinations of models and knowledge sources, across five commonsense tasks, as reported in [19]. CWWV combines ConceptNet, Wikidata, WordNet, and Visual Genome. CSKG is a union of ATOMIC and CWWV. We report mean accuracy over three runs, with 95% confidence interval.

Model      KG      aNLI        CSQA        PIQA        SIQA        WG
GPT2-L     ATOMIC  59.2(±0.3)  48.0(±0.9)  67.5(±0.7)  53.5(±0.4)  54.7(±0.6)
GPT2-L     CWWV    58.3(±0.4)  46.2(±1.0)  68.6(±0.7)  48.0(±0.7)  52.8(±0.9)
GPT2-L     CSKG    59.0(±0.5)  48.6(±1.0)  68.6(±0.9)  53.3(±0.5)  54.1(±0.5)
RoBERTa-L  ATOMIC  70.8(±1.2)  64.2(±0.7)  72.1(±0.5)  63.1(±1.5)  59.6(±0.3)
RoBERTa-L  CWWV    70.0(±0.3)  67.9(±0.8)  72.0(±0.7)  54.8(±1.2)  59.4(±0.5)
RoBERTa-L  CSKG    70.5(±0.2)  67.4(±0.8)  72.4(±0.4)  63.2(±0.7)  60.9(±0.8)
Human      -       91.4        88.9        94.9        86.9        94.1

The results demonstrate that no single knowledge source suffices for all benchmarks and that using CSKG is overall beneficial compared to using its subgraphs, thus directly showing the benefit of commonsense knowledge consolidation. In a follow-up study [11], we further exploit the consolidation in CSKG to pre-train the language models with one dimension (knowledge type) at a time, noting that certain dimensions of knowledge (e.g., temporal knowledge) are much more useful for reasoning than others, like lexical knowledge. In both cases, the kind of knowledge that benefits each task is ultimately conditioned on the alignment between this knowledge and the targeted task, indicating that subsequent work should further investigate how to dynamically align knowledge with the task at hand.

6 Discussion

Our analysis in section 4 revealed that the connectivity in CSKG is higher than a mere concatenation of the individual sources, due to our mappings across sources and the merging of identical nodes. Its KGTK format allowed us to seamlessly compute and evaluate a series of embeddings, observing that TransE and BERT with additional filtering are the two best-performing and complementary algorithms. The novel evidence brought by CSKG on downstream QA tasks (section 5) is a signal that can be exploited by reasoning systems to enhance their performance and robustness, as shown in [19]. Yet, the quest for a rich, high-coverage CSKG is far from completed. We briefly discuss two key challenges, while a broader discussion can be found in [11].

Node resolution. As a large part of CSKG consists of lexical nodes, it suffers from the standard challenges of linguistic ambiguity and variance. For instance, there are 18 nodes in CSKG that have the label 'scene', which includes WordNet or OpenCyc synsets, Wikidata Qnodes, frame elements, and a lexical node. Variance is another challenge, as /c/en/caffeine, /c/en/caffine, and /c/en/the active ingredient caffeine are all separate nodes in ConceptNet (and in CSKG). We are currently investigating techniques for node resolution applicable to the heterogeneity of commonsense knowledge in CSKG; a naive first pass is sketched below.

Semantic enrichment. We have normalized the edge types across sources to a single, ConceptNet-centric set of 58 relations. In [11], we classify all CSKG's relations into 13 dimensions, enabling us to consolidate the edge types further. At the same time, some of these relations hide fine-grained distinctions; for example, WebChild [31] defines 19 specific property relations, including temperature, shape, and color, all of which correspond to ConceptNet's /r/HasProperty.
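To make the node-resolution challenge flagged above concrete, here is a naive candidate-duplicate pass. It is a toy sketch, not the technique under investigation: the label normalization and the fuzzy-match threshold are our own assumptions, and the pairwise loop is quadratic and only for illustration.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def normalize(label: str) -> str:
    # e.g. "/c/en/the_active_ingredient_caffeine" -> "active ingredient caffeine"
    words = label.rsplit("/", 1)[-1].replace("_", " ").lower().split()
    return " ".join(w for w in words if w not in {"the", "a", "an"})

def candidate_duplicates(nodes, min_ratio: float = 0.9):
    """Group nodes with identical normalized labels; additionally pair up
    near-identical labels to catch spelling variants ('caffeine'/'caffine')."""
    by_label = defaultdict(list)
    for n in nodes:
        by_label[normalize(n)].append(n)
    groups = [g for g in by_label.values() if len(g) > 1]
    labels = sorted(by_label)
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            if SequenceMatcher(None, a, b).ratio() >= min_ratio:
                groups.append(by_label[a] + by_label[b])
    return groups
```

Any real resolution technique would also need the disambiguation direction (18 distinct 'scene' nodes that should stay distinct), which pure label matching cannot provide.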
A novel future direction is to produce a hierarchy for each of the relations, and to refine existing triples by using a more specific relation (e.g., use the predicate 'temperature' instead of 'property' when the object of the triple is 'cold').

7 Conclusions and Future Work

While current commonsense knowledge sources contain complementary knowledge that would be beneficial as a whole for downstream tasks, such usage is prevented by different modeling approaches, foci, and sparsity of available mappings. Optimizing for simplicity, modularity, and utility, we proposed a hyper-relational graph representation that describes many nodes with a few edge types, maximizes the high-quality links across subgraphs, and enables natural language access. We applied this representation approach to consolidate a commonsense knowledge graph (CSKG) from seven very diverse and disjoint sources: a text-based commonsense knowledge graph ConceptNet, a general-purpose taxonomy Wikidata, an image description dataset Visual Genome, a procedural knowledge source ATOMIC, and three lexical sources: WordNet, Roget, and FrameNet. CSKG describes 2.2 million nodes with 6 million statements. Our analysis showed that CSKG is a well-connected graph and more than 'a simple sum of its parts'. Together with CSKG, we also publicly release a series of graph and text embeddings of the CSKG nodes, to facilitate future usage of the graph. Our analysis showed that graph and text embeddings of CSKG have complementary notions of similarity, as the former focus on structural patterns, while the latter focus on lexical features of the node's label and of its neighborhood. Applying CSKG on downstream commonsense reasoning tasks, like QA, showed an increased recall as well as an advantage when pre-training a language model to reason across datasets in a zero-shot fashion. Key standing challenges for CSKG include semantic consolidation of its nodes and refinement of its property hierarchy. Notebooks for analyzing these resources can be found on our public GitHub page: https://github.com/usc-isi-i2/cskg/tree/master/ESWC2021

Acknowledgements

This work is sponsored by the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research, and by the Air Force Research Laboratory under agreement number FA8750-20-2-10002.

References

1. Baker, C.F., Fillmore, C.J., Lowe, J.B.: The Berkeley FrameNet project. In: Proceedings of the 17th International Conference on Computational Linguistics (1998)
2. Bhagavatula, C., Bras, R.L., Malaviya, C., Sakaguchi, K., Holtzman, A., Rashkin, H., Downey, D., Yih, S.W.t., Choi, Y.: Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739 (2019)
3. Bisk, Y., Zellers, R., Bras, R.L., Gao, J., Choi, Y.: PIQA: Reasoning about physical commonsense in natural language. arXiv preprint arXiv:1911.11641 (2019)
4. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. Advances in Neural Information Processing Systems 26, 2787-2795 (2013)
5. Corcoglioniti, F., Rospocher, M., Aprosio, A.P., Tonelli, S.: PreMOn: a lemon extension for exposing predicate models as linked data. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16) (2016)
6. De Lacalle, M.L., Laparra, E., Aldabe, I., Rigau, G.: Predicate Matrix: automatically extending the semantic interoperability between predicate resources. Language Resources and Evaluation 50(2), 263-289 (2016)
7. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 248-255. IEEE (2009)
8. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
9. Finkelstein, L., Gabrilovich, E., Matias, Y., Rivlin, E., Solan, Z., Wolfman, G., Ruppin, E.: Placing search in context: The concept revisited. In: Proceedings of the 10th International Conference on World Wide Web. pp. 406-414 (2001)
10. Ilievski, F., Garijo, D., Chalupsky, H., Divvala, N.T., Yao, Y., Rogers, C., Li, R., Liu, J., Singh, A., Schwabe, D., Szekely, P.: KGTK: A toolkit for large knowledge graph manipulation and analysis. ISWC (2020)
11. Ilievski, F., Oltramari, A., Ma, K., Zhang, B., McGuinness, D.L., Szekely, P.: Dimensions of commonsense knowledge. arXiv preprint arXiv:2101.04640 (2021)
12. Ilievski, F., Szekely, P., Schwabe, D.: Commonsense knowledge in Wikidata. Proceedings of the Wikidata Workshop, ISWC (2020)
13. Kipfer, B.: Roget's 21st Century Thesaurus in Dictionary Form (ed. 3) (2005)
14. Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A., et al.: Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123(1), 32-73 (2017)
15. Lerer, A., Wu, L., Shen, J., Lacroix, T., Wehrstedt, L., Bose, A., Peysakhovich, A.: PyTorch-BigGraph: A large-scale graph embedding system. arXiv preprint arXiv:1903.12287 (2019)
16. Lin, B.Y., Chen, X., Chen, J., Ren, X.: KagNet: Knowledge-aware graph networks for commonsense reasoning. arXiv preprint arXiv:1909.02151 (2019)
17. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
18. Ma, K., Francis, J., Lu, Q., Nyberg, E., Oltramari, A.: Towards generalizable neuro-symbolic systems for commonsense question answering. EMNLP-COIN (2019)
19. Ma, K., Ilievski, F., Francis, J., Bisk, Y., Nyberg, E., Oltramari, A.: Knowledge-driven data construction for zero-shot evaluation in commonsense question answering. In: 35th AAAI Conference on Artificial Intelligence (2021)
20. McCrae, J.P.: Mapping WordNet instances to Wikipedia. In: Proceedings of the 9th Global WordNet Conference (GWC 2018). pp. 62-69 (2018)
21. Miller, G.A.: WordNet: a lexical database for English. Communications of the ACM 38(11), 39-41 (1995)
22. Navigli, R., Ponzetto, S.P.: BabelNet: Building a very large multilingual semantic network. In: Proceedings of ACL (2010)
23. Nelson, D.L., McEvoy, C.L., Schreiber, T.A.: The University of South Florida free association, rhyme, and word fragment norms. Behavior Research Methods, Instruments, & Computers 36(3), 402-407 (2004)
24. Nickel, M., Tresp, V., Kriegel, H.P.: A three-way model for collective learning on multi-relational data. In: ICML. vol. 11, pp. 809-816 (2011)
25. Ristoski, P., Paulheim, H.: RDF2Vec: RDF graph embeddings for data mining. In: International Semantic Web Conference. pp. 498-514. Springer (2016)
26. Sap, M., Le Bras, R., Allaway, E., Bhagavatula, C., Lourie, N., Rashkin, H., Roof, B., Smith, N.A., Choi, Y.: ATOMIC: An atlas of machine commonsense for if-then reasoning.
In: Proceedings of the AAAI Conference on Artificial Intelligence (2019)
27. Sap, M., Rashkin, H., Chen, D., LeBras, R., Choi, Y.: SocialIQA: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728 (2019)
28. Schuler, K.K.: VerbNet: A broad-coverage, comprehensive verb lexicon (2005)
29. Speer, R., Chin, J., Havasi, C.: ConceptNet 5.5: An open multilingual graph of general knowledge. In: Thirty-First AAAI Conference on Artificial Intelligence (2017)
30. Talmor, A., Herzig, J., Lourie, N., Berant, J.: CommonsenseQA: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937 (2018)
31. Tandon, N., De Melo, G., Weikum, G.: WebChild 2.0: Fine-grained commonsense knowledge distillation. In: ACL 2017, System Demonstrations (2017)
32. Trouillon, T., Welbl, J., Riedel, S., Gaussier, E., Bouchard, G.: Complex embeddings for simple link prediction. ICML (2016)
33. Trumbo, D.: Increasing the usability of research lexica. Ph.D. thesis, University of Colorado at Boulder (2006)
34. Vrandečić, D., Krötzsch, M.: Wikidata: a free collaborative knowledgebase. Communications of the ACM 57(10), 78-85 (2014)
35. Yang, B., Yih, W.t., He, X., Gao, J., Deng, L.: Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575 (2014)
36. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R., Le, Q.V.: XLNet: Generalized autoregressive pretraining for language understanding. In: Advances in Neural Information Processing Systems. pp. 5754-5764 (2019)
37. Zareian, A., Karaman, S., Chang, S.F.: Bridging knowledge graphs to generate scene graphs. arXiv preprint arXiv:2001.02314 (2020)
38. Zellers, R., Bisk, Y., Schwartz, R., Choi, Y.: SWAG: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326 (2018)<|im_end|> <|im_start|>assistant ### Review Title Paper 37 review - Updated after rebuttal ### Review Text The paper describes CSKG, the first integrated knowledge graph that connects several commonsense knowledge graphs. I think the paper takes an interesting research direction and the work has the potential to benefit the community. However, the paper has several weaknesses and does not fully articulate the benefits of the reported work. First and foremost, I don't think the introduction has done a good job motivating this research. I am not sure I fully understand the example here, which does not look like a question, and it was unclear what the machine is asked to do. Authors should articulate how the different commonsense KGs complement each other, perhaps with alternative examples, and explain what kind of tasks can benefit from such an integrated KG and how. For example, what is ConceptNet primarily used for in research/applications, and what about ATOMIC? How exactly could the merge of the two create new usage scenarios? Second, the related work section does not do a good job explaining the limitations of the state of the art. The last paragraph mentions a few projects of a similar nature, but there is no discussion of what their limitations are. This is problematic because it is difficult for readers to understand how this work differs and what additional value it brings. Third, during the actual integration, the authors did not explain 1) how these commonsense KGs were selected; or 2) how the pairs of KGs were selected for mapping. For 1), there needs to be a strong justification for choosing these seven KGs. If it is not randomly done, then what is the intuition behind the choice?
Is it based on maximising the potential benefits of CSKG, and if so, how? For 2), fundamentally the question is: how did you choose which KG to map to which others? For example, the authors mention 3 pairs of KGs for structured mapping; why these pairs? Why could it not simply be a pair-wise mapping between every two KGs? The choice of such mapping processes needs to follow a systematic approach, which needs to be explained and justified. Finally, the results do not convey a clear message. I am not sure how to interpret the embedding results on page 11. How do we know if the results are good or bad? What can we compare them against? The results on QA are also not without anomalies. There are a few occasions where CSKG underperformed ATOMIC. What does this mean? Does that indicate problems in the integrated KG? If so, is the problem due to mapping errors, or something else? ========Update after rebuttal======== The authors gave a thorough reply addressing most of my questions. I am willing to revise my score in line with other reviewers who are more knowledgeable in this area than me. I would like to ask the authors to consider revising their paper to explicitly address some of my questions. At least: - the fact that there is no one-size-fits-all solution when integrating multiple KGs, and how this justifies their design - some reflection on the message we should take from the CSKG results, given that it is not beneficial in all cases (your point 6). ### Review Rating 1: Weak Accept ### Review Confidence 2: The reviewer is willing to defend the evaluation but not sufficiently familiar with the state of the art or the specific topic of the paper<|im_end|> <|im_end|>
eom0IUrF__F
ICLR.cc/2021/Conference
2021
CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers
["SHIYANG LI", "Semih Yavuz", "Kazuma Hashimoto", "Jia Li", "Tong Niu", "Nazneen Rajani", "Xifeng Yan", "Yingbo Zhou", "Caiming Xiong"]
Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood. We propose controllable counterfactuals (COCO) to bridge this gap and evaluate dialogue state tracking (DST) models on novel scenarios, i.e., would the system successfully tackle the request if the user responded differently but still consistently with the dialogue flow? COCO leverages turn-level belief states as counterfactual conditionals to produce novel conversation scenarios in two steps: (i) counterfactual goal generation at turn-level by dropping and adding slots followed by replacing slot values, (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow. Evaluating state-of-the-art DST models on the MultiWOZ dataset with COCO-generated counterfactuals results in a significant performance drop of up to 30.8% (from 49.4% to 18.6%) in absolute joint goal accuracy. In comparison, widely used techniques like paraphrasing only affect the accuracy by at most 2%. Human evaluations show that COCO-generated conversations perfectly reflect the underlying user goal with more than 95% accuracy and are as human-like as the original conversations, further strengthening its reliability and promise to be adopted as part of the robustness evaluation of DST models.
["task-oriented dialogue", "dialogue state tracking", "robustness", "dst", "evaluation"]
ABSTRACT

Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood. We propose controllable counterfactuals (CoCo) to bridge this gap and evaluate dialogue state tracking (DST) models on novel scenarios, i.e., would the system successfully tackle the request if the user responded differently but still consistently with the dialogue flow? CoCo leverages turn-level belief states as counterfactual conditionals to produce novel conversation scenarios in two steps: (i) counterfactual goal generation at turn-level by dropping and adding slots followed by replacing slot values, (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow. Evaluating state-of-the-art DST models on the MultiWOZ dataset with CoCo-generated counterfactuals results in a significant performance drop of up to 30.8% (from 49.4% to 18.6%) in absolute joint goal accuracy. In comparison, widely used techniques like paraphrasing only affect the accuracy by at most 2%. Human evaluations show that CoCo-generated conversations perfectly reflect the underlying user goal with more than 95% accuracy and are as human-like as the original conversations, further strengthening its reliability and promise to be adopted as part of the robustness evaluation of DST models. [1]

1 INTRODUCTION

Task-oriented dialogue (TOD) systems have recently attracted growing attention and achieved substantial progress (Zhang et al., 2019b; Neelakantan et al., 2019; Peng et al., 2020; Wang et al., 2020b;a), partly made possible by the construction of large-scale datasets (Budzianowski et al., 2018; Byrne et al., 2019; Rastogi et al., 2019). Dialogue state tracking (DST) is a backbone of TOD systems, where it is responsible for extracting the user's goal represented as a set of slot-value pairs (e.g., (area, center), (food, British)), as illustrated in the upper part of Figure 1. The DST module's output is treated as the summary of the user's goal so far in the dialogue and is directly consumed by the subsequent dialogue policy component to determine the system's next action and response. Hence, the accuracy of the DST module is critical to prevent downstream error propagation (Liu and Lane, 2018), affecting the end-to-end performance of the whole system.

With the advent of representation learning in NLP (Pennington et al., 2014; Devlin et al., 2019; Radford et al., 2019), the accuracy of DST models has increased from 15.8% (in 2018) to 55.7% (in 2020). While measuring the held-out accuracy is often useful, practitioners consistently overestimate their model's generalization (Ribeiro et al., 2020; Patel et al., 2008) since test data is usually collected in the same way as training data. In line with this hypothesis, Table 1 demonstrates that there is a substantial overlap of the slot values between the training and evaluation sets of the MultiWOZ DST benchmark (Budzianowski et al., 2018). In Table 2, we observe that the slot co-occurrence distributions for the evaluation sets tightly align with that of the train split, hinting towards the potential limitation of the held-out accuracy in reflecting the actual generalization capability of DST models.

* Equal Contribution. Work was done during Shiyang's internship at Salesforce Research.
[1] Code is available at https://github.com/salesforce/coco-dst

Table 1: The percentage (%) of domain-slot values in dev/test sets covered by training data.

data  attraction-name  hotel-name  restaurant-name  taxi-departure  taxi-destination  train-departure  train-destination
dev   94.5             96.4        97.3             98.6            98.2              99.6             99.6
test  96.2             98.4        96.8             95.6            99.5              99.4             99.4

Table 2: Co-occurrence distribution (%) of the book people slot with other slots in the restaurant domain within the same user utterance. It rarely co-occurs with particular slots (e.g., food), which hinders the evaluation of DST models on realistic user utterances such as "I want to book a Chinese restaurant for 8 people."

data   area  book day  book time  food  name  price range
train  1.9   38.8      39.2       2.1   16.4  1.5
dev    1.9   38.9      38.9       1.9   16.3  2.2
test   2.7   36.9      37.7       1.6   18.7  2.4
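The overlap statistics of Table 1 can be reproduced with a few lines. This is an illustrative sketch, not the authors' analysis code; it assumes each split is available as a list of (domain_slot, value) pairs.

```python
from collections import defaultdict

def value_coverage(train_pairs, eval_pairs):
    """For each domain-slot, the % of its eval values already seen in training."""
    seen = defaultdict(set)
    for slot, value in train_pairs:
        seen[slot].add(value)
    totals, covered = defaultdict(int), defaultdict(int)
    for slot, value in eval_pairs:
        totals[slot] += 1
        covered[slot] += value in seen[slot]
    return {s: 100.0 * covered[s] / totals[s] for s in totals}
```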
Inspired by this phenomenon, we aim to address and provide insights into the following question: how well do state-of-the-art DST models generalize to novel but realistic scenarios that are not captured well enough by the held-out evaluation set?

Most prior work (Iyyer et al., 2018; Jin et al., 2019) focuses on adversarial example generation for robustness evaluation. They often rely on perturbations made directly on test examples in the held-out set and assume direct access to the evaluated models' gradients or outputs. Adversarial examples generated by these methods are often unnatural or obtained to hurt target models deliberately. It is imperative to emphasize here that both our primary goal and approach significantly differ from this previous line of work: (i) our goal is to evaluate DST models beyond held-out accuracy, (ii) we leverage the turn-level structured meaning representation (belief state) along with its dialogue history as conditions to generate the user response without relying on the original user utterance, (iii) our approach is entirely model-agnostic, assuming no access to the evaluated DST models, and (iv) perhaps most importantly, we aim to produce novel but realistic and meaningful conversation scenarios rather than intentionally adversarial ones.

We propose controllable counterfactuals (CoCo) as a principled, model-agnostic approach to generate novel scenarios beyond the held-out conversations. Our approach is inspired by the combination of two natural questions: how would DST systems react to (1) unseen slot values and (2) rare but realistic slot combinations? CoCo first encapsulates these two aspects under a unified concept called counterfactual goal, obtained by a stochastic policy of dropping and adding slots to the original turn-level belief state followed by replacing slot values. In the second step, CoCo conditions on the dialogue history and the counterfactual goal to generate a counterfactual conversation. We cast the actual utterance generation as a conditional language modeling objective. This formulation allows us to plug in a pretrained encoder-decoder architecture (Raffel et al., 2020) as the backbone that powers the counterfactual conversation generation. We also propose a strategy to filter utterances that fail to reflect the counterfactual goal exactly. We consider value substitution (VS), as presented in Figure 1, as a special CoCo case that only replaces the slot values in the original utterance without adding or dropping slots.
When we use VS as a fall-back strategy for CoCo (i.e., apply VS when CoCo fails to generate valid user responses after filtering), we call it CoCo+.

Evaluating three strong DST models (Wu et al., 2019; Heck et al., 2020; Hosseini-Asl et al., 2020) with our proposed controllable counterfactuals generated by CoCo and CoCo+ shows that the performance of each significantly drops (by up to 30.8%) compared to their joint goal accuracy on the original MultiWOZ held-out evaluation set. On the other hand, we find that these models are, in fact, quite robust to paraphrasing with back-translation, where their performance only drops by up to 2%. Analyzing the effect of data augmentation with CoCo+ shows that it consistently improves the robustness of the investigated DST models on counterfactual conversations generated by each of VS, CoCo and CoCo+. More interestingly, the same data augmentation strategy improves the joint goal accuracy of the best of these strong DST models by 1.3% on the original MultiWOZ evaluation set. Human evaluations show that CoCo-generated counterfactual conversations perfectly reflect the underlying user goal with more than 95% accuracy and are found to be quite close to the original conversations in terms of their human-like scoring. This further proves our proposed approach's reliability and potential to be adopted as part of DST models' robustness evaluation.

Figure 1: The upper left is a dialogue example between user and system, with its turn-level and dialogue-level belief states on the upper right. The lower left shows valid user utterance variations generated by VS and CoCo, with their corresponding belief states derived from the original ones on the right.

2 RELATED WORK

Dialogue State Tracking. DST has been a core component in current state-of-the-art TOD systems. Traditional approaches usually rely on hand-crafted features or domain-specific lexicons (Henderson et al., 2014; Wen et al., 2017) and require a predefined ontology, making them hard to extend to unseen values. To tackle this issue, various methods have been proposed. Gao et al. (2019) treats DST as a reading comprehension problem and predicts slot values with start and end positions in the dialogue context. Zhang et al. (2019a) proposes DS-DST, a dual-strategy model that predicts values in domains with a few possible candidates from classifiers and others from span extractors. Furthermore, Heck et al. (2020) proposes TripPy, a triple copy strategy model, which allows it to copy values from the context, previous turns' predictions and system informs.
An alternative to classification and span prediction is value generation. Wu et al. (2019) generates slot values with a pointer generator network (See et al., 2017) without relying on fixed vocabularies and spans. Hosseini-Asl et al. (2020) models DST as a conditional generation problem, directly finetunes GPT2 (Radford et al., 2019) on the DST task, and achieves state-of-the-art results on MultiWOZ.

Adversarial Example Generation. Adversarial example generation has been commonly studied in computer vision (Szegedy et al., 2014; Goodfellow et al., 2015). Recently, it has received growing attention in the NLP domain as well. Papernot et al. (2016) finds adversarial examples in the embedding space, and then remaps them to the discrete space. Alzantot et al. (2018) proposes a population-based word replacing method and aims to generate fluent adversarial sentences. These methods often edit the original data greedily, assuming access to the model's gradients or outputs, besides querying the underlying model many times (Jin et al., 2019). An alternative line of work investigates generating adversarial examples in a model-agnostic way. Iyyer et al. (2018) proposes to generate adversarial paraphrases of original data with different syntactic structures. Jia and Liang (2017) automatically generates sentences with keyword overlaps with questions in SQuAD (Rajpurkar et al., 2016) to distract computer systems without changing the correct answer or misleading humans.

Although different methods have been proposed to evaluate the robustness of NLP models, the majority of the prior work in this line focuses either on text classification, neural machine translation or reading comprehension problems. Perhaps the most similar existing works to ours are (Einolghozati et al., 2019) and (Cheng et al., 2019). Einolghozati et al. (2019) focuses on intent classification and slot tagging in TOD, while Cheng et al. (2019) targets synthetic competitive negotiation dialogues (Lewis et al., 2017) without a DST component. In this work, however, we focus on evaluating a core component of state-of-the-art TOD, DST, on the widely used benchmark MultiWOZ. To the best of our knowledge, ours is the first work to systematically evaluate the robustness of DST models.

3 BACKGROUND

Multi-domain DST task definition. Let $X_t = \{(U^{sys}_1, U^{usr}_1), \ldots, (U^{sys}_t, U^{usr}_t)\}$ denote a sequence of turns of a dialogue until the $t$-th turn, where $U^{sys}_i$ and $U^{usr}_i$ ($1 \leq i \leq t$) denote the system and user utterance at the $i$-th turn, respectively. In multi-domain DST, each turn $(U^{sys}_i, U^{usr}_i)$ talks about a specific domain (e.g., hotel), and a certain number of slots (e.g., price range) in that domain. We denote all $N$ possible domain-slot pairs as $S = \{S_1, \ldots, S_N\}$. The task is to track the value for each $S_j$ ($1 \leq j \leq N$) over $X_t$ (e.g., hotel-price range, cheap). Belief states can be considered at two granularities: turn-level ($L_t$) and dialogue-level ($B_t$). $L_t$ tracks the information introduced in the last turn, while $B_t$ tracks the accumulated state from the first turn to the last. As illustrated in the upper part of Figure 1, when the dialogue flow arrives at the second turn, $B_2$ becomes {(restaurant-area, center), (restaurant-food, British), (restaurant-book time, 18:00)}, while $L_2$ is {(restaurant-food, British), (restaurant-book time, 18:00)}, essentially tracking the update to $B_t$ by the last turn.
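To make the two granularities concrete, here is the Figure 1 dialogue written out as plain Python data; the representation (dicts keyed by domain-slot strings) is our own illustrative choice, not the paper's data format.

```python
# Turn-level belief states: only what each turn newly introduces.
L = {
    1: {"restaurant-area": "center"},
    2: {"restaurant-food": "British", "restaurant-book time": "18:00"},
}

# Dialogue-level belief state: accumulate the turn-level updates.
def dialogue_state(L, t):
    B = {}
    for i in range(1, t + 1):
        B.update(L[i])
    return B

assert dialogue_state(L, 2) == {
    "restaurant-area": "center",
    "restaurant-food": "British",
    "restaurant-book time": "18:00",
}
```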
Problem definition. Given a tuple $<X_t, L_t, B_t>$, our goal is to generate a new user utterance $\hat{U}^{usr}_t$ to form a novel conversation scenario $\hat{X}_t = \{(U^{sys}_1, U^{usr}_1), \ldots, (U^{sys}_t, \hat{U}^{usr}_t)\}$ by replacing the original user utterance $U^{usr}_t$ with $\hat{U}^{usr}_t$. To preserve the coherence of the dialogue flow, we cast the problem as generating an alternative user utterance $\hat{U}^{usr}_t$ conditioned on a modified $\hat{L}_t$ derived from the original turn-level belief state $L_t$ in a way that is consistent with the global belief state $B_t$. This formulation naturally allows for producing a new tuple $<\hat{X}_t, \hat{L}_t, \hat{B}_t>$ controllable by $\hat{L}_t$, where $\hat{B}_t$ is induced from $B_t$ based on the difference between $L_t$ and $\hat{L}_t$. As illustrated in the lower part of Figure 1, $U^{usr}_2$ is replaced with two alternative utterances that are natural and coherent with the dialogue history. We propose to use the resulting set of $<\hat{X}_t, \hat{L}_t, \hat{B}_t>$ to probe the DST models.

Paraphrase baseline with back-translation. Paraphrasing the original utterance $U^{usr}_t$ is a natural way to generate $\hat{U}^{usr}_t$. With the availability of advanced neural machine translation (NMT) models, round-trip translation between two languages (i.e., back-translation (BT)) has become a widely used method to obtain paraphrases for downstream applications (Yu et al., 2018). We use publicly available pretrained English-to-German ($\log p(g|e)$) and German-to-English ($\log p(e|g)$) NMT models. [2] We translate $U^{usr}_t$ from English to German with a beam size $K$, and then translate each of the $K$ hypotheses back to English with the same beam size $K$. Consequently, we generate $K^2$ paraphrase candidates of $\hat{U}^{usr}_t$ and then rank them according to their round-trip confidence score $\log p(g|e) + \log p(e|g)$. As paraphrases are expected to preserve the meaning of $U^{usr}_t$, we set $\hat{L}_t = L_t$ and $\hat{B}_t = B_t$.

[2] https://pytorch.org/hub/pytorch_fairseq_translation
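A rough sketch of this round-trip paraphraser, assuming the pretrained fairseq WMT19 translation models from the PyTorch hub link in the footnote. The hub entry names and the `encode`/`generate`/`decode` interface below are best-effort assumptions about the fairseq hub API and may need adjusting to the installed version.

```python
import torch

# Hypothetical hub entry ids; check the fairseq hub page for the exact names.
en2de = torch.hub.load("pytorch/fairseq", "transformer.wmt19.en-de.single_model",
                       tokenizer="moses", bpe="fastbpe")
de2en = torch.hub.load("pytorch/fairseq", "transformer.wmt19.de-en.single_model",
                       tokenizer="moses", bpe="fastbpe")

def back_translate(utterance: str, k: int = 5):
    """K x K round-trip candidates ranked by log p(g|e) + log p(e|g)."""
    candidates = []
    for g in en2de.generate(en2de.encode(utterance), beam=k):
        de_text = en2de.decode(g["tokens"])
        for e in de2en.generate(de2en.encode(de_text), beam=k):
            score = float(g["score"]) + float(e["score"])
            candidates.append((score, de2en.decode(e["tokens"])))
    return [text for _, text in sorted(candidates, reverse=True)]
```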
4 COCO

As illustrated in Figure 2, CoCo consists of three main pillars. We first train a conditional user utterance generation model $p_\theta(U^{usr}_t | U^{sys}_t, L_t)$ using original dialogues. Secondly, we modify $L_t$ into a possibly arbitrary $\hat{L}_t$ with our counterfactual goal generator. Given $\hat{L}_t$ and $U^{sys}_t$, we sample $\hat{U}^{usr}_t \sim p_\theta(\hat{U}^{usr}_t | U^{sys}_t, \hat{L}_t)$ with beam search, followed by two orthogonal filtering mechanisms to further eliminate user utterances that fail to reflect the counterfactual goal $\hat{L}_t$.

4.1 VALUE SUBSTITUTION

A robust DST model should correctly reflect value changes in user utterances when tracking the user's goal. However, slot-value combinations, e.g. (restaurant-book time, 18:00), in evaluation sets are limited and even have significant overlaps with the training data, as shown in Table 1. To evaluate DST models with more diverse patterns, we propose a Value Substitution (VS) method to generate $\hat{U}^{usr}_t$. Specifically, for each value of $S_j$ in $L_t$, if the value only appears in $U^{usr}_t$ rather than $U^{sys}_t$, we allow it to be substituted. Otherwise, we keep it as is. This heuristic is based on the following three observations: (1) if the value comes from $U^{sys}_t$, e.g. the TOD system's recommendation of restaurant food, changing it may make the dialogue flow less natural and coherent; (2) if it never appears in the dialogue flow, e.g. yes of hotel-parking, changing it may cause belief state label errors; (3) if it only appears in $U^{usr}_t$, it is expected that changing the value won't cause the issues in (1) and (2).

For values that can be substituted, new values are sampled from a Slot-Value Dictionary, a predefined value set for each domain-slot. These new values are then used to update their counterparts in $U^{usr}_t$, $L_t$ and $B_t$. We defer the details of the slot-value dictionary to section 4.2. After the update, we get $\hat{U}^{usr}_t$, $\hat{L}_t$ and $\hat{B}_t$, and can use $<\hat{X}_t, \hat{L}_t, \hat{B}_t>$ to evaluate the performance of DST models. An example of how VS works is illustrated in the lower part of Figure 1. At the second turn, as British and 18:00 are in $L_2$ and only appear in $U^{usr}_2$ rather than $U^{sys}_2$, we can replace them with Chinese and 17:00, respectively sampled from a slot-value dictionary, to get $\hat{U}^{usr}_2$, $\hat{L}_2$ and $\hat{X}_2$ without interrupting the naturalness of the dialogue flow.

Figure 2: The overall pipeline of CoCo. The very left part represents the training phase of the utterance generation model, where the concatenation of $U^{sys}_t$ and $L_t$ is processed by the encoder, which the decoder then conditions on to generate the user utterance $U^{usr}_t$. The input and output of this model are shown within the box at the lower-left. The right part depicts the inference phase, where the counterfactual goal generator first modifies the original belief $L_t$ fed from the left part into a new one, $\hat{L}_t$, which is then fed to the trained utterance generator along with the same conversation history to generate $\hat{U}^{usr}_t$ by beam search followed by filtering of undesired utterances. Note that conversational turns in the inference phase don't have to originate from the training phase.

4.2 CONTROLLABLE COUNTERFACTUAL GENERATION

Back-translation (BT) and value substitution (VS) provide controllability at different granularities. BT only provides syntactic variety while preserving the meaning, hence the belief state. VS can only replace the values of the existing slots in an utterance while still having to exactly retain all the slots. However, neither of them is able to explore conversations with an even slightly modified set of slots. We propose a principled approach to unlock the capability of conversation generation that generalizes beyond mere transformation of existing utterances. We cast it as a task of generating novel user utterances ($U^{usr}_t$) from a given conversation history ($U^{sys}_t$) and a turn-level user goal ($L_t$). We propose to tackle this problem with a conditional generation model that utilizes a pre-trained encoder-decoder architecture (Raffel et al., 2020; Lewis et al., 2020) to approximate $p(U^{usr}_t | U^{sys}_t, L_t)$, where the concatenation of $U^{sys}_t$ and $L_t$ is used as input to the encoder and $U^{usr}_t$ is set to be the target sequence to be generated by the decoder, as illustrated in the lower-left of Figure 2.
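A minimal sketch of such a goal-conditioned generator using a T5 checkpoint from HuggingFace Transformers. The input serialization ("system: ... <state> slot = value ...") is an assumption for illustration; the paper's exact linearization is not reproduced here.

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def serialize(system_utt: str, turn_belief: dict) -> str:
    state = ", ".join(f"{slot} = {val}" for slot, val in turn_belief.items())
    return f"system: {system_utt} <state> {state}"

# One training step: maximize log p(user utterance | system utterance, L_t).
enc = tok(serialize("I have many options. Do you have any preference?",
                    {"restaurant-food": "British", "restaurant-book time": "18:00"}),
          return_tensors="pt")
lab = tok("It needs to serve British food and I'd like a reservation for 18:00.",
          return_tensors="pt").input_ids
loss = model(**enc, labels=lab).loss  # the per-example J_gen
loss.backward()

# Inference: plug in a counterfactual goal and decode with beam search.
cf = tok(serialize("I have many options. Do you have any preference?",
                   {"restaurant-food": "Chinese", "restaurant-book people": "8"}),
         return_tensors="pt")
out = model.generate(**cf, num_beams=5, num_return_sequences=5, max_length=64)
candidates = [tok.decode(o, skip_special_tokens=True) for o in out]
```

The beam candidates would then be passed through the filters of Section 4.3 before use.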
To learn this distribution, we factorize it by the chain rule (Bengio et al., 2003) and train a neural network with parameters $\theta$ to minimize the aggregated negative log-likelihood $J_{gen}$ over each dialogue turn tuple $(U^{sys}_t, L_t, U^{usr}_t)$, where $U^{usr}_t = (U^{usr}_{t,1}, U^{usr}_{t,2}, \ldots, U^{usr}_{t,n_t})$ and $U^{usr}_{t,k}$ is its $k$-th token: [3]

$$p_\theta(U^{usr}_t \mid U^{sys}_t, L_t) = \prod_{k=1}^{n_t} p_\theta(U^{usr}_{t,k} \mid U^{usr}_{t,<k}, U^{sys}_t, L_t), \qquad J_{gen} = -\sum_{k=1}^{n_t} \log p_\theta(U^{usr}_{t,k} \mid U^{usr}_{t,<k}, U^{sys}_t, L_t) \quad (1)$$

[3] More details can be found in Appendix E.1.

Once the parameters $\theta$ of the goal-conditioned utterance generation model $p_\theta$ are learned from these tuples, we gain the unique ability to generate novel conversation turns by plugging in an arbitrary but consistent counterfactual goal $\hat{L}_t$ derived from $L_t$. An example of how the counterfactual goal generator operates is shown in the middle part of Figure 2. The counterfactual goal generator has three components, namely operation, slot-value dictionary and slot-combination dictionary.

Operation decides which combination of the following three meta-operations to apply to $L_t$: drop, change and add. Drop is used to remove values from a non-empty slot in $L_t$. Change borrows the same operation as VS, to substitute existing values. Add allows us to add new domain-slot values into $L_t$, giving us the power of generating valid but more complicated $\hat{U}^{usr}_t$.

Slot-Value Dictionary has a pre-defined value set $S^{val}_j$ for each $S_j$. Once the change and/or add meta-operation is activated for $S_j$, the counterfactual goal generator randomly samples a value from $S^{val}_j$.

Slot-Combination Dictionary has a predefined domain-slot set $S^{add}_j$ for each $S_j$. When the add meta-operation is activated, the counterfactual goal generator samples a domain-slot from the intersection of all $S^{add}_j$ for which $S_j$ has non-empty values within $L_t$. Once a new domain-slot is sampled, its value is then sampled from its corresponding value set as defined in the slot-value dictionary.

Given $L_t$, the counterfactual goal generator first takes $L_t$ as its input, and sequentially applies drop, change and add to output $\hat{L}_t$. Given $\hat{L}_t$ and $U^{sys}_t$, we can sample $\hat{U}^{usr}_t \sim p_\theta(\hat{U}^{usr}_t | U^{sys}_t, \hat{L}_t)$ with beam search. We use a rule-based method to get $\hat{B}_t$ of $\hat{X}_t$. Specifically, we obtain $B_{t-1}$ by calculating the set difference of $B_t$ and $L_t$. Given $B_{t-1}$ and $\hat{L}_t$, we update a domain-slot in $B_{t-1}$ if its value in $\hat{L}_t$ is not none; otherwise we keep its value as it is in $B_{t-1}$, following (Chao and Lane, 2019). After the update, we get $\hat{B}_t$ and use it as the dialogue-level label of $\hat{X}_t$.
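The drop/change/add pipeline can be written down compactly. This is a toy sketch: the activation probabilities are made up, and the actual sampling policy and dictionaries live in the paper's appendices, which are not reproduced here.

```python
import random

def counterfactual_goal(L, slot_values, slot_combos,
                        p_drop=0.3, p_change=0.5, p_add=0.5):
    """Apply drop -> change -> add to a turn-level belief state L (a dict)."""
    L_hat = dict(L)
    # drop: remove a random existing slot
    if L_hat and random.random() < p_drop:
        del L_hat[random.choice(list(L_hat))]
    # change: resample values of surviving slots from the slot-value dictionary
    for slot in list(L_hat):
        if random.random() < p_change:
            L_hat[slot] = random.choice(slot_values[slot])
    # add: pick a slot compatible with *all* current slots, then sample a value
    if L_hat and random.random() < p_add:
        allowed = set.intersection(*(set(slot_combos[s]) for s in L_hat)) - set(L_hat)
        if allowed:
            new_slot = random.choice(sorted(allowed))
            L_hat[new_slot] = random.choice(slot_values[new_slot])
    return L_hat
```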
4.3 FILTERING

We have presented methods to generate $\hat{U}^{usr}_t$, but how do we make sure that the generated utterance correctly reflects the user goal represented by $\hat{L}_t$? To motivate our methods, we take an example generated by beam search, located at the lower right of Figure 2, for illustration. In this example, the first hypothesis doesn't include the value 2 for restaurant-book people that is within $\hat{L}_t$. On the contrary, the second hypothesis includes the value 18:00 for restaurant-book time, which is not part of $\hat{L}_t$. We call these two phenomena de-generation and over-generation, respectively. Filtering candidates with these issues is thus an important step to make sure $(U^{sys}_t, \hat{U}^{usr}_t)$ perfectly expresses the user goals in $\hat{L}_t$. We propose two filtering methods, namely a slot-value match filter and a classifier filter, to alleviate the de-generation and over-generation issues, respectively.

Slot-Value Match Filter. To tackle the de-generation issue, we choose a subset of values in $\hat{L}_t$ (values that should only appear in $\hat{U}^{usr}_t$ rather than $U^{sys}_t$) and eliminate candidates that fail to contain all the values in the subset. [4] In Figure 2, the first hypothesis from the beam search output is eliminated by this filter because it does not include the value 2 for restaurant-book people in $\hat{L}_t$.

Classifier Filter. As shown in Table 2, the slot restaurant-book people frequently appears together with restaurant-book time in the data used to train our generation model $p_\theta(\hat{U}^{usr}_t | U^{sys}_t, \hat{L}_t)$, which may cause the resulting generation model to fall into the over-generation issue. To deal with this over-generation problem, we propose to use an $N$-way multi-label classifier to eliminate such candidates. We employ BERT-base (Devlin et al., 2019) as its backbone:

$$H^{CLS}_t = \mathrm{BERT}([\mathrm{CLS}] \oplus X_{t-1} \oplus [\mathrm{SEP}] \oplus U^{sys}_t \oplus U^{usr}_t) \in \mathbb{R}^{d_{emb}} \quad (2)$$

where $H^{CLS}_t \in \mathbb{R}^{d_{emb}}$ is the representation of the CLS token of BERT with dimension $d_{emb}$. We then feed $H^{CLS}_t$ into a linear projection layer followed by a Sigmoid function:

$$P = \mathrm{Sigmoid}(W H^{CLS}_t) \in \mathbb{R}^{N}, \qquad J_{cls} = -\frac{1}{N} \sum_{j=1}^{N} \big( Y_j \log P_j + (1 - Y_j) \log (1 - P_j) \big) \quad (3)$$

where $W \in \mathbb{R}^{N \times d_{emb}}$ is the trainable weight of the linear projection layer and $P_j$ is the probability that slot $S_j$ appears at the $t$-th turn of $X_t$, with $Y_j$ as its label. The classifier is trained with $J_{cls}$, i.e. the mean binary cross-entropy loss over every slot $S_j$, and achieves a precision of 92.3% and a recall of 93.5% on the development set. [5] During inference, the classifier takes $\hat{X}_t$ as input and predicts whether a slot $S_i$ appears at the $t$-th turn or not, with threshold 0.5. We use this filter to eliminate generated candidates for which the classifier predicts at least one slot $S_j$ mentioned in $(U^{sys}_t, \hat{U}^{usr}_t)$ while $S_j \notin \hat{L}_t$. In Figure 2, our classifier filter eliminates the second hypothesis from the output of beam search because $\hat{L}_t$ does not contain the slot restaurant-book time while it is mentioned in the generated utterance.

[4] For hotel-parking and hotel-internet, we use parking and wifi as their corresponding values for filtering.
[5] We defer further details of the classifier to Appendix E.2.
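Once the classifier is available as a callable, the two filters reduce to a few lines. A sketch under our own assumptions: `predict_slots` stands in for the BERT classifier above and is assumed rather than implemented, and the substring test for "containing" a value is a simplification.

```python
def match_filter(candidate: str, L_hat: dict, system_utt: str) -> bool:
    """Keep only candidates containing every goal value the system didn't say."""
    must_appear = [v for v in L_hat.values() if v.lower() not in system_utt.lower()]
    return all(v.lower() in candidate.lower() for v in must_appear)

def classifier_filter(candidate: str, L_hat: dict, history, system_utt: str,
                      predict_slots) -> bool:
    """Reject candidates mentioning any slot outside the counterfactual goal."""
    predicted = predict_slots(history, system_utt, candidate)  # set of slot names
    return predicted <= set(L_hat)

def filter_candidates(cands, L_hat, history, system_utt, predict_slots):
    return [c for c in cands
            if match_filter(c, L_hat, system_utt)
            and classifier_filter(c, L_hat, history, system_utt, predict_slots)]
```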
5 EXPERIMENTS

5.1 EXPERIMENTAL SETUP

We consider three strong multi-domain DST models to evaluate the effect of CoCo-generated counterfactual conversations in several scenarios. TRADE (Wu et al., 2019) builds upon a pointer generator network and contains a slot classification gate and a state generator module to generate states. TRIPPY (Heck et al., 2020) introduces a classification gate and a triple copy module, allowing the model to copy values from the conversation context, previous turns' predictions, or system informs. SIMPLETOD (Hosseini-Asl et al., 2020) models DST as a conditional generation problem, with the conversation history as its condition and the belief state as its target, and finetunes GPT2.

Evaluation. We train each of these three models following their publicly released implementations on the standard train/dev/test split of MultiWOZ 2.1 (Eric et al., 2019). We use the joint goal accuracy to evaluate the performance of DST models. It is 1.0 if and only if the set of (domain-slot, value) pairs in the model output exactly matches the oracle one, and otherwise 0.

Slot-Value Dictionary. We carefully design two sets of slot-value dictionaries to capture the effect of unseen slot values from two perspectives, namely in-domain (I) and out-of-domain (O). I is a dictionary that maps each slot to a set of values that appear in the MultiWOZ test set, but not in the training set. [6] On the other hand, we construct O using external values (e.g., hotel names from Wikipedia) that fall completely outside of the MultiWOZ data for the slots (e.g., hotel-name, restaurant-name, etc.). Otherwise, we follow a similar fall-back strategy for slots (e.g., hotel-internet) with no possible external values beyond the ones (e.g., yes and no) in the original data.

Slot-Combination Dictionary. As illustrated in Table 2, the held-out evaluation set follows almost the same slot co-occurrence distribution as the training data. This makes it difficult to estimate how well DST models would generalize to valid conversation scenarios that simply do not obey the same distribution. CoCo's flexibility at generating a conversation for an arbitrary turn-level belief state naturally allows us to seek an answer to this question. To this end, we design three slot-combination dictionaries, namely freq, neu and rare. A slot-combination dictionary directly controls how different slots can be combined while generating counterfactual goals. As suggested by their names, freq contains frequently co-occurring slot combinations (e.g., book people is combined only with book day and book time slots), while rare is the opposite of freq, grouping rarely co-occurring slots together, and neu is more neutral, allowing any meaningful combination within the same domain. [7]

[6] When this set is empty for a slot (e.g., hotel-area), we use the set of all possible values (e.g., center, east, west, south, north) for this slot from training data. Please see Appendix I for further details.
[7] Please see Appendix H for further details.
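The joint goal accuracy defined under Evaluation above is a strict set-equality check; here is a one-function sketch over predictions and oracle states represented as dicts (our own representation, not tied to any model's output format).

```python
def joint_goal_accuracy(pred_states, gold_states):
    """Fraction of turns whose predicted (domain-slot, value) set exactly
    matches the oracle set; one wrong, missing, or extra pair scores 0."""
    hits = sum(set(p.items()) == set(g.items())
               for p, g in zip(pred_states, gold_states))
    return hits / len(gold_states)
```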
5.2 MAIN RESULTS

Before reporting our results, it is important to note that several different post-processing strategies are used by different DST models. To make a fair comparison across models, we follow the same post-processing strategy employed by the SIMPLETOD evaluation script for TRADE and TRIPPY as well. We summarize our main results in Figure 3. While all three DST models are quite robust to back-translation (BT) [8], their performance drops significantly on counterfactual conversations generated by each of VS, CoCo and CoCo+ compared to the MultiWOZ held-out set accuracy (original).

[8] Similar to CoCo, we back-translate only the turns with non-empty turn-level belief states and apply the slot-value match filter. We fall back to the original user utterance if none of the paraphrases passes the filter.

Figure 3: Joint goal accuracy (%) across different methods. "Original" refers to the results on the original held-out test set. * denotes results obtained from the in-domain unseen slot-value dictionary (I). VS, CoCo and CoCo+ results use the out-of-domain slot-value dictionary (O). For brevity, we omit CoCo and CoCo+ results using the in-domain slot-value dictionary. See Appendix C for the full results. freq, neu, and rare indicate which slot-combination dictionary is used. Lower bound refers to the percentage of correct predictions on turns with an empty turn-level belief state over the original held-out test set. Values shown in the chart:

           Original  BT    VS*   VS    CoCo(freq)  CoCo+(freq)  CoCo(neu)  CoCo+(neu)  CoCo(rare)  CoCo+(rare)  Lower Bound
TRADE      49.4      47.4  37.6  27.7  27.9        22.8         26.2       21.0        22.8        18.6         13.8
SIMPLETOD  56.0      55.1  46.0  34.1  34.6        28.9         31.6       26.4        27.3        23.4         16.0
TRIPPY     61.3      61.0  52.8  43.0  44.8        39.4         42.3       37.9        39.1        35.5         18.5

Unseen Slot-Value Generalization. We analyze the effect of unseen slot values for the two dictionaries (I and O) introduced in the previous section, compared to the original set of slot values that has a large overlap with the training data. Results presented in the left part of Figure 3 show that the performance of DST models drops significantly, by up to 11.8%, compared to the original accuracy even on the simple counterfactuals generated by the VS strategy using the in-domain unseen slot-value dictionary (I). Furthermore, using the out-of-domain slot-value dictionary (O) results in about a 10% additional drop in accuracy consistently across the three models. The consistent and similar drop in accuracy suggests that TRADE, SIMPLETOD and TRIPPY are almost equally susceptible to unseen slot values.

Generalization to Novel Scenarios. The right section of Figure 3 presents the main results in our effort to answer the central question we posed at the beginning of this paper. Based on these results, we see that state-of-the-art DST models have serious difficulty generalizing to novel scenarios generated by both CoCo and CoCo+ using the three different slot-combination strategies. The generalization difficulty becomes even more serious on counterfactuals generated by CoCo+. As expected, the performance drop consistently increases as we combine less and less frequently co-occurring slots (ranging from freq to rare) while generating our counterfactual goals. In particular, CoCo+(rare) counterfactuals drop the accuracy of TRADE from 49.4% to 18.6%, pushing its performance very close to its lower bound of 13.8%. Even the performance of the most robust model among the three (TRIPPY) drops by up to 25.8%, leading to the conclusion that held-out accuracy for state-of-the-art DST models may not sufficiently reflect their generalization capabilities.

Transferability Across Models. As highlighted before, a significant difference and advantage of our proposed approach lies in its model-agnostic nature, making it immediately applicable to the evaluation of any DST model. As can be inferred from Figure 3, the effect of CoCo-generated counterfactuals on the joint goal accuracy is quite consistent across all three DST models. This result empirically proves the transferability of CoCo, strengthening its reliability and applicability to be generally employed as a robustness evaluation of DST models by future research.

5.3 HUMAN EVALUATION

Table 3: Human evaluation.

            Human likeliness  Correctness
Human       87%               85%
CoCo(ori)   90%               91%
CoCo(freq)  90%               99%
CoCo(neu)   79%               98%
CoCo(rare)  82%               96%

We next examine the quality of our generated data from two perspectives: "human likeliness" and "turn-level belief state correctness". The human likeliness metric evaluates whether a user utterance is fluent and consistent with its dialogue context. The turn-level belief state correctness evaluates whether $(U^{sys}_t, \hat{U}^{usr}_t)$ exactly expresses the goals in $\hat{L}_t$. Both metrics are based on binary evaluation. We randomly sample 100 turns in the original test data and their corresponding CoCo-generated ones. For the CoCo-generated data, we have two different settings in which to examine its quality. The first is to use the original turn-level belief state to generate the user utterance, denoted by CoCo(ori).
Thesecond setting is to verify the quality of the conversations generated by C OCO(freq)-, C OCO(neu)-and C OCO(rare) as they hurt the DST models’ accuracy significantly as shown in Figure 3. For eachresult row reported in Table 3, we ask three individuals with proficient English and advanced NLPbackground to conduct the evaluation, and use majority voting to determine the final scores.We can see that CoCo(ori) generated conversations are almost as human-like as original conversa-tions. Furthermore, C OCO(ori) generated slightly more “correct” responses than the original utter-ances in MultiWoZ 2.1. A presumable reason is that annotation errors exist in MultiWoZ 2.1, whileour C OCOare trained on recently released cleaner MultiWoZ 2.2, making generated data have higherquality. In addition, all three variants of the C OCO-generated conversations consistently outper-8Published as a conference paper at ICLR 202149.450.027.729.222.824.221.022.218.620.956.056.234.136.028.931.826.430.023.427.161.362.643.055.039.455.037.953.335.556.2152535455565OriginalOriginal◇VSVS◇CoCo+(freq)CoCo+(freq)◇CoCo+(neu)CoCo+(neu)◇CoCo+(rare)CoCo+(rare)◇TRADESimpleTodTripPyFigure 4: Comparison of retrained DST models (indicated by ) on C OCO+(rare)-augmented training datawith their counterparts trained on original MultiWOZ train split.form human response in terms of the turn-level belief state correctness. Although C OCO(neu) andCOCO(rare) are slightly less human-like than the original human response, C OCO(freq)-generatedutterances have similar human-likeness as original ones. These results demonstrate the effectivenessof our proposed approach in generating not only high-fidelity but also human-like user utterances,proving its potential to be adopted as part of robustness evaluation of DST models.5.4 A NALYSIS OF COCO+ASDATA AUGMENTATION DEFENSESo far, we have focused on the generalization capability of DST models on C OCO-generated conver-sations using different slot-value and slot-combination dictionaries. We have observed that all threeDST models are consistently most susceptible to conversations generated by C OCO+(rare) strategy.Instead, we now seek to answer the following question: Would using conversations generated byCOCO+(rare) to augment the training data help these DST models in better generalizing to unseenslot values and/or novel scenarios? Towards exploring this direction in a principled way, we designa new slot value dictionary ( train-O ) similar to out-of-domain unseen slot-value dictionary ( O). Fora fair comparison, we make sure that the slot values in train-O (please refer to Appendix I for thecomplete dictionary) do not overlap with the one ( O) used for generating test conversations.We first retrain each DST model on the MultiWOZ training split augmented with C OCO+(rare)-generated conversations using train-O slot-value dictionary. Retrained DST models are then evalu-ated on original test set as well as on the counterfactual test sets generated by VS and various ver-sions of C OCO+. Results presented in Figure 4 show that retraining on the C OCO+(rare)-augmentedtraining data improves the robustness of all three DST models across the board. Most notably, itrebounds the performance of T RIPPYon C OCO+(rare)-generated test set from 35.5% to 56.2%,significantly closing the gap with its performance (61.3%) on the original test set. 
We also observe that retrained DST models obtain an improved joint goal accuracy on the original MultiWOZ test set compared to their counterparts trained only on the original MultiWOZ train split, further validating the quality of CoCo-generated conversations. Finally, we would like to highlight that retrained TRIPPY achieves 62.6% joint goal accuracy, improving the previous state of the art by 1.3%. We leave the exploration of how to fully harness CoCo as a data augmentation approach to future work.

6 CONCLUSION

We propose a principled, model-agnostic approach (CoCo) to evaluate dialogue state trackers beyond the held-out evaluation set. We show that state-of-the-art DST models' performance significantly drops when evaluated on the CoCo-generated conversations. Human evaluations validate that these conversations have high fidelity and are human-like. Hence, we conclude that these strong DST models have difficulty in generalizing to novel scenarios with unseen slot values and rare slot combinations, confirming the limitations of relying only on the held-out accuracy. When explored as a data augmentation method, CoCo consistently improves state-of-the-art DST models not only on the CoCo-generated evaluation set but also on the original test set. This further proves the benefit and potential of our approach to be adopted as part of a more comprehensive evaluation of DST models.
CvZ6d0LROU
Creating challenging examples for DST systems: interesting results but may need more clarification in the methods section
6: Marginally above acceptance threshold
Update after reading author response: Thank you for the thoughtful response. I appreciate that you have added extra implementation details and am changing my score to a 6. Regarding the limitations in scope: My apologies if my review was confusingly worded. I just wanted to clarify that I had meant that the counterfactual generation method itself may have limitations (not the high-level DST task, which I agree has broad uses). The concern is that the adversarial example generation strategy might be too domain-specific to transfer easily to other tasks, and that this might limit the impact of the proposed counterfactual generation method. I think this is somewhat related to the parts of the methodology that R1 described as "ad-hoc", "engineering intensive", or reliant on heuristics. I still have some reservations about the transferability of the proposed methods, though the authors' response to R1 did clarify a bit on this point. ------------------------------------------------------------------------------------------------------------------------------------------- Original Review: This paper presents CoCo, a new method for generating adversarial examples for DST tasks using "controllable counterfactual" generation. Unlike other approaches for adversarial example generation, this approach is model-agnostic. They also demonstrate the effectiveness of the method by applying it to the MultiWOZ dataset, where s.o.t.a. model performance drops by ~30 points. I might lean towards rejecting this paper because there are several points in the methods that are still unclear to me. However, I hope the authors may be able to clarify some of my questions (I've listed a few in the limitations section of my review). Another potential drawback is that the methods described here are limited in scope to the DST domain, so it may not be as useful to other ML researchers more broadly. Strengths: - CoCo is a very effective method. Model performance drops by about 30 points, demonstrating that the adversarial examples generated are extremely challenging. - The human study demonstrates that the created examples are still human-like and are actually often rated "more correct" than the original MultiWOZ responses. Limitations: - The proposed methods use a lot of pre-existing components and are limited in scope to this one domain. While the created counterfactuals are useful as a challenge dataset for DST systems, the overall approach in this paper may not be more broadly impactful outside of this task. - There are a few points in the methodology that seem a bit unclear or should be described more fully: - Can you provide more details about the conditional generation model (p): How is it being trained? What architecture is being used? - The Slot-Value match filter: How are you judging whether the candidate "contains" the value? (Is this exact match?) - Classifier filter: Can you provide more details on the classifier architecture and how it is being trained? What is the precision and recall of its predictions? Questions: - In section 5.4: Any thoughts on why data augmentation is more effective for the TripPy model than the other two models? Minor Feedback: - The CoCo name could be easily confused with Microsoft COCO, a commonly used computer vision dataset. Authors might consider changing the name to avoid confusion. - The font in Figures 3 and 4 is tiny and hard to read. I recommend making it bigger.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user
### Paper Title
CoCo: Controllable Counterfactuals for Evaluating Dialogue State Trackers
### Paper Abstract
Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood. We propose controllable counterfactuals (CoCo) to bridge this gap and evaluate dialogue state tracking (DST) models on novel scenarios, i.e., would the system successfully tackle the request if the user responded differently but still consistently with the dialogue flow? CoCo leverages turn-level belief states as counterfactual conditionals to produce novel conversation scenarios in two steps: (i) counterfactual goal generation at turn level by dropping and adding slots followed by replacing slot values, (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow. Evaluating state-of-the-art DST models on the MultiWOZ dataset with CoCo-generated counterfactuals results in a significant performance drop of up to 30.8% (from 49.4% to 18.6%) in absolute joint goal accuracy. In comparison, widely used techniques like paraphrasing only affect the accuracy by at most 2%. Human evaluations show that CoCo-generated conversations perfectly reflect the underlying user goal with more than 95% accuracy and are as human-like as the original conversations, further strengthening its reliability and promise to be adopted as part of the robustness evaluation of DST models.
### Paper Keywords
["task-oriented dialogue", "dialogue state tracking", "robustness", "dst", "evaluation"]
### Paper Content
ABSTRACT
Dialogue state trackers have made significant progress on benchmark datasets, but their generalization capability to novel and realistic scenarios beyond the held-out conversations is less understood. We propose controllable counterfactuals (CoCo) to bridge this gap and evaluate dialogue state tracking (DST) models on novel scenarios, i.e., would the system successfully tackle the request if the user responded differently but still consistently with the dialogue flow? CoCo leverages turn-level belief states as counterfactual conditionals to produce novel conversation scenarios in two steps: (i) counterfactual goal generation at turn level by dropping and adding slots followed by replacing slot values, (ii) counterfactual conversation generation that is conditioned on (i) and consistent with the dialogue flow. Evaluating state-of-the-art DST models on the MultiWOZ dataset with CoCo-generated counterfactuals results in a significant performance drop of up to 30.8% (from 49.4% to 18.6%) in absolute joint goal accuracy. In comparison, widely used techniques like paraphrasing only affect the accuracy by at most 2%.
Human evaluations show that CoCo-generated conversations perfectly reflect the underlying user goal with more than 95% accuracy and are as human-like as the original conversations, further strengthening its reliability and promise to be adopted as part of the robustness evaluation of DST models. [1]

1 INTRODUCTION
Task-oriented dialogue (TOD) systems have recently attracted growing attention and achieved substantial progress (Zhang et al., 2019b; Neelakantan et al., 2019; Peng et al., 2020; Wang et al., 2020b;a), partly made possible by the construction of large-scale datasets (Budzianowski et al., 2018; Byrne et al., 2019; Rastogi et al., 2019). Dialogue state tracking (DST) is a backbone of TOD systems, where it is responsible for extracting the user's goal represented as a set of slot-value pairs (e.g., (area, center), (food, British)), as illustrated in the upper part of Figure 1. The DST module's output is treated as the summary of the user's goal so far in the dialogue and directly consumed by the subsequent dialogue policy component to determine the system's next action and response. Hence, the accuracy of the DST module is critical to prevent downstream error propagation (Liu and Lane, 2018), affecting the end-to-end performance of the whole system.
With the advent of representation learning in NLP (Pennington et al., 2014; Devlin et al., 2019; Radford et al., 2019), the accuracy of DST models has increased from 15.8% (in 2018) to 55.7% (in 2020). While measuring the held-out accuracy is often useful, practitioners consistently overestimate their model's generalization (Ribeiro et al., 2020; Patel et al., 2008) since test data is usually collected in the same way as training data. In line with this hypothesis, Table 1 demonstrates that there is a substantial overlap of the slot values between training and evaluation sets of the MultiWOZ DST benchmark (Budzianowski et al., 2018). In Table 2, we observe that the slot co-occurrence distributions for evaluation sets tightly align with that of the train split, hinting towards the potential limitation of the held-out accuracy in reflecting the actual generalization capability of DST models. Inspired by this phenomenon, we aim to address and provide insights into the following question: how well do state-of-the-art DST models generalize to the novel but realistic scenarios that are not captured well enough by the held-out evaluation set?

* Equal contribution. Work was done during Shiyang's internship at Salesforce Research.
[1] Code is available at https://github.com/salesforce/coco-dst

data | attraction-name | hotel-name | restaurant-name | taxi-departure | taxi-destination | train-departure | train-destination
dev  | 94.5 | 96.4 | 97.3 | 98.6 | 98.2 | 99.6 | 99.6
test | 96.2 | 98.4 | 96.8 | 95.6 | 99.5 | 99.4 | 99.4
Table 1: The percentage (%) of domain-slot values in dev/test sets covered by training data.

data  | area | book day | book time | food | name | price range
train | 1.9  | 38.8     | 39.2      | 2.1  | 16.4 | 1.5
dev   | 1.9  | 38.9     | 38.9      | 1.9  | 16.3 | 2.2
test  | 2.7  | 36.9     | 37.7      | 1.6  | 18.7 | 2.4
Table 2: Co-occurrence distribution (%) of the book people slot with other slots in the restaurant domain within the same user utterance. It rarely co-occurs with particular slots (e.g., food), which hinders the evaluation of DST models on realistic user utterances such as "I want to book a Chinese restaurant for 8 people."

Most prior work (Iyyer et al., 2018; Jin et al., 2019) focuses on adversarial example generation for robustness evaluation.
Such methods often rely on perturbations made directly on test examples in the held-out set and assume direct access to the evaluated models' gradients or outputs. Adversarial examples generated by these methods are often unnatural or obtained to hurt target models deliberately. It is imperative to emphasize here that both our primary goal and approach significantly differ from the previous line of work: (i) our goal is to evaluate DST models beyond held-out accuracy, (ii) we leverage the turn-level structured meaning representation (belief state) along with its dialogue history as conditions to generate the user response without relying on the original user utterance, (iii) our approach is entirely model-agnostic, assuming no access to the evaluated DST models, and (iv) perhaps most importantly, we aim to produce novel but realistic and meaningful conversation scenarios rather than intentionally adversarial ones.
We propose controllable counterfactuals (CoCo) as a principled, model-agnostic approach to generate novel scenarios beyond the held-out conversations. Our approach is inspired by the combination of two natural questions: how would DST systems react to (1) unseen slot values and (2) rare but realistic slot combinations? CoCo first encapsulates these two aspects under a unified concept called a counterfactual goal, obtained by a stochastic policy of dropping and adding slots in the original turn-level belief state followed by replacing slot values. In the second step, CoCo conditions on the dialogue history and the counterfactual goal to generate a counterfactual conversation. We cast the actual utterance generation as a conditional language modeling objective. This formulation allows us to plug in a pretrained encoder-decoder architecture (Raffel et al., 2020) as the backbone that powers the counterfactual conversation generation. We also propose a strategy to filter utterances that fail to reflect the counterfactual goal exactly. We consider value substitution (VS), as presented in Figure 1, as a special CoCo case that only replaces the slot values in the original utterance without adding or dropping slots. When we use VS as a fall-back strategy for CoCo (i.e., apply VS when CoCo fails to generate valid user responses after filtering), we call it CoCo+.
Evaluating three strong DST models (Wu et al., 2019; Heck et al., 2020; Hosseini-Asl et al., 2020) with our proposed controllable counterfactuals generated by CoCo and CoCo+ shows that the performance of each significantly drops (by up to 30.8%) compared to their joint goal accuracy on the original MultiWOZ held-out evaluation set. On the other hand, we find that these models are, in fact, quite robust to paraphrasing with back-translation, where their performance only drops by up to 2%. Analyzing the effect of data augmentation with CoCo+ shows that it consistently improves the robustness of the investigated DST models on counterfactual conversations generated by each of VS, CoCo and CoCo+. More interestingly, the same data augmentation strategy improves the joint goal accuracy of the best of these strong DST models by 1.3% on the original MultiWOZ evaluation set. Human evaluations show that CoCo-generated counterfactual conversations perfectly reflect the underlying user goal with more than 95% accuracy and are found to be quite close to the original conversations in terms of their human-likeness scores.
This further proves our proposed approach's reliability and potential to be adopted as part of the robustness evaluation of DST models.

[Figure 1: The upper left is a dialogue example between user and system, with its turn-level and dialogue-level belief states on the upper right. The lower left shows valid user utterance variations generated by VS and CoCo, with their corresponding belief states, derived from the original ones, on the right.]

2 RELATED WORK
Dialogue State Tracking. DST has been a core component of current state-of-the-art TOD systems. Traditional approaches usually rely on hand-crafted features or a domain-specific lexicon (Henderson et al., 2014; Wen et al., 2017) and require a predefined ontology, making them hard to extend to unseen values. To tackle this issue, various methods have been proposed. Gao et al. (2019) treats DST as a reading comprehension problem and predicts slot values with start and end positions in the dialogue context. Zhang et al. (2019a) proposes DS-DST, a dual-strategy model that predicts values in domains with a few possible candidates from classifiers and others from span extractors. Furthermore, Heck et al. (2020) proposes TripPy, a triple-copy-strategy model, which allows it to copy values either from the context, previous turns' predictions, or system informs.
An alternative to classification and span prediction is value generation. Wu et al. (2019) generates slot values with a pointer generator network (See et al., 2017) without relying on fixed vocabularies and spans. Hosseini-Asl et al. (2020) models DST as a conditional generation problem, directly finetunes GPT2 (Radford et al., 2019) on the DST task, and achieves state-of-the-art results on MultiWOZ.
Adversarial Example Generation. Adversarial example generation has been commonly studied in computer vision (Szegedy et al., 2014; Goodfellow et al., 2015). Recently, it has received growing attention in the NLP domain as well. Papernot et al. (2016) finds adversarial examples in the embedding space and then remaps them to the discrete space. Alzantot et al. (2018) proposes a population-based word-replacing method that aims to generate fluent adversarial sentences. These methods often edit the original data greedily, assuming access to the model's gradients or outputs, besides querying the underlying model many times (Jin et al., 2019). An alternative line of work investigates generating adversarial examples in a model-agnostic way. Iyyer et al. (2018) proposes to generate adversarial paraphrases of original data with different syntactic structures.
Jia and Liang (2017) automatically generates sentences with keyword overlaps with questions in SQuAD (Rajpurkar et al., 2016) to distract computer systems without changing the correct answer or misleading humans.
Although different methods have been proposed to evaluate the robustness of NLP models, the majority of prior work in this line focuses on text classification, neural machine translation, or reading comprehension problems. Perhaps the most similar existing works to ours are Einolghozati et al. (2019) and Cheng et al. (2019). Einolghozati et al. (2019) focuses on intent classification and slot tagging in TOD, while Cheng et al. (2019) targets synthetic competitive negotiation dialogues (Lewis et al., 2017) without a DST component. In this work, however, we focus on evaluating a core component of state-of-the-art TOD, DST, on the widely used MultiWOZ benchmark. To the best of our knowledge, ours is the first work to systematically evaluate the robustness of DST models.

3 BACKGROUND
Multi-domain DST task definition. Let $X_t = \{(U^{sys}_1, U^{usr}_1), \dots, (U^{sys}_t, U^{usr}_t)\}$ denote a sequence of turns of a dialogue until the $t$-th turn, where $U^{sys}_i$ and $U^{usr}_i$ ($1 \le i \le t$) denote the system and user utterance at the $i$-th turn, respectively. In multi-domain DST, each turn $(U^{sys}_i, U^{usr}_i)$ talks about a specific domain (e.g., hotel) and a certain number of slots (e.g., price range) in that domain. We denote all $N$ possible domain-slot pairs as $S = \{S_1, \dots, S_N\}$. The task is to track the value for each $S_j$ ($1 \le j \le N$) over $X_t$ (e.g., hotel-price range, cheap). Belief states can be considered at two granularities: turn-level ($L_t$) and dialogue-level ($B_t$). $L_t$ tracks the information introduced in the last turn, while $B_t$ tracks the accumulated state from the first turn to the last. As illustrated in the upper part of Figure 1, when the dialogue flow arrives at the second turn, $B_2$ becomes {(restaurant-area, center), (restaurant-food, British), (restaurant-book time, 18:00)}, while $L_2$ is {(restaurant-food, British), (restaurant-book time, 18:00)}, essentially tracking the update to $B_t$ made by the last turn.
Problem definition. Given a tuple $\langle X_t, L_t, B_t \rangle$, our goal is to generate a new user utterance $\hat{U}^{usr}_t$ to form a novel conversation scenario $\hat{X}_t = \{(U^{sys}_1, U^{usr}_1), \dots, (U^{sys}_t, \hat{U}^{usr}_t)\}$ by replacing the original user utterance $U^{usr}_t$ with $\hat{U}^{usr}_t$. To preserve the coherence of the dialogue flow, we cast the problem as generating an alternative user utterance $\hat{U}^{usr}_t$ conditioned on a modified $\hat{L}_t$ derived from the original turn-level belief state $L_t$ in a way that is consistent with the global belief state $B_t$. This formulation naturally allows for producing a new tuple $\langle \hat{X}_t, \hat{L}_t, \hat{B}_t \rangle$ controllable by $\hat{L}_t$, where $\hat{B}_t$ is induced from $B_t$ based on the difference between $L_t$ and $\hat{L}_t$. As illustrated in the lower part of Figure 1, $U^{usr}_2$ is replaced with two alternative utterances that are natural and coherent with the dialogue history. We propose to use the resulting set of $\langle \hat{X}_t, \hat{L}_t, \hat{B}_t \rangle$ to probe the DST models.
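To make the two granularities concrete, the sketch below models belief states as Python dicts mapping domain-slot names to values; the helper name and representation are ours, not from the paper.

```python
def turn_level_state(prev_dialogue_state, dialogue_state):
    """Recover the turn-level belief state L_t from dialogue-level states
    B_{t-1} and B_t: the slots introduced or changed by the last turn."""
    return {slot: value for slot, value in dialogue_state.items()
            if prev_dialogue_state.get(slot) != value}

# Second turn of the Figure 1 dialogue:
B1 = {"restaurant-area": "center"}
B2 = {"restaurant-area": "center",
      "restaurant-food": "British",
      "restaurant-book time": "18:00"}

L2 = turn_level_state(B1, B2)
print(L2)  # {'restaurant-food': 'British', 'restaurant-book time': '18:00'}
```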
Paraphrase baseline with back-translation. Paraphrasing the original utterance $U^{usr}_t$ is a natural way to generate $\hat{U}^{usr}_t$. With the availability of advanced neural machine translation (NMT) models, round-trip translation between two languages (i.e., back-translation (BT)) has become a widely used method to obtain paraphrases for downstream applications (Yu et al., 2018). We use publicly available pretrained English→German ($\log p(g|e)$) and German→English ($\log p(e|g)$) NMT models. [2] We translate $U^{usr}_t$ from English to German with a beam size $K$, and then translate each of the $K$ hypotheses back to English with the same beam size $K$. Consequently, we generate $K^2$ paraphrase candidates for $\hat{U}^{usr}_t$ and then rank them according to their round-trip confidence score $\log p(g|e) + \log p(e|g)$. As paraphrases are expected to preserve the meaning of $U^{usr}_t$, we set $\hat{L}_t = L_t$ and $\hat{B}_t = B_t$.

[2] https://pytorch.org/hub/pytorch_fairseq_translation

4 COCO
As illustrated in Figure 2, CoCo consists of three main pillars. We first train a conditional user utterance generation model $p_\theta(U^{usr}_t | U^{sys}_t, L_t)$ using the original dialogues. Secondly, we modify $L_t$ into a possibly arbitrary $\hat{L}_t$ with our counterfactual goal generator. Given $\hat{L}_t$ and $U^{sys}_t$, we sample $\hat{U}^{usr}_t \sim p_\theta(\hat{U}^{usr}_t | U^{sys}_t, \hat{L}_t)$ with beam search, followed by two orthogonal filtering mechanisms to further eliminate user utterances that fail to reflect the counterfactual goal $\hat{L}_t$.

4.1 VALUE SUBSTITUTION
A robust DST model should correctly reflect value changes in user utterances when tracking the user's goal. However, slot-value combinations, e.g., (restaurant-book time, 18:00), in evaluation sets are limited and even have significant overlaps with the training data, as shown in Table 1. To evaluate DST models with more diverse patterns, we propose a Value Substitution (VS) method to generate $\hat{U}^{usr}_t$. Specifically, for each value of $S_j$ in $L_t$, if the value only appears in $U^{usr}_t$ rather than $U^{sys}_t$, we allow it to be substituted. Otherwise, we keep it as is. This heuristic is based on the following three observations: (1) if the value comes from $U^{sys}_t$, e.g., the TOD system's recommendation of restaurant food, changing it may make the dialogue flow less natural and coherent; (2) if it never appears in the dialogue flow, e.g., yes for hotel-parking, changing it may cause belief state label errors; (3) if it only appears in $U^{usr}_t$, it is expected that changing the value won't cause the issues in (1) and (2).
For values that can be substituted, new values are sampled from a Slot-Value Dictionary, a predefined value set for each domain-slot. These new values are then used to update their counterparts in $U^{usr}_t$, $L_t$ and $B_t$. We defer the details of the slot-value dictionary to Section 4.2. After the update, we get $\hat{U}^{usr}_t$, $\hat{L}_t$ and $\hat{B}_t$, and can use $\langle \hat{X}_t, \hat{L}_t, \hat{B}_t \rangle$ to evaluate the performance of DST models. An example of how VS works is illustrated in the lower part of Figure 1: at the second turn, as British and 18:00 are in $L_2$ and only appear in $U^{usr}_2$ rather than $U^{sys}_2$, we can replace them with Chinese and 17:00, sampled from a slot-value dictionary, to get $\hat{U}^{usr}_2$, $\hat{L}_2$ and $\hat{X}_2$ without interrupting the naturalness of the dialogue flow.
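A minimal sketch of the VS heuristic, assuming exact substring match for deciding where a value appears (function and variable names are ours, not the authors'):

```python
import random

def value_substitution(sys_utt, usr_utt, turn_state, slot_value_dict, rng=random):
    """Replace values that appear only in the user utterance with new
    values sampled from a predefined slot-value dictionary."""
    new_usr, new_state = usr_utt, dict(turn_state)
    for slot, value in turn_state.items():
        # Observations (1)/(2): skip values mentioned by the system
        # or never surfaced verbatim in the user utterance.
        if value in sys_utt or value not in usr_utt:
            continue
        new_value = rng.choice(slot_value_dict[slot])
        new_usr = new_usr.replace(value, new_value)
        new_state[slot] = new_value
    return new_usr, new_state

sys_utt = "I have many options. Do you have any preference?"
usr_utt = "It needs to serve British food and I'd like a reservation for 18:00."
L2 = {"restaurant-food": "British", "restaurant-book time": "18:00"}
vocab = {"restaurant-food": ["Chinese"], "restaurant-book time": ["17:00"]}
print(value_substitution(sys_utt, usr_utt, L2, vocab))
```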
[Figure 2: The overall pipeline of CoCo. The left part represents the training phase of the utterance generation model, where the concatenation of $U^{sys}_t$ and $L_t$ is processed by the encoder, which the decoder then conditions on to generate the user utterance $U^{usr}_t$. The input and output of this model are shown within the box at the lower left. The right part depicts the inference phase, where the counterfactual goal generator first modifies the original belief $L_t$ fed from the left part into a new one, $\hat{L}_t$, which is then fed to the trained utterance generator along with the same conversation history to generate $\hat{U}^{usr}_t$ by beam search, followed by filtering undesired utterances. Note that conversational turns in the inference phase don't have to originate from the training phase.]

4.2 CONTROLLABLE COUNTERFACTUAL GENERATION
Back-translation (BT) and value substitution (VS) provide controllability at different granularities. BT only provides syntactic variety while preserving the meaning, hence the belief state. VS can only replace the values of the existing slots in an utterance while still having to retain exactly all the slots. However, neither of them is able to explore conversations with an even slightly modified set of slots. We propose a principled approach to unlock the capability of conversation generation that generalizes beyond mere transformations of existing utterances. We cast it as a task of generating novel user utterances ($U^{usr}_t$) from a given conversation history ($U^{sys}_t$) and a turn-level user goal ($L_t$).
We propose to tackle this problem with a conditional generation model that utilizes a pre-trained encoder-decoder architecture (Raffel et al., 2020; Lewis et al., 2020) to approximate $p(U^{usr}_t | U^{sys}_t, L_t)$, where the concatenation of $U^{sys}_t$ and $L_t$ is used as input to the encoder and $U^{usr}_t$ is set to be the target sequence to be generated by the decoder, as illustrated in the lower left of Figure 2. To learn this distribution, we factorize it by the chain rule (Bengio et al., 2003) and train a neural network with parameters $\theta$ to minimize the aggregated negative log-likelihood $J_{gen}$ over each dialogue turn tuple $(U^{sys}_t, L_t, U^{usr}_t)$, where $U^{usr}_t = (U^{usr}_{t,1}, U^{usr}_{t,2}, \dots, U^{usr}_{t,n_t})$ and $U^{usr}_{t,k}$ is its $k$-th token: [3]

$$p_\theta(U^{usr}_t | U^{sys}_t, L_t) = \prod_{k=1}^{n_t} p_\theta(U^{usr}_{t,k} \mid U^{usr}_{t,<k}, U^{sys}_t, L_t), \qquad J_{gen} = -\sum_{k=1}^{n_t} \log p_\theta(U^{usr}_{t,k} \mid U^{usr}_{t,<k}, U^{sys}_t, L_t) \quad (1)$$

[3] More details can be found in Appendix E.1.
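One plausible instantiation of this training objective uses HuggingFace Transformers with T5 (Raffel et al., 2020); the serialization format below is our assumption, not necessarily the authors' exact setup:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def encode_example(sys_utt, turn_state, usr_utt):
    # Serialize the condition as "system: ... state: slot=value, ..."
    state_str = ", ".join(f"{s}={v}" for s, v in turn_state.items())
    source = f"system: {sys_utt} state: {state_str}"
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(usr_utt, return_tensors="pt", truncation=True).input_ids
    return inputs, labels

inputs, labels = encode_example(
    "I have many options. Do you have any preference?",
    {"restaurant-food": "British", "restaurant-book time": "18:00"},
    "It needs to serve British food and I'd like a reservation for 18:00.",
)
loss = model(**inputs, labels=labels).loss  # J_gen for this turn
loss.backward()  # an optimizer step would follow in a training loop

# At inference, plug in a counterfactual goal and decode with beam search:
inputs, _ = encode_example("I have many options. Do you have any preference?",
                           {"restaurant-food": "Chinese",
                            "restaurant-book people": "2"}, "")
candidates = model.generate(**inputs, num_beams=5, num_return_sequences=5)
```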
Once the parameters $\theta$ of the goal-conditioned utterance generation model $p_\theta$ are learned from these tuples, we gain the unique ability to generate novel conversation turns by plugging in an arbitrary but consistent counterfactual goal $\hat{L}_t$ derived from $L_t$. An example of how the counterfactual goal generator operates is shown in the middle part of Figure 2. The counterfactual goal generator has three components, namely the operation, the slot-value dictionary, and the slot-combination dictionary.
Operation decides which combination of the following three meta-operations to apply to $L_t$: drop, change, and add. Drop is used to remove values from a non-empty slot in $L_t$. Change borrows the same operation as VS, substituting existing values. Add allows us to add new domain-slot values into $L_t$, giving us the power of generating valid but more complicated $\hat{U}^{usr}_t$.
Slot-Value Dictionary has a pre-defined value set $S^{val}_j$ for each $S_j$. Once the change and/or add meta-operation is activated for $S_j$, the counterfactual goal generator randomly samples a value from $S^{val}_j$.
Slot-Combination Dictionary has a predefined domain-slot set $S^{add}_j$ for each $S_j$. When the add meta-operation is activated, the counterfactual goal generator samples a domain-slot from the intersection of all $S^{add}_j$ for which $S_j$ has non-empty values within $L_t$. Once a new domain-slot is sampled, its value is then sampled from its corresponding value set as defined in the slot-value dictionary.
Given $L_t$, the counterfactual goal generator first takes $L_t$ as its input and sequentially applies drop, change and add to output $\hat{L}_t$. Given $\hat{L}_t$ and $U^{sys}_t$, we can sample $\hat{U}^{usr}_t \sim p_\theta(\hat{U}^{usr}_t | U^{sys}_t, \hat{L}_t)$ with beam search. We use a rule-based method to get $\hat{B}_t$ for $\hat{X}_t$. Specifically, we obtain $B_{t-1}$ by calculating the set difference of $B_t$ and $L_t$. Given $B_{t-1}$ and $\hat{L}_t$, we update a domain-slot in $B_{t-1}$ if its value in $\hat{L}_t$ is not none; otherwise we keep its value as it is in $B_{t-1}$, following Chao and Lane (2019). After the update, we get $\hat{B}_t$ and use it as the dialogue-level label of $\hat{X}_t$.
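A toy sketch of the counterfactual goal generator; the sampling probabilities and helper names are our own assumptions, as the paper defers the exact stochastic policy to its appendix:

```python
import random

def counterfactual_goal(turn_state, slot_value_dict, slot_combination_dict,
                        p_drop=0.3, p_change=0.5, p_add=0.5, rng=random):
    """Sequentially apply drop, change and add to L_t to obtain a new goal."""
    new_state = dict(turn_state)
    # Drop: remove a value from a non-empty slot (keep at least one slot).
    for slot in list(new_state):
        if len(new_state) > 1 and rng.random() < p_drop:
            del new_state[slot]
    # Change: substitute existing values, as in VS.
    for slot in new_state:
        if rng.random() < p_change:
            new_state[slot] = rng.choice(slot_value_dict[slot])
    # Add: sample a new domain-slot compatible with every current slot.
    if new_state and rng.random() < p_add:
        candidates = set.intersection(
            *(set(slot_combination_dict[s]) for s in new_state)) - set(new_state)
        if candidates:
            slot = rng.choice(sorted(candidates))
            new_state[slot] = rng.choice(slot_value_dict[slot])
    return new_state
```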
4.3 FILTERING
We have presented methods to generate $\hat{U}^{usr}_t$, but how do we make sure that the generated utterance correctly reflects the user goal represented by $\hat{L}_t$? To motivate our methods, we take an example generated by beam search, located at the lower right of Figure 2, for illustration. In this example, the first hypothesis doesn't include the value 2 for restaurant-book people that is within $\hat{L}_t$. On the contrary, the second hypothesis includes the value 18:00 for restaurant-book time, which is not part of $\hat{L}_t$. We call these two phenomena de-generation and over-generation, respectively. Filtering candidates with these issues is thus an important step to make sure ($U^{sys}_t$, $\hat{U}^{usr}_t$) perfectly expresses the user goals in $\hat{L}_t$. We propose two filtering methods, namely a slot-value match filter and a classifier filter, to alleviate the de-generation and over-generation issues, respectively.
Slot-Value Match Filter. To tackle the de-generation issue, we choose a subset of values in $\hat{L}_t$ (values that should only appear in $\hat{U}^{usr}_t$ rather than $U^{sys}_t$) and eliminate candidates that fail to contain all the values in the subset. [4] In Figure 2, the first hypothesis from the beam search output is eliminated by this filter because it does not include the value 2 for restaurant-book people in $\hat{L}_t$.
[4] For hotel-parking and hotel-internet, we use parking and wifi as their corresponding values for filtering.
Classifier Filter. As shown in Table 2, the slot restaurant-book people frequently appears together with restaurant-book time in the data used to train our generation model $p_\theta(\hat{U}^{usr}_t | U^{sys}_t, \hat{L}_t)$, which may cause the resulting generation model to fall into the over-generation issue. To deal with this over-generation problem, we propose to use an $N$-way multi-label classifier to eliminate such candidates. We employ BERT-base (Devlin et al., 2019) as its backbone:

$$H^{CLS}_t = \mathrm{BERT}([\mathrm{CLS}], X_{t-1}, [\mathrm{SEP}], U^{sys}_t, U^{usr}_t) \in \mathbb{R}^{d_{emb}} \quad (2)$$

where $H^{CLS}_t \in \mathbb{R}^{d_{emb}}$ is the representation of the CLS token of BERT with dimension $d_{emb}$. We then feed $H^{CLS}_t$ into a linear projection layer followed by a sigmoid function:

$$P = \mathrm{Sigmoid}(W H^{CLS}_t) \in \mathbb{R}^{N}, \qquad J_{cls} = -\frac{1}{N} \sum_{j=1}^{N} \big( Y_j \log P_j + (1 - Y_j) \log(1 - P_j) \big) \quad (3)$$

where $W \in \mathbb{R}^{N \times d_{emb}}$ is the trainable weight of the linear projection layer and $P_j$ is the probability that slot $S_j$ appears at the $t$-th turn of $X_t$, with $Y_j$ as its label. The classifier is trained with $J_{cls}$, i.e., the mean binary cross-entropy loss over every slot $S_j$, and achieves a precision of 92.3% and a recall of 93.5% on the development set. [5] During inference, the classifier takes $\hat{X}_t$ as input and predicts whether a slot $S_i$ appears at the $t$-th turn or not with threshold 0.5. We use this filter to eliminate generated candidates for which the classifier predicts at least one slot $S_j$ mentioned in ($U^{sys}_t$, $\hat{U}^{usr}_t$) while $S_j \notin \hat{L}_t$. In Figure 2, our classifier filter eliminates the second hypothesis from the output of beam search because $\hat{L}_t$ does not contain the slot restaurant-book time while it is mentioned in the generated utterance.
[5] We defer further details of the classifier to Appendix E.2.
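A sketch of the slot-value match filter, assuming exact (case-insensitive) substring match as the containment test; the paper does not spell out the matching rule, so treat this as one reasonable reading:

```python
def slot_value_match_filter(candidates, sys_utt, counterfactual_state):
    """Keep only candidates containing every value the user (not the
    system) is supposed to surface, to remove de-generated hypotheses."""
    required = [v.lower() for v in counterfactual_state.values()
                if v.lower() not in sys_utt.lower()]
    return [cand for cand in candidates
            if all(v in cand.lower() for v in required)]

cands = ["Sure, I want to book a Chinese restaurant for 2 people at 17:00.",
         "I want to book a table at a Chinese restaurant."]
state = {"restaurant-food": "Chinese", "restaurant-book people": "2"}
print(slot_value_match_filter(cands, "Do you have any preference?", state))
# The second candidate is dropped: it de-generates the value "2".
```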
5 EXPERIMENTS
5.1 EXPERIMENTAL SETUP
We consider three strong multi-domain DST models to evaluate the effect of CoCo-generated counterfactual conversations in several scenarios. TRADE (Wu et al., 2019) builds upon a pointer generator network and contains a slot classification gate and a state generator module to generate states. TripPy (Heck et al., 2020) introduces a classification gate and a triple copy module, allowing the model to copy values either from the conversation context, previous turns' predictions, or system informs. SimpleTOD (Hosseini-Asl et al., 2020) models DST as a conditional generation problem with the conversation history as its condition and the belief state as its target, and finetunes GPT2.
Evaluation. We train each of these three models following their publicly released implementations on the standard train/dev/test split of MultiWOZ 2.1 (Eric et al., 2019). We use the joint goal accuracy to evaluate the performance of DST models. It is 1.0 if and only if the set of (domain-slot, value) pairs in the model output exactly matches the oracle one, and otherwise 0.
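Joint goal accuracy is an exact-set-match metric; a minimal reference implementation (ours, not the authors' evaluation script) follows:

```python
def joint_goal_accuracy(predictions, oracles):
    """Fraction of turns whose predicted (domain-slot, value) set
    exactly matches the oracle belief state."""
    correct = sum(set(pred.items()) == set(gold.items())
                  for pred, gold in zip(predictions, oracles))
    return correct / len(oracles)

pred = [{"restaurant-food": "Chinese", "restaurant-book people": "2"}]
gold = [{"restaurant-food": "Chinese", "restaurant-book people": "2"}]
print(joint_goal_accuracy(pred, gold))  # 1.0
```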
Slot-Value Dictionary. We carefully design two sets of slot-value dictionaries to capture the effect of unseen slot values from two perspectives, namely in-domain (I) and out-of-domain (O). I is a dictionary that maps each slot to a set of values that appear in the MultiWOZ test set but not in the training set. [6] On the other hand, we construct O using external values (e.g., hotel names from Wikipedia) that fall completely outside of the MultiWOZ data for the slots (e.g., hotel-name, restaurant-name, etc.). Otherwise, we follow a similar fall-back strategy for slots (e.g., hotel-internet) with no possible external values beyond the ones (e.g., yes and no) in the original data.
Slot-Combination Dictionary. As illustrated in Table 2, the held-out evaluation set follows almost the same slot co-occurrence distribution as the training data. This makes it difficult to estimate how well DST models would generalize on valid conversation scenarios that simply do not obey the same distribution. CoCo's flexibility at generating a conversation for an arbitrary turn-level belief state naturally allows us to seek an answer to this question. To this end, we design three slot-combination dictionaries, namely freq, neu and rare. [7] A slot-combination dictionary directly controls how different slots can be combined while generating counterfactual goals. As suggested by their names, freq contains frequently co-occurring slot combinations (e.g., book people is combined only with the book day and book time slots), while rare is the opposite of freq, grouping rarely co-occurring slots together, and neu is more neutral, allowing any meaningful combination within the same domain.

5.2 MAIN RESULTS
Before reporting our results, it is important to note that several different post-processing strategies are used by different DST models. To make a fair comparison across models, we follow the same post-processing strategy employed by the SimpleTOD evaluation script for TRADE and TripPy as well. We summarize our main results in Figure 3. While all three DST models are quite robust to back-translation (BT) [8], their performance drops significantly on counterfactual conversations generated by each of VS, CoCo and CoCo+ compared to the MultiWOZ held-out set accuracy (original).

[6] When this set is empty for a slot (e.g., hotel-area), we use the set of all possible values (e.g., center, east, west, south, north) for this slot from the training data. Please see Appendix I for further details.
[7] Please see Appendix H for further details.
[8] Similar to CoCo, we back-translate only the turns with non-empty turn-level belief states and apply the slot-value match filter. We fall back to the original user utterance if none of the paraphrases passes the filter.

[Figure 3: Joint goal accuracy (%) across different methods. "Original" refers to the results on the original held-out test set. * denotes results obtained with the in-domain unseen slot-value dictionary (I). VS, CoCo and CoCo+ results use the out-of-domain slot-value dictionary (O). For brevity, we omit CoCo and CoCo+ results using the in-domain slot-value dictionary; see Appendix C for the full results. freq, neu, and rare indicate which slot-combination dictionary is used. Lower bound refers to the percentage of correct predictions on turns with an empty turn-level belief state over the original held-out test set.]

Unseen Slot-Value Generalization. We analyze the effect of unseen slot values for the two dictionaries (I and O) introduced in the previous section compared to the original set of slot values, which has a large overlap with the training data. Results presented in the left part of Figure 3 show that the performance of DST models drops significantly, by up to 11.8% compared to the original accuracy, even on the simple counterfactuals generated by the VS strategy using the in-domain unseen slot-value dictionary (I). Furthermore, using the out-of-domain slot-value dictionary (O) results in about a 10% additional drop in accuracy consistently across the three models. The consistent and similar drop in accuracy suggests that TRADE, SimpleTOD, and TripPy are almost equally susceptible to unseen slot values.
Generalization to Novel Scenarios. The right section of Figure 3 presents the main results in our effort to answer the central question we posed at the beginning of this paper. Based on these results, we see that state-of-the-art DST models have serious difficulty generalizing to the novel scenarios generated by both CoCo and CoCo+ using the three different slot-combination strategies. The generalization difficulty becomes even more serious on counterfactuals generated by CoCo+. As expected, the performance drop consistently increases as we start combining less and less frequently co-occurring slots (ranging from freq to rare) while generating our counterfactual goals. In particular, CoCo+(rare) counterfactuals drop the accuracy of TRADE from 49.4% to 18.6%, pushing its performance very close to its lower bound of 13.8%. Even the performance of the most robust of the three models (TripPy) drops by up to 25.8%, leading us to conclude that held-out accuracy for state-of-the-art DST models may not sufficiently reflect their generalization capabilities.
Transferability Across Models. As highlighted before, a significant difference and advantage of our proposed approach lies in its model-agnostic nature, making it immediately applicable for the evaluation of any DST model. As can be inferred from Figure 3, the effect of CoCo-generated counterfactuals on the joint goal accuracy is quite consistent across all three DST models. This result empirically demonstrates the transferability of CoCo, strengthening its reliability and applicability to be generally employed as a robustness evaluation of DST models by future research.

5.3 HUMAN EVALUATION

           | Human-likeness | Correctness
Human      | 87%            | 85%
CoCo(ori)  | 90%            | 91%
CoCo(freq) | 90%            | 99%
CoCo(neu)  | 79%            | 98%
CoCo(rare) | 82%            | 96%
Table 3: Human evaluation.

We next examine the quality of our generated data from two perspectives: "human-likeness" and "turn-level belief state correctness". Human-likeness evaluates whether a user utterance is fluent and consistent with its dialogue context. Turn-level belief state correctness evaluates whether ($U^{sys}_t$, $\hat{U}^{usr}_t$) exactly expresses the goals in $\hat{L}_t$. Both metrics are based on binary evaluation. We randomly sample 100 turns from the original test data along with their corresponding CoCo-generated ones. For the CoCo-generated data, we have two different settings in which to examine its quality. The first is to use the original turn-level belief state to generate the user utterance, denoted by CoCo(ori).
Thesecond setting is to verify the quality of the conversations generated by C OCO(freq)-, C OCO(neu)-and C OCO(rare) as they hurt the DST models’ accuracy significantly as shown in Figure 3. For eachresult row reported in Table 3, we ask three individuals with proficient English and advanced NLPbackground to conduct the evaluation, and use majority voting to determine the final scores.We can see that CoCo(ori) generated conversations are almost as human-like as original conversa-tions. Furthermore, C OCO(ori) generated slightly more “correct” responses than the original utter-ances in MultiWoZ 2.1. A presumable reason is that annotation errors exist in MultiWoZ 2.1, whileour C OCOare trained on recently released cleaner MultiWoZ 2.2, making generated data have higherquality. In addition, all three variants of the C OCO-generated conversations consistently outper-8Published as a conference paper at ICLR 202149.450.027.729.222.824.221.022.218.620.956.056.234.136.028.931.826.430.023.427.161.362.643.055.039.455.037.953.335.556.2152535455565OriginalOriginal◇VSVS◇CoCo+(freq)CoCo+(freq)◇CoCo+(neu)CoCo+(neu)◇CoCo+(rare)CoCo+(rare)◇TRADESimpleTodTripPyFigure 4: Comparison of retrained DST models (indicated by ) on C OCO+(rare)-augmented training datawith their counterparts trained on original MultiWOZ train split.form human response in terms of the turn-level belief state correctness. Although C OCO(neu) andCOCO(rare) are slightly less human-like than the original human response, C OCO(freq)-generatedutterances have similar human-likeness as original ones. These results demonstrate the effectivenessof our proposed approach in generating not only high-fidelity but also human-like user utterances,proving its potential to be adopted as part of robustness evaluation of DST models.5.4 A NALYSIS OF COCO+ASDATA AUGMENTATION DEFENSESo far, we have focused on the generalization capability of DST models on C OCO-generated conver-sations using different slot-value and slot-combination dictionaries. We have observed that all threeDST models are consistently most susceptible to conversations generated by C OCO+(rare) strategy.Instead, we now seek to answer the following question: Would using conversations generated byCOCO+(rare) to augment the training data help these DST models in better generalizing to unseenslot values and/or novel scenarios? Towards exploring this direction in a principled way, we designa new slot value dictionary ( train-O ) similar to out-of-domain unseen slot-value dictionary ( O). Fora fair comparison, we make sure that the slot values in train-O (please refer to Appendix I for thecomplete dictionary) do not overlap with the one ( O) used for generating test conversations.We first retrain each DST model on the MultiWOZ training split augmented with C OCO+(rare)-generated conversations using train-O slot-value dictionary. Retrained DST models are then evalu-ated on original test set as well as on the counterfactual test sets generated by VS and various ver-sions of C OCO+. Results presented in Figure 4 show that retraining on the C OCO+(rare)-augmentedtraining data improves the robustness of all three DST models across the board. Most notably, itrebounds the performance of T RIPPYon C OCO+(rare)-generated test set from 35.5% to 56.2%,significantly closing the gap with its performance (61.3%) on the original test set. 
We also observe that the retrained DST models obtain an improved joint goal accuracy on the original MultiWOZ test set compared to their counterparts trained only on the original MultiWOZ train split, further validating the quality of CoCo-generated conversations. Finally, we would like to highlight that the retrained TripPy achieves 62.6% joint goal accuracy, improving the previous state-of-the-art by 1.3%. We leave the exploration of how to fully harness CoCo as a data augmentation approach to future work.

6 CONCLUSION
We propose a principled, model-agnostic approach (CoCo) to evaluate dialogue state trackers beyond the held-out evaluation set. We show that state-of-the-art DST models' performance drops significantly when evaluated on the CoCo-generated conversations. Human evaluations validate that these conversations have high fidelity and are human-like. Hence, we conclude that these strong DST models have difficulty generalizing to novel scenarios with unseen slot values and rare slot combinations, confirming the limitations of relying only on the held-out accuracy. When explored as a data augmentation method, CoCo consistently improves state-of-the-art DST models not only on the CoCo-generated evaluation set but also on the original test set. This further proves the benefit and potential of our approach to be adopted as part of a more comprehensive evaluation of DST models.<|im_end|> <|im_start|>assistant
### Review Title
Creating challenging examples for DST systems: interesting results but may need more clarification in the methods section
### Review Text
Update after reading author response: Thank you for the thoughtful response. I appreciate that you have added extra implementation details and am changing my score to a 6. Regarding the limitations in scope: My apologies if my review was confusingly worded. I just wanted to clarify that I had meant that the counterfactual generation method itself may have limitations (not the high-level DST task, which I agree has broad uses). The concern is that the adversarial example generation strategy might be too domain-specific to transfer easily to other tasks, and that this might limit the impact of the proposed counterfactual generation method. I think this is somewhat related to the parts of the methodology that R1 described as "ad-hoc", "engineering intensive", or reliant on heuristics. I still have some reservations about the transferability of the proposed methods, though the authors' response to R1 did clarify a bit on this point.
-------------------------------------------------------------------------------------------------------------------------------------------
Original Review: This paper presents CoCo, a new method for generating adversarial examples for DST tasks using "controllable counterfactual" generation. Unlike other approaches for adversarial example generation, this approach is model agnostic. They also demonstrate the effectiveness of the method by applying it to the MultiWOZ dataset, where state-of-the-art model performance drops by ~30 points. I might lean towards rejecting this paper because there are several points in the methods that are still unclear to me. However, I hope the authors may be able to clarify some of my questions (I've listed a few in the limitations section of my review). Another potential drawback is that the methods described here are limited in scope to the DST domain, so it may not be as useful to other ML researchers more broadly.
Strengths:
- CoCo is a very effective method.
Model performance drops by about 30 points, demonstrating that the adversarial examples generated are extremely challenging.
- The human study demonstrates that the created examples are still human-like and are actually often rated "more correct" than the original MultiWOZ responses.
Limitations:
- The proposed methods use a lot of pre-existing components and are limited in scope to this one domain. While the created counterfactuals are useful as a challenge dataset for DST systems, the overall approach in this paper may not be more broadly impactful outside of this task.
- There are a few points in the methodology that seem a bit unclear or should be described more fully:
  - Can you provide more details about the conditional generation model (p): How is it being trained? What architecture is being used?
  - The Slot-Value match filter: How are you judging whether the candidate "contains" the value? (Is this exact match?)
  - Classifier filter: Can you provide more details on the classifier architecture and how it is being trained? What is the precision and recall of its predictions?
Questions:
- In section 5.4: Any thoughts on why data augmentation is more effective for the TripPy model than the other two models?
Minor Feedback:
- The CoCo name could be easily confused with Microsoft COCO, a commonly used computer vision dataset. The authors might consider changing the name to avoid confusion.
- The font in Figures 3 and 4 is tiny and hard to read. I recommend making it bigger.
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct<|im_end|>
SkgRW64twr
ICLR.cc/2020/Conference
2020
Deep Multi-View Learning via Task-Optimal CCA
["Heather D. Couture", "Roland Kwitt", "J.S. Marron", "Melissa Troester", "Charles M. Perou", "Marc Niethammer"]
Canonical Correlation Analysis (CCA) is widely used for multimodal data analysis and, more recently, for discriminative tasks such as multi-view learning; however, it makes no use of class labels. Recent CCA methods have started to address this weakness but are limited in that they do not simultaneously optimize the CCA projection for discrimination and the CCA projection itself, or they are linear only. We address these deficiencies by simultaneously optimizing a CCA-based and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is highly correlated and discriminative. Our method shows a significant improvement over previous state-of-the-art (including deep supervised approaches) for cross-view classification (8.5% increase), regularization with a second view during training when only one view is available at test time (2.2-3.2%), and semi-supervised learning (15%) on real data.
["multi-view", "components analysis", "CCA", "representation learning", "deep learning"]
ABSTRACT
Multi-view learning seeks to form better models by making use of multiple feature sets representing the same samples. Exploiting feature correlations during training can enable better models. The traditional method for computing correlations between feature sets is Canonical Correlation Analysis (CCA), which finds linear projections that maximize correlation between feature vectors, essentially computing a shared embedding for two views of data. More recently, CCA has been used for multi-view discriminative tasks; however, CCA makes no use of class labels. Recent CCA methods have started to address this weakness but are limited in that they do not simultaneously optimize the CCA projection for discrimination and the CCA projection itself, or they are linear only. We address these deficiencies by simultaneously optimizing a CCA-based and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is highly correlated and discriminative. Our method shows a significant improvement over previous state-of-the-art (including deep supervised approaches) for cross-view classification (8.5% increase), regularization with a second view during training when only one view is available at test time (2.2-3.2%), and semi-supervised learning (15%) on real data.

1 INTRODUCTION
Parallel modalities of data are increasingly common in a variety of applications, including images and text, audio and video, parallel texts of different languages, and a variety of medical imaging and omics modalities for each patient. Each view provides essential information for classification and, when used together, can form a more accurate model. This is especially important for difficult discriminative tasks such as those with a small training set size. Canonical Correlation Analysis (CCA) is the most common method for computing a shared representation from two views of data by computing a space in which they are maximally correlated (Hotelling, 1936; Bie et al., 2005). In this paper we will demonstrate that, through optimizing for both discriminative features and correlation between views, we can improve classification accuracy for three real-world scenarios.
CCA is an unsupervised method but has been applied to many discriminative tasks (Kan et al., 2015; Sargin et al., 2007; Arora & Livescu, 2012). While some of the correlated CCA features are useful for discriminative tasks, many represent properties that are of no use for classification and obscure correlated information that is beneficial. This problem is magnified with recent non-linear extensions of CCA that use deep learning to make significant strides in improving correlation (Andrew et al., 2013; Wang et al., 2015a; 2016; Chang et al., 2018), but often at the expense of discriminative capability (cf. §5.1). Therefore, we present Task-Optimal CCA (TOCCA), a new deep learning technique to project the data from two views to a shared space that is also discriminative (Fig. 1).
Implementing a task-optimal variant of CCA required a fundamental change in formulation. We show that the CCA objective can equivalently be expressed as an $\ell_2$ distance minimization in the shared space plus an orthogonality constraint. Orthogonality constraints help regularize neural networks (NNs) (Huang et al., 2018); we present three techniques to accomplish this.
While our method is derived from CCA, by manipulating the orthogonality constraints we obtain deep CCA approaches that compute a shared latent space that is also discriminative.
Our family of solutions for supervised CCA required a crucial and non-trivial change in formulation. We demonstrate the effectiveness and versatility of our model for three different tasks: 1) cross-view classification on a variation of MNIST (LeCun, 1998), 2) regularization when two views are available for training but only one at test time, on a cancer imaging and genomic data set with only 1,000 samples, and 3) semi-supervised representation learning to improve speech recognition. All experiments showed a significant improvement in accuracy over previous state-of-the-art. In addition, our approach is more robust in the small sample size regime than alternative methods. Overall, our experiments on real data show the effectiveness of our method in learning a shared space that is more discriminative than previous methods for a variety of practical problems.

[Figure 1: Our goal with Task-Optimal CCA is to compute a shared space that is also discriminative. We do this by using NNs to compute an embedding for each view while simultaneously optimizing for correlation in the embedded space and a task-optimal objective. This setup is beneficial for three scenarios: 1) training a classifier with the embedding from one view and testing with the embedding of the other view (§5.1), 2) when two views are available for training but only one at test time (§5.2), and 3) when both views are used for both training and testing (§5.3). The embeddings for views X1 and X2 are represented by A1 and A2, respectively. A classifier f(A) then predicts the class for each sample. In order to compare with unsupervised variants of CCA, the classifier f may be computed subsequent to the CCA embedding.]

2 RELATED WORK
CCA was initially used for unsupervised data analysis to gain insights into components shared by two sources (Andrew et al., 2013; Wang et al., 2015a; 2016). CCA has also been used to compute a shared latent space for cross-view classification (Kan et al., 2015; Wang et al., 2015a; Chandar et al., 2016; Chang et al., 2018), for representation learning on multiple views that are then joined for prediction (Sargin et al., 2007; Dorfer et al., 2016b), and for classification from a single view when a second view is available during training (Arora & Livescu, 2012). Recent non-linear extensions of CCA implemented via NNs make significant improvements in correlation (Andrew et al., 2013; Wang et al., 2015a; 2016; Chang et al., 2018) but with little focus on discriminative capability.
Most prior work that boosts the discriminative capability of CCA is linear only (Lee et al., 2015; Singanamalli et al., 2014; Duan et al., 2016). More recent work using NNs still remains limited in that it optimizes discriminative capability for an intermediate representation rather than the final CCA projection (Dorfer et al., 2016b), or optimizes the CCA objective only during pre-training, not while training the task objective (Dorfer et al., 2018).
We advocate jointly optimizing CCA and a discriminative objective by computing the CCA projection within a network layer while applying a task-driven operation such as classification. Experimental results show that our method significantly improves upon previous work (Dorfer et al., 2016b; 2018) due to its focus on both the shared latent space and a task-driven objective. The latter is particularly important for small training set sizes.
While alternative approaches to multi-view learning via CCA exist, they typically focus on a reconstruction objective. That is, they transform the input into a shared space such that the input could be reconstructed, either individually or by reconstructing one view from the other. Variations of coupled dictionary learning (Shekhar et al., 2014; Xu et al., 2015; Cha et al., 2015; Bahrampour et al., 2015) and autoencoders (Wang et al., 2015a; Bhatt et al., 2017) have been used in this context. CCA-based objectives, such as the model used in this work, instead learn a transformation to a shared space without the need for reconstructing the input. This task may be easier, and sufficient for producing a representation for multi-view classification (Wang et al., 2015a).

3 BACKGROUND
We first introduce CCA and then present our task-driven approach in §4. Linear and non-linear CCA are unsupervised and find the shared signal between a pair of data sources by maximizing the sum correlation between corresponding projections. Let $X_1 \in \mathbb{R}^{d_1 \times n}$ and $X_2 \in \mathbb{R}^{d_2 \times n}$ be mean-centered input data from two different views with $n$ samples and $d_1$, $d_2$ features, respectively.
CCA. The objective is to maximize the correlation between $a_1 = w_1^\top X_1$ and $a_2 = w_2^\top X_2$, where $w_1$ and $w_2$ are projection vectors (Hotelling, 1936). The first canonical directions are found via

$$\arg\max_{w_1, w_2} \ \mathrm{corr}\left(w_1^\top X_1,\; w_2^\top X_2\right)$$

and subsequent projections are found by maximizing the same correlation but in orthogonal directions. Combining the projection vectors into matrices $W_1 = [w_1^{(1)}, \dots, w_1^{(k)}]$ and $W_2 = [w_2^{(1)}, \dots, w_2^{(k)}]$ ($k \le \min(d_1, d_2)$), CCA can be reformulated as a trace maximization under orthonormality constraints on the projections, i.e.,

$$\arg\max_{W_1, W_2} \ \mathrm{tr}(W_1^\top \Sigma_{12} W_2) \quad \text{s.t.} \quad W_1^\top \Sigma_1 W_1 = W_2^\top \Sigma_2 W_2 = I \quad (1)$$

for covariance matrices $\Sigma_1 = X_1 X_1^\top$, $\Sigma_2 = X_2 X_2^\top$, and cross-covariance matrix $\Sigma_{12} = X_1 X_2^\top$. Let $T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}$ and let its singular value decomposition (SVD) be $T = U_1 \mathrm{diag}(\sigma) U_2^\top$ with singular values $\sigma = [\sigma_1, \dots, \sigma_{\min(d_1,d_2)}]$ in descending order. $W_1$ and $W_2$ are computed from the top $k$ singular vectors of $T$ as $W_1 = \Sigma_1^{-1/2} U_1^{(1:k)}$ and $W_2 = \Sigma_2^{-1/2} U_2^{(1:k)}$, where $U^{(1:k)}$ denotes the first $k$ columns of matrix $U$. The sum correlation in the projection space is equivalent to

$$\sum_{i=1}^{k} \mathrm{corr}\left( (w_1^{(i)})^\top X_1,\; (w_2^{(i)})^\top X_2 \right) = \sum_{i=1}^{k} \sigma_i, \quad (2)$$

i.e., the sum of the top $k$ singular values. A regularized variation of CCA (RCCA) ensures that the covariance matrices are positive definite by computing them as $\hat{\Sigma}_1 = \frac{1}{n-1} X_1 X_1^\top + rI$ and $\hat{\Sigma}_2 = \frac{1}{n-1} X_2 X_2^\top + rI$, for a regularization parameter $r > 0$ and identity matrix $I$ (Bilenko & Gallant, 2016).
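For reference, a compact NumPy implementation of this closed-form solution (with RCCA-style regularization; our code, not the authors'):

```python
import numpy as np

def linear_cca(X1, X2, k, r=1e-4):
    """X1: (d1, n), X2: (d2, n), mean-centered. Returns W1, W2 and the
    canonical correlations (the top-k singular values of T)."""
    n = X1.shape[1]
    S1 = X1 @ X1.T / (n - 1) + r * np.eye(X1.shape[0])
    S2 = X2 @ X2.T / (n - 1) + r * np.eye(X2.shape[0])
    S12 = X1 @ X2.T / (n - 1)

    def inv_sqrt(S):
        evals, evecs = np.linalg.eigh(S)  # symmetric positive definite
        return evecs @ np.diag(evals ** -0.5) @ evecs.T

    S1_isqrt, S2_isqrt = inv_sqrt(S1), inv_sqrt(S2)
    T = S1_isqrt @ S12 @ S2_isqrt
    U1, sigma, U2T = np.linalg.svd(T)
    W1 = S1_isqrt @ U1[:, :k]
    W2 = S2_isqrt @ U2T.T[:, :k]
    return W1, W2, sigma[:k]

rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 500))             # shared signal
X1 = Z + 0.1 * rng.standard_normal((5, 500))  # view 1
X2 = Z + 0.1 * rng.standard_normal((5, 500))  # view 2
W1, W2, rho = linear_cca(X1 - X1.mean(1, keepdims=True),
                         X2 - X2.mean(1, keepdims=True), k=3)
print(rho)  # near 1 for strongly shared components
```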
DCCA. Deep CCA adds non-linear projections to CCA by non-linearly mapping the input via a multilayer perceptron (MLP). In particular, inputs $X_1$ and $X_2$ are mapped via non-linear functions $f_1$ and $f_2$, parameterized by $\theta_1$ and $\theta_2$, resulting in activations $A_1 = f_1(X_1; \theta_1)$ and $A_2 = f_2(X_2; \theta_2)$ (assumed to be mean centered) (Andrew et al., 2013). When implemented by a NN, $A_1$ and $A_2$ are the output activations of the final layer with $d_o$ features. Fig. 2(a) shows the network structure. DCCA optimizes the same objective as CCA (equation 1) but using the activations $A_1$ and $A_2$. Regularized covariance matrices are computed accordingly, and the solution for $W_1$ and $W_2$ can be computed using SVD just as with linear CCA. When $k = d_o$ (i.e., the number of CCA components is equal to the number of features in $A_1$ and $A_2$), optimizing the sum correlation in the projection space (equation 2) is equivalent to optimizing the following matrix trace norm objective (TNO):

$$L_{TNO}(A_1, A_2) = \|T\|_{tr} = \mathrm{tr}\left( (T^\top T)^{1/2} \right),$$

where $T = \Sigma_1^{-1/2} \Sigma_{12} \Sigma_2^{-1/2}$ as in CCA (Andrew et al., 2013). DCCA optimizes this objective directly, without a need to compute the CCA projection within the network. The TNO is optimized first, followed by a linear CCA operation before downstream tasks like classification are performed. This formulation does not allow for combining directly with a supervised term.
SoftCCA. While DCCA enforces orthogonality constraints on the projections $W_1^\top A_1$ and $W_2^\top A_2$, SoftCCA relaxes them using regularization (Chang et al., 2018). The final projection matrices $W_1$ and $W_2$ are integrated into $f_1$ and $f_2$ as the top network layer. The trace objective for DCCA in equation 1 can be rewritten as minimizing the $\ell_2$ distance between the projections when each feature in $A_1$ and $A_2$ is normalized to unit variance (Li et al., 2003), leading to [1]

$$L_{\ell_2\,\mathrm{dist}}(A_1, A_2) = \|A_1 - A_2\|_F^2.$$

Regularization in SoftCCA penalizes the off-diagonal elements of the covariance matrix $\Sigma$, using a running average computed over batches as $\hat{\Sigma}$ and a loss of

$$L_{\mathrm{Decorr}}(A) = \sum_{i \neq j}^{d_o} |\hat{\Sigma}_{i,j}|.$$

Overall, the SoftCCA loss takes the form

$$L_{\ell_2\,\mathrm{dist}}(A_1, A_2) + L_{\mathrm{Decorr}}(A_1) + L_{\mathrm{Decorr}}(A_2).$$

[1] We use this $\ell_2$ distance objective in our formulation.

[Figure 2: Deep CCA architectures: (a) DCCA maximizes the sum correlation in projection space by optimizing an equivalent loss, the trace norm objective (TNO) (Andrew et al., 2013); (b) SoftCCA relaxes the orthogonality constraints by regularizing with soft decorrelation (Decorr) and optimizes the $\ell_2$ distance in the projection space (equivalent to sum correlation with activations normalized to unit variance) (Chang et al., 2018). Our TOCCA methods add a task loss and apply CCA orthogonality constraints by regularizing in two ways: (c) TOCCA-W uses whitening and (d) TOCCA-SD uses Decorr. The third method that we propose, TOCCA-ND, simply removes the Decorr components of TOCCA-SD.]
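A sketch of the two SoftCCA loss terms in PyTorch; for simplicity we compute the covariance per batch rather than with the running average the paper describes:

```python
import torch

def l2_dist_loss(A1, A2):
    # Squared Frobenius distance between projections (features assumed
    # normalized to unit variance, e.g., via batch norm without affine).
    return ((A1 - A2) ** 2).sum()

def decorr_loss(A):
    # Penalize off-diagonal entries of the covariance of A: (n, d_o).
    A = A - A.mean(dim=0, keepdim=True)
    cov = A.T @ A / (A.shape[0] - 1)
    return cov.abs().sum() - cov.diag().abs().sum()

A1 = torch.randn(32, 10, requires_grad=True)
A2 = torch.randn(32, 10)
loss = l2_dist_loss(A1, A2) + decorr_loss(A1) + decorr_loss(A2)
loss.backward()
```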
Supervised CCA methods. CCA, DCCA, and SoftCCA are all unsupervised methods to learn a projection to a shared space in which the data is maximally correlated. Although these methods have shown utility for discriminative tasks, a CCA decomposition may not be optimal for classification because features that are correlated may not be discriminative. Our experiments will show that pushing the correlation objective too far can degrade performance on discriminative tasks.

CCA has previously been extended to supervised settings in three ways: 1) with methods that are linear only (Singanamalli et al., 2014; Lee et al., 2015; Kan et al., 2015; Duan et al., 2016), 2) by maximizing the total correlation between each view and the training labels in addition to each pair of views (Lee et al., 2015; Singanamalli et al., 2014), and 3) with Linear Discriminant Analysis (LDA)-style approaches that encourage class separation (Kan et al., 2015; Dorfer et al., 2016b; Elmadany et al., 2016).² LDA approaches to supervision are generative rather than discriminative. Importantly, we will show in §5.3 that encouraging class separation with an LDA-style objective performs significantly worse than a softmax. Further, Dorfer et al. (2016b) did not apply LDA to the shared space itself but to the NN layer below it, and Elmadany et al. (2016) did not validate the shared space created, only its use in multi-view classification using both views for training and test.

²Gatto & Dos Santos (2017) use a similar technique with LDA but apply it as a convolutional filter on a single view; it is not a multi-view method.

Dorfer et al.'s CCA Layer (CCAL) is the closest to our method. It optimizes a task loss operating on a CCA projection; however, the CCA objective itself is only optimized during pre-training, not in an end-to-end manner (Dorfer et al., 2018). Further, their goal is retrieval with a pairwise rank loss, not classification. Instead of computing the CCA projection explicitly within the network, we optimize the non-linear mapping into the shared space together with the task objective, requiring a fundamental change in formulation. We optimize for the shared space with the $\ell_2$ distance between activations (similar to SoftCCA) and propose three different ways to apply the orthogonality constraints of CCA.

4 TASK-OPTIMAL CCA (TOCCA)

To compute a shared latent space that is also discriminative, we reformulate DCCA to add a task-driven term to the optimization objective. The CCA component finds features that are correlated between views, while the task component ensures that they are also discriminative. This model can be used for representation learning on multiple views before joining representations for prediction (Sargin et al., 2007; Dorfer et al., 2016b) and for classification when two views are available for training but only one at test time (Arora & Livescu, 2012). In §5, we demonstrate both use cases on real data. Our methods and related NN models from the literature are summarized in Tab. A2; Fig. 2 shows schematic diagrams.

Challenges and solutions. While DCCA optimizes the sum correlation with an equivalent loss function (TNO), the CCA projection itself is computed only after optimization. Hence, the projections cannot be used to optimize another task simultaneously. The main challenge in developing a task-optimal form of deep CCA that discriminates based on the CCA projection is in computing this projection within the network – a necessary step to enable simultaneous training of both objectives. We tackle this by focusing on the two components of DCCA: maximizing the sum correlation between activations $A_1$ and $A_2$, and enforcing orthonormality constraints within $A_1$ and $A_2$.
We achieve both by transforming the CCA objective and present three methods that progressively relax the orthogonality constraints.

We further improve upon DCCA by enabling mini-batch computations for improved flexibility and test performance. DCCA was developed for large batches because correlation is not separable across batches. While large-batch implementations of stochastic gradient optimization can increase computational efficiency via parallelism, small-batch training provides more up-to-date gradient calculations, allowing a wider range of learning rates and improving test accuracy (Masters & Luschi, 2018). We reformulate the correlation objective as the $\ell_2$ distance (following SoftCCA), enabling separability across batches. We ensure a normalization to unit variance via batch normalization without the scale and shift parameters (Ioffe & Szegedy, 2015). Wang et al. (2016) also developed a stochastic mini-batch solution to DCCA but handled the orthonormality constraints in a different way (discussed below).

Task-driven objective. First, we apply non-linear functions $f_1$ and $f_2$ with parameters $\theta$ (via MLPs) to each view $X_1$ and $X_2$, i.e., $A_1 = f_1(X_1; \theta_1)$ and $A_2 = f_2(X_2; \theta_2)$. Second, a task-specific function $f_{\mathrm{task}}(A; \theta_{\mathrm{task}})$ operates on the outputs $A_1$ and $A_2$. In particular, $f_1$ and $f_2$ are optimized so that the $\ell_2$ distance between $A_1$ and $A_2$ is minimized; therefore, $f_{\mathrm{task}}$ can be trained to operate on both inputs $A_1$ and $A_2$. We combine the CCA and task-driven objectives as a weighted sum with a hyperparameter for tuning. This model is flexible, in that the task-driven goal can be used for classification (Krizhevsky et al., 2012; Dorfer et al., 2016a), regression (Katzman et al., 2016), clustering (Caron et al., 2018), or any other task. Other prior attempts to integrate a classifier into deep CCA only used LDA (Kan et al., 2015; Dorfer et al., 2016b; Elmadany et al., 2016). See Tab. A2 for an overview.

Orthogonality constraints. The remaining complications for mini-batch optimization are the orthogonality constraints, for which we propose three solutions, each handling the orthogonality constraints of CCA in a different way: whitening, soft decorrelation, and no decorrelation.

1) Whitening (TOCCA-W). CCA applies orthogonality constraints to $A_1$ and $A_2$. We accomplish this with a linear whitening transformation that transforms the activations such that their covariance becomes the identity matrix, i.e., features are uncorrelated. Decorrelated Batch Normalization (DBN) has previously been used to regularize deep models by decorrelating features (Huang et al., 2018) and inspired our solution. In particular, we apply a transformation $B = UA$ to make $B$ orthonormal, i.e., $BB^\top = I$.

We use a Zero-phase Component Analysis (ZCA) whitening transform composed of three steps: rotate the data to decorrelate it, rescale each axis, and rotate back to the original space. Each transformation is learned from the data. Any matrix $U \in \mathbb{R}^{d_o \times d_o}$ satisfying $U^\top U = \Sigma^{-1}$ whitens the data, where $\Sigma$ denotes the covariance matrix of $A$. As $U$ is only defined up to a rotation, it is not unique. PCA whitening follows the first two steps and uses the eigendecomposition of $\Sigma$: $U_{\mathrm{PCA}} = \Lambda^{-1/2} V^\top$ for $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_{d_o})$ and $V = [v_1, \ldots, v_{d_o}]$, where $(\lambda_i, v_i)$ are the eigenvalue-eigenvector pairs of $\Sigma$. As PCA whitening suffers from stochastic axis swapping, neurons are not stable between batches (Huang et al., 2018). ZCA whitening uses the transformation $U_{\mathrm{ZCA}} = V \Lambda^{-1/2} V^\top$, in which PCA whitening is first applied, followed by a rotation back to the original space. Adding the rotation $V$ brings the whitened data $B$ as close as possible to the original data $A$ (Kessy et al., 2015).
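A minimal sketch of the rotate-rescale-rotate-back structure just described, assuming mean-centered activations; the small `eps` for numerical stability is our own addition:

```python
import numpy as np

def zca_whiten(A, eps=1e-5):
    """Sketch of ZCA whitening for activations A (d_o x m, mean-centered)."""
    m = A.shape[1]
    cov = A @ A.T / (m - 1)                     # covariance of the activations
    lam, V = np.linalg.eigh(cov)                # cov = V diag(lam) V^T
    U_pca = np.diag((lam + eps) ** -0.5) @ V.T  # rotate + rescale (PCA whitening)
    U_zca = V @ U_pca                           # rotate back: V diag(lam)^(-1/2) V^T
    B = U_zca @ A                               # whitened: B @ B.T / (m-1) ~= I
    return B, U_zca
```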
Computation of $U_{\mathrm{ZCA}}$ clearly depends on $\Sigma$. While Huang et al. (2018) used a running average of $U_{\mathrm{ZCA}}$ over batches, we apply this stochastic approximation to $\Sigma$ for each view using the update $\Sigma^{(k)} = \alpha\,\Sigma^{(k-1)} + (1 - \alpha)\,\Sigma_b$ for batch $k$, where $\Sigma_b$ is the covariance matrix of the current batch and $\alpha \in (0, 1)$ is the momentum. We then compute the ZCA transformation from $\Sigma^{(k)}$ to do whitening as $B = f_{\mathrm{ZCA}}(A) = U^{(k)}_{\mathrm{ZCA}} A$. At test time, $U^{(k)}$ from the last training batch is used. Algorithm A1 describes ZCA whitening in greater detail. In summary, TOCCA-W integrates both the correlation and task-driven objectives, with decorrelation performed by whitening, into
$$L_{\mathrm{task}}(f_{\mathrm{task}}(B_1), Y) + L_{\mathrm{task}}(f_{\mathrm{task}}(B_2), Y) + \lambda_1 L_{\ell_2\,\mathrm{dist}}(B_1, B_2),$$
where $B_1$ and $B_2$ are the whitened outputs of $A_1$ and $A_2$, respectively, and $Y$ is the class labels. This is a novel approach to integrating the orthogonality constraints of CCA into a NN, as it is the first to use ZCA whitening in this manner. Wang et al. (2016)'s stochastic mini-batch solution to DCCA used nonlinear orthogonal iterations and does not state what type of whitening operation was used.

2) Soft decorrelation (TOCCA-SD). While fully independent components may be beneficial in regularizing NNs on some data sets, a softer decorrelation may be more suitable on others. In this second formulation we relax the orthogonality constraints using regularization, following the Decorr loss of SoftCCA (Chang et al., 2018). The loss function for this formulation is
$$L_{\mathrm{task}}(f_{\mathrm{task}}(A_1), Y) + L_{\mathrm{task}}(f_{\mathrm{task}}(A_2), Y) + \lambda_1 L_{\ell_2\,\mathrm{dist}}(A_1, A_2) + \lambda_2 \big(L_{\mathrm{Decorr}}(A_1) + L_{\mathrm{Decorr}}(A_2)\big).$$
While this solution is based on SoftCCA, our experiments (§5) will demonstrate that the task component is essential when using the model for classification.

3) No decorrelation (TOCCA-ND). When CCA is used in an unsupervised manner, some form of orthogonality constraint or decorrelation is necessary to ensure that $f_1$ and $f_2$ do not simply produce multiple copies of the same feature. While that outcome could maximize the sum correlation, it is not helpful in capturing useful projections. In the task-driven setting, the discriminative term ensures that the features in $f_1$ and $f_2$ are not replicates of the same information. TOCCA-ND therefore removes the decorrelation term entirely, forming the simpler objective
$$L_{\mathrm{task}}(f_{\mathrm{task}}(A_1), Y) + L_{\mathrm{task}}(f_{\mathrm{task}}(A_2), Y) + \lambda_1 L_{\ell_2\,\mathrm{dist}}(A_1, A_2).$$
These three models allow testing whether whitening or decorrelation benefits a task-driven model.

Computational complexity. Due to the eigendecomposition, TOCCA-W has a complexity of $O(d_o^3)$, compared to $O(d_o^2)$ for TOCCA-SD, with respect to the output dimension $d_o$. However, $d_o$ is typically small ($\le 100$) and this extra computation is only performed once per batch. The difference in runtime is less than 6.5% for a batch size of 100 and 9.4% for a batch size of 30 (Tab. A4).

Summary. All three variants are motivated by adding a task-driven component to deep CCA. TOCCA-ND is the most relaxed and directly attempts to obtain identical latent representations. Experiments will show that whitening (TOCCA-W) and soft decorrelation (TOCCA-SD) provide a beneficial regularization. Further, since the $\ell_2$ distance that we optimize was shown to be equivalent to the sum correlation (cf. the SoftCCA paragraph in §3), all three TOCCA models maintain the goals of CCA, just with different relaxations of the orthogonality constraints. Our method is the first to simultaneously optimize for CCA and a discriminative task with end-to-end training. See Tab. A2 for an overview.
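To summarize how the three objectives differ, a schematic sketch; the callables and the weights `lam1`, `lam2` are stand-ins for the components defined above, not the paper's code:

```python
import numpy as np

def l2_dist(A1, A2):
    # L_l2dist: squared Frobenius distance between the two views' activations
    return np.sum((A1 - A2) ** 2)

def tocca_loss(variant, A1, A2, y, task_loss, decorr, whiten, lam1=1.0, lam2=0.1):
    """Sketch of the three TOCCA objectives. task_loss(A, y), decorr(A), and
    whiten(A) are callables standing in for f_task + L_task, L_Decorr, f_ZCA."""
    if variant == "W":    # TOCCA-W: whiten the activations, then task + l2 terms
        B1, B2 = whiten(A1), whiten(A2)
        return task_loss(B1, y) + task_loss(B2, y) + lam1 * l2_dist(B1, B2)
    if variant == "SD":   # TOCCA-SD: soft decorrelation penalty instead of whitening
        return (task_loss(A1, y) + task_loss(A2, y) + lam1 * l2_dist(A1, A2)
                + lam2 * (decorr(A1) + decorr(A2)))
    if variant == "ND":   # TOCCA-ND: task terms + l2 distance only
        return task_loss(A1, y) + task_loss(A2, y) + lam1 * l2_dist(A1, A2)
    raise ValueError(f"unknown variant: {variant}")
```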
5 EXPERIMENTS

We validated our methods on three different data sets: MNIST handwritten digits, the Carolina Breast Cancer Study (CBCS) using imaging and genomic features, and speech data from the Wisconsin X-ray Microbeam Database (XRMB). Our experiments show the utility of our methods for 1) cross-view classification, 2) regularization with a second view during training when only one view is available at test time, and 3) representation learning on multiple views that are joined for prediction.

Implementation.³ Each layer of our network consists of a fully connected layer, followed by a ReLU activation and batch normalization (Ioffe & Szegedy, 2015). Our implementations of DCCA, SoftCCA, and Joint DCCA/DeepLDA (Dorfer et al., 2016b) also use ReLU activation and batch normalization. We modified CCAL-L_rank (Dorfer et al., 2018) to use a softmax function and cross-entropy loss for classification, instead of a pairwise ranking loss for retrieval, referring to this modification as CCAL-L_ce. We used the Nadam optimizer and tuned hyperparameters on a validation set via random search; settings and ranges are specified in Tab. A3. The same hyperparameter tuning procedure was used for our methods and those we compare with. We used Keras with the Theano backend and an Nvidia GeForce GTX 1080 Ti.

³Code is submitted with this paper and will also be available publicly on GitHub after the review period.

The following experiments compare our methods with two linear methods (CCA and RCCA), two unsupervised deep methods (DCCA and SoftCCA), and two supervised deep methods (Joint DCCA/DeepLDA and CCAL-L_ce). Many other variants exist (§3), but the ones we selected are the current state-of-the-art in each of these classes. We did not run a direct comparison with Wang et al. (2015a) as Chang et al. (2018) already showed that SoftCCA is superior. We chose Joint DCCA/DeepLDA to represent supervised LDA-style CCA methods rather than comparing with all methods in this group (Kan et al., 2015; Elmadany et al., 2016).⁴

⁴While Elmadany et al. (2016) ran experiments on MNIST, they used the embeddings from both views for training and test; hence, their results are not directly comparable to our cross-view classification results. When we did test multi-view classification on MNIST, we achieved 98.5% vs. their reported 97.2%.

5.1 CROSS-VIEW CLASSIFICATION ON MNIST DIGITS

We formed a multi-view data set from the MNIST handwritten digit data set (LeCun, 1998). Following Andrew et al. (2013), we split each 28×28 image in half horizontally, creating left and right views that are each 14×28 pixels. All images were flattened into a vector with 392 features. The full data set consists of 60k training images and 10k test images. We used a random set of up to 50k for training and the remaining training images for validation. We used the full 10k-image test set.

In order to validate both the discriminativeness of the embedding and the success in finding a shared space, we studied performance on cross-view classification. We evaluated cross-view classification accuracy by first computing the projection for each view; we then trained a linear SVM on one view's projection and used the other view's projection at test time. While the task-driven methods presented in this work learn a classifier within the model, this test setup enables a fair comparison with the unsupervised CCA variants and validates the discriminativity of the features learned. It is also the standard method in the literature for testing CCA methods on classification. Notably, using the built-in softmax classifier (not shown) performed similarly to the SVM, as much of the power of our methods comes from the representation learning part.
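A sketch of this protocol (half-split view construction and the train-on-one-view, test-on-the-other SVM step); scikit-learn's LinearSVC is our choice here, as the paper does not specify its SVM implementation:

```python
import numpy as np
from sklearn.svm import LinearSVC

def make_views(images):
    """Split 28x28 MNIST digits into left/right 14x28 halves, flattened to 392-D.
    images: (n, 28, 28) array; returns two (n, 392) views."""
    left = images[:, :, :14].reshape(len(images), -1)
    right = images[:, :, 14:].reshape(len(images), -1)
    return left, right

def cross_view_accuracy(proj1_train, y_train, proj2_test, y_test):
    """Train a linear SVM on view 1's projection; test on view 2's projection.
    proj*: (n_samples, k) CCA projections."""
    clf = LinearSVC().fit(proj1_train, y_train)
    return clf.score(proj2_test, y_test)
```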
We do not compare with a simple supervised NN because that setup does not learn the shared space necessary for cross-view classification. We report results averaged over five randomly selected training/validation sets; the test set always remained the same.

Correlation vs. classification accuracy. We first demonstrate the importance of adding a task-driven component to DCCA by showing that maximizing the sum correlation between views is not sufficient. Fig. 3 (left) shows the sum correlation vs. cross-view classification accuracy across many different hyperparameter settings for DCCA (Andrew et al., 2013), SoftCCA (Chang et al., 2018), and TOCCA. We used 50 components for each; thus, the maximum sum correlation was 50. The sum correlation was measured after applying linear CCA to ensure that components were independent. With DCCA, a larger correlation tended to produce a larger classification accuracy, but there was still a large variance in classification accuracy amongst hyperparameter settings that produced a similar sum correlation. For example, the two farthest-right points in the plot (colored red) differ in classification accuracy by 10%, and they are not even the points with the best classification accuracy (colored purple). The pattern is different for SoftCCA. There was an increase in classification accuracy as sum correlation increased, but only up to a point. For higher sum correlations, the classification accuracy varied even more, from 20% to 80%. Further experiments (not shown) have indicated that when the sole objective is correlation, some of the projection directions are simply not discriminative, particularly when there is a large number of classes. Hence, optimizing for sum correlation alone does not guarantee a discriminative model. TOCCA-W and TOCCA-SD show a much greater classification accuracy across a wide range of correlations and, overall, the best accuracy when correlation is greatest.

Effect of batch size. Fig. 3 (right) plots batch size vs. classification accuracy for a training set size of 10,000. We tested batch sizes from 10 to 10,000; a batch size of 10 or 30 was best for all three variations of TOCCA. This is in line with previous work that found the best performance with a batch size between 2 and 32 (Masters & Luschi, 2018). We used a batch size of 32 in the remaining experiments on MNIST.

Figure 3: Left: Sum correlation vs. cross-view classification accuracy (on MNIST) across different hyperparameter settings with a training set size of 10,000 for DCCA (Andrew et al., 2013), SoftCCA (Chang et al., 2018), TOCCA-W, and TOCCA-SD. For unsupervised methods (DCCA and SoftCCA), large correlations do not necessarily imply good accuracy. Right: The effect of batch size on classification accuracy for each TOCCA method on MNIST (training set size of 10,000), and the effect of training set size on classification accuracy for each method. Our TOCCA variants out-performed all others across all training set sizes.
Effect of training set size. We manipulated the training set size in order to study the robustness of our methods. In particular, Fig. 3 (right) shows the cross-view classification accuracy for training set sizes from n = 300 to 50,000. While we expected that performance would decrease for smaller training set sizes, some methods were more susceptible to this degradation than others. The classification accuracy with CCA dropped significantly for n = 300 and 1,000, due to overfitting and instability issues related to the covariance and cross-covariance matrices. SoftCCA shows similar behavior (prior work (Chang et al., 2018) on this method did not test such small training set sizes). Across all training set sizes, our TOCCA variations consistently exhibited good performance, e.g., increasing classification accuracy from 78.3% to 86.7% for n = 1,000 and from 86.1% to 94.6% for n = 50,000 with TOCCA-SD. Increases in accuracy over TOCCA-ND were small, indicating that the different decorrelation schemes have only a small effect on this data set; the task-driven component is the main reason for the success of our method. In particular, the classification accuracy with n = 1,000 was better than the unsupervised DCCA method achieved with n = 10,000. Further, TOCCA with n = 300 did better than the linear methods with n = 50,000, clearly showing the benefits of the proposed formulation. We also examined the CCA projections qualitatively via a 2D t-SNE embedding (Van Der Maaten & Hinton, 2008). Fig. 4 shows the CCA projection of the left view for each method. As expected, the task-driven variant produced more clearly separated classes.

Figure 4: t-SNE plots for CCA methods on our variation of MNIST (one panel each for CCA, RCCA, DCCA, SoftCCA, CCAL-L_ce, and TOCCA-W). Each method was used to compute projections for the two views (left and right sides of the images) using 10,000 training examples. The plots show a visualization of the projection for the left view with each digit colored differently. TOCCA-SD and TOCCA-ND (not shown) produced similar results to TOCCA-W.

Table 1: Classification accuracy for different methods of predicting Basal genomic subtype from images or grade from gene expression. Linear SVM and NN were trained on a single view, while all other methods were trained with both views. By regularizing with the second view during training, all TOCCA variants improved classification accuracy.
The standard error is in parentheses.

Method | Training data | Test data | Task | Accuracy
Linear SVM | Image only | Image | Basal | 0.777 (0.003)
NN | Image only | Image | Basal | 0.808 (0.006)
CCAL-L_ce | Image+GE | Image | Basal | 0.807 (0.008)
TOCCA-W | Image+GE | Image | Basal | 0.830 (0.006)
TOCCA-SD | Image+GE | Image | Basal | 0.818 (0.006)
TOCCA-ND | Image+GE | Image | Basal | 0.816 (0.004)

Method | Training data | Test data | Task | Accuracy
Linear SVM | GE only | GE | Grade | 0.832 (0.012)
NN | GE only | GE | Grade | 0.830 (0.012)
CCAL-L_ce | GE+image | GE | Grade | 0.804 (0.022)
TOCCA-W | GE+image | GE | Grade | 0.862 (0.013)
TOCCA-SD | GE+image | GE | Grade | 0.856 (0.011)
TOCCA-ND | GE+image | GE | Grade | 0.856 (0.011)

5.2 REGULARIZATION FOR CANCER CLASSIFICATION

In this experiment, we address the following question: given two views available for training but only one at test time, does the additional view help to regularize the model?

We study this question using 1,003 patient samples with image and genomic data from CBCS⁵ (Troester et al., 2018). Images consisted of four cores per patient from a tissue microarray that was stained with hematoxylin and eosin. Image features were extracted using a VGG16 backbone (Simonyan & Zisserman, 2015), pre-trained on ImageNet, by taking the mean of the 512-D output of the fourth set of convolutional layers across the tissue region and further averaging across all core images for the same patient. For gene expression (GE), we used the set of 50 genes in the PAM50 array (Parker et al., 2009). The data set was randomly split into half for training and one quarter each for validation and testing; we report the mean over eight cross-validation runs. Classification tasks included 1) predicting Basal vs. non-Basal genomic subtype from images, which is typically done from GE, and 2) predicting grade 1 vs. 3 from GE, which is typically done from images. This is not a multi-task classification setup; it is a means for one view to stabilize the representation of the other. The first task is also a valuable clinical use case. Genomic analysis is expensive and not routinely performed, while histologic imaging is standard practice by pathologists for detecting cancer and assessing its aggressiveness. In working with our clinical collaborators, our goal has been to predict tumor subtypes from images – something that is too complex for pathologists. We hope that this will one day make tumor subtypes accessible to more patients and improve treatment decisions. This experiment demonstrates that the second view of data can help regularize during training even if it is not available for test patients.

⁵http://cbcs.web.unc.edu/for-researchers/

We tested different classifier training methods when only one view was available at test time: a) a linear SVM trained on one view, b) a deep NN trained on one view using the same architecture as the lower layers of TOCCA, c) CCAL-L_ce trained on both views, and d) TOCCA trained on both views. Tab. 1 lists the classification accuracy for each method and task. When predicting the Basal genomic subtype from images, all our methods showed an improvement in classification accuracy; the best result was with TOCCA-W, which produced a 2.2% improvement. For predicting grade from GE, all our methods again improved the accuracy – by up to 3.2% with TOCCA-W. These results show that having additional information during training can boost performance at test time. Notably, this experiment used a static set of pre-trained VGG16 image features in order to assess the utility of the method. The network itself could be fine-tuned end-to-end with our TOCCA model, providing an easy opportunity for data augmentation and likely further improvements in classification accuracy.
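As an illustration of this feature-extraction pipeline, a hedged Keras sketch follows; we read "the fourth set of conv. layers" as VGG16's block4_pool output, and the preprocessing and tissue-masking details are simplified assumptions (the paper used Keras with Theano; tensorflow.keras is our substitution):

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model

base = VGG16(weights="imagenet", include_top=False)
# 512-channel activations from the fourth convolutional block
feat_model = Model(base.input, base.get_layer("block4_pool").output)

def core_features(core_image):
    """core_image: (H, W, 3) RGB tissue core; returns a 512-D descriptor by
    averaging the block-4 activations spatially (a stand-in for the tissue region)."""
    x = preprocess_input(core_image[np.newaxis].astype("float32"))
    fmap = feat_model.predict(x)[0]   # (h, w, 512) feature map
    return fmap.mean(axis=(0, 1))     # spatial mean -> 512-D

def patient_features(core_images):
    """Average the per-core descriptors across all cores for one patient."""
    return np.mean([core_features(c) for c in core_images], axis=0)
```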
5.3 SEMI-SUPERVISED LEARNING FOR SPEECH RECOGNITION

Our final experiments use speech data from XRMB, consisting of simultaneously recorded acoustic and articulatory measurements. Prior work has shown that CCA-based algorithms can improve phonetic recognition (Wang et al., 2015b;a; 2016; Dorfer et al., 2016b). The 45 speakers were split into 35 for training, 2 for validation, and 8 for testing – a total of 1,429,236 samples for training, 85,297 for validation, and 111,314 for testing.⁶ The acoustic features are 112-D and the articulatory ones are 273-D. We removed the per-speaker mean and variance for both views. Samples are annotated with one of 38 phonetic labels.

⁶http://ttic.uchicago.edu/~klivescu/XRMB_data/full/README

Our task on this data set was representation learning for multi-view prediction – that is, using both views of data to learn a shared discriminative representation. We trained each model using both views and their labels. To test each CCA model, we followed prior work and concatenated the original input features from both views with the projections from both views. Due to the large training set size, we used a Linear Discriminant Analysis (LDA) classifier for efficiency. The same construction was used at test time. This setup was used to assess whether a task-optimal DCCA model can improve discriminative power. We tested TOCCA with a task-driven loss of LDA (Dorfer et al., 2016a) or softmax to demonstrate the flexibility of our model.

We compared the discriminability of a variety of methods for learning a shared latent representation. Tab. 5 lists the classification results, with a baseline that used only the original input features for LDA. Although the deep methods, i.e., DCCA and SoftCCA, improved upon the linear methods, all TOCCA variations significantly outperformed previous state-of-the-art techniques. Using softmax consistently beat LDA by a large margin. TOCCA-SD and TOCCA-ND produced equivalent results, as a weight of 0 on the decorrelation term performed best. However, TOCCA-W showed the best result, with an improvement of 15% over the best alternative method.

Table 5: XRMB classification results.
Method | Task | Accuracy
Baseline | - | 0.591
CCA | - | 0.589
RCCA | - | 0.588
DCCA | - | 0.620
SoftCCA | - | 0.635
Joint DCCA/DeepLDA | LDA | 0.633
CCAL-L_ce | Softmax | 0.642
TOCCA-W | LDA | 0.710
TOCCA-SD | LDA | 0.677
TOCCA-ND | LDA | 0.677
TOCCA-W | Softmax | 0.795
TOCCA-SD | Softmax | 0.785
TOCCA-ND | Softmax | 0.785

TOCCA can also be used in a semi-supervised manner when labels are available for only some samples. Tab. 6 lists the results for TOCCA-W in this setting. With 0% labeled data, the result would be similar to DCCA. Notably, a large improvement over the unsupervised results in Tab. 5 is seen even with labels for only 10% of the training samples.

Table 6: Semi-supervised classification results on XRMB using TOCCA-W.
Labeled data | Accuracy
100% | 0.795
30% | 0.762
10% | 0.745
3% | 0.684
1% | 0.637
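One simple way to realize the semi-supervised variant described above is to mask the task loss for unlabeled samples while keeping the CCA term for all samples; the sketch below is our own construction, not the paper's code (unlabeled entries of `y` can hold any placeholder value):

```python
import numpy as np

def semi_supervised_loss(B1, B2, logits1, logits2, y, labeled, lam=1.0):
    """B1, B2: (m, d) projected activations; logits*: (m, C) classifier outputs;
    y: (m,) integer labels; labeled: (m,) boolean mask of labeled samples."""
    l2 = np.sum((B1 - B2) ** 2)  # CCA (l2) term uses every sample

    def xent(logits):
        z = logits - logits.max(axis=1, keepdims=True)  # stable log-softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y]              # per-sample cross-entropy

    per_sample = xent(logits1) + xent(logits2)
    n_lab = max(int(labeled.sum()), 1)
    task = (per_sample * labeled).sum() / n_lab         # mean over labeled samples only
    return task + lam * l2
```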
6 DISCUSSION

We proposed a method to find a shared latent space that is also discriminative by adding a task-driven component to deep CCA while enabling end-to-end training. This required a fundamental change in formulation: because Deep CCA optimizes an equivalent objective and does not compute the embeddings directly, we could not simply add an additional term. Instead, we found an alternative formulation by replacing the CCA projection with $\ell_2$ distance minimization and orthogonality constraints on the activations, and we implemented this in three different ways. TOCCA-W or TOCCA-SD performed the best, depending on the data set – both include some means of decorrelation that provides a regularizing effect to the model, thereby outperforming TOCCA-ND. TOCCA showed large improvements over state-of-the-art in cross-view classification accuracy on MNIST and significantly increased robustness to a small training set size. On CBCS, TOCCA provided a regularizing effect when both views were available for training but only one at test time. TOCCA also produced a large increase over state-of-the-art for multi-view representation learning on a much larger data set, XRMB. On this data set we also demonstrated a semi-supervised approach that yields a large increase in classification accuracy with only a small proportion of the labels. Using a similar technique, our method could be applied when some samples are missing a second view.

Classification tasks using a softmax operation or LDA were explored in this work; however, the formulation presented can also be used with other tasks such as regression or clustering. Another possible avenue for future work entails extracting components shared by both views as well as individual components. This approach has been developed for dictionary learning (Lock et al., 2013; Ray et al., 2014; Feng et al., 2018) but could be extended to deep CCA-based methods. Finally, we have yet to apply data augmentation to the proposed framework; this could provide a significant benefit for small training sets.
SJl61Mt2FS
Official Blind Review #2
3: Weak Reject
Originality: CCA is a generative model that learns a shared subspace based on two (or more) views of the data. Being generative, it might not have strong discriminative power for some downstream classification tasks. Previous approaches to infuse discriminative power into the shared subspace estimated by CCA are linear. So, this paper proposes to learn 1) non-linear and 2) discriminative subspaces for CCA. The paper accomplishes this by simply adding a task-specific term to the optimization objective of DeepCCA (Andrew et al., 2013), which involves just adding a task-specific MLP on top and minimizing the associated loss function.

1) The novelty of the proposed approach is limited. It just adds an extra term (extra neural network layer) with a corresponding weighting hyperparameter to the objective function of a previous method (DeepCCA) without much motivation.

2) The experimental setup and results are sound, but some of the tasks seem contrived to show the improved performance of TOCCA methods. For instance, in the cross-view MNIST classification the authors use only the projection from one view at training time and use the other view at test time. What's the motivation for this setup? Why not split the data into train and test sets by splitting observations, then train on both views at train time and test on the held-out observations at test time? I hope I am not missing something.

3) Similarly, for the "Regularization for Cancer Classification" task, it's assumed that only one view is available at test time. Why is that? What are the real-world examples of such setups?

Quality: The paper is technically sound, though it is a trivial extension of a previous method. The experimental setup is somewhat contrived to show the superiority of the proposed method.

Clarity: The paper is well organized and is well written in general. The supplementary material contains more results, and code will be available after the review period.

Significance: The paper solves an important problem by infusing discriminative power into the generative subspaces learned by CCA, but the results are not that important in my eyes. Since the empirical setup is a little contrived, it is hard to know whether a simple two-step approach that first estimates a CCA subspace and then uses those projections in an SVM or MLP would perform comparably or better if given a fair chance to compete.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Deep Multi-View Learning via Task-Optimal CCA ### Paper Abstract Canonical Correlation Analysis (CCA) is widely used for multimodal data analysis and, more recently, for discriminative tasks such as multi-view learning; however, it makes no use of class labels. Recent CCA methods have started to address this weakness but are limited in that they do not simultaneously optimize the CCA projection for discrimination and the CCA projection itself, or they are linear only. We address these deficiencies by simultaneously optimizing a CCA-based and a task objective in an end-to-end manner. Together, these two objectives learn a non-linear CCA projection to a shared latent space that is highly correlated and discriminative. Our method shows a significant improvement over previous state-of-the-art (including deep supervised approaches) for cross-view classification (8.5% increase), regularization with a second view during training when only one view is available at test time (2.2-3.2%), and semi-supervised learning (15%) on real data. ### Paper Keywords ["multi-view", "components analysis", "CCA", "representation learning", "deep learning"] ### Paper Content ABSTRACTMulti-view learning seeks to form better models by making use of multiple featuresets representing the same samples. Exploiting feature correlations during train-ing can enable better models. The traditional method for computing correlationsbetween feature sets is Canonical Correlation Analysis (CCA), which finds linearprojections that maximize correlation between feature vectors, essentially com-puting a shared embedding for two views of data. More recently, CCA has beenused for multi-view discriminative tasks; however, CCA makes no use of classlabels. Recent CCA methods have started to address this weakness but are limitedin that they do not simultaneously optimize the CCA projection for discriminationand the CCA projection itself, or they are linear only. We address these defi-ciencies by simultaneously optimizing a CCA-based and a task objective in anend-to-end manner. Together, these two objectives learn a non-linear CCA pro-jection to a shared latent space that is highly correlated and discriminative. Ourmethod shows a significant improvement over previous state-of-the-art (includingdeep supervised approaches) for cross-view classification (8.5% increase), regu-larization with a second view during training when only one view is available attest time (2.2-3.2%), and semi-supervised learning (15%) on real data.1 I NTRODUCTIONParallel modalities of data are increasingly common in a variety of applications, including imagesand text, audio and video, parallel texts of different languages, and a variety of medical imagingand omics modalities for each patient. Each view provides essential information for classificationand, when used together, can form a more accurate model. This is especially important for difficultdiscriminative tasks such as those with a small training set size. 
Canonical Correlation Analysis(CCA) is the most common method for computing a shared representation from two views of databy computing a space in which they are maximally correlated (Hotelling, 1936; Bie et al., 2005).In this paper we will demonstrate that, through optimizing for both discriminative features andcorrelation between views, we can improve classification accuracy for three real world scenarios.CCA is an unsupervised method but has been applied to many discriminative tasks (Kan et al., 2015;Sargin et al., 2007; Arora & Livescu, 2012). While some of the correlated CCA features are usefulfor discriminative tasks, many represent properties that are of no use for classification and obscurecorrelated information that is beneficial. This problem is magnified with recent non-linear extensionsof CCA that use deep learning to make significant strides in improving correlation (Andrew et al.,2013; Wang et al., 2015a; 2016; Chang et al., 2018) but often at the expense of discriminativecapability (cf.x5.1). Therefore, we present Task-Optimal CCA (TOCCA), a new deep learningtechnique to project the data from two views to a shared space that is also discriminative (Fig. 1).Implementing a task-optimal variant of CCA required a fundamental change in formulation. Weshow that the CCA objective can equivalently be expressed as an `2distance minimization in theshared space plus an orthogonality constraint. Orthogonality constraints help regularize neuralnetworks (NNs) (Huang et al., 2018); we present three techniques to accomplish this. While ourmethod is derived from CCA, by manipulating the orthogonality constraints, we obtain deep CCAapproaches that compute a shared latent space that is also discriminative.Our family of solutions for supervised CCA required a crucial and non-trivial change in formulation.We demonstrate the effectiveness and versatility of our model for three different tasks: 1) cross-view classification on a variation of MNIST (LeCun, 1998), 2) regularization when two views are1Under review as a conference paper at ICLR 2020X1 → A1X2 → A2Class 1Class 2Class 3Train:y ← f(A1)Test:y ← f(A2)Goal: shared embedding that is also discriminativeX2X1NN NNA1 A2Solution: multi-task NN Applications :Cross-view classificationTrain:y ← f(A1,A2)Test:y ← f(A1)Multi-view regularization during trainingTrain:y ← f(A1,A2)Test:y ← f(A1,A2)Multi-view predictionDiscriminativeCorrelatedFigure 1: Our goal with Task-Optimal CCA is to compute a shared space that is also discriminative. We dothis by using NNs to compute an embedding for each view while simultaneously optimizing for correlationin the embedded space and a task-optimal objective. This setup is beneficial for three scenarios: 1) training aclassifier with the embedding from one view and testing with the embedding of the other view ( x5.1), 2) whentwo views are available for training but only one at test time ( x5.2), and 3) when both views are used for bothtraining and testing ( x5.3). The embeddings for views X1andX2are represented by A1andA2, respectively.A classifierf(A)then predicts the class for each sample. In order to compare with unsupervised variants ofCCA, the classifier fmay be computed subsequent to the CCA embedding.available for training but only one at test time on a cancer imaging and genomic data set withonly 1,000 samples, and 3) semi-supervised representation learning to improve speech recognition.All experiments showed a significant improvement in accuracy over previous state-of-the-art. 
Inaddition, our approach is more robust in the small sample size regime than alternative methods.Overall, our experiments on real data show the effectiveness of our method in learning a sharedspace that is more discriminative than previous methods for a variety of practical problems.2 R ELATED WORKCCA was initially used for unsupervised data analysis to gain insights into components shared bytwo sources (Andrew et al., 2013; Wang et al., 2015a; 2016). CCA has also been used to computea shared latent space for cross-view classification (Kan et al., 2015; Wang et al., 2015a; Chandaret al., 2016; Chang et al., 2018), for representation learning on multiple views that are then joined forprediction (Sargin et al., 2007; Dorfer et al., 2016b), and for classification from a single view whena second view is available during training (Arora & Livescu, 2012). Recent non-linear extensionsof CCA implemented via NNs make significant improvements in correlation (Andrew et al., 2013;Wang et al., 2015a; 2016; Chang et al., 2018) but with little focus on discriminative capability.Most prior work that boosts the discriminative capability of CCA is linear only (Lee et al., 2015;Singanamalli et al., 2014; Duan et al., 2016). More recent work using NNs still remains limitedin that it optimizes discriminative capability for an intermediate representation rather than the finalCCA projection (Dorfer et al., 2016b), or optimizes the CCA objective only during pre-training, notwhile training the task objective (Dorfer et al., 2018). We advocate to jointly optimize CCA and adiscriminative objective by computing the CCA projection within a network layer while applying atask-driven operation such as classification. Experimental results show that our method significantlyimproves upon previous work (Dorfer et al., 2016b; 2018) due to its focus on both the shared latentspace and a task-driven objective. The latter is particularly important on small training set sizes.While alternative approaches to multi-view learning via CCA exist, they typically focus on a recon-struction objective. That is, they transform the input into a shared space such that the input could bereconstructed – either individually or reconstructing one view from the other. Variations of coupleddictionary learning (Shekhar et al., 2014; Xu et al., 2015; Cha et al., 2015; Bahrampour et al., 2015)and autoencoders (Wang et al., 2015a; Bhatt et al., 2017) have been used in this context. CCA-basedobjectives, such as the model used in this work, instead learn a transformation to a shared spacewithout the need for reconstructing the input. This task may be easier and sufficient in producing arepresentation for multi-view classification (Wang et al., 2015a).2Under review as a conference paper at ICLR 20203 B ACKGROUNDWe first introduce CCA and present our task-driven approach in x4. Linear and non-linear CCAare unsupervised and find the shared signal between a pair of data sources, by maximizing thesum correlation between corresponding projections. Let X12Rd1nandX22Rd2nbe mean-centered input data from two different views with nsamples and d1,d2features, respectively.CCA. The objective is to maximize the correlation between a1=w>1X1anda2=w>2X2, wherew1andw2are projection vectors (Hotelling, 1936). The first canonical directions are found viaarg maxw1;w2corrw>1X1;w>2X2and subsequent projections are found by maximizing the same correlation but in orthogonal di-rections. 
Combining the projection vectors into matrices W1= [w(1)1;:::;w(k)1]andW2=[w(1)2;:::;w(k)2](kmin(d1;d2)), CCA can be reformulated as a trace maximization under or-thonormality constraints on the projections, i.e.,arg maxW1;W2tr(W>112W2)s.t.W>11W1=W>22W2=I (1)for covariance matrices 1=X1XT1,2=X2XT2, and cross-covariance matrix 12=X1XT2.LetT=1=21121=22 and its singular value decomposition (SVD) be T=U1diag()U>2with singular values = [1;:::; min(d1;d2)]in descending order. W1andW2are computed fromthe topksingular vectors of TasW1=1=21U(1:k)1 andW2=1=22U(1:k)2 where U(1:k)denotes thekfirst columns of matrix U. The sum correlation in the projection space is equivalent tokXi=1corrw(i)1>X1;w(i)2)>X2=kXi=12i; (2)i.e., the sum of the top ksingular values. A regularized variation of CCA (RCCA) ensuresthat the covariance matrices are positive definite by computing the covariance matrices as ^1=1n1X1X>1+rIand^2=1n1X2X>2+rI, for regularization parameter r>0and identity matrixI(Bilenko & Gallant, 2016).DCCA . Deep CCA adds non-linear projections to CCA by non-linearly mapping the input via a mul-tilayer perceptron (MLP). In particular, inputs X1andX2are mapped via non-linear functions f1andf2, parameterized by 1and2, resulting in activations A1=f1(X1;1)andA2=f2(X2;2)(assumed to be mean centered) (Andrew et al., 2013). When implemented by a NN, A1andA2are the output activations of the final layer with dofeatures. Fig. 2(a) shows the network structure.DCCA optimizes the same objective as CCA (equation 1) but using activations A1andA2. Reg-ularized covariance matrices are computed accordingly and the solution for W1andW2can becomputed using SVD just as with linear CCA. When k=do(i.e., the number of CCA componentsis equal to the number of features in A1andA2), optimizing the sum correlation in the projectionspace (equation 2) is equivalent to optimizing the following matrix trace norm objective (TNO)LTNO(A1;A2) =kTktr=trT>T1=2;where T=1=21121=22 as in CCA (Andrew et al., 2013). DCCA optimizes this objectivedirectly, without a need to compute the CCA projection within the network. The TNO is optimizedfirst, followed by a linear CCA operation before downstream tasks like classification are performed.This formulation does not allow for combining directly with a supervised term.SoftCCA. While DCCA enforces orthogonality constraints on projections W>1A1andW>2A2,SoftCCA relaxes them using regularization (Chang et al., 2018). Final projection matrices W1andW2are integrated into f1andf2as the top network layer. The trace objective for DCCA inequation 1 can be rewritten as minimizing the `2distance between the projections when each featureinA1andA2is normalized to a unit variance (Li et al., 2003), leading to1L`2dist(A1;A2) =kA1A2k2F:Regularization in SoftCCA penalizes the off-diagonal elements of the covariance matrix1We use this`2distance objective in our formulation.3Under review as a conference paper at ICLR 2020MLPMLPTNOX1X2f2f1A2A1MLPMLPX1X2f2f1A2A1Decorr.Decorr.MLPMLPX1X2f2f1A2A1WhiteningWhitening`2LossMLPTask lossB2B1YMLPMLPX1X2f2f1MLPTask lossYDecorr.Decorr.ftaskftask`2LossA1A212Improvediscriminability1Max. sum correlation (equiv. 
to l2loss)2Such that projections are orthogonal(a) DCCA(c)TOCCA-W (Ours )(b) SoftCCA(d)TOCCA-SD (Ours )`2LossFigure 2: Deep CCA architectures: (a) DCCA maximizes the sum correlation in projection space by optimizingan equivalent loss, the trace norm objective (TNO) (Andrew et al., 2013); (b) SoftCCA relaxes the orthogonalityconstraints by regularizing with soft decorrelation (Decorr) and optimizes the `2distance in the projectionspace (equivalent to sum correlation with activations normalized to unit variance) (Chang et al., 2018). OurTOCCA methods add a task loss and apply CCA orthogonality constraints by regularizing in two ways: (c)TOCCA-W uses whitening and (d) TOCCA-SD uses Decorr. The third method that we propose, TOCCA-ND ,simply removes the Decorr components of TOCCA-SD ., using a running average computed over batches as ^and a loss ofLDecorr(A) =Pdoi6=ij^i;jj.Overall, the SoftCCA loss takes the formL`2dist(A1;A2) +LDecorr(A1) +LDecorr(A2):Supervised CCA methods. CCA, DCCA, and SoftCCA are all unsupervised methods to learn aprojection to a shared space in which the data is maximally correlated. Although these methods haveshown utility for discriminative tasks, a CCA decomposition may not be optimal for classificationbecause features that are correlated may not be discriminative. Our experiments will show thatmaximizing the correlation objective too much can degrade performance on discriminative tasks.CCA has previously been extended to supervised settings in three ways: 1) with methods that arelinear only (Singanamalli et al., 2014; Lee et al., 2015; Kan et al., 2015; Duan et al., 2016), 2) bymaximizing the total correlation between each view and the training labels in addition to each pairof views (Lee et al., 2015; Singanamalli et al., 2014), and 3) with Linear Discriminant Analysis(LDA)-style approaches to encourage class separation (Kan et al., 2015; Dorfer et al., 2016b; El-madany et al., 2016).2LDA approaches to supervision are generative rather than discriminative.Importantly, we will show in x5.3 that encouraging class separation with an LDA-style objectiveperforms significantly inferior to a softmax. Further, Dorfer et al. (2016b) did not apply LDA tothe shared space itself but to the NN layer below it, and Elmadany et al. (2016) did not validate theshared space created, only its use in multi-view classification using both views for training and test.Dorfer et. al’s CCA Layer (CCAL) is the closest to our method. It optimizes a task loss operatingon a CCA projection; however, the CCA objective itself is only optimized during pre-training, notin an end-to-end manner (Dorfer et al., 2018). Further, their goal is retrieval with a pairwise rankloss, not classification. Instead of computing the CCA projection explicitly within the network, weoptimize the non-linear mapping into the shared space together with the task objective, requiringa fundamental change in formulation. We optimize for the shared space with the `2distance be-tween activations (similar to SoftCCA) and propose three different ways to apply the orthogonalityconstraints of CCA.4 T ASK-OPTIMAL CCA (TOCCA)To compute a shared latent space that is also discriminative, we reformulate DCCA to add a task-driven term to the optimization objective. 
The CCA component finds features that are correlated2Gatto & Dos Santos (2017) use a similar technique with LDA but apply it as a convolutional filter on asingle view; it is not a multi-view method.4Under review as a conference paper at ICLR 2020between views, while the task component ensures that they are also discriminative. This model canbe used for representation learning on multiple views before joining representations for prediction(Sargin et al., 2007; Dorfer et al., 2016b) and for classification when two views are available fortraining but only one at test time (Arora & Livescu, 2012). In x5, we demonstrate both use cases onreal data. Our methods and related NN models from the literature are summarized in Tab. A2; Fig. 2shows schematic diagrams.Challenges and solutions. While DCCA optimizes the sum correlation with an equivalent lossfunction (TNO), the CCA projection itself is computed only after optimization. Hence, the projec-tions cannot be used to optimize another task simultaneously. The main challenge in developinga task-optimal form of deep CCA that discriminates based on the CCA projection is in computingthis projection within the network – a necessary step to enable simultaneous training of both objec-tives. We tackle this by focusing on the two components of DCCA: maximizing the sum correlationbetween activations A1andA2and enforcing orthonormality constraints within A1andA2. Weachieve both by transforming the CCA objective and present three methods that progressively relaxthe orthogonality constraints.We further improve upon DCCA by enabling mini-batch computations for improved flexibility andtest performance. DCCA was developed for large batches because correlation is not separable acrossbatches. While large batch implementations of stochastic gradient optimization can increase com-putational efficiency via parallelism, small batch training provides more up-to-date gradient calcu-lations, allowing a wider range of learning rates and improving test accuracy (Masters & Luschi,2018). We reformulate the correlation objective as the `2distance (following SoftCCA), enablingseparability across batches. We ensure a normalization to one via batch normalization without thescale and shift parameters (Ioffe & Szegedy, 2015). Wang et al. (2016) also developed a stochasticmini-batch solution to DCCA but handled the orthonormality constraints in a different way (dis-cussed below).Task-driven objective. First, we apply non-linear functions f1andf2with parameters (via MLPs)to each view X1andX2, i.e., A1=f1(X1;1)andA2=f2(X2;2). Second, a task-specificfunctionftask(A;task)operates on the outputs A1andA2. In particular, f1andf2are optimizedso that the`2distance between A1andA2is minimized; therefore, ftaskcan be trained to operateon both inputs A1andA2. We combine CCA and task-driven objectives as a weighted sum witha hyperparameter for tuning. This model is flexible, in that the task-driven goal can be used forclassification (Krizhevsky et al., 2012; Dorfer et al., 2016a), regression (Katzman et al., 2016),clustering (Caron et al., 2018), or any other task. Other prior attempts to integrate a classifier intodeep CCA only used LDA (Kan et al., 2015; Dorfer et al., 2016b; Elmadany et al., 2016). SeeTab. A2 for an overview.Orthogonality constraints. 
The remaining complications for mini-batch optimization are the or-thogonality constraints, for which we propose three solutions, each handling the orthogonality con-straints of CCA in a different way: whitening, soft decorrelation, and no decorrelation.1) Whitening ( TOCCA-W ).CCA applies orthogonality constraints to A1andA2. We accomplishthis with a linear whitening transformation that transforms the activations such that their covari-ance becomes the identity matrix, i.e., features are uncorrelated. Decorrelated Batch Normalization(DBN) has previously been used to regularize deep models by decorrelating features (Huang et al.,2018) and inspired our solution. In particular, we apply a transformation B=UA to make Borthonormal, i.e., BB>=I.We use a Zero-phase Component Analysis (ZCA) whitening transform composed of three steps:rotate the data to decorrelate it, rescale each axis, and rotate back to the original space. Each trans-formation is learned from the data. Any matrix URdodosatisfying U>U=1whitens thedata, where denotes the covariance matrix of A. AsUis only defined up to a rotation, it isnot unique. PCA whitening follows the first two steps and uses the eigendecomposition of :UPCA =1=2V>for=diag(1;:::; do)andV= [v1;:::;vdo], where (i;vi)are theeigenvalue, eigenvector pairs of . As PCA whitening suffers from stochastic axis swapping, neu-rons are not stable between batches (Huang et al., 2018). ZCA whitening uses the transformationUZCA=V1=2VTin which PCA whitening is first applied, followed by a rotation back to theoriginal space. Adding the rotation Vbrings the whitened data Bas close as possible to the originaldataA(Kessy et al., 2015).5Under review as a conference paper at ICLR 2020Computation of UZCAis clearly depend on . While Huang et al. (2018) used a running averageofUZCAover batches, we apply this stochastic approximation to for each view using the update(k)=(k1)+(1)bfor batchkwhere bis the covariance matrix for the current batch and2(0;1)is the momentum. We then compute the ZCA transformation from (k)to do whiteningasB=fZCA(A) =U(k)ZCAA. At test time, U(k)from the last training batch is used. Algorithm A1describes ZCA whitening in greater detail. In summary, TOCCA-W integrates both the correlationand task-driven objectives, with decorrelation performed by whitening, intoLtask(ftask(B1);Y) +Ltask(ftask(B2);Y) +L`2dist(B1;B2);where B1andB2are whitened outputs of A1andA2, respectively, and Yis the class labels. Thisis a novel approach to integrating the orthogonality constraints of CCA into a NN as it is the firstto use ZCA whitening in this manner. Wang et al. (2016)’s stochastic mini-batch solution to DCCAused nonlinear orthogonal iterations and does not state what type of whitening operation was used.2) Soft decorrelation ( TOCCA-SD ).While fully independent components may be beneficial inregularizing NNs on some data sets, a softer decorrelation may be more suitable on others. In thissecond formulation we relax the orthogonality constraints using regularization, following the Decorrloss of SoftCCA (Chang et al., 2018). 
The loss function for this formulation isLtask(ftask(A1);Y)+Ltask(ftask(A2);Y)+1L`2dist(A1;A2)+2LDecorr(A1)+LDecorr(A2):While this solution is based on SoftCCA, our experiments ( x5) will demonstrate that the task com-ponent is essential when using the model for classification.3) No decorrelation ( TOCCA-ND ).When CCA is used in an unsupervised manner, some form oforthogonality constraint or decorrelation is necessary to ensure that f1andf2do not simply producemultiple copies of the same feature. While this result could maximize the sum correlation, it is nothelpful in capturing useful projections. In the task-driven setting, the discriminative term ensuresthat the features in f1andf2are not replicates of the same information. TOCCA-ND thereforeremoves the decorrelation term entirely, forming the simpler objectiveLtask(ftask(A1);Y) +Ltask(ftask(A2);Y) +L`2dist(A1;A2):These three models allow testing whether whitening or decorrelation benefit a task-driven model.Computational complexity. Due to the eigendecomposition, TOCCA-W has a complexity of O(d3o)compared to O(d2o)forTOCCA-SD , with respect to output dimension do. However,dois typicallysmall (100) and this extra computation is only performed once per batch. The difference inruntime is less than 6.5% for a batch size of 100 or 9.4% for a batch size of 30 (Tab. A4).Summary. All three variants are motivated by adding a task-driven component to deep CCA.TOCCA-ND is the most relaxed and directly attempts to obtain identical latent representations. Ex-periments will show that whitening ( TOCCA-W ) and soft decorrelation ( TOCCA-SD ) provide a ben-eficial regularization. Further, since the `2distance that we optimize was shown to be equivalentto the sum correlation (cf. x3 SoftCCA paragraph), all three TOCCA models maintain the goals ofCCA, just with different relaxations of the orthogonality constraints. Our method is the first to si-multaneously optimize for CCA and a discriminative task with end-to-end training. See Tab. A2 foran overview.5 E XPERIMENTSWe validated our methods on three different data sets: MNIST handwritten digits, the CarolinaBreast Cancer Study (CBCS) using imaging and genomic features, and speech data from the Wis-consin X-ray Microbeam Database (XRMB). Our experiments show the utility of our methods for 1)cross-view classification, 2) regularization with a second view during training when only one view isavailable at test time, and 3) representation learning on multiple views that are joined for prediction.Implementation.3Each layer of our network consists of a fully connected layer, followed by aReLU activation and batch normalization (Ioffe & Szegedy, 2015). Our implementations of DCCA,SoftCCA, and Joint DCCA/DeepLDA (Dorfer et al., 2016b) also use ReLU activation and batch3Code is submitted with this paper and will also be available publicly on GitHub after the review period.6Under review as a conference paper at ICLR 2020normalization. We modified CCAL- Lrank(Dorfer et al., 2018) to use a softmax function and cross-entropy loss for classification, instead of a pairwise ranking loss for retrieval, referring to this modi-fication as CCAL-Lce. We used the Nadam optimizer and tuned hyperparameters on a validation setvia random search; settings and ranges are specified in Tab. A3. The same hyperparameter tuningprocedure was used for our methods and those we compare with. 
5 EXPERIMENTS

We validated our methods on three different data sets: MNIST handwritten digits, the Carolina Breast Cancer Study (CBCS) using imaging and genomic features, and speech data from the Wisconsin X-ray Microbeam Database (XRMB). Our experiments show the utility of our methods for 1) cross-view classification, 2) regularization with a second view during training when only one view is available at test time, and 3) representation learning on multiple views that are joined for prediction.

Implementation.[3] Each layer of our network consists of a fully connected layer, followed by a ReLU activation and batch normalization (Ioffe & Szegedy, 2015). Our implementations of DCCA, SoftCCA, and Joint DCCA/DeepLDA (Dorfer et al., 2016b) also use ReLU activation and batch normalization. We modified CCAL-$\mathcal{L}_{\mathrm{rank}}$ (Dorfer et al., 2018) to use a softmax function and cross-entropy loss for classification, instead of a pairwise ranking loss for retrieval, referring to this modification as CCAL-$\mathcal{L}_{\mathrm{ce}}$. We used the Nadam optimizer and tuned hyperparameters on a validation set via random search; settings and ranges are specified in Tab. A3. The same hyperparameter tuning procedure was used for our methods and those we compare with. We used Keras with the Theano backend and an Nvidia GeForce GTX 1080 Ti.

[3] Code is submitted with this paper and will also be available publicly on GitHub after the review period.

The following experiments compare our methods with two linear methods (CCA and RCCA), two unsupervised deep methods (DCCA and SoftCCA), and two supervised deep methods (Joint DCCA/DeepLDA and CCAL-$\mathcal{L}_{\mathrm{ce}}$). Many other variants exist (§3), but the ones we selected are the current state-of-the-art in each of these classes. We did not run a direct comparison with Wang et al. (2015a), as Chang et al. (2018) already showed that SoftCCA is superior. We chose Joint DCCA/DeepLDA to represent supervised LDA-style CCA methods rather than comparing with all methods in this group (Kan et al., 2015; Elmadany et al., 2016).[4]

[4] While Elmadany et al. (2016) ran experiments on MNIST, they used the embeddings from both views for training and test; hence, their results are not directly comparable to our cross-view classification results. When we did test multi-view classification on MNIST, we achieved 98.5% vs. their reported 97.2%.

5.1 CROSS-VIEW CLASSIFICATION ON MNIST DIGITS

We formed a multi-view data set from the MNIST handwritten digit data set (LeCun, 1998). Following Andrew et al. (2013), we split each 28×28 image in half horizontally, creating left and right views that are each 14×28 pixels. All images were flattened into a vector with 392 features. The full data set consists of 60k training images and 10k test images. We used a random set of up to 50k for training and the remaining training images for validation. We used the full 10k image test set.

In order to validate both the discriminativeness of the embedding and the success in finding a shared space, we studied performance on cross-view classification. We evaluated cross-view classification accuracy by first computing the projection for each view; then we trained a linear SVM on one view's projection, and finally we used the other view's projection at test time. While the task-driven methods presented in this work learn a classifier within the model, this test setup enables a fair comparison with the unsupervised CCA variants and validates the discriminativity of the features learned. It is also the standard method in the literature to test CCA methods for classification. Notably, using the built-in softmax classifier (not shown) performed similarly to the SVM, as much of the power of our methods comes from the representation learning part. We do not compare with a simple supervised NN because this setup does not learn the shared space necessary for cross-view classification. We report results averaged over five randomly selected training/validation sets; the test set always remained the same.
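For reference, the cross-view evaluation protocol just described can be sketched as follows; `project_view1` and `project_view2` are assumed wrappers around the learned projection networks $f_1$ and $f_2$ (our naming, not the paper's).

```python
from sklearn.svm import LinearSVC

def cross_view_accuracy(model, X1_train, y_train, X2_test, y_test):
    """Train a linear SVM on one view's projection; test on the other view's."""
    Z1 = model.project_view1(X1_train)  # embed training data with the view-1 network
    Z2 = model.project_view2(X2_test)   # embed test data with the view-2 network
    clf = LinearSVC().fit(Z1, y_train)
    return clf.score(Z2, y_test)
```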
Correlation vs. classification accuracy. We first demonstrate the importance of adding a task-driven component to DCCA by showing that maximizing the sum correlation between views is not sufficient. Fig. 3 (left) shows the sum correlation vs. cross-view classification accuracy across many different hyperparameter settings for DCCA (Andrew et al., 2013), SoftCCA (Chang et al., 2018), and TOCCA. We used 50 components for each; thus, the maximum sum correlation was 50. The sum correlation was measured after applying linear CCA to ensure that components were independent. With DCCA, a larger correlation tended to produce a larger classification accuracy, but there was still a large variance in classification accuracy amongst hyperparameter settings that produced a similar sum correlation. For example, with the two farthest right points in the plot (colored red), their classification accuracy differs by 10%, and they are not even the points with the best classification accuracy (colored purple). The pattern is different for SoftCCA. There was an increase in classification accuracy as sum correlation increased, but only up to a point. For higher sum correlations, the classification accuracy varied even more, from 20% to 80%. Further experiments (not shown) have indicated that when the sole objective is correlation, some of the projection directions are simply not discriminative, particularly when there are a large number of classes. Hence, optimizing for sum correlation alone does not guarantee a discriminative model. TOCCA-W and TOCCA-SD show a much greater classification accuracy across a wide range of correlations and, overall, the best accuracy when correlation is greatest.

Effect of batch size. Fig. 3 (right) plots the batch size vs. classification accuracy for a training set size of 10,000. We tested batch sizes from 10 to 10,000; a batch size of 10 or 30 was best for all three variations of TOCCA. This is in line with previous work that found the best performance with a batch size between 2 and 32 (Masters & Luschi, 2018). We used a batch size of 32 in the remaining experiments on MNIST.

Figure 3: Left: Sum correlation vs. cross-view classification accuracy (on MNIST) across different hyperparameter settings on a training set size of 10,000 for DCCA (Andrew et al., 2013), SoftCCA (Chang et al., 2018), TOCCA-W, and TOCCA-SD. For unsupervised methods (DCCA and SoftCCA), large correlations do not necessarily imply good accuracy. Right: The effect of batch size on classification accuracy for each TOCCA method on MNIST (training set size of 10,000), and the effect of training set size on classification accuracy for each method. Our TOCCA variants outperformed all others across all training set sizes.

Figure 4: t-SNE plots for CCA methods on our variation of MNIST. Each method was used to compute projections for the two views (left and right sides of the images) using 10,000 training examples. The plots show a visualization of the projection for the left view with each digit colored differently. TOCCA-SD and TOCCA-ND (not shown) produced similar results to TOCCA-W.

Effect of training set size. We manipulated the training set size in order to study the robustness of our methods. In particular, Fig. 3 (right) shows the cross-view classification accuracy for training set sizes from n = 300 to 50,000. While we expected that performance would decrease for smaller training set sizes, some methods were more susceptible to this degradation than others. The classification accuracy with CCA dropped significantly for n = 300 and 1,000, due to overfitting and instability issues related to the covariance and cross-covariance matrices.
SoftCCA shows similar behavior (prior work (Chang et al., 2018) on this method did not test such small training set sizes). Across all training set sizes, our TOCCA variations consistently exhibited good performance, e.g., increasing classification accuracy from 78.3% to 86.7% for n = 1,000 and from 86.1% to 94.6% for n = 50,000 with TOCCA-SD. Increases in accuracy over TOCCA-ND were small, indicating that the different decorrelation schemes have only a small effect on this data set; the task-driven component is the main reason for the success of our method. In particular, the classification accuracy with n = 1,000 did better than the unsupervised DCCA method on n = 10,000. Further, TOCCA with n = 300 did better than linear methods on n = 50,000, clearly showing the benefits of the proposed formulation. We also examined the CCA projections qualitatively via a 2D t-SNE embedding (Van Der Maaten & Hinton, 2008). Fig. 4 shows the CCA projection of the left view for each method. As expected, the task-driven variant produced more clearly separated classes.

Table 1: Classification accuracy for different methods of predicting Basal genomic subtype from images or grade from gene expression. Linear SVM and DNN were trained on a single view, while all other methods were trained with both views. By regularizing with the second view during training, all TOCCA variants improved classification accuracy. The standard error is in parentheses.

Method | Training data | Test data | Task | Accuracy
Linear SVM | Image only | Image | Basal | 0.777 (0.003)
NN | Image only | Image | Basal | 0.808 (0.006)
CCAL-Lce | Image+GE | Image | Basal | 0.807 (0.008)
TOCCA-W | Image+GE | Image | Basal | 0.830 (0.006)
TOCCA-SD | Image+GE | Image | Basal | 0.818 (0.006)
TOCCA-ND | Image+GE | Image | Basal | 0.816 (0.004)

Method | Training data | Test data | Task | Accuracy
Linear SVM | GE only | GE | Grade | 0.832 (0.012)
NN | GE only | GE | Grade | 0.830 (0.012)
CCAL-Lce | GE+image | GE | Grade | 0.804 (0.022)
TOCCA-W | GE+image | GE | Grade | 0.862 (0.013)
TOCCA-SD | GE+image | GE | Grade | 0.856 (0.011)
TOCCA-ND | GE+image | GE | Grade | 0.856 (0.011)

5.2 REGULARIZATION FOR CANCER CLASSIFICATION

In this experiment, we address the following question: given two views available for training but only one at test time, does the additional view help to regularize the model? We study this question using 1,003 patient samples with image and genomic data from CBCS[5] (Troester et al., 2018). Images consisted of four cores per patient from a tissue microarray that was stained with hematoxylin and eosin. Image features were extracted using a VGG16 backbone (Simonyan & Zisserman, 2015), pre-trained on ImageNet, by taking the mean of the 512-D output of the fourth set of conv. layers across the tissue region and further averaging across all core images for the same patient. For gene expression (GE), we used the set of 50 genes in the PAM50 array (Parker et al., 2009). The data set was randomly split into half for training and one quarter for validation/testing; we report the mean over eight cross-validation runs. Classification tasks included 1) predicting Basal vs. non-Basal genomic subtype using images, which is typically done from GE, and 2) predicting grade 1 vs. 3 from GE, typically done from images. This is not a multi-task classification setup; it is a means for one view to stabilize the representation of the other. The first task is also a valuable clinical use case. Genomic analysis is expensive and not routinely performed, while histologic imaging is standard practice by pathologists for detecting cancer and assessing its aggressiveness.

[5] http://cbcs.web.unc.edu/for-researchers/
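A sketch of the image-feature extraction described above follows; the exact torchvision layer cut used for "the fourth set of conv. layers" is our assumption, since the paper does not give layer indices.

```python
import torch
import torchvision.models as models

# Truncate VGG16 after conv block 4 (indices 0-22 in torchvision's layout,
# 512 output channels); this cut point is our guess at the paper's choice.
vgg = models.vgg16(weights="IMAGENET1K_V1").features[:23].eval()

@torch.no_grad()
def patient_features(core_images):
    """core_images: (n_cores, 3, H, W) tiles of one patient's tissue cores."""
    fmap = vgg(core_images)              # (n_cores, 512, h, w)
    per_core = fmap.mean(dim=(2, 3))     # spatial mean over the tissue region
    return per_core.mean(dim=0)          # average across cores -> (512,)
```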
In working with our clinical collaborators, our goal has been to predict tumor subtypes from images – something that is too complex for pathologists. We hope that this will one day make tumor subtypes accessible to more patients and improve treatment decisions. This experiment demonstrates that the second view of data can help regularize during training even if it is not available for test patients.

We tested different classifier training methods when only one view was available at test time: a) a linear SVM trained on one view, b) a deep NN trained on one view using the same architecture as the lower layers of TOCCA, c) CCAL-Lce trained on both views, d) TOCCA trained on both views. Tab. 1 lists the classification accuracy for each method and task. When predicting genomic subtype Basal from images, all our methods showed an improvement in classification accuracy; the best result was with TOCCA-W, which produced a 2.2% improvement. For predicting grade from GE, all our methods again improved the accuracy – by up to 3.2% with TOCCA-W. These results show that having additional information during training can boost performance at test time. Notably, this experiment used a static set of pre-trained VGG16 image features in order to assess the utility of the method. The network itself could be fine-tuned end-to-end with our TOCCA model, providing an easy opportunity for data augmentation and likely further improvements in classification accuracy.

5.3 SEMI-SUPERVISED LEARNING FOR SPEECH RECOGNITION

Our final experiments use speech data from XRMB, consisting of simultaneously recorded acoustic and articulatory measurements. Prior work has shown that CCA-based algorithms can improve phonetic recognition (Wang et al., 2015b;a; 2016; Dorfer et al., 2016b). The 45 speakers were split into 35 for training, 2 for validation, and 8 for testing – a total of 1,429,236 samples for training, 85,297 for validation, and 111,314 for testing.[6] The acoustic features are 112-D and the articulatory ones are 273-D. We removed the per-speaker mean & variance for both views. Samples are annotated with one of 38 phonetic labels.

[6] http://ttic.uchicago.edu/~klivescu/XRMB_data/full/README

Our task on this data set was representation learning for multi-view prediction – that is, using both views of data to learn a shared discriminative representation. We trained each model using both views and their labels. To test each CCA model, we followed prior work and concatenated the original input features from both views with the projections from both views. Due to the large training set size, we used a Linear Discriminant Analysis (LDA) classifier for efficiency. The same construction was used at test time. This setup was used to assess whether a task-optimal DCCA model can improve discriminative power. We tested TOCCA with a task-driven loss of LDA (Dorfer et al., 2016a) or softmax to demonstrate the flexibility of our model.
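The XRMB evaluation protocol above can be sketched as follows, with `model.project_view*` again an assumed interface for the learned projections.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def multiview_lda_accuracy(model, X1, X2, y, X1_test, X2_test, y_test):
    """Concatenate both views' original features with both views' projections,
    then classify with LDA; the same construction is used at train and test."""
    def features(a, b):
        return np.hstack([a, b, model.project_view1(a), model.project_view2(b)])

    clf = LinearDiscriminantAnalysis().fit(features(X1, X2), y)
    return clf.score(features(X1_test, X2_test), y_test)
```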
Table 5: XRMB classification results.
Method | Task | Accuracy
Baseline | - | 0.591
CCA | - | 0.589
RCCA | - | 0.588
DCCA | - | 0.620
SoftCCA | - | 0.635
Joint DCCA/DeepLDA | LDA | 0.633
CCAL-Lce | Softmax | 0.642
TOCCA-W | LDA | 0.710
TOCCA-SD | LDA | 0.677
TOCCA-ND | LDA | 0.677
TOCCA-W | Softmax | 0.795
TOCCA-SD | Softmax | 0.785
TOCCA-ND | Softmax | 0.785

We compared the discriminability of a variety of methods to learn a shared latent representation. Tab. 5 lists the classification results, with a baseline that used only the original input features for LDA. Although deep methods, i.e., DCCA and SoftCCA, improved upon the linear methods, all TOCCA variations significantly outperformed previous state-of-the-art techniques. Using softmax consistently beat LDA by a large margin. TOCCA-SD and TOCCA-ND produced equivalent results, as a weight of 0 on the decorrelation term performed best. However, TOCCA-W showed the best result, with an improvement of 15% over the best alternative method.

Table 6: Semi-supervised classification results on XRMB using TOCCA-W.
Labeled data | Accuracy
100% | 0.795
30% | 0.762
10% | 0.745
3% | 0.684
1% | 0.637

TOCCA can also be used in a semi-supervised manner when labels are available for only some samples. Tab. 6 lists the results for TOCCA-W in this setting. With 0% labeled data, the result would be similar to DCCA. Notably, a large improvement over the unsupervised results in Tab. 5 is seen even with labels for only 10% of the training samples.

6 DISCUSSION

We proposed a method to find a shared latent space that is also discriminative by adding a task-driven component to deep CCA while enabling end-to-end training. This required a fundamental change in formulation, because Deep CCA does not compute the embeddings directly as it optimizes an equivalent objective; therefore, we could not simply add an additional term. Instead, we found an alternative formulation by replacing the CCA projection with $\ell_2$ distance minimization and orthogonality constraints on the activations, and we implemented this in three different ways. TOCCA-W or TOCCA-SD performed the best, dependent on the data set – both of which include some means of decorrelation to provide a regularizing effect to the model, thereby outperforming TOCCA-ND. TOCCA showed large improvements over state-of-the-art in cross-view classification accuracy on MNIST and significantly increased robustness to a small training set size. On CBCS, TOCCA provided a regularizing effect when both views were available for training but only one at test time. TOCCA also produced a large increase over state-of-the-art for multi-view representation learning on a much larger data set, XRMB. On this data set we also demonstrated a semi-supervised approach to get a large increase in classification accuracy with only a small proportion of the labels. Using a similar technique, our method could be applied when some samples are missing a second view.

Classification tasks using a softmax operation or LDA were explored in this work; however, the formulation presented can also be used with other tasks such as regression or clustering. Another possible avenue for future work entails extracting components shared by both views as well as individual components. This approach has been developed for dictionary learning (Lock et al., 2013; Ray et al., 2014; Feng et al., 2018) but could be extended to deep CCA-based methods. Finally, we have yet to apply data augmentation to the proposed framework; this could provide a significant benefit for small training sets.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text Originality: CCA is a generative model that learns a shared subspace based on two (or multiple) views of the data. Being generative, it might not have strong discriminative power for some downstream classification tasks. Previous approaches to infuse discriminative power into the shared subspace estimated by CCA are linear. So, this paper proposes to learn 1) non-linear and 2) discriminative subspaces for CCA.
The paper accomplishes this by simply adding a task-specific term to the optimization objective of DeepCCA (Andrew et al. 2013), which involves just adding a task-specific MLP on top and minimizing the associated loss function. 1) The novelty of the proposed approach is limited. It just adds an extra term (an extra neural network layer) with a corresponding weighting hyperparameter to the objective function of a previous method (DeepCCA), without much motivation. 2) The experimental setup and results are sound, but some of the tasks seem contrived to show the improved performance of the TOCCA methods. For instance, in the cross-view MNIST classification the authors use only the projection from one view at training time and use the other view at test time. What's the motivation for this setup? Why not split the data into train and test sets by splitting observations, then train on both views at train time and test on the held-out observations at test time? I hope I am not missing something. 3) Similarly, for the "Regularization for Cancer Classification" task, it's assumed that only one view is available at test time. Why is that? What are real-world examples of such setups? Quality: The paper is technically sound, though it is a trivial extension of a previous method. The experimental setup is somewhat contrived to show the superiority of the proposed method. Clarity: The paper is well organized and is well written in general. The supplementary material contains more results, and code will be available after the review period. Significance: The paper solves an important problem by infusing discriminative power into generative subspaces learned by CCA, but the results are not that important in my eyes. Since the empirical setup is a little contrived, it is hard to even know whether a simple two-step approach that first estimates a CCA subspace and then uses those projections in an SVM or MLP would perform comparably or better if given a fair chance to compete. ### Review Rating 3: Weak Reject ### Review Confidence <|im_end|> <|im_end|>
H1kjdOYlx
ICLR.cc/2017/conference
2017
Modular Multitask Reinforcement Learning with Policy Sketches
["Jacob Andreas", "Dan Klein", "Sergey Levine"]
We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate each task with a sequence of named subtasks, providing high-level structural relationships among tasks, but not providing the detailed guidance required by previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). Our approach associates every subtask with its own modular subpolicy, and jointly optimizes over full task-specific policies by tying parameters across shared subpolicies. This optimization is accomplished via a simple decoupled actor–critic training objective that facilitates learning common behaviors from dissimilar reward functions. We evaluate the effectiveness of our approach on a maze navigation game and a 2-D Minecraft-inspired crafting game. Both games feature extremely sparse rewards that can be obtained only after completing a number of high-level subgoals (e.g. escaping from a sequence of locked rooms or collecting and combining various ingredients in the proper order). Experiments illustrate two main advantages of our approach. First, we outperform standard baselines that learn task-specific or shared monolithic policies. Second, our method naturally induces a library of primitive behaviors that can be recombined to rapidly acquire policies for new tasks.
["Reinforcement Learning", "Transfer Learning"]
ABSTRACT

We describe a framework for multitask deep reinforcement learning guided by policy sketches. Sketches annotate each task with a sequence of named subtasks, providing high-level structural relationships among tasks, but not providing the detailed guidance required by previous work on learning policy abstractions for RL (e.g. intermediate rewards, subtask completion signals, or intrinsic motivations). Our approach associates every subtask with its own modular subpolicy, and jointly optimizes over full task-specific policies by tying parameters across shared subpolicies. This optimization is accomplished via a simple decoupled actor–critic training objective that facilitates learning common behaviors from dissimilar reward functions. We evaluate the effectiveness of our approach on a maze navigation game and a 2-D Minecraft-inspired crafting game. Both games feature extremely sparse rewards that can be obtained only after completing a number of high-level subgoals (e.g. escaping from a sequence of locked rooms or collecting and combining various ingredients in the proper order). Experiments illustrate two main advantages of our approach. First, we outperform standard baselines that learn task-specific or shared monolithic policies. Second, our method naturally induces a library of primitive behaviors that can be recombined to rapidly acquire policies for new tasks.

1 INTRODUCTION

Figure 1: Composing policies from subpolicies. Here we have simplified versions of two tasks (make planks and make sticks), each associated with its own policy (Π1 and Π2 respectively). These policies share an initial high-level action b1: both require the agent to get wood before taking it to an appropriate crafting station. By enforcing that the agent initially follows the same subpolicy π1 in both tasks, we can learn a reusable representation of their shared structure.

This paper describes a framework for learning composable deep subpolicies in a multitask setting, guided only by abstract policy sketches. We are interested in problems like the ones shown in Figure 1, with collections of tasks that involve sparse rewards and long-term planning, but which share structure in the form of common subgoals or reusable high-level actions. Our work aims to develop models that can learn efficiently from these sparse rewards and rapidly adapt to new tasks, by exploiting this shared structure and translating success on one task into progress on others. Our approach ultimately induces a library of high-level actions directly from symbolic annotations like the ones marked K1 and K2 in the figure.

This approach builds on a significant body of research in reinforcement learning that focuses on hierarchical representations of behavior. In these approaches, a high-level controller learns a policy over high-level actions—known variously as options (Sutton et al., 1999), skills (Konidaris & Barto, 2007), or primitives (Hauser et al., 2008)—which are themselves implemented as policies over low-level actions in the environment. While one line of research (e.g. Daniel et al. (2012)) investigates learning hierarchical policies without any supervision, such hierarchies are empirically difficult to learn directly from unconstrained interaction (Hengst, 2002).
The bulk of existing work instead relies on additional information (in the form of intermediate rewards, subtask completion signals, or intrinsic motivations) that guides the learner toward useful high-level actions. While effective, these approaches depend on state representations simple or structured enough that suitable reward signals can be effectively engineered by hand.

Here we focus on multitask learning of hierarchical policies from a weaker form of supervision: at training time, each task (τ1 and τ2 in Figure 1) is annotated with a sketch (K1 and K2) consisting of a sequence of high-level action symbols (b1, b2 and b3)—with no information about how these actions should be implemented. Our approach associates each such high-level action with its own low-level subpolicy, and jointly optimizes over concatenated task-specific policies by tying parameters across shared subpolicies. Our thesis is that even the minimal information about high-level policy structure contained in a sketch provides enough of a learning signal to induce general, reusable subpolicies. Crucially, sketches are totally ungrounded in the representation of the world—they require no intervention in a simulator or environment model.

The present work may be viewed as an extension of recent approaches for learning compositional deep architectures from structured program descriptors (Andreas et al., 2016; Reed & de Freitas, 2015). Here we focus on learning in interactive environments with reinforcement training signals. This extension presents a variety of technical challenges. Concretely, our contributions are:

- A general paradigm for multitask, hierarchical, deep reinforcement learning guided by abstract sketches of task-specific policies.
- A concrete agent architecture for learning in this paradigm, featuring a modular model structure and a multitask actor–critic training objective.

We evaluate our approach on two families of tasks: a maze navigation game (Figure 3a), in which the agent must navigate through a sequence of locked doors to reach a target room; and a 2-D Minecraft-inspired crafting game (Figure 3b), in which the agent must acquire particular resources by finding raw ingredients, combining them together in the proper order, and in some cases building intermediate tools that enable the agent to alter the environment itself. In both games, the agent receives a reward only after the final goal is accomplished. For the most challenging tasks, involving sequences of four or five high-level actions, a task-specific agent initially following a random policy essentially never discovers the reward signal.

We evaluate a modular agent architecture trained with guidance from policy sketches under several different data conditions: (1) when learning the full collection of tasks jointly via reinforcement, (2) in a zero-shot setting where a policy sketch is available for the held-out task, and (3) in an adaptation setting, where sketches are hidden and the agent must learn a policy over high-level actions. In all cases, our approach substantially outperforms standard policy optimization baselines.

2 RELATED WORK

The agent representation we describe in this paper belongs to the broader family of hierarchical reinforcement learners described in the literature. As detailed in Section 3, our subpolicies may be viewed as a relaxation of the options framework first described by Sutton et al. (1999). A large body of work describes techniques for learning options and related abstract actions, in both single- and multitask settings.
For learning the implementation of options, most techniques rely on intermediate supervisory signals, e.g. to encourage exploration (Kearns & Singh, 2002) or completion of pre-defined subtasks (Kulkarni et al., 2016). An alternative family of approaches employs post-hoc analysis of already-learned policies to extract reusable sub-components (Stolle & Precup, 2002; Konidaris et al., 2011). Techniques for learning options with less guidance than the present work include Bacon & Precup (2015) and Vezhnevets et al. (2016), and other general hierarchical policy learners include Daniel et al. (2012), Bakker & Schmidhuber (2004) and Menache et al. (2002). Once a library of high-level actions exists, agents are faced with the problem of learning high-level (typically semi-Markov) policies that invoke appropriate high-level actions in sequence (Precup, 2000). The learning problem we describe in this paper is in some sense the direct dual to the problem of learning these high-level policies. There, the agent begins with an inventory of complex primitives and must learn to model their behavior and select among them; here we begin knowing the names of appropriate high-level actions but nothing about how they are implemented, and must infer implementations (but not, initially, high-level plans) from context. We expect that our approach could be coupled with a generic learner of options policies to provide a general mechanism for hierarchical RL; we leave this for future work.

Our approach is also inspired by a number of recent efforts toward compositional reasoning and interaction with structured deep models. Such models have been previously used for tasks involving question answering (Iyyer et al., 2014; Andreas et al., 2016) and relational reasoning (Socher et al., 2012), and more recently for multi-task, multi-robot transfer problems (Devin et al., 2016). In this work—as in existing approaches employing dynamically assembled modular networks—task-specific training signals are propagated through a collection of composed discrete structures with tied weights. Here the composed structures specify time-varying policies rather than feedforward computations, and their parameters must be learned via interaction rather than direct supervision. Another closely related family of models includes neural programmers (Neelakantan et al., 2015) and programmer–interpreters (Reed & de Freitas, 2015), which generate discrete computational structures but require supervision in the form of output actions or full execution traces.

A closely related line of work is the Hierarchical Abstract Machines (HAM) framework introduced by Parr & Russell (1998). Like our approach, HAMs begin with a representation of a high-level policy as an automaton (or a more general computer program; Andre & Russell, 2001) and use reinforcement learning to fill in low-level details. Variations on this architecture have considered a number of control constructs beyond the scope of the current paper (e.g. concurrency and recursion; Marthi et al., 2004). However, because these approaches attempt to learn a single representation of the Q function for all subtasks and contexts, they require extremely strong formal assumptions about the form of the reward function and state representation (Andre & Russell, 2002) that the present work avoids by decoupling the policy representation from the value function.

Our approach also bears some resemblance to the instruction following literature in natural language processing.
Existing work on instruction following falls into two broad categories: approaches that require highly structured (typically logical) action and world representations (Chen & Mooney, 2011; Artzi & Zettlemoyer, 2013; Andreas & Klein, 2015; Tellex et al., 2011), and approaches that require detailed supervision of action sequences or dense reward signals essentially equivalent to full action traces (Branavan et al., 2009; Vogel & Jurafsky, 2010; Mei et al., 2016). By contrast, the framework we describe here involves no formal or logical language for describing plans, and no supervised action sequences. Additionally, the modular model described in this paper naturally supports adaptation to tasks where no sketches are available, while all existing instruction following models learn a joint policy over instructions and actions, and are unable to function in the absence of instructions.

3 LEARNING MODULAR POLICIES

We consider a multitask reinforcement learning problem arising from a family of infinite-horizon discounted Markov decision processes in a shared environment. This environment is specified by a tuple $(S, A, P, \gamma)$, with $S$ a set of states, $A$ a set of low-level actions, $P : S \times A \times S \to \mathbb{R}$ a transition probability distribution, and $\gamma$ a discount factor. Each task $\tau \in T$ is then specified by a pair $(R_\tau, \rho_\tau)$, with $R_\tau : S \to \mathbb{R}$ a task-specific reward function and $\rho_\tau : S \to \mathbb{R}$ an initial distribution over states. For a fixed sequence $\{(s_i, a_i)\}$ of states and actions obtained from a rollout of a given policy, we will denote the empirical return starting in state $s_i$ as $q_i := \sum_{j=i}^{\infty} \gamma^{j-i} R_\tau(s_j)$. In addition to the components of a standard multitask RL problem, we assume that tasks are annotated with sketches $K_\tau$, each consisting of a sequence $(b_1, b_2, \ldots)$ of high-level symbolic labels drawn from a fixed vocabulary $B$. Our model associates each of these symbols with a randomly initialized modular subpolicy. By sharing each subpolicy across all tasks annotated with the corresponding symbol, our approach naturally learns the shared abstraction for the corresponding subtask, without requiring any information about the grounding of that task to be explicitly specified by annotation.
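To fix ideas, here is a minimal Python sketch of the objects this formalism defines; the field names and the finite-horizon truncation of the return are illustrative choices, not from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Task:
    """One task tau = (R_tau, rho_tau) annotated with a sketch K_tau."""
    reward: Callable[[object], float]   # R_tau : S -> R
    reset: Callable[[], object]         # samples an initial state s_0 ~ rho_tau
    sketch: Sequence[str]               # K_tau, e.g. ("get_wood", "use_workbench")

def empirical_returns(rewards, gamma=0.9):
    """q_i = sum_{j >= i} gamma^(j-i) * R(s_j), over a finite rollout."""
    q, out = 0.0, []
    for r in reversed(rewards):
        q = r + gamma * q
        out.append(q)
    return out[::-1]
```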
3.1 MODEL

We exploit the structural information provided by sketches by constructing for each symbol $b$ a corresponding subpolicy $\pi_b$. At each timestep, a subpolicy may select either a low-level action $a \in A$ or a special STOP action. We denote the augmented action space $A^+ := A \cup \{\mathrm{STOP}\}$. While this framework is agnostic to the implementation of subpolicies, we are especially interested in the case where subpolicies are specified by deep networks. As shown in Figure 2, the experiments in this paper represent each $\pi_b$ as a neural network whose input is a representation of the current state and whose output is a distribution over $A^+$. While all action spaces in our experiments are discrete, it is straightforward to instead allow this last layer to parameterize a mixed distribution over an underlying continuous action space and the STOP action. These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over final states (instead implicitly using the STOP action to terminate).

Figure 2: Model overview. Each subpolicy $\pi$ is uniquely associated with a symbol $b$ and implemented as a neural network that maps from a state $s_i$ to distributions over $A^+$, and chooses an action $a_i$ by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch.

Given a sketch, a task-specific policy $\Pi_\tau$ is formed by concatenating its associated subpolicies in sequence. In particular, the high-level policy maintains a subpolicy index $i$ (initially 0), and executes actions from $\pi_{b_i}$ until the STOP symbol is emitted, at which point control is passed to $\pi_{b_{i+1}}$. We may thus think of $\Pi_\tau$ as inducing a Markov chain over the state space $S \times B$, with transitions given by
$$(s, b_i) \to (s', b_i) \text{ with probability } \textstyle\sum_{a \in A} \pi_{b_i}(a \mid s)\, P(s' \mid s, a),$$
$$(s, b_i) \to (s, b_{i+1}) \text{ with probability } \pi_{b_i}(\mathrm{STOP} \mid s).$$
Note that $\Pi_\tau$ is semi-Markov with respect to the projection of the augmented state space $S \times B$ onto the underlying state space $S$. We denote the complete family of task-specific policies $\Pi := \bigcup_\tau \{\Pi_\tau\}$, and let each $\pi_b$ be an arbitrary function of the current environment state parameterized by some weight vector $\theta_b$. The learning problem is to optimize over all $\theta_b$ to maximize the sum of expected discounted rewards $J(\Pi) := \sum_\tau J(\Pi_\tau) := \sum_\tau \mathbb{E}_{s_i \sim \Pi_\tau}\big[\sum_i \gamma^i R_\tau(s_i)\big]$ across all tasks $\tau \in T$.
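A sketch of executing such a concatenated policy follows; the environment and subpolicy interfaces are assumed for illustration, with the last index of each distribution reserved for STOP.

```python
import torch

def rollout(task, subpolicies, env, max_steps=100):
    """Run the sketch: execute each subpolicy until it samples STOP, then
    pass control to the subpolicy of the next symbol in the sketch."""
    s, i, trace = task.reset(), 0, []
    for _ in range(max_steps):
        if i >= len(task.sketch):
            break                               # every subpolicy has stopped
        probs = subpolicies[task.sketch[i]](s)  # distribution over A+
        a = torch.multinomial(probs, 1).item()
        if a == probs.shape[-1] - 1:            # STOP sampled: advance index
            i += 1
            continue
        s_next, r = env.step(s, a)
        trace.append((s, a, task.sketch[i], r))
        s = s_next
    return trace
```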
the TD residual R(si) +V(si+1)V(si)or any other member of the GAE family) can be easily substituted by simply maintaining one suchestimator per task. Experiments (Section 4.3) show that conditioning on both the state and thetask identity results in noticeable performance improvements, suggesting that the variance reductionprovided by this objective is important for efficient joint learning of modular policies.Algorithm 1 DO-STEP(;curriculum )1:D ;2:whilejDj<D do3:curriculum () .sample taskfrom curriculum (Section 3.3)4:d=f(si;ai;bi=K;i;qi;);:::g .do rollout5:D D[d6:forb2B;2Tdo7:d=f(si;ai;b0;qi;0)2D:b0=b;0=g8:b bDPdrlogb(aijsi)qic(si).update policy9: DPdrc(si)qic(si).update criticThe complete procedure for computing a single gradient step is given in Algorithm 1. (The outertraining loop over these steps, which is driven by a curriculum learning procedure, is described inthe following section and specified in Algorithm 2.) This is an on-policy algorithm. In each step, theagent samples tasks from a task distribution provided by a curriculum (described in the followingsubsection). The current family of policies is used to perform rollouts in each sampled task,accumulating the resulting tuples of (states, low-level actions, high-level symbols, rewards, and taskidentities) into a dataset D. OnceDreaches a maximum size D, it is used to compute gradientsw.r.t. both policy and critic parameters, and the parameter vectors are updated accordingly. The stepsizesandin Algorithm 1 can be chosen adaptively using any first-order method.3.3 C URRICULUM LEARNINGFor complex tasks, like the one depicted in Figure 3b, it is difficult for the agent to discover any stateswith positive reward until many subpolicy behaviors have already been learned. It is thus a better useof the learner’s time to focus on “easy” tasks, where many rollouts will result in high reward fromwhich appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involvedhere: if the learner spends too much time on easy tasks before being made aware of the existenceof harder ones, it may overfit and learn subpolicies that no longer generalize or exhibit the desiredstructural properties.To avoid both of these problems, we use a curriculum learning scheme (Bengio et al., 2009) thatallows the model to smoothly scale up from easy tasks to more difficult ones while avoiding overfit-ting. Initially the model is presented with tasks associated with short sketches. Once average rewardon all these tasks reaches a certain threshold, the length limit is incremented. We assume that re-wards across tasks are normalized with maximum achievable reward 0< qi<1. Let ^Erdenotethe empirical estimate of the expected reward for the current policy on task t. Then at each timestep,tasks are sampled in proportion to 1^Er, which by assumption must be positive. Experimentsshow that both components of this curriculum learning scheme improve the rate at which the modelconverges to a good policy (Section 4.3).5Under review as a conference paper at ICLR 2017The complete curriculum-based training procedure is specified in Algorithm 2. Initially, the max-imum sketch length `maxis set to one, and the curriculum initialized to sample length-1 tasks uni-formly. 
(Neither of the environments we consider in this paper feature any length-1 tasks; in thiscase, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.)For each setting of `max, the algorithm uses the current collection of task policies to compute andapply the gradient step described in Algorithm 1. The rollouts obtained from the call to DO-STEPcan also be used to compute reward estimates ^Er; these estimates determine a new task distributionfor the curriculum. The inner loop is repeated until the reward threshold rminis exceeded, at whichpoint`maxis incremented and the process repeated over a (now-expanded) collection of tasks.4 E XPERIMENTSAs described in the introduction, we evaluate the performance of our approach in two environments:a maze navigation game and a crafting game. Both games involve nontrivial low-level control:agents must learn to avoid obstacles and interact with various kinds of objects. But the environmentsalso feature hierarchical structure: rewards are accessible only after the agent has completed two tofive high-level actions in the appropriate sequence.In all our experiments, we implement each subpolicy as a multilayer perceptron with ReLU nonlin-earities and a hidden layer with 128 hidden units, and each critic as a linear function of the currentstate. Each subpolicy network receives as input a set of features describing the current state of theenvironment, and outputs a distribution over actions. The agent acts at every timestep by samplingfrom this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented us-ing RMSP ROP (Tieleman, 2012) with a step size of 0.001 and gradient clipping to a unit norm. Wetake the batch size parameter Din Algorithm 1 to be 2000, and set = 0:9in both environments.For curriculum learning, the improvement threshold rgoodis set to 0.8.4.1 E NVIRONMENTSThe maze environment (Figure 3a) corresponds closely to the the “light world” described byKonidaris & Barto (2007). The agent is placed in a discrete world consisting of a series of rooms,some of which are connected by doors. Some doors require that the agent first pick up a key toopen them. For our experiments, each task corresponds to a goal room (always at the same positionrelative to the agent’s starting position) that the agent must reach by navigating through a sequenceof intermediate rooms. The agent has one sensor on each side of its body, which reports the distanceto keys, closed doors, and open doors in the corresponding direction. Sketches specify a particularsequence of directions for the agent to traverse between rooms to reach the goal. Mazes are sampledwith random sizes and random decisions about whether to connect rooms with open doors, lockeddoors, or no doors. The sketch always corresponds to a viable traversal from the start to the goalposition, but other (possibly shorter) traversals may also exist.The crafting environment (Figure 3b) is inspired by the popular game Minecraft, but is imple-mented in a 2-D grid world. 
The agent may interact with some objects in the world by facing themAlgorithm 2 TRAIN -POLICIES ()1:=INIT() .initialize subpolicies randomly2:`max 13:loop4:rmin 15: curriculum () =Unif(T0) .initialize`max-step curriculum uniformly6:T0=f2T:jKj`maxg7: whilermin<r gooddo8: DO-STEP(;curriculum ) .update parameters (Algorithm 1)9:Z=Pt2T0[1^Er]10: curriculum (t) =1[2T0](1^Er)=Z82T11:rmin min^Er12:`max `max+ 16Under review as a conference paper at ICLR 2017b1: rightτ: go to goalb2: downb3: down123K1234b1: get woodτ: get goldb2: get ironb3: use workbenchb4: get goldK(a) (b)Figure 3: Example tasks from the environments used in this paper. (a) In the maze environment, the agent mustreach a goal position by traversing right (1), down (2) and down again (3) through a sequence of rooms, someof which may have locked doors. (b) In the crafting environment, an agent seeking to pick up the gold nuggetin the top corner must first collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), anduse the bridge to cross the water (4).and executing a special INTERACT action. Interacting with raw materials initially scattered aroundthe environment causes them to be added to an inventory. Interacting with different crafting stationscauses objects in the agent’s inventory to be combined or transformed into other objects. Each taskin this game corresponds to some crafted object the agent must produce; the most complicated goalsrequire the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxeand a bridge) to reach ingredients located in initially inaccessible regions of the environment.A complete listing of tasks and sketches is given in Appendix A.4.2 M ULTITASK LEARNINGThe primary experimental question in this paper is whether the extra structure provided by policysketches alone is enough to enable fast learning of coupled policies across tasks. To evaluate this, wecompare our modular approach to two policy gradient baselines—one that learns an independentpolicy for each task and one that learns a joint policy across all tasks—as well as a critic-only Qreader baseline. For the independent model, task-specific policies are represented by networks withthe same structure as the modular subpolicies. The joint model conditions both on these environmentfeatures, as well as a feature vector encoding the complete sketch. The Q reader forms the same jointstate and action space described in Section 3.1, and learns a single feedforward network to map fromboth environment states and representations of action symbols onto Q values. This baseline can beviewed either as a chain-structured hierarchical abstract machine with a learned state abstractor(Andre & Russell, 2002), or as a standard instruction following baseline from the natural languageprocessing literature (V ogel & Jurafsky, 2010).(a) (b) (c)Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approachdescribed in this paper, while Independent learns a separate policy for each task, Joint learns a shared policythat conditions on the task identity, and Q reader learns a single network to map from states and action symbolsto Q values. Performance for the best iteration of the (off-policy) Q reader is plotted. (a) Performance ofthe three models in the maze environment. (b) Performance in the crafting environment. (c) Individual taskperformance for the modular model in the crafting domain. Colors correspond to task length. 
Figure 5: Ablation experiments. (a) The critic: lines labeled "task" include a baseline that varies with the task identity, while lines labeled "state" include a baseline that varies with the state identity. Estimating a baseline that depends on both the representation of the current state and the identity of the current task is better than either alone or a constant baseline. (b) The curriculum: lines labeled "length" use a curriculum with iteratively increasing lengths, while lines labeled "weight" sample tasks in inverse proportion to their current reward. Adjusting the sampling distribution based on both task length and performance improves convergence.

Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that in both the maze domain and the crafting domain, our approach substantially outperforms the baselines: it induces policies with substantially higher average reward and converges more quickly than the policy gradient baselines. It can further be seen in Figure 4c that after policies have been learned on simple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasks involve high-level actions not required for any of the short tasks (Appendix A).

Having demonstrated the overall effectiveness of our approach, our remaining experiments explore (1) the importance of various components of the training procedure, and (2) the learned models' ability to generalize or adapt to held-out tasks. For compactness, we restrict our consideration to the crafting domain, which features a larger and more diverse range of tasks and high-level actions.

4.3 ABLATIONS

In addition to the overall modular parameter-tying structure induced by our sketches, the key components of our training procedure are the decoupled critic and the curriculum. Our next experiments investigate the extent to which these are necessary for good performance.

To evaluate the critic, we consider three ablations: (1) removing the dependence of the model on the environment state, in which case the baseline is a single scalar per task; (2) removing the dependence of the model on the task, in which case the baseline is a conventional generalized advantage estimator; and (3) removing both, in which case the baseline is a single scalar, as in a vanilla policy gradient approach. Results are shown in Figure 5a. Introducing both state and task dependence into the baseline leads to faster convergence of the model: the approach with a constant baseline achieves less than half the overall performance of the full critic after 3 million episodes. Introducing task and state dependence independently improves this performance; combining them gives the best result.

We also investigate two aspects of our curriculum learning scheme: starting with short examples and moving to long ones, and sampling tasks in inverse proportion to their accumulated reward. Experiments are shown in Figure 5b. We again see that both components are essential for good performance. Sampling uniformly across all tasks of the target length results in slow convergence.

4.4 ZERO-SHOT AND ADAPTATION LEARNING

In our final experiments, we consider the model's ability to generalize to new tasks unseen at training time.
We consider two evaluation conditions: a zero-shot setting, in which the model is provided a sketch for the new task and must immediately achieve good performance, and an adaptation setting, in which no sketch is provided and the model must learn the form of a suitable sketch by interacting with the new task.

Table 1: Model performance under various evaluation conditions. MT is the multitask training condition described in Section 4.2, while 0-S and Ad. are respectively the zero-shot and adaptation experiments described in Section 4.4.
Model | MT | 0-S | Ad.
Independent | .44 | – | <.1
Joint | .49 | <.1 | –
Modular | .89 | .77 | .76

We hold out two length-four tasks from the full inventory used in Section 4.2, and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy described by the sketches of the held-out tasks, and repeatedly execute this policy (without learning) in order to obtain an estimate of its effectiveness. For adaptation experiments, we consider ordinary reinforcement learning over B rather than A, implementing the high-level learner with the same agent architecture as described in Section 3.1. Note that the independent baseline cannot be applied to the zero-shot evaluation, while the joint baseline cannot be applied to the adaptation evaluation (because it depends on pre-specified sketch features). Results are shown in Table 1. The held-out tasks are sufficiently challenging that the baselines are unable to obtain more than negligible reward, while the modular model does comparatively well.

5 CONCLUSIONS

We have described an approach for multitask learning of neural network policies guided by symbolic policy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy, we have shown that it is possible to build agents that share behavior across tasks in order to achieve success in tasks with sparse and delayed rewards. This process induces an inventory of reusable and interpretable subpolicies, which can be employed for zero-shot generalization when further sketches are available, and for hierarchical reinforcement learning when they are not. Our work suggests that these sketches, which are easy to produce and require no grounding in the environment, provide an effective scaffold for learning hierarchical policies from minimal supervision. We have released our code at http://github.com/jacobandreas/psketch.

ACKNOWLEDGMENTS

JA is supported by a Facebook Graduate Fellowship and a Huawei / Berkeley AI fellowship.
SyJDbsb4e
Review
3: Clear rejection
The paper proposes a new RL architecture that aims at learning policies from sketches, i.e., sequences of high-level operations to execute for solving a particular task. The model relies on a hierarchical structure where the sub-policy is chosen depending on the current operation to execute in the sketch. The learning algorithm is based on an extension of the actor-critic model for that particular case, and also involves curriculum learning techniques when the task to solve is hard. Experimental results are provided on different learning problems and compared to baseline methods. The paper is well-written and very easy to follow. I am not really convinced by the impact of such a paper, since the problem solved here can be seen as an option-learning problem with richer supervision (i.e., the sequence of options is given). It thus corresponds to an easier problem with a limited impact. Moreover, I do not really understand which concrete application this setting corresponds to. For example, learning from natural language instructions is clearly more relevant. Since the model proposed in this article is not a major contribution and shares many common ideas with existing hierarchical reinforcement learning methods, the paper lacks a strong motivation and/or concrete application. So, the paper only has a marginal interest for the RL community.
@pros:
* Original problem with well-designed experiments
* Simple adaptation of the actor-critic method to the problem of learning sub-policies
@cons:
* Very simple task that can be seen as a simplification of more complex problems like option discovery, hierarchical RL or learning from instructions
* No strong underlying applications that could help to 'reinforce' the interest of the approach
4: The reviewer is confident but not absolutely certain that the evaluation is correct
We expect that our approachcould be coupled with a generic learner of options policies to provide a general mechanism forhierarchical RL; we leave this for future work.Our approach is also inspired by a number of recent efforts toward compositional reasoning andinteraction with structured deep models. Such models have been previously used for tasks involvingquestion answering (Iyyer et al., 2014; Andreas et al., 2016) and relational reasoning (Socher et al.,2012), and more recently for multi-task, multi-robot transfer problems (Devin et al., 2016). Inthis work—as in existing approaches employing dynamically assembled modular networks—task-specific training signals are propagated through a collection of composed discrete structures withtied weights. Here the composed structures specify time-varying policies rather than feedforwardcomputations, and their parameters must be learned via interaction rather than direct supervision.Another closely related family of models includes neural programmers (Neelakantan et al., 2015)and programmer–interpreters (Reed & de Freitas, 2015), which generate discrete computationalstructures but require supervision in the form of output actions or full execution traces.A closely related line of work is the Hierarchical Abstract Machines (HAM) framework introducedby Parr & Russell (1998). Like our approach, HAMs begin with a representation of a high-levelpolicy as an automaton (or a more general computer program; Andre & Russell, 2001) and usereinforcement learning to fill in low-level details. Variations on this architecture have considered anumber of control constructs beyond the scope of the current paper (e.g. concurrency and recursion;Marthi et al., 2004). However, because these approaches attempt to learn a single representation ofthe Q function for all subtasks and contexts, they require extremely strong formal assumptions aboutthe form of the reward function and state representation (Andre & Russell, 2002) that the presentwork avoids by decoupling the policy representation from the value function.Our approach also bears some resemblance to the instruction following literature in natural languageprocessing. Existing work on instruction following falls into two broad categories: approaches thatrequire a highly structured (typically logical) action and world representations (Chen & Mooney,2011; Artzi & Zettlemoyer, 2013; Andreas & Klein, 2015; Tellex et al., 2011), and approaches thatrequire detailed supervision of action sequences or dense reward signals essentially equivalent tofull action traces (Branavan et al., 2009; V ogel & Jurafsky, 2010; Mei et al., 2016). By contrast,the framework we describe here involves no formal or logical language for describing plans, andno supervised action sequences. Additionally, the modular model described in this paper natruallysupports adaptation to tasks where no sketches are available, while all existing instruction followingmodels learn a joint policy over instructions and actions, and are unable to function in the absenceof instructions.3 L EARNING MODULAR POLICIESWe consider a multitask reinforcement learning problem arising from a family of infinite-horizondiscounted Markov decision processes in a shared environment. This environment is specified bya tuple (S;A;P; ), withSa set of states,Aa set of low-level actions, P:SAS ! Ra transition probability distribution, and a discount factor. Each task 2T is then specifiedby a pair (R;), withR:S! Ra task-specific reward function and :S! 
R$ an initial distribution over states. For a fixed sequence $\{(s_i, a_i)\}$ of states and actions obtained from a rollout of a given policy, we will denote the empirical return starting in state $s_i$ as $q_i := \sum_{j=i}^{\infty} \gamma^{j-i} R_\tau(s_j)$. In addition to the components of a standard multitask RL problem, we assume that tasks are annotated with sketches $K_\tau$, each consisting of a sequence $(b_1, b_2, \ldots)$ of high-level symbolic labels drawn from a fixed vocabulary $\mathcal{B}$. Our model associates each of these symbols with a randomly initialized modular subpolicy. By sharing each subpolicy across all tasks annotated with the corresponding symbol, our approach naturally learns the shared abstraction for the corresponding subtask, without requiring any information about the grounding of that task to be explicitly specified by annotation.

3.1 MODEL

We exploit the structural information provided by sketches by constructing for each symbol $b$ a corresponding subpolicy $\pi_b$. At each timestep, a subpolicy may select either a low-level action $a \in \mathcal{A}$ or a special STOP action. We denote the augmented action space $\mathcal{A}^+ := \mathcal{A} \cup \{\mathrm{STOP}\}$. While this framework is agnostic to the implementation of subpolicies, we are especially interested in the case where subpolicies are specified by deep networks. As shown in Figure 2, the experiments in this paper represent each $\pi_b$ as a neural network whose input is a representation of the current state, and whose output is a distribution over $\mathcal{A}^+$. While all action spaces in our experiments are discrete, it is straightforward to instead allow this last layer to parameterize a mixed distribution over an underlying continuous action space and the STOP action. These subpolicies may be viewed as options of the kind described by Sutton et al. (1999), with the key distinction that they have no initiation semantics, but are instead invokable everywhere, and have no explicit representation as a function from an initial state to a distribution over final states (instead implicitly using the STOP action to terminate).

Given a sketch, a task-specific policy $\Pi_\tau$ is formed by concatenating its associated subpolicies in sequence. In particular, the high-level policy maintains a subpolicy index $i$ (initially 0), and executes actions from $\pi_{b_i}$ until the STOP symbol is emitted, at which point control is passed to $\pi_{b_{i+1}}$. We may thus think of $\Pi_\tau$ as inducing a Markov chain over the state space $\mathcal{S} \times \mathcal{B}$, with transitions given by:

$(s, b_i) \to (s', b_i)$ with probability $\sum_{a \in \mathcal{A}} \pi_{b_i}(a \mid s)\, P(s' \mid s, a)$
$(s, b_i) \to (s, b_{i+1})$ with probability $\pi_{b_i}(\mathrm{STOP} \mid s)$

Note that $\Pi_\tau$ is semi-Markov with respect to projection of the augmented state space $\mathcal{S} \times \mathcal{B}$ onto the underlying state space $\mathcal{S}$. We denote the complete family of task-specific policies $\Pi := \bigcup_\tau \{\Pi_\tau\}$, and let each $\pi_b$ be an arbitrary function of the current environment state parameterized by some weight vector $\theta_b$. The learning problem is to optimize over all $\theta_b$ to maximize the sum of expected discounted rewards $J(\Pi) := \sum_\tau J(\Pi_\tau) := \sum_\tau \mathbb{E}_{s_i \sim \Pi_\tau}\left[\sum_i \gamma^i R_\tau(s_i)\right]$ across all tasks $\tau \in \mathcal{T}$.

3.2 POLICY OPTIMIZATION

[Figure 2: Model overview. Each subpolicy $\pi$ is uniquely associated with a symbol $b$ and implemented as a neural network that maps from a state $s_i$ to a distribution over $\mathcal{A}^+$, and chooses an action $a_i$ by sampling from this distribution. Whenever the STOP action is sampled, control advances to the next subpolicy in the sketch.]

Here that optimization is accomplished via a simple decoupled actor-critic method.
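Before turning to the gradient estimator, the control flow of the model just described can be made concrete with a short sketch. The following Python fragment is our own illustration, not the authors' code; the environment interface (`env.reset`, `env.step`) and the `subpolicies` mapping are assumptions made for the example, and STOP is taken to be the last slot of each subpolicy's output distribution.

```python
import numpy as np

def run_sketch_policy(env, sketch, subpolicies, max_steps=500):
    """Execute the task-specific policy Pi_tau formed by concatenating the
    subpolicies named in `sketch`.  `subpolicies` maps each symbol b to a
    callable pi_b(state) returning a probability vector over A+ = A + [STOP]."""
    state = env.reset()
    trajectory, i = [], 0              # i indexes the currently active subpolicy
    for _ in range(max_steps):
        if i >= len(sketch):           # every subpolicy has emitted STOP
            break
        probs = subpolicies[sketch[i]](state)
        a = np.random.choice(len(probs), p=probs)
        if a == len(probs) - 1:        # STOP sampled: hand control to the next subpolicy
            i += 1
            continue
        next_state, reward = env.step(a)
        trajectory.append((state, a, sketch[i], reward))
        state = next_state
    return trajectory
```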
In a standard policy gradient approach, with a single policy $\pi$ with parameters $\theta$, we compute gradient steps of the form (Williams, 1992):

$\nabla J(\pi) = \sum_i \nabla \log \pi(a_i \mid s_i)\, \big(q_i - c(s_i)\big), \qquad (1)$

where the baseline or "critic" $c$ can be chosen independently of the future without introducing bias into the gradient. Recalling our previous definition of $q_i$ as the empirical return starting from $s_i$, this form of the gradient corresponds to a generalized advantage estimator (Schulman et al., 2015) with $\lambda = 1$. Here $c$ achieves close to the optimal variance (Greensmith et al., 2004) when it is set exactly equal to the state-value function $V_\pi(s_i) = \mathbb{E}_\pi\, q_i$ for the target policy $\pi$ starting in state $s_i$.

The situation becomes slightly more complicated when generalizing to modular policies built by sequencing subpolicies. In this case, we will have one subpolicy per symbol but one critic per task. This is because subpolicies $\pi_b$ might participate in a number of composed policies $\Pi_\tau$, each associated with its own reward function $R_\tau$. Thus individual subpolicies are not uniquely identified with value functions, and the aforementioned subpolicy-specific state-value estimator is no longer well-defined. We extend the actor-critic method to incorporate the decoupling of policies from value functions by allowing the critic to vary per-sample (that is, per-task-and-timestep) depending on the reward function with which the sample is associated. Noting that $\nabla_{\theta_b} J(\Pi) = \sum_{\tau : b \in K_\tau} \nabla_{\theta_b} J(\Pi_\tau)$, i.e. the expected reward across all tasks in which $\pi_b$ participates, we have:

$\nabla J(\Pi) = \sum_\tau \nabla J(\Pi_\tau) = \sum_\tau \sum_i \nabla_{\theta_b} \log \pi_b(a_i \mid s_i)\, \big(q_i - c_\tau(s_i)\big), \qquad (2)$

where each state-action pair $(s_i, a_i)$ was selected by the subpolicy $\pi_b$ in the context of the task $\tau$. Now minimization of the gradient variance requires that each $c_\tau$ actually depend on the task identity. (This follows immediately by applying the corresponding argument in Greensmith et al. (2004) individually to each term in the sum over $\tau$ in Equation 2.) Because the value function is itself unknown, an approximation must be estimated from data. Here we allow these $c_\tau$ to be implemented with an arbitrary function approximator parameterized by a vector $\eta_\tau$. This is trained to minimize a squared error criterion, with gradients given by

$\nabla_{\eta_\tau}\left(-\frac{1}{2}\sum_i \big(q_i - c_\tau(s_i)\big)^2\right) = \sum_i \nabla_{\eta_\tau} c_\tau(s_i)\, \big(q_i - c_\tau(s_i)\big). \qquad (3)$

Alternative forms of the advantage estimator (e.g. the TD residual $R_\tau(s_i) + \gamma V_\tau(s_{i+1}) - V_\tau(s_i)$, or any other member of the GAE family) can be easily substituted by simply maintaining one such estimator per task. Experiments (Section 4.3) show that conditioning on both the state and the task identity results in noticeable performance improvements, suggesting that the variance reduction provided by this objective is important for efficient joint learning of modular policies.

Algorithm 1 DO-STEP($\Pi$, curriculum)
1: $\mathcal{D} \leftarrow \emptyset$
2: while $|\mathcal{D}| < D$ do
3:   $\tau \sim$ curriculum($\cdot$)   (sample task $\tau$ from curriculum, Section 3.3)
4:   $d = \{(s_i, a_i, b_i = K_{\tau,i}, q_i, \tau), \ldots\}$   (do rollout)
5:   $\mathcal{D} \leftarrow \mathcal{D} \cup d$
6: for $b \in \mathcal{B}, \tau \in \mathcal{T}$ do
7:   $d = \{(s_i, a_i, b', q_i, \tau') \in \mathcal{D} : b' = b, \tau' = \tau\}$
8:   $\theta_b \leftarrow \theta_b + \frac{\alpha}{D} \sum_d \nabla \log \pi_b(a_i \mid s_i)\, (q_i - c_\tau(s_i))$   (update policy)
9:   $\eta_\tau \leftarrow \eta_\tau + \frac{\beta}{D} \sum_d \nabla c_\tau(s_i)\, (q_i - c_\tau(s_i))$   (update critic)

The complete procedure for computing a single gradient step is given in Algorithm 1. (The outer training loop over these steps, which is driven by a curriculum learning procedure, is described in the following section and specified in Algorithm 2.) This is an on-policy algorithm. In each step, the agent samples tasks from a task distribution provided by a curriculum (described in the following subsection).
The current family of policies $\Pi$ is used to perform rollouts in each sampled task, accumulating the resulting tuples of (states, low-level actions, high-level symbols, rewards, and task identities) into a dataset $\mathcal{D}$. Once $\mathcal{D}$ reaches a maximum size $D$, it is used to compute gradients w.r.t. both policy and critic parameters, and the parameter vectors are updated accordingly. The step sizes $\alpha$ and $\beta$ in Algorithm 1 can be chosen adaptively using any first-order method.

3.3 CURRICULUM LEARNING

For complex tasks, like the one depicted in Figure 3b, it is difficult for the agent to discover any states with positive reward until many subpolicy behaviors have already been learned. It is thus a better use of the learner's time to focus on "easy" tasks, where many rollouts will result in high reward from which appropriate subpolicy behavior can be inferred. But there is a fundamental tradeoff involved here: if the learner spends too much time on easy tasks before being made aware of the existence of harder ones, it may overfit and learn subpolicies that no longer generalize or exhibit the desired structural properties.

To avoid both of these problems, we use a curriculum learning scheme (Bengio et al., 2009) that allows the model to smoothly scale up from easy tasks to more difficult ones while avoiding overfitting. Initially the model is presented with tasks associated with short sketches. Once average reward on all these tasks reaches a certain threshold, the length limit is incremented. We assume that rewards across tasks are normalized with maximum achievable reward $0 < q_i < 1$. Let $\hat{\mathbb{E}} r_\tau$ denote the empirical estimate of the expected reward for the current policy on task $\tau$. Then at each timestep, tasks are sampled in proportion to $1 - \hat{\mathbb{E}} r_\tau$, which by assumption must be positive. Experiments show that both components of this curriculum learning scheme improve the rate at which the model converges to a good policy (Section 4.3).

The complete curriculum-based training procedure is specified in Algorithm 2. Initially, the maximum sketch length $\ell_{\max}$ is set to one, and the curriculum initialized to sample length-1 tasks uniformly. (Neither of the environments we consider in this paper features any length-1 tasks; in this case, observe that Algorithm 2 will simply advance to length-2 tasks without any parameter updates.) For each setting of $\ell_{\max}$, the algorithm uses the current collection of task policies to compute and apply the gradient step described in Algorithm 1. The rollouts obtained from the call to DO-STEP can also be used to compute reward estimates $\hat{\mathbb{E}} r_\tau$; these estimates determine a new task distribution for the curriculum. The inner loop is repeated until the reward threshold $r_{\text{good}}$ is exceeded, at which point $\ell_{\max}$ is incremented and the process repeated over a (now-expanded) collection of tasks.

4 EXPERIMENTS

As described in the introduction, we evaluate the performance of our approach in two environments: a maze navigation game and a crafting game. Both games involve nontrivial low-level control: agents must learn to avoid obstacles and interact with various kinds of objects. But the environments also feature hierarchical structure: rewards are accessible only after the agent has completed two to five high-level actions in the appropriate sequence.

In all our experiments, we implement each subpolicy as a multilayer perceptron with ReLU nonlinearities and a hidden layer with 128 hidden units, and each critic as a linear function of the current state.
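As a rough illustration of the components described in Sections 3.2-3.3 and the architecture just stated, the following PyTorch sketch pairs a 128-unit ReLU subpolicy network over $\mathcal{A}^+$ with a per-task linear critic, forms surrogate losses whose gradients match Equations 2-3, and samples tasks in proportion to $1 - \hat{\mathbb{E}} r_\tau$. All class and function names are our own illustrative choices, not the authors' code; `batch` is assumed to hold tensors gathered by a DO-STEP-style rollout.

```python
import numpy as np
import torch
import torch.nn as nn

class SubPolicy(nn.Module):
    """pi_b: one hidden layer of 128 ReLU units; the output is a distribution
    over A+ = A plus the distinguished STOP action (the extra logit)."""
    def __init__(self, state_dim, num_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_actions + 1))
    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

class TaskCritic(nn.Module):
    """c_tau: a per-task baseline that is linear in the state features."""
    def __init__(self, state_dim):
        super().__init__()
        self.v = nn.Linear(state_dim, 1)
    def forward(self, state):
        return self.v(state).squeeze(-1)

def decoupled_losses(batch, subpolicies, critics):
    """Surrogate losses matching Eqs. (2)-(3): each sample's advantage uses
    the critic of the task tau it came from, while the log-probability term
    flows to the subpolicy b that actually chose the action."""
    policy_loss = critic_loss = 0.0
    for state, action, b, q, tau in batch:
        baseline = critics[tau](state)
        advantage = (q - baseline).detach()      # no gradient through the critic here
        policy_loss = policy_loss - subpolicies[b](state).log_prob(action) * advantage
        critic_loss = critic_loss + 0.5 * (q - baseline) ** 2
    return policy_loss, critic_loss

def sample_task(tasks, est_reward):
    """Curriculum of Section 3.3: sample tasks in proportion to 1 - E^hat r_tau."""
    w = np.array([1.0 - est_reward[t] for t in tasks])
    return tasks[np.random.choice(len(tasks), p=w / w.sum())]
```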
Each subpolicy network receives as input a set of features describing the current state of theenvironment, and outputs a distribution over actions. The agent acts at every timestep by samplingfrom this distribution. The gradient steps given in lines 8 and 9 of Algorithm 1 are implemented us-ing RMSP ROP (Tieleman, 2012) with a step size of 0.001 and gradient clipping to a unit norm. Wetake the batch size parameter Din Algorithm 1 to be 2000, and set = 0:9in both environments.For curriculum learning, the improvement threshold rgoodis set to 0.8.4.1 E NVIRONMENTSThe maze environment (Figure 3a) corresponds closely to the the “light world” described byKonidaris & Barto (2007). The agent is placed in a discrete world consisting of a series of rooms,some of which are connected by doors. Some doors require that the agent first pick up a key toopen them. For our experiments, each task corresponds to a goal room (always at the same positionrelative to the agent’s starting position) that the agent must reach by navigating through a sequenceof intermediate rooms. The agent has one sensor on each side of its body, which reports the distanceto keys, closed doors, and open doors in the corresponding direction. Sketches specify a particularsequence of directions for the agent to traverse between rooms to reach the goal. Mazes are sampledwith random sizes and random decisions about whether to connect rooms with open doors, lockeddoors, or no doors. The sketch always corresponds to a viable traversal from the start to the goalposition, but other (possibly shorter) traversals may also exist.The crafting environment (Figure 3b) is inspired by the popular game Minecraft, but is imple-mented in a 2-D grid world. The agent may interact with some objects in the world by facing themAlgorithm 2 TRAIN -POLICIES ()1:=INIT() .initialize subpolicies randomly2:`max 13:loop4:rmin 15: curriculum () =Unif(T0) .initialize`max-step curriculum uniformly6:T0=f2T:jKj`maxg7: whilermin<r gooddo8: DO-STEP(;curriculum ) .update parameters (Algorithm 1)9:Z=Pt2T0[1^Er]10: curriculum (t) =1[2T0](1^Er)=Z82T11:rmin min^Er12:`max `max+ 16Under review as a conference paper at ICLR 2017b1: rightτ: go to goalb2: downb3: down123K1234b1: get woodτ: get goldb2: get ironb3: use workbenchb4: get goldK(a) (b)Figure 3: Example tasks from the environments used in this paper. (a) In the maze environment, the agent mustreach a goal position by traversing right (1), down (2) and down again (3) through a sequence of rooms, someof which may have locked doors. (b) In the crafting environment, an agent seeking to pick up the gold nuggetin the top corner must first collect wood (1) and iron (2), use a workbench to turn them into a bridge (3), anduse the bridge to cross the water (4).and executing a special INTERACT action. Interacting with raw materials initially scattered aroundthe environment causes them to be added to an inventory. Interacting with different crafting stationscauses objects in the agent’s inventory to be combined or transformed into other objects. 
Each taskin this game corresponds to some crafted object the agent must produce; the most complicated goalsrequire the agent to also craft intermediate ingredients, and in some cases build tools (like a pickaxeand a bridge) to reach ingredients located in initially inaccessible regions of the environment.A complete listing of tasks and sketches is given in Appendix A.4.2 M ULTITASK LEARNINGThe primary experimental question in this paper is whether the extra structure provided by policysketches alone is enough to enable fast learning of coupled policies across tasks. To evaluate this, wecompare our modular approach to two policy gradient baselines—one that learns an independentpolicy for each task and one that learns a joint policy across all tasks—as well as a critic-only Qreader baseline. For the independent model, task-specific policies are represented by networks withthe same structure as the modular subpolicies. The joint model conditions both on these environmentfeatures, as well as a feature vector encoding the complete sketch. The Q reader forms the same jointstate and action space described in Section 3.1, and learns a single feedforward network to map fromboth environment states and representations of action symbols onto Q values. This baseline can beviewed either as a chain-structured hierarchical abstract machine with a learned state abstractor(Andre & Russell, 2002), or as a standard instruction following baseline from the natural languageprocessing literature (V ogel & Jurafsky, 2010).(a) (b) (c)Figure 4: Comparing modular learning from sketches with standard RL baselines. Modular is the approachdescribed in this paper, while Independent learns a separate policy for each task, Joint learns a shared policythat conditions on the task identity, and Q reader learns a single network to map from states and action symbolsto Q values. Performance for the best iteration of the (off-policy) Q reader is plotted. (a) Performance ofthe three models in the maze environment. (b) Performance in the crafting environment. (c) Individual taskperformance for the modular model in the crafting domain. Colors correspond to task length. It can be seen thatthe sharp steps in the learning curve correspond to increases of `maxin the curriculum. The modular approachis eventually able to achieve high reward on all tasks, while the baseline models perform considerably worseon average.7Under review as a conference paper at ICLR 2017(a) (b)Figure 5: Ablation experiments. (a) The critic: lines labeled “task” include a baseline that varies with the taskidentity, while lines labeled “state” include a baseline that varies with the state identity. Estimating a baselinethat depends on both the representation of the current state and the identity of the current task is better thaneither alone or a constant baseline. (b) The curriculum: lines labeled “length” use a curriculum with iterativelyincreasing lengths, while lines labeled “weight” sample tasks in inverse proportion to their current reward.Adjusting the sampling distribution based on both task length and performance return improves convergence.Learning curves for baselines and the modular model are shown in Figure 4. It can be seen that inboth the maze domain and the crafting domain, our approach substantially outperforms the baselines:it induces policies with substantially higher average reward and converges more quickly than thepolicy gradient baselines. 
It can further be seen in Figure 4c that after policies have been learned onsimple tasks, the model is able to rapidly adapt to more complex ones, even when the longer tasksinvolve high-level actions not required for any of the short tasks (Appendix A).Having demonstrated the overall effectiveness of our approach, our remaining experiments explore(1) the importance of various components of the training procedure, and (2) the learned models’ability to generalize or adapt to held-out tasks. For compactness, we restrict our consideration onthe crafting domain, which features a larger and more diverse range of tasks and high-level actions.4.3 A BLATIONSIn addition to the overall modular parameter-tying structure induced by our sketches, the key com-ponents of our training procedure are the decoupled critic and the curriculum. Our next experimentsinvestigate the extent to which these are necessary for good performance.To evaluate the the critic, we consider three ablations: (1) removing the dependence of the model onthe environment state, in which case the baseline is a single scalar per task; (2) removing the depen-dence of the model on the task, in which case the baseline is a conventional generalized advantageestimator; and (3) removing both, in which case the baseline is a single scalar, as in a vanilla policygradient approach. Results are shown in Figure 5a. Introducing both state and task dependence intothe baseline leads to faster convergence of the model: the approach with a constant baseline achievesless than half the overall performance of the full critic after 3 million episodes. Introducing task andstate dependence independently improve this performance; combining them gives the best result.We also investigate two aspects of our curriculum learning scheme: starting with short examplesand moving to long ones, and sampling tasks in inverse proportion to their accumulated reward.Experiments are shown in Figure 5b. We again see that both components are essential for goodperformance. Sampling uniformly across all tasks of the target length results in slow convergence.4.4 Z ERO-SHOT AND ADAPTATION LEARNINGIn our final experiments, we consider the model’s ability to generalize to new tasks unseen at trainingtime. We consider two evaluation conditions: a zero-shot setting, in which the model is provided asketch for the new task and must immediately achieve good performance, and a adaptation setting,in which no sketch is provided and the model must learn the form of a suitable sketch by interactingwith the new task.8Under review as a conference paper at ICLR 2017Model MT 0-S Ad.Independent .44 – <.1Joint .49 <.1 –Modular .89 .77 .76Table 1: Model performance under var-ious evaluation conditions. MT is themultitask training condition describedin Section 4.2, while 0-SandAd.are re-spectively the zero-shot and adaptationexperiments described in Section 4.4.We hold out two length-four tasks from the full inventory usedin Section 4.2, and train on the remaining tasks. For zero-shot experiments, we simply form the concatenated policy de-scribed by the sketches of the held-out tasks, and repeatedlyexecute this policy (without learning) in order to obtain anestimate of its effectiveness. For adaptation experiments, weconsider ordinary reinforcement learning over Brather thanA, implementing the high-level learner with the same agentarchitecture as described in Section 3.1. 
Note that the Inde-pendent baseline cannot be applied to the zero-shot evalua-tion, while the joint baseline cannot be applied to the adapta-tion baseline (because it depends on pre-specified sketch fea-tures). Results are shown in Table 1. The held-out tasks are sufficiently challenging that the baselinesare unable to obtain more than negligible reward, while the modular model does comparatively well.5 C ONCLUSIONSWe have described an approach for multitask learning of neural network policies guided by symbolicpolicy sketches. By associating each symbol appearing in a sketch with a modular neural subpolicy,we have shown that it is possible to build agents that share behavior across tasks in order to achievesuccess in tasks with sparse and delayed rewards. This process induces an inventory of reusable andinterpretable subpolicies which can be employed for zero-shot generalization when further sketchesare available, and hierarchical reinforcement learning when they are not. Our work suggests thatthese sketches, which are easy to produce and require no grounding in the environment, provide aneffective scaffold for learning hierarchical policies from minimal supervision. We have released ourcode at http://github.com/jacobandreas/psketch .ACKNOWLEDGMENTSJA is supported by a Facebook Graduate Fellowship and a Huawei / Berkeley AI fellowship.<|im_end|> <|im_start|>assistant ### Review Title Review ### Review Text The paper proposes a new RL architecture that aims at learning policies from sketches i.e sequence of high-level operations to execute for solving a particular task. The model relies on a hierarchical structure where the sub-policy is chosen depending on the current operation to execute in the sketch . The learning algorithm is based on an extension of the actor-critic model for that particular case, and also involves curriculum learning techniques when the task to solve is hard. Experimental results are provided on different learning problems and compared to baseline methods. The paper is well-written and very easy to follow. I am not really convinced by the impact of such a paper since the problem solved here can be seen as an option-learning problem with a richer supervision (i.e the sequence of option is given). It thus corresponds to an easier problem with a limited impact. Moreover, I do not really understand to which concrete application this setting corresponds. For example, learning from natural langage instructions is clearly more relevant. So since the model proposed in this article is not a major contribution and shares many common ideas with existing hierarchical reinforcement learning methods, the paper lacks a strong motivation and/or concrete application. So, the paper only has a marginal interest for the RL community @pros: * Original problem with well design experiments * Simple adaptation of the actor-critic method to the problem of learning sub policies @cons: * Very simple task that can be seen as a simplification of more complex problems like options discovery, hierarchical RL or learning from instructions * No strong underlying applications that could help to 'reinforce' the interest of the approach ### Review Rating 3: Clear rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
r1Chut9xl
ICLR.cc/2017/conference
2017
Inference and Introspection in Deep Generative Models of Sparse Data
["Rahul G. Krishnan", "Matthew Hoffman"]
Deep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to sparse, high-dimensional integer data such as word counts or product ratings. One reason is that the standard training procedures find poor local optima when applied to such data. We propose two techniques that alleviate this problem, significantly improving our ability to fit DLGMs to sparse, high-dimensional data. Having fit these models, we are faced with another challenge: how to use and interpret the representation that we have learned? To that end, we propose a method that extracts distributed representations of features via a simple linearization of the model.
["Unsupervised Learning", "Deep learning"]
ABSTRACTDeep generative models such as deep latent Gaussian models (DLGMs) are pow-erful and popular density estimators. However, they have been applied almostexclusively to dense data such as images; DLGMs are rarely applied to sparse,high-dimensional integer data such as word counts or product ratings. One reasonis that the standard training procedures find poor local optima when applied tosuch data. We propose two techniques that alleviate this problem, significantlyimproving our ability to fit DLGMs to sparse, high-dimensional data. Having fitthese models, we are faced with another challenge: how to use and interpret therepresentation that we have learned? To that end, we propose a method that extractsdistributed representations of features via a simple linearization of the model.1 I NTRODUCTIONDeep latent Gaussian models (DLGMs, a.k.a. variational autoencoders; Rezende et al., 2014; Kingmaet al., 2014) have led a resurgence in the use of deep generative models for density estimation. DLGMsassume that observed vectors xare generated by applying a nonlinear transformation (defined by aneural network with parameters ) to a vector of Gaussian random variables z.Learning in DLGMs proceeds by approximately maximizing the average marginal likelihood p(x) Rzp(z)p(xjz)dzof the observations x. Computing the true marginal likelihood is intractable, sowe resort to variational expectation-maximization (Bishop, 2006), an approximation to maximum-likelihood estimation. To learn the parameters of the generative model, the procedure needs to find adistributionq(zjx)that approximates the posterior distribution p(zjx)of the latent vector zgiven theobservations x. In the past, such qdistributions were fit using iterative optimization procedures (e.g.,Hoffman et al., 2013). But Rezende et al. (2014) and Kingma et al. (2014) showed that q(zjx)can beparameterized by a feedforward “inference network” with parameters , speeding up learning. Thisinference network is trained jointly with the generative model; as training proceeds, the inferencenetwork learns to approximate posterior inference on the generative model, and the generative modelimproves itself using the output of the inference network.Embedded within this procedure, however, lies a potential problem: both the inference network andthe generative model are initialized randomly. Early on in learning, the inference network’s q(zjx)distributions will be poor approximations to the true posterior p(zjx), and the gradients used to updatethe parameters of the generative model will therefore be poor approximations to the gradients ofthe true log-likelihood logp(x). Previous stochastic variational inference methods (Hoffman et al.,2013) were slower, but suffered less from this problem since for every data-point, a set of variationalparameters was optimized within the inner loop of learning. In this work, we investigate blendingthe two methodologies for learning models of sparse data. In particular, we use the parameterspredicted by the inference network as an initialization and optimize them further during learning.When modeling high-dimensional sparse data, we show that updating the local variational parametersyields generative models with better held-out likelihood, particularly for deeper generative models.What purpose is served by fitting bigger, deeper, more powerful generative models? Breiman (2001)argues that statistical discriminative modeling falls into two schools of thought: the data modelingculture and the algorithmic modeling culture. 
The former advocates the use of predictive models that assume interpretable, mechanistic processes, while the latter advocates the use of black box techniques with an emphasis on prediction accuracy. Breiman's arguments also ring true about the divide between deep generative models with complex conditional distributions and simpler, more interpretable statistical models.

[Figure 1: Deep Latent Gaussian Model. The Bayesian network depicted here comprises a single latent variable $z$ with the conditional probability $p(x \mid z)$ defined by a deep neural network with parameter $\theta$. The dotted line represents the inference network parameterized by $\phi$, which is used for posterior inference at train and test time.]

Consider a classic model such as Latent Dirichlet Allocation (Blei et al., 2003). It is outperformed in held-out likelihood (Miao et al., 2016) by deeper generative models and assumes a simple probabilistic process for data generation that is unlikely to hold in reality. Yet, its generative semantics lend it a distinct advantage: interpretability. The word-topic matrix in the model allows practitioners to read off what the model has learned about the data. Is there a natural way to interpret the generative model when the conditional distributions are parameterized by a deep neural network?

Our second contribution is to introduce a simple, easy to implement method to interpret what is being learned by generative models such as DLGMs whose conditional probabilities are parameterized by deep neural networks. Our hope is to narrow the perceived gulf between a complex generative model's representational power and its interpretability. We use the Jacobian of the conditional distribution with respect to latent variables in the Bayesian network to form embeddings (or Jacobian vectors) of the observations. We investigate the properties of the Jacobian vectors obtained from deeper, more non-linear generative models.

2 BACKGROUND

Generative Model: We consider learning in generative models of the form shown in Figure 1. We observe a set of $D$ word count vectors $x_{1:D}$, where $x_{dv}$ denotes the number of times that word index $v \in \{1, \ldots, V\}$ appears in document $d$. We assume we are given the total number of words per document $N_d = \sum_v x_{dv}$, and that $x_d$ was generated via the following generative process:

$z_d \sim \mathcal{N}(0, I); \quad \mu(z_d) \leftarrow \mathrm{MLP}(z_d; \theta); \quad \pi(z_d) \leftarrow \frac{\exp\{\mu(z_d)\}}{\sum_v \exp\{\mu(z_d)_v\}}; \quad x_d \sim \mathrm{Multinomial}(\pi(z_d), N_d). \qquad (1)$

That is, we draw a Gaussian random vector, pass it through a multilayer perceptron (MLP) with parameters $\theta$, pass the resulting vector through the softmax (a.k.a. multinomial logistic) function, and sample $N_d$ times from the resulting distribution over the vocabulary. (In keeping with common practice, we neglect the multinomial base measure term $\frac{N!}{x_1! \cdots x_V!}$, which amounts to assuming that the words are observed in a particular order.)

Variational Learning: For ease of exposition we drop the subscript on $x_d$ to form $x$, referring to a single data point. We need to approximate the intractable posterior distribution $p(z \mid x)$ during learning. Using the well-known variational principle, we can obtain the lower bound on the log marginal likelihood of the data (or $\mathcal{L}(x; \theta, \phi)$) in Eq. 2, where the inequality is by Jensen's inequality:

$\log p(x; \theta) \geq \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x)\,\|\,p(z)\big) = \mathcal{L}(x; \theta, \phi). \qquad (2)$

We leverage an inference network or recognition network (Hinton et al., 1995), a neural network which approximates the intractable posterior, during learning. This is a parametric conditional distribution that is optimized to perform inference. Kingma & Welling (2014); Rezende et al. (2014) use a neural net (with parameters $\phi$) to parameterize $q_\phi(z \mid x)$.
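For concreteness, here is a minimal NumPy sketch of ancestral sampling from the generative process in Eq. (1). It is our own illustration: the `theta_mlp` callable stands in for the decoder MLP and is an assumption of the example.

```python
import numpy as np

def sample_document(theta_mlp, N_d, K):
    """Draw one document x_d from the DLGM of Eq. (1).
    theta_mlp : callable mapping z (size K) to unnormalized potentials mu(z) over the vocab
    N_d       : number of words in the document
    K         : latent dimensionality"""
    z = np.random.randn(K)                 # z_d ~ N(0, I)
    mu = theta_mlp(z)                      # mu(z_d) = MLP(z_d; theta)
    pi = np.exp(mu - mu.max())             # softmax, shifted for numerical stability
    pi /= pi.sum()
    return np.random.multinomial(N_d, pi)  # x_d ~ Multinomial(pi(z_d), N_d)
```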
The challenge in the resulting optimization problem is that the lower bound (2) includes an expectation w.r.t. $q_\phi(z \mid x)$, which implicitly depends on the network parameters $\phi$. This difficulty is overcome by using stochastic backpropagation. With a normal distribution as our variational approximation we have that $q_\phi(z \mid x) = \mathcal{N}(\mu(x), \Sigma(x))$. $\mu(x), \Sigma(x)$ are functions of the observation $x$, and we denote by $\psi(x) := \{\mu(x), \Sigma(x)\}$ the local variational parameters predicted by the inference network. A simple transformation allows one to obtain unbiased Monte Carlo estimates of the gradients of $\mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z)]$ with respect to $\phi$. If we assume the prior $p(z)$ is also normally distributed, the KL and its gradients may be obtained analytically. Throughout this paper we will use $\theta$ to denote the parameters of the generative model, and $\phi$ to denote the parameters of the inference network.

3 METHODOLOGY

Inference with Global Information: Sparse data typically exhibits long tails, and learning in the presence of rare features is challenging. Inference networks learn to regress to the optimal posterior parameters for every data point, and global information about the relative frequencies of the individual features in the training distribution may present valuable information during learning.

The simplest way to incorporate first-order statistics across the training data into the inferential process is to condition on tf-idf (Baeza-Yates et al., 1999) features instead of the raw counts. tf-idf is one of the most widely used techniques in information retrieval. In the context of building bag-of-words representations for documents, tf-idf re-weights features to increase the influence of rarer words while decreasing the influence of common words appearing in all documents. The tf-idf-transformed word-count vector is $\tilde{x}_{dv} \leftarrow x_{dv} \log \frac{D}{\sum_{d'} \min\{x_{d'v}, 1\}}$. After applying this transform, the resulting vector $\tilde{x}$ is normalized by its L2 norm. It is worthwhile to note that leveraging first-order statistics for inference is difficult in the traditional paradigm of tracking variational parameters for each data point, but is easy with inference networks.

Optimizing Local Variational Parameters: The inference network initially comprises a randomly initialized neural network. The predictions of the inference network early in optimization are suboptimal variational parameters used to derive gradients of the parameters of the generative model. This induces noise and bias in the gradients used to update the parameters of the generative model; this noise and bias may push the generative model towards a poor local optimum. Previous work has suggested that deep neural networks (which form the conditional probability distributions $p_\theta(x \mid z)$) are sensitive to initialization (Glorot & Bengio, 2010; Larochelle et al., 2009).

To avoid these issues, we only use the local variational parameters $\psi(x)$ predicted by the inference network to initialize an iterative optimizer that maximizes the ELBO with respect to $\psi$; we use the optimized variational parameters $\hat{\psi}(x)$ to derive gradients for the generative model. We then train the inference network using stochastic backpropagation and gradient descent, holding the parameters of the generative model fixed. Our procedure is detailed in Algorithm 1.

Algorithm 1 Pseudocode for Learning: We evaluate expectations in $\mathcal{L}(x)$ (see Eq.
2) using a single sample from the variational distribution and aggregate gradients across mini-batches. $M = 1$ corresponds to performing no additional optimization of the variational parameters. We update $\theta, \psi(x), \phi$ using stochastic gradient descent with adaptive learning rates $\eta_\theta, \eta_{\psi(x)}, \eta_\phi$ obtained via ADAM (Kingma & Ba, 2015).

Inputs: Dataset $\mathcal{D} := [x_1, \ldots, x_D]$, Inference Model: $q_\phi(z \mid x)$, Generative Model: $p_\theta(x \mid z), p(z)$
while notConverged() do
1. Sample datapoint: $x \sim \mathcal{D}$
2. Estimate local variational parameters $\psi(x)_1$ using $q_\phi(z \mid x)$
3. Estimate $\psi(x)_M$: $\hat{\psi}(x) = \arg\max_{\psi(x)} \mathcal{L}(x; \theta, \psi(x))$ via SGD as: for $m = 1, \ldots, M$, $\psi(x)_{m+1} = \psi(x)_m + \eta_{\psi(x)} \frac{\partial \mathcal{L}(x; \theta, \psi(x)_m)}{\partial \psi(x)_m}$
4. Update $\theta$ as: $\theta \leftarrow \theta + \eta_\theta \nabla_\theta \mathcal{L}(x; \theta, \psi(x)_M)$
5. Update $\phi$ as: $\phi \leftarrow \phi + \eta_\phi \nabla_\phi \mathcal{L}(x; \theta, \psi(x))$
end while

Introspection: Linear models are inherently interpretable. Consider linear regression, factor analysis (Spearman, 1904), and latent Dirichlet allocation (LDA; Blei et al., 2003), which (standardizing notation) assume the following relationships:

Regression: $\mathbb{E}[y \mid x] = Wx + b$;  Factor Analysis: $x \sim \mathcal{N}(0, I),\ \mathbb{E}[y \mid x] = Wx + b$;  Latent Dirichlet Allocation: $x \sim \mathrm{Dirichlet}(\alpha),\ \mathbb{E}[y \mid x] = Wx. \qquad (3)$

In each case, we need only inspect the parameter matrix $W$ to answer the question "what happens to $y$ if we increase $x_k$ a little?" The answer is clear: $y$ moves in the direction of the $k$th column of $W$. We can ask this question differently and get the same answer: "what is the derivative $\frac{\partial \mathbb{E}[y \mid x]}{\partial x}$?" The answer is simply the parameter matrix $W$.

For models as in Fig 1, the variability in the training data is assumed to be due to the single latent state $z$. The relationship between latent variables $z$ and observations $x$ cannot be quickly read off of the parameters $\theta$. But we can still ask what happens if we perturb $z$ by some small $dz$: this is simply the directional derivative $\frac{\partial \mathbb{E}[x \mid z]}{\partial z} dz$. We can interpret this Jacobian matrix in much the same way we would a factor loading matrix, with two main differences. First, the Jacobian matrix $\frac{\partial \mathbb{E}[x \mid z]}{\partial z}$ varies with $z$; the interpretation of $z$ may change significantly depending on context. Second, DLGMs exhibit rotational symmetry: the prior on $z$ is rotationally symmetric, and the MLP can apply arbitrary rotations to $z$ before applying any nonlinearities, so a priori there is no "natural" set of basis vectors for $z$. For a given Jacobian matrix, however, we can find the most significant directions via a singular value decomposition (SVD).

Jacobian Vectors: We present our method to generate embeddings from Bayesian networks of the form Figure 1. We consider three variants of Jacobian embedding vectors, based on the unnormalized potentials from the MLP, logarithmic probabilities, and linear probabilities respectively:

$J(z)_{\mathrm{pot}} = \frac{\partial \mu(z)}{\partial z} \qquad J(z)_{\log} = \frac{\partial \log \pi(z)}{\partial z} \qquad J(z)_{\mathrm{prob}} = \frac{\partial \pi(z)}{\partial z} \qquad (4)$

For any $z$, each of $J(z)_{\log}, J(z)_{\mathrm{pot}}, J(z)_{\mathrm{prob}} \in \mathbb{R}^{V \times K}$, where $K$ is the latent dimension and $V$ is the dimensionality of the observations. It is this matrix that we use to form embeddings. We denote by $u_i$ the Jacobian vector obtained from the $i$th row of the Jacobian matrix. When not referring to a particular variant, we use $J(z)$ to denote the Jacobian matrix. $J(z)$ is a function of $z$, leaving open the choice of where to evaluate this function. The semantics of our generative model suggest a natural choice: $J_{\mathrm{mean}} := \mathbb{E}_{p(z)}[J(z)]$. This set of embeddings captures the variation in the output distribution with respect to the latent state across the prior distribution of the generative model. Additionally, one may also evaluate the Jacobian at the approximate posterior corresponding to an observation $x$.
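Since $J(z)$ is an ordinary derivative of the decoder's output, it can be obtained with automatic differentiation, as the next paragraph notes. Below is a hedged PyTorch sketch of a Monte-Carlo estimate of $J^{\log}_{\mathrm{mean}}$ (the paper itself works in Theano); it assumes a reasonably recent PyTorch providing `torch.autograd.functional.jacobian`, and the `decoder` module is an illustrative stand-in.

```python
import torch

def jacobian_log_mean(decoder, num_samples=400, K=100):
    """Monte-Carlo estimate of J_mean^log = E_{p(z)}[d log pi(z) / dz].
    `decoder` maps a latent z of size K to unnormalized logits mu(z) of size V.
    Returns a V x K matrix whose i-th row is the Jacobian vector u_i of word i."""
    def log_pi(z):
        return torch.log_softmax(decoder(z), dim=-1)
    total = None
    for _ in range(num_samples):
        z = torch.randn(K)  # z ~ p(z) = N(0, I); swap in q(z|x) samples for contextual vectors
        J = torch.autograd.functional.jacobian(log_pi, z)  # shape (V, K)
        total = J if total is None else total + J
    return total / num_samples
```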
We study how this may be used to obtain contextual word-vectors. In frameworks that support automatic differentiation (e.g., Theano; Theano Development Team, 2016), $J(z)$ is readily available, and we estimate $J_{\mathrm{mean}}$ via Monte-Carlo sampling from the prior.

Deriving Jacobian Vectors: For simplicity, we derive the functional form of the Jacobian in a linear model, i.e., where $\mu(z_d) = W z_d$ (c.f. Eq 1). We drop the subscript $d$ and denote by $\mu_i(z)$ the $i$th element of the vector $\mu(z)$:

$p(x_i = 1 \mid z) = \frac{\exp(\mu_i(z))}{\sum_j \exp(\mu_j(z))} \quad \text{and} \quad \mu_i(z) = w_i^T z$

For linear models, $\nabla_z \mu_i(z) = w_i$ directly corresponds to $J(z)_{\mathrm{pot}}$. Noting that $\nabla_z \exp(\mu_i(z)) = \exp(\mu_i(z)) \nabla_z \mu_i(z)$ and $\nabla_z \sum_j \exp(\mu_j(z)) = \sum_j \exp(\mu_j(z)) \nabla_z \mu_j(z)$, we estimate $J(z)_{\mathrm{prob}}$ as:

$\nabla_z p(x_i = 1 \mid z) = \nabla_z \frac{\exp(\mu_i(z))}{\sum_j \exp(\mu_j(z))} = \frac{\sum_j \exp(\mu_j(z)) \exp(\mu_i(z))\, w_i - \exp(\mu_i(z)) \sum_j \exp(\mu_j(z))\, w_j}{\left(\sum_j \exp(\mu_j(z))\right)^2}$
$= p(x_i = 1 \mid z)\, w_i - p(x_i = 1 \mid z) \sum_j p(x_j = 1 \mid z)\, w_j = p(x_i = 1 \mid z)\Big(w_i - \sum_j p(x_j = 1 \mid z)\, w_j\Big)$

Similarly, we may compute $J(z)_{\log}$:

$\nabla_z \log p(x_i = 1 \mid z) = w_i - \sum_j p(x_j = 1 \mid z)\, w_j = \sum_j p(x_j = 1 \mid z)\,(w_i - w_j) \qquad (5)$

Denote a word-pair vector as $w_i - w_j$, where $w_i, w_j$ are columns of the matrix $W$. If we define the set of all word-pair vectors as $S$, then Eq 5 captures the idea that the vector representation for a word $i$ lies in the convex hull of $S$. Furthermore, the word vector's location in $\mathrm{CONV}(S)$ is determined by the likelihood of the pairing word ($x_j$) under the model $p(x_j = 1 \mid z)$.

When we use a non-linear conditional probability distribution, $J(z)_{\log}$ becomes $\nabla_z \log p(x_i = 1 \mid z) = \sum_j p(x_j = 1 \mid z)\,(\nabla_z \mu_i(z) - \nabla_z \mu_j(z))$, where $\nabla_z \mu_i(z)$ is a non-linear function of $z$. To the best of our knowledge, Jacobian Vectors and their properties have not been studied.

4 RELATED WORK

Learning in Deep Generative Models: Salakhutdinov & Larochelle (2010) optimize the local variational parameters obtained from an inference network when learning deep Boltzmann machines. For DLGMs, Hjelm et al. (2016) also consider the optimization of the local variational parameters, though their exposition focuses on deriving an importance-sampling-based bound to use during learning in deep generative models with discrete latent variables. Their experimental results suggest the procedure does not improve performance much on the binarized MNIST dataset. This is consistent with our experience; we found that our secondary optimization procedure helped more when modeling sparse, high-dimensional text data than when modeling MNIST.

Leveraging Gradient Information: The algorithmic procedure for obtaining Jacobian Vectors that we propose resembles that used to derive Fisher Score features. For a data point $X$ under a parametric distribution $p(X; \theta)$, the Fisher score is defined as $U_X = \nabla_\theta \log p(X; \theta)$. Jaakkola & Haussler (2007) similarly use $U_X$ to form a kernel function for subsequent use in a discriminative classifier. The intuition behind such methods is to note that the derivative of the log-probability with respect to the parameters of the generative model encodes all the variability in the input under the generative process. We rely on a related intuition, although our motivations are different; we are interested in characterizing isolated features such as words, not vector observations such as documents.
Also, we consider Jacobians with respect to per-observation latent variables $z$, rather than globally shared parameters.

In the context of discriminative modeling, (Erhan et al., 2009) use gradient information to study the patterns with which neurons are activated in deep neural networks, while (Wang et al., 2016) use the spectra of the Jacobian to study the complexity of the functions learned by neural networks.

Introspection via Embeddings: Landauer et al. (1998) proposed latent semantic analysis, one of the earliest works to create vector space representations of documents. Bengio et al. (2003); Mikolov & Dean (2016) propose log-linear models to create word representations from document corpora in an unsupervised fashion. Rudolph et al. (2016) describe a family of models to create contextual embeddings where the conditional distributions lie in the exponential family. Finally, Choi et al. (2016) propose a variant of Word2Vec to create representations of diagnosis codes from temporal Electronic Health Record data. The models above explicitly condition the probability of a word on its nearby context. In contrast, our model models the probability of a word as it appears in the document (or rather, conditioned on its global context). Augmenting the generative model in Figure 1 to incorporate local context is a possible direction for future work.

Miao et al. (2016) learn a shallow log-linear model on text data and obtain embeddings for words from the weight matrix that parameterizes their generative model. Li et al. (2016) propose a modification to LDA that explicitly models representations for words in addition to modeling the word-topic structure.

5 EVALUATION

Text Data: We study the effect of further optimization of the variational parameters and inference with tf-idf features on two datasets of varying size: the smaller 20Newsgroups (Lang, 2008) (train/valid/test: 10768/500/7505, V: 2000) and the larger RCV2 (Lewis et al., 2004) dataset (train/valid/test: 789414/5000/10000, V: 10000). We follow the preprocessing procedure defined in (Miao et al., 2016) for both datasets. We also train models on the Wikipedia corpus used in (Huang et al., 2012). We remove stop words and words appearing fewer than ten times in the dataset, and select our vocabulary to comprise the union of the top 20000 words in the corpus, the words in WordSim353 (Finkelstein et al., 2001), and the words in the Stanford Contextual Word Similarity Dataset (SCWS) (Huang et al., 2012). The resulting dataset is of size train/valid: 1212781/1000 and V: 20253.

EHR Data: We train shallow and deep generative models on a dataset constructed from Electronic Medical Records. The dataset comprises 185000 patients where each patient's data across time was aggregated to create a bag-of-diagnosis-codes representation of the patient. The vocabulary comprises four different kinds of medical diagnosis codes: ICD9 (diagnosis), LOINC (laboratory tests), NDC (prescription medication), CPT (procedures). For a single patient, we have 51321 diagnosis codes.

Training Procedure: On all datasets, we train shallow log-linear models ($\mu(z) = Wz + b$) and deeper three-layer DLGMs ($\mu(z) = \mathrm{MLP}(z; \theta)$). We vary the number of secondary optimization steps $M = 1, 100$ (cf. Algorithm 1) to study the effect of optimization on $\psi(x)$ with ADAM (Kingma & Ba, 2015).
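A minimal PyTorch sketch of this secondary optimization (Step 3 of Algorithm 1) follows: initialize $\psi(x)$ from the inference network, then take $M$ ADAM steps on the ELBO with respect to $\psi$ alone, holding $\theta$ fixed. The `encoder` and `elbo` callables are assumptions of this illustration, not the authors' code; `elbo(x, mu, logvar)` is assumed to return a single-sample Monte-Carlo estimate of $\mathcal{L}(x; \theta, \psi)$.

```python
import torch

def refine_local_params(x, encoder, elbo, M=100, lr=0.01):
    """Refine the local variational parameters psi(x) = (mean, log-variance)
    predicted by the inference network before they are used to derive
    gradients for the generative model."""
    with torch.no_grad():
        mu0, logvar0 = encoder(x)          # psi(x)_1: the inference network's prediction
    mu = mu0.clone().requires_grad_(True)
    logvar = logvar0.clone().requires_grad_(True)
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for _ in range(M):
        opt.zero_grad()
        loss = -elbo(x, mu, logvar)        # maximize the ELBO w.r.t. psi only
        loss.backward()
        opt.step()
    return mu.detach(), logvar.detach()    # psi(x)_M, used in the theta update
```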
We use a mini-batch size of 500, a learning rate of 0.01 for $\psi(x)$, and 0.0008 for $\theta, \phi$. The inference network was fixed to a two-layer MLP whose intermediate hidden layer $h(x)$ was used to parameterize the mean and diagonal log-variance $\mu(x), \log \Sigma(x)$. To evaluate the quality of the learned generative models, we report an upper bound on perplexity (Mnih & Gregor, 2014) given by $\exp(-\frac{1}{N} \sum_i \frac{1}{N_i} \log p(x_i))$, where $\log p(x_i)$ is replaced by Eq 2. The notation 3-M100-tfidf indicates a model where the MLP parameterizing $\mu(z)$ has three hidden layers, the local variational parameters are updated 100 times before an update of $\theta, \phi$, and tf-idf features were used in the inference network.

Improving Learning: Table 1 depicts our results on 20newsgroups and RCV2. On the smaller dataset, we find that the deeper models overfit quickly and are outperformed by shallow generative models. On the larger datasets, the deeper models' capacity is more readily utilized, yielding better generalization. The use of tf-idf features always helps learning on smaller datasets. On larger datasets, the benefits are smaller when we also optimize $\psi(x)$. Finally, the optimization of the local variational parameters appears to help most on the larger datasets. To investigate how this occurs, we plot the held-out likelihood versus epochs. For models trained on the larger RCV2 (Figure 2a) and Wikipedia (Figure 2b) datasets, the larger deep generative models converge to better solutions (and in fewer passes through the data) with the additional optimization of $\psi(x)$.

To study where optimizing $\psi(x)$ is particularly effective, we train a three-layer model on different subsets of the Wikipedia dataset. The subsets are created by selecting the top $K$ most frequently occurring features in the data. Our rationale is that by holding everything fixed and varying the level of sparsity in the data (datasets with smaller values of $K$ are less sparse), we can begin to understand when our method is most helpful. On held-out data, we compute the difference between the perplexity when the model is trained with $M = 1$ (denoted $P_{M1}$) and $M = 100$ (denoted $P_{M100}$) and compute the relative decrease in perplexity obtained as $\frac{P_{M1} - P_{M100}}{P_{M100}}$. The results are depicted in Figure 2c, where we see that our method improves learning as a function of the dimensionality of the data.

In Table 5 in the supplementary material, we study the effect of varying the parameters of the inference network. There, we perform a small grid search over the hidden dimension and the number of layers in the inference network and find that optimizing the variational parameters continues to produce models with lower overall perplexity.

Jacobian Vectors: Our first avenue for introspection into the learned generative model is using log-singular values of the Jacobian matrix. Since the Jacobian matrix precisely encodes how sensitive the outputs are with respect to the inputs, the log-singular value spectrum of this matrix directly captures the amount of variance in the data explained by the latent space. Said differently, we can read off the number of active units in the DLGM or VAE by counting the number of log-singular values larger than zero. Furthermore, this method of introspection depends only on the parameters of the generative model. In Figures 2d, 2e, we see that for larger models, continuing to optimize the variational parameters allows us to learn models that use many more of the available latent dimensions.
This suggests that, when fit to text data, DLGMs may be particularly susceptible to the overpruning phenomenon noted by Burda et al. (2015). In Figure 2, the lower held-out perplexity and the increased utilization of the latent space suggest that the continued optimization of the variational parameters yields more powerful generative models.

We investigate how the Jacobian matrix may be used for model introspection by studying the qualitative properties of J^log_mean on DLGMs (of type "3-M100-tfidf") trained on two diverse sets of data. We form a Monte Carlo estimate of J^log_mean using 400 samples. The cosine distance is used to define neighbors of words in the embedding space of the Jacobian, and spectral clustering (Von Luxburg, 2007) is used to form clusters.

Table 1: Test Perplexity. Left: baseline results on the 20newsgroups and RCV1-v2 datasets. Legend: LDA (Blei et al., 2003), Replicated Softmax (RSM) (Hinton & Salakhutdinov, 2009), Sigmoid Belief Networks (SBN) and Deep Autoregressive Networks (DARN) (Mnih & Gregor, 2014), Neural Variational Document Model (NVDM) (Miao et al., 2016). K denotes the latent dimension in our notation. Right: DLGMs on text data with K = 100. We vary the features presented to the inference network q(z|x) during learning between normalized count vectors (x / Σ_v x_v, denoted "norm") and normalized tf-idf (denoted "tf-idf") features.

Model | K   | 20News | RCV1-v2
LDA   | 50  | 1091   | 1437
LDA   | 200 | 1058   | 1142
RSM   | 50  | 953    | 988
SBN   | 50  | 909    | 784
fDARN | 50  | 917    | 724
fDARN | 200 | —      | 598
NVDM  | 50  | 836    | 563
NVDM  | 200 | 852    | 550

DLGM         | 20News      | RCV1-v2
             | M1   | M100 | M1  | M100
1-M1-norm    | 964  | 816  | 498 | 479
1-M100-norm  | 1182 | 831  | 485 | 453
3-M1-norm    | 1040 | 866  | 408 | 360
3-M100-norm  | 1341 | 894  | 378 | 329
1-M1-tfidf   | 895  | 785  | 475 | 453
1-M100-tfidf | 917  | 792  | 480 | 451
3-M1-tfidf   | 1027 | 852  | 391 | 346
3-M100-tfidf | 1029 | 833  | 377 | 327

[Figure 2 panels: (a) RCV2 and (b) Wikipedia, upper bound on held-out perplexity vs. epochs; (c) decrease in perplexity (P_M1 − P_M100)/P_M1 vs. number of features; (d) RCV2 and (e) Wikipedia, sorted log-singular values of J^log_mean.]

Figure 2: Mechanics of Learning: Validation Perplexity and Log-singular Values of J^log_mean. Best viewed in color. For the RCV2 and Wikipedia (large) datasets, we visualize the validation perplexity as a function of epochs. The solid lines indicate the validation perplexity for M = 1 and the dotted lines indicate M = 100. The x-axis is not directly comparable on running times since larger values of M take longer during training. We find that learning with M = 100 takes approximately 15 times as long per mini-batch of size 500 on the text datasets. Figure 2c compares relative differences in the final held-out perplexity, denoted P, between models trained using M = 1 and M = 100. On the x-axis, we vary the number of features used in the dataset. Figures 2d, 2e depict the sorted log singular values of J^log_mean.

In Table 2a, we visualize some of the nearest neighbors of words using J^log_mean obtained from models trained on the Wikipedia dataset. The neighbors are semantically sensible. Instead of evaluating the Jacobian at L points z_{1:L} ∼ p(z), one may instead evaluate it at z_{1:L} ∼ q(z|x) for some x.
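A sketch of this posterior-conditioned variant, reusing jacobian_log from the sketch above and sampling from q(z|x) = N(μ(x), diag(exp(logvar(x)))) rather than the prior (our simplification):

```python
import numpy as np  # jacobian_log as defined in the previous sketch

def contextual_jacobian(mu, logvar, W, b, L=400, seed=0):
    """Evaluate J^log at z_{1:L} ~ q(z|x) instead of p(z), yielding
    context-specific embedding vectors for the document that produced
    (mu, logvar) via the inference network."""
    rng = np.random.default_rng(seed)
    zs = mu + np.exp(0.5 * logvar) * rng.standard_normal((L, mu.size))
    return np.mean([jacobian_log(z, W, b) for z in zs], axis=0)
```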
In Table 2b, we select three polysemous query words alongside "context words" that disambiguate the query's meaning. For each word-context pair, we create a document comprising a subset of words in the context's Wikipedia page. Then, we use the learned inference network to perform posterior inference to evaluate J^log_mean at the corresponding q(z|x). This yields a set of contextual Jacobian vectors. We display the nearest neighbors for each word under different contextual Jacobian vectors and find that, while not always perfect, they capture different contextually relevant semantics.

Table 2: Qualitative evaluation of Jacobian Vectors. In Tables 2a and 2b, we evaluate the embeddings of words. In Tables 8b and 2c, we evaluate embeddings of medical diagnosis codes.

(a) Word Embeddings (Nearest Neighbors): We visualize nearest neighbors of word embeddings. We exclude plurals of the query and other words in the neighborhood.

Query        | Neighborhood
intelligence | espionage, secrecy, interrogation, counterterrorism
zen          | dharma, buddhism, buddhas, meditation, yoga
artificial   | artificially, molecules, synthetic, soluble
military     | civilian, armys, commanders, infantry

(b) Word Embeddings (Polysemy): We visualize the nearest neighbors under the Jacobian vector induced by the posterior distribution of a document created based on the context word.

Word  | Context      | Neighboring Words
crane | construction | lifting, usaaf, spanned, crushed, lift
crane | bird         | erected, parkland, locally, farmland, causeway
bank  | river        | watershed, footpath, confluence, drains, tributary
bank  | money        | banking, government, bankers, comptroller, fiscal
fires | burn         | ignition, combustion, engines, fuel, engine
fires | layoff       | thunderstorm, grassy, surrounded, walkway, burning

(c) Medical Embeddings (Nearest Neighbors): We evaluate nearest neighbors of selected diagnosis, drug and procedure codes (ignoring duplicates and shortening some code names). Metformin, Glimepiride, Pioglitazone and Avandia are diabetic drugs. A contour meter is an instrument to track blood glucose. Advair, Albuterol, Proventil and Spiriva are prescribed to patients with chronic obstructive pulmonary disease (COPD).

Code                     | Neighboring Codes
Metformin                | Glimepiride, Avandia, Contour Meter, Pioglitazone
Spiriva (Bronchodilator) | Advair, Albuterol, Proventil
Asbestosis               | Exposure To Asbestos, Coal Workers' Pneumoconiosis, Ct Scan Chest

Table 3: Semantic Similarity in Words. The baseline results are taken from (Huang et al., 2012). C&W uses embeddings from the language model of (Collobert & Weston, 2008). Glove corresponds to embeddings by (Pennington et al., 2014). The learning algorithm for our embeddings does not use local context.

(a) WordSim353 ("G" denotes the model in Huang et al. learned only with global context in the document):

Models               | Spearman ρ × 100
Huang (G)            | 22.8
Huang                | 71.3
Glove                | 75.9
C&W                  | 55.3
ESA                  | 75
Tiered Pruned tf-idf | 76.9
1-M1 J^prob_mean     | 69.7
3-M100 J^prob_mean   | 59.6

(b) SCWS ((S) denotes a single-prototype approach; (M) denotes a multi-prototype approach that leverages context):

Models             | Spearman ρ × 100
Huang (S)          | 58.6
Huang (M)          | 65.7
C&W                | 57
tf-idf-S           | 26.3
Pruned tf-idf-S    | 62.5
Pruned tf-idf-M    | 60.5
1-M1 J^prob_mean   | 61.7
3-M100 J^prob_mean | 59.5
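The neighbor lookups behind Tables 2 and 3 reduce to cosine similarity between rows of the estimated Jacobian matrix; a minimal sketch (ours, not the paper's code):

```python
import numpy as np

def nearest_neighbors(J, query_idx, k=5):
    """Return the k cosine-similarity neighbors of word `query_idx`
    among the rows (Jacobian vectors) of J."""
    U = J / np.linalg.norm(J, axis=1, keepdims=True)  # unit-normalize rows
    sims = U @ U[query_idx]                           # cosine similarities
    order = np.argsort(-sims)
    return [i for i in order if i != query_idx][:k]
```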
The take-away here is that by combining posterior inference in this Bayesian network with our methodology of introspecting the model, one obtains different context-specific representations for the observations despite not having been trained to capture this explicitly.

In Table 8b (appendix), we visualize clusters formed from the embeddings of medical diagnosis codes and find that they exhibit topical coherence. In Table 2c, the nearest neighbors of drugs include other drugs prescribed in conjunction with or as a replacement for the query drug. For diagnosis codes such as "Asbestosis", the nearest neighbors are symptoms and procedures associated with the disease. Finally, for a qualitative evaluation of Jacobian vectors obtained from a model trained on movie ratings, we refer the reader to the appendix.

The Semantics of Embeddings: We evaluate the vector space representations that we obtain from J_mean on benchmarks (such as WordSim353 (Finkelstein et al., 2001) and SCWS (Huang et al., 2012)) that attempt to measure the similarity of words. The algorithmically derived measure of similarity is compared to a human-annotated score (between one and ten) using the Spearman rank correlation. The models that we compare to primarily use local context, which yields a more precise signal about the meanings of particular words. Closest to us in terms of training procedure is Huang (G) in Table 3a, whose model we outperform. Finding ways to incorporate local context is fertile ground for future work on models tailor-made for extracting embeddings.

For medical codes, we follow the method in (Choi et al., 2016). The authors build two kinds of evaluations to estimate whether an embedding space of medical diagnosis codes captures medically related concepts well. MRM NDF-RT (Medical Relatedness Measure under NDF-RT) leverages a database (NDF-RT) to evaluate how good an embedding space is at answering analogical queries between drugs and diseases, such as u_Diabetes − u_Metformin ≈ u_LungCancer − u_Tarceva. (Metformin is a diabetic drug and Tarceva is used in the treatment of lung cancer.) The second evaluation (MRM CCS) measures whether the neighborhood of a diagnosis code is medically coherent, using a predefined medical ontology (CCS) as ground truth. The number computed may be thought of as a measure of precision, where a higher number is better. We refer the reader to the appendix for additional details.

Table 4: Medical Relatedness Measure. Evaluating the quality of embeddings using medical (NDF-RT and CCS) ontologies. Each column corresponds to a measure of how well the embedding space is amenable to performing analogical reasoning (NDF-RT) or clusters meaningfully (CCS). A higher number is better. SCUI corresponds to the application of the method developed by (Choi et al., 2016) on data released by (Finlayson et al., 2014). The learning algorithm for our embeddings does not use local context.

Models                 | MRM NDF-RT (May Treat) | MRM NDF-RT (May Prevent) | MRM CCS (Fine Grained) | MRM CCS (Coarse Grained)
(De Vine et al., 2014) | 53.21 | 57.14 | 22.63 | 24.56
(Choi et al., 2016)    | 59.40 | 55.71 | 44.80 | 47.43
SCUI                   | 52.75 | 48.57 | 34.16 | 37.31
1-M1 J^pot_mean        | 59.63 | 32.86 | 31.58 | 33.88
3-M100 J^pot_mean      | 60.32 | 38.57 | 37.77 | 40.87

Table 4 details the results of evaluating the medical embeddings. Once again, the baselines we compare to (Choi et al., 2016) are variants of Word2Vec that maximize the likelihood of the diagnosis codes conditioned on carefully crafted contexts.
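Schematically, the NDF-RT analogy query can be rendered as follows. This is our simplified ranking, assuming embeddings are stored in a dict E mapping code name to vector; the full MRM scoring protocol has additional details described in the appendix.

```python
import numpy as np

def analogy_neighbors(E, a, b, d, k=5):
    """Rank candidates c by cosine similarity to u_a - u_b + u_d, i.e. the
    analogical query u_a - u_b ≈ u_c - u_d (e.g. Diabetes - Metformin ≈
    LungCancer - Tarceva)."""
    q = E[a] - E[b] + E[d]
    q = q / np.linalg.norm(q)
    scores = {c: float(np.dot(E[c], q) / np.linalg.norm(E[c])) for c in E}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [c for c in ranked if c not in (a, b, d)][:k]
```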
Our method performs comparably to the baselines, even though it relies exclusively on global context and was not designed with this task in mind. This setting depicts an instance where Jacobian vectors resulting from a deeper, better-trained model outperform those from a shallow model, highlighting the importance of a method of interpretation agnostic to the structure of the conditional probability functions in the generative model.

Between the three choices of Jacobian vectors, we found that all three perform comparably on the word similarity evaluation, with J^prob_mean slightly outperforming the others. On the medical data, we found similar results aside from a few cases where J^prob_mean did not perform well. For deeper models, we found that optimizing ψ(x) improved the quality of the obtained Jacobian vectors on text and medical data. The full versions of Tables 3 and 4 can be found in the appendix.

6 DISCUSSION
We explored techniques to improve inference and learning in deep generative models of sparse non-negative data. We also developed and explored a novel, simple, yet effective method to interpret the structure of the non-linear generative model via embeddings obtained from the Jacobian matrix relating latent variables to observations. The embeddings were evaluated qualitatively and quantitatively, and were seen to exhibit interesting semantic structure across a variety of domains. Studying the effects of varying the priors on the latent variables, conditioning on context, and varying the neural architectures that parameterize the conditional distributions suggests avenues for blending ideas from generative modeling and Bayesian inference into building more powerful embeddings for data.
H1Dc01zNe
Decent paper, but lacking novelty
5: Marginally below acceptance threshold
This paper introduces three tricks for training deep latent variable models on sparse discrete data: 1) tf-idf weighting; 2) iteratively optimizing variational parameters after initializing them with an inference network; 3) a technique for improving the interpretability of the deep model. The first idea is sensible but rather trivial as a contribution. The second idea is also sensible, but is conceptually not novel. What is new is the finding that it works well for the dataset used in this paper. The third idea is interesting, and seems to give qualitatively reasonable results. The quantitative semantic similarity results don't seem that convincing, but I am not very familiar with the relevant literature and therefore cannot make a confident judgement on this issue.
3: The reviewer is fairly confident that the evaluation is correct
BJyy3a0Ez
ICLR.cc/2018/Conference
2018
Continuous Propagation: Layer-Parallel Training
["Michael James", "Devansh Arpit", "Herman Sahota", "Ilya Sharapov"]
Continuous propagation is a parallel technique for training deep neural networks with batch size one at full utilization of a multiprocessor system. It enables spatially distributed computations on emerging deep learning hardware accelerators that do not impose programming limitations of contemporary GPUs. The algorithm achieves model parallelism along the depth of a deep network. The method is based on the continuous representation of the optimization process and enables sustained gradient generation during all phases of computation. We demonstrate that in addition to its increased concurrency, continuous propagation improves the convergence rate of state of the art methods while matching their accuracy.
["Deep Learning", "Model parallelism", "Learning theory"]
ABSTRACT
Continuous propagation is a parallel technique for training deep neural networks with batch size one at full utilization of a multiprocessor system. It enables spatially distributed computations on emerging deep learning hardware accelerators that do not impose programming limitations of contemporary GPUs. The algorithm achieves model parallelism along the depth of a deep network. The method is based on the continuous representation of the optimization process and enables sustained gradient generation during all phases of computation. We demonstrate that in addition to its increased concurrency, continuous propagation improves the convergence rate of state of the art methods while matching their accuracy.

1 INTRODUCTION
Stochastic gradient descent (SGD) with back-propagation has become a ubiquitous algorithm for training deep neural networks. Learning via gradient descent is inherently sequential because each update in parameter space requires a gradient measurement at the new point in parameter space:

θ_t = θ_{t−1} − η ∂L/∂θ_{t−1}.   (1)

One technique for parallelization is gradient averaging over a mini-batch, but it does nothing to speed up the rate of sequential parameter updates. As mini-batch size increases, gradient noise is reduced. Beyond a point this causes poor generalization (Sec. 2 and Sec. 3.1). While deep neural networks have many layers, it is not possible to process them in parallel because information passes sequentially through the layers of a network.

To overcome these limitations we propose continuous propagation, an approach that allows parameters in all layers to update simultaneously while samples propagate through network layers.

Information flows and concurrency in training algorithms can be visualized with pipeline diagrams. Figure 1(a) shows that gradient-descent algorithms require a full forward and backward pass through the network to compute a gradient estimate. Use of differential equations replaces sequential dependency with a continuous model that has sustained gradient generation, where all layers compute gradients at all times rather than having modal forward and backward phases (Fig. 1(c), Sec. 4). The main advantage of this approach is that it enables layer parallelism by allowing each layer to learn concurrently with others without explicit synchronization. As a result, parallelization along the depth of a network allows more computing resources to be applied to training. This computational framework differs from a GPU-optimized implementation, where computations, even though performed in parallel, happen sequentially in terms of layer utilization.

While the minibatch approach waits for a group of gradient measurements to be completed, in continuous propagation the gradient estimate is used as soon as it becomes available, leading to statistically superior behavior. This incremental contribution of each observation is matched by a continuous representation of the problem that allows us to formalize this approach and enables convergence analysis.

The theoretical foundation of this technique relies on a continuous differential representation of the learning process (Sec. 3.2). It is based on the observation that the time iteration equation (1) of the gradient-descent learning algorithm can be viewed as a numerical integration approximation of the differential system

dθ/dt = −η ∂L/∂θ.   (2)
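A small self-contained sketch (ours, not the paper's code) makes the correspondence explicit: integrating (2) with the explicit Euler method and unit time step reproduces the discrete update (1) exactly.

```python
import numpy as np

def euler_descent(theta, grad_L, eta=0.1, steps=100, dt=1.0):
    """Explicit Euler integration of d(theta)/dt = -eta * dL/dtheta.
    With dt = 1 each step is exactly Eq. (1):
    theta_t = theta_{t-1} - eta * dL/dtheta_{t-1}."""
    for _ in range(steps):
        theta = theta - dt * eta * grad_L(theta)
    return theta

# Toy check on L(theta) = 0.5 * ||theta||^2, whose gradient is theta itself;
# the iterate contracts toward the minimum at the origin.
print(euler_descent(np.ones(3), lambda th: th))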
[Figure 1: four pipeline diagrams, (a)-(d), plotting layer against time; hidden activations h and deltas δ trace through the network, and panel (b) annotates N (minibatch size), U (update interval), and synchronization overhead.]

(a) SGD requires a full forward and backward pass before updating parameters. (b) MBGD processes many inputs with the same weights and has coordinated quiet times to synchronize parameter updates. (c) CPGD maintains a network in flux. Hidden representations and deltas enter every layer at every time step, and parameters update at every time step. This is a coordinated synchronous operation. (d) Reverse checkpoint reduces memory (seen as reduced area covered by vertical lines passing saved hidden representations forward in time) and reduces time disparity between calculated hidden representations and their corresponding deltas.

Figure 1: Pipeline diagrams for forward and backward propagation.

Theoretical results include a formal convergence proof of the method (Sec. 4.1, Appendix). Experiments with SVHN, CIFAR-10, and CIFAR-100 image classification sets show that in addition to increased concurrency compared to state of the art algorithms, continuous propagation offers comparable accuracy and improved convergence rate expressed in epochs of training (Sec. 5).

2 RELATED WORK
Optimization in the presence of stochastic variables was introduced in the preeminent work by Robbins & Monro (1951). Robbins' stochastic approximation finds roots of the expected value of an unknown stochastic function. Robbins identified efficiency as the convergence rate of the sequence of stochastic observations.

Stochastic gradient descent (SGD) extends the original gradient-descent (GD) method with a process of stochastic observations of a Lyapunov (cost) function and its gradient. A principal advantage of SGD is that the effort used to obtain a gradient is fixed and independent of the size of the input domain. This independence also allows extension to infinite training sets. In exchange, notions of convergence also have to be made probabilistic (Bottou, 1998).

Mini-batch gradient descent (MBGD) was first introduced as a hybrid approach between SGD and GD (Møller, 1993) in order to enjoy both the speed advantages of SGD and the convergence guarantees of GD. More recently, mini-batches are important in amortizing the cost of context switching in GPU-accelerated computation. When the state associated with model parameters (weights, gradients, momentum, etc.) is too large to fit in cache memory, time efficiency is gained by reusing the parameters that can fit in cache across a mini-batch of activations.

Asynchronous SGD (ASGD) is a parallel multi-GPU algorithm for mini-batch learning (Zhang et al., 2013; Dean et al., 2012). Its primary gain is in operational efficiency in dealing with a cluster of machines. ASGD eliminates cluster synchronization at the end of every gradient computation, which accommodates machine faults, but also causes some parameter differences between worker nodes.
Synthetic Gradients (Czarnecki et al., 2017) is another approach to training a deep network asynchronously, by modeling the error gradients at each layer independently.

Thematic in both MBGD and ASGD is the recovery of otherwise idle computational resources to perform additional gradient measurements without a parameter update. While the problem of recruiting many machines to assist in gradient computation is solved by relying on increasingly large batches in the SGD algorithm, it is known that mini-batch training is an inefficient use of this computing power (Wilson & Martinez, 2003; Keskar et al., 2016).

Recent research has explored fundamental changes to GD. Difference target propagation (Lee et al., 2015) eliminates back-propagation entirely by learning bidirectional inverse mappings between adjacent layers. Feedback alignment (Lillicrap et al., 2014), in contrast, uses fixed random matrices for the backprop phase. It depends on feedback to maintain parameters as approximate pseudo-inverses of the fixed random matrices.

Current research is not about computing more-accurate gradients but about being able to scale to larger models. For example, Shazeer et al. (2017) set a goal of training a network of one trillion parameters. Such large models make it even more important to develop efficient parallelization methods.

3 THEORY
3.1 ADVANTAGES OF MODEL PARALLELISM
Parallelizing a computation involves spreading its evaluation over separate computational units operating simultaneously. For deep neural networks, there are two principal ways this can be imagined. In a model-parallel regime, separate workers simultaneously evaluate the same network input using distinct model parameters. Conversely, in a data-parallel regime, separate workers simultaneously evaluate distinct network inputs using the same formal model parameters. Current scaling efforts use fine-grain model parallelism in block matrix-vector multiplication and coarse-grain data parallelism across layers and among workers in a cluster.

Parameter values used in parallel cannot have sequential dependencies. Therefore, coordination is necessary among workers responsible for the same parameters. Strict synchronization ensures identical values of corresponding parameters; loose synchronization allows some discrepancy.

While there are distinct ways to implement data parallelism, they all share the attribute that multiple gradients are evaluated at points independent of each other's outcomes. That is, the point of evaluation of a gradient is not able to benefit from learning based on the gradients that are evaluated in parallel. Since mini-batches capture this attribute well, we use increasing mini-batch sizes as a proxy for all forms of increasing data parallelism.[1]

In addition to discovering sharp minima in parameter space that lead to bad generalization, increasing mini-batch size is ineffective for scaling for three other reasons.

First, given that all gradient measurements provide independent estimates of the true gradient,

\left. \frac{\partial L}{\partial \theta} \right|_{x \sim X} = E_{x \sim X}\!\left[ \frac{\partial L}{\partial \theta} \right] + \mathcal{N}(0, \sigma)    (3)

having n samples increases the accuracy of our gradient estimate to

\frac{1}{n} \sum_{i=1}^{n} \left. \frac{\partial L}{\partial \theta} \right|_{x_i \sim X} = E_{x \sim X}\!\left[ \frac{\partial L}{\partial \theta} \right] + \mathcal{N}(0, \sigma/\sqrt{n})    (4)

[Footnote 1: Specially designed schemes may be able to exploit additional information from data parallelism, such as gradient variance or cost curvature. We specifically preclude second-order methods from consideration.]
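The \sigma/\sqrt{n} scaling in Equations (3)-(4) can be checked empirically. The toy sketch below is our own illustration (the zero true gradient and unit noise scale are assumptions): it averages n independent noisy gradient samples and compares the empirical standard deviation with \sigma/\sqrt{n}.

```python
import numpy as np

# Toy check of Equations (3)-(4): averaging n independent gradient
# samples shrinks the noise std from sigma to sigma/sqrt(n). The true
# gradient is taken to be 0 and per-sample noise is unit Gaussian.

rng = np.random.default_rng(0)
sigma, trials = 1.0, 100_000

for n in (1, 4, 16, 64):
    # each row is one mini-batch of n noisy gradient samples
    g = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
    print(f"n={n:3d}  empirical std={g.std():.3f}  "
          f"sigma/sqrt(n)={sigma / np.sqrt(n):.3f}")
```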
[Figure 2: Feed-forward neural network.]

Table 1: Equations for deep network nodes
Node    Type        Function
A       Forward     h^{(l)} = \Theta^{(l)} h^{(l-1)}
A^\top  Backward    \delta^{(l)} = \Theta^{(l)\top} \delta^{(l+1)}
        Update      \Theta^{(l)} = \Theta_0 - \eta \int_0^t h^{(l-1)} \otimes \delta^{(l)} \, dt
\phi    Activation  h^{(l)} = \phi(h^{(l-1)})
\phi'   Tangent     \delta^{(l)} = \phi'(h^{(l-1)}) \odot \delta^{(l+1)}
X       Input*      x = X_{\lfloor t \rfloor}
Y       Label       y(x)
L       Loss        L(h^{(D)}, y)
* \{X_k \sim X\} is a sequence of random observations of X; x(t) is piecewise constant.

For the same computational effort we could have taken n steps, each with a step size equal to (\eta/\sqrt{n}) \, \partial L/\partial\theta. Since the batch step stayed within a neighborhood of \theta well approximated by its first-order Taylor expansion, each of the n steps of the SGD algorithm stays within this same neighborhood. However, we have now proceeded through a total distance of (n\eta/\sqrt{n}) \, \partial L/\partial\theta, so we are more efficient by approximately \sqrt{n} (Goodfellow et al., 2016, Section 8.1.3).

Second, our objective function L is nonlinear. Given this, the accuracy of the first-order gradient provides limited utility, owing to the high curvature of the loss surface. Beyond this, the ability to use a faster learning rate by employing a larger batch size is fruitless because it is bound to be inaccurate, and an update in parameter space is necessary to assess a gradient at a new location.

Finally, much of the computational efficiency of SGD comes from the noise realized by replacing E_{x \sim X} with a sample estimate. Computing a larger sample estimate undoes this efficiency. The sampling noise causes model regularization. Beyond a certain point, the gain in accuracy exceeds the desired regularization, and synthetic noise must be added back to the computation to achieve the desired results.

3.2 FORMULATION AS DIFFERENTIAL EQUATIONS
Deep neural networks are high-dimensional parameterized functions f(x; \theta), which are often expressed as directed acyclic graphs. The back-propagation algorithm, however, is best expressed by a cyclic graph (Fig. 2). The cycle in the graph is a feedback iteration: the gradients produced by the first full network evaluation change the weights used in the next iteration. This is because the iterative algorithms are discrete approximations of a continuous differential system:

\dot{\theta} = -E_{x \sim X}\!\left[ \frac{\partial L(f(x;\theta), y(x))}{\partial \theta} \right] + G(\rho(t))    (5)

where G is an unbiased continuous-noise process with time-varying statistics \rho. G provides regularization to allow the continuous system to model phenomena observed in discrete-time learning systems. In the discrete case, regularization is provided by the sampling procedure (SGD), by the learning rate, and by other explicit mechanisms. We choose a time-dependent G because using a learning-rate schedule has practical importance in training deep models. Specifically, more regularization erases local high-frequency contours in parameter space. As the correct region is approached, regularization is reduced, leading to a better final solution.

Algorithm 1 Continuous Propagation
Input: X, \Theta_0, L, y.  Output: \Theta_\infty.  {Array access is modulo array size.}
State: h[D][D] (hidden activation storage); \delta[D][2] (delta storage); \Theta[D] (parameters, initialized to \Theta_0)
for all t \in \mathbb{N} do
    h[0][t] \leftarrow x \sim X
    \delta[D-1][t] \leftarrow \nabla L(h[D][t], y(h[0][t-D]))
    for all layers k in parallel do
        h[k][t] \leftarrow \phi(\Theta[k] \, h[k-1][t-1])
        \delta[k][t] \leftarrow \phi'(h[k][t+k]) \odot (\Theta[k]^\top \delta[k+1][t-1])
        \Theta[k] \leftarrow \Theta[k] - \eta \, (h[k][t+k-D] \otimes \delta[k][t])
    end for
end for
(A runnable toy simulation in the same spirit is sketched below.)

[Figure 3: Each processor implements layer dynamics. Parameter values and states (e.g., momentum) reside locally in the node. Activations and deltas stream through, modified by the local parameters.]
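The following is a deliberately tiny simulation of the pipelined, layer-parallel updates in the spirit of Algorithm 1. It is our own reconstruction, not the paper's reference code: \phi is the identity, all signals are scalars, and the task y = 2x is an assumed toy problem. Every layer reads only the registers written on the previous tick, so all layers could run concurrently, and each layer stashes its input activation until the matching delta returns, which plays the role of the h[k][t+k-D] indexing.

```python
import numpy as np

# Toy pipelined layer-parallel training (our own reconstruction).
# One tick = one unit of pipeline latency. A fresh sample enters every
# tick; its delta returns 2D - 1 ticks later, and every layer updates
# on every tick with time-skewed signals (sustained gradient generation).

rng = np.random.default_rng(0)
D, eta, T = 3, 0.02, 5_000
W = np.ones(D)                      # one scalar weight per layer

h_reg = [None] * D                  # h_reg[k]: (sample id, output of layer k) last tick
d_reg = [None] * D                  # d_reg[k]: (sample id, dL/d(output of layer k))
stash = [dict() for _ in range(D)]  # per-layer input activations in flight
labels = {}

for t in range(T):
    x = rng.normal()
    labels[t] = 2.0 * x
    new_h, new_d = [None] * D, [None] * D

    # forward wavefront: every layer consumes last tick's upstream output
    stash[0][t] = x
    new_h[0] = (t, W[0] * x)
    for k in range(1, D):
        if h_reg[k - 1] is not None:
            sid, v = h_reg[k - 1]
            stash[k][sid] = v
            new_h[k] = (sid, W[k] * v)

    # loss taps the network output that emerged last tick
    if h_reg[D - 1] is not None:
        sid, y_hat = h_reg[D - 1]
        new_d[D - 1] = (sid, y_hat - labels.pop(sid))

    # backward wavefront plus simultaneous local weight updates
    for k in range(D - 1, -1, -1):
        if d_reg[k] is not None:
            sid, d = d_reg[k]                    # dL/d(output of layer k)
            if k > 0:
                new_d[k - 1] = (sid, W[k] * d)   # immediate-delta rule
            W[k] -= eta * stash[k].pop(sid) * d  # paired with the SAME sample

    h_reg, d_reg = new_h, new_d

print("product of layer weights:", np.prod(W))   # should settle near 2.0
```

Despite every layer updating on every tick with time-skewed signals, the product of the layer weights settles near the target slope of the toy task.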
Algorithm 2 Local Learning Rules for MBGD
r = d^{D-l}\!\left[ h^{(l-1)} \otimes \delta^{(l)} \right]
G_t = G_{t-1} + r  if  (t - 2D + l) < N \bmod U;  G_t = 0  otherwise
\Theta_t = \Theta_{t-1} - (\eta/n)\,G_t  if  (t - 2D + l) = N \bmod U;  \Theta_t = \Theta_{t-1}  otherwise

4 CONTINUOUS PROPAGATION
Continuous propagation is derived by considering an arbitrary feed-forward neural network (Fig. 2). Expressing all nodes as functions of time (Table 1), and applying function composition,[2] gives

h^{(k)} = \left( \prod_{i=0}^{k} \phi \circ \Theta^{(i)} \right) x    (6)
\delta^{(k)} = \left( \prod_{i=k}^{D} \phi'(h^{(i)}) \, \Theta^{(i)\top} \right) \nabla L(h^{(D)}, y)    (7)
\dot{\Theta}^{(k)} = -\eta \left( h^{(k)} \otimes \delta^{(k)} \right)    (8)

This factorization shows individual network layers as systems with their own local dynamics (Eq. 8; Fig. 3). Internal state evolves according to the stimuli h and \delta to which it is subjected. The framework is general, and modifications yield analogs of other learning rules. For example, gradient descent with momentum corresponds to[3]

r^{(k)}_{mom}(t) \triangleq h^{(k)} \otimes \delta^{(k)}    (9)
\Delta^{(k)}_{mom}(t) \triangleq \int_0^{\infty} e^{-\beta\tau} \, r^{(k)}_{mom}(t-\tau) \, d\tau    (10)
\dot{\Theta}^{(k)}_{mom}(t) \triangleq -\eta \, \Delta^{(k)}_{mom}(t)    (11)

(a discrete sketch of this filter is given below).

The system we are characterizing has two natural dimensions: network depth and time evolution of parameters. These are depicted in the pipeline drawings of Figure 1. Any implementation that seeks acceleration by mapping network layers to computational units separated in space must also accept latency communicating between these layers. Therefore, we introduce a time delay[4] between layers:

h^{(k)} = \left( \prod_{i=0}^{k} d^{1}\!\left[ \phi \circ \Theta^{(i)} \right] \right) x    (12)
\delta^{(k)} = \left( \prod_{i=k}^{D} \phi'\!\left( d^{D-k}[h^{(i)}] \right) \Theta^{(i)\top} \right) \nabla L(h^{(D)}, y)    (13)
\dot{\Theta}^{(k)} = -\eta \left( d^{D-k}[h^{(k)}] \otimes \delta^{(k)} \right)    (14)

The continuous-propagation algorithm (Alg. 1; Fig. 1(c)) is the synchronous implementation of these time-delay equations.

Notice that an input vector and its hidden representations experience model parameters at different time steps as the vector travels forward in the network. This difference cannot be detected by the vector itself on the way forward. It is as if we chose a fixed set of parameters from successive time steps and applied learning to the aggregate parameter state that results.

Naturally, when the delta values return through the network on the backward pass, we have the choice either to use the immediate model parameters as they now stand or else to retrieve the historical version of the model parameters,[5] anchored to when the layer was used for the corresponding forward-pass activity. Deltas computed from the immediate network parameters use updated information corresponding to the current parameter slope. We might expect them to improve gradient descent:

\frac{\partial L}{\partial \Theta^{(k)}} = \frac{\partial h^{(k)}}{\partial \Theta^{(k)}} \, \frac{\partial L}{\partial h^{(k)}} = h^{(k-1)} \otimes \delta^{(k)}    (15)

Our choice of \delta^{(k)} in this vector product can point either in the current downhill direction or in a historical direction. We are better off with the current direction so long as h^{(k-1)} is uncorrelated in expectation. In practice we do not see a large difference between these approaches (Sec. 5.2).

[Footnote 2: \prod_{i=0}^{k} f_i \triangleq f_k \circ f_{k-1} \circ \cdots \circ f_0.]
[Footnote 3: \Delta^{(k)}_{mom} is an IIR filter operating on r_{mom}.]
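To make Equations (9)-(11) concrete: the exponential kernel in Eq. (10) is a one-pole IIR filter, so its discrete-time form is the familiar momentum buffer. The helper below is our own discretization sketch (the dense-layer shapes and the identification \beta_{discrete} = e^{-\beta \Delta t} are assumptions), not code from the paper.

```python
import numpy as np

# Discrete sketch of Eqs. (9)-(11) for one dense layer:
#   r_t   = h (outer) delta            (Eq. 9)
#   buf_t = beta * buf_{t-1} + r_t     (Eq. 10 discretized; beta = e^{-b*dt})
#   W_t   = W_{t-1} - eta * buf_t      (Eq. 11 discretized)

def momentum_step(W, buf, h, delta, eta=0.01, beta=0.9):
    r = np.outer(h, delta)      # instantaneous gradient contribution
    buf = beta * buf + r        # one-pole IIR filter over r
    W = W - eta * buf
    return W, buf

# usage on toy shapes
W = np.zeros((4, 3))
buf = np.zeros_like(W)
W, buf = momentum_step(W, buf, h=np.ones(4), delta=np.ones(3))
```

Because the filter state lives next to the weights, this is exactly the kind of local per-layer dynamics that Figure 3 places inside each processor.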
In addition to reverse checkpoint’s normal use in reducingmemory footprint, this also reduces the time disparity in parameters used for computing forward-propagating activations and backward-propagating deltas in the aligning wave fronts.When continuous propagation is implemented as a series of local learning rules for each layer, it iscapable of expressing a variety of traditional algorithms. For example, rules for MBGD are shown inAlgorithm 2 (Fig. 1( b)). Observe that while the computation exactly matches the MBGD algorithm,the expression of synchronization and deferred parameter updates appears somewhat arbitrary.4.1 C ONVERGENCE ANALYSISContinuous representation of the learning process allows us to relate the time delay used in parameterupdates (12-14) to the discrete form of SGD (1). A key observation is that the simulated continuousmodel time tthat elapses during a layer computation is proportional to the learning rate . As thelearning rate schedule reduces , the model becomes asymptotically closer to traditional SGD.Therefore, we can adapt traditional convergence proofs for online descent to continuous propagation.We extend the approach of Lian et al. (2015) to the case where each layer’s parameters get differentdelays of updates. The asymptotic equivalence allows us to bound layer delays and demonstrate thatthe expected norms of gradients for each layer converge.The Appendix presents the formal convergence proof for the continuous-propagation method un-der the weak assumptions that the gradient of the objective function is Lipschitzian and that thestochastic gradient is unbiased with bounded variance.4dkf(t)f(tkt), where tis layer latency in units of model time.5To use anchored parameters, replace Equation 13 with (k)=QDi=k0(dDk(h(i)))dDk((i)|)L(h(D);y).6Under review as a conference paper at ICLR 2018Table 2: Hyper Parameters tested with Continuous PropagationMomentum 1=2half-life in epochs [0;0:10]Normalization - - None, Normalization PropagationArpit et al.(2016)Learning Rate - [0:001;0:05]Learning RateDecay1=2half-life in epochs CIFAR: [12:5;50]SVHN: 1ParameterAveraging1=2half-life in epochs [0;1]Data Set - - SVHN, CIFAR-10, CIFAR-100Delta rule - - Anchored, ImmediateBias - - With bias, No bias allowedInitialization - - Glorot, Glorot + Orthonormal5 E XPERIMENTAL RESULTSWe studied continuous propagation on deep convolutional networks with the network-in-networkarchitecture Lin et al. (2013). We observed successful training in a variety of settings shown in table2. In our experiments we initiate the learning schedule hyperparameters based on the values derivedfor MBGD. We decrease the proportionally to the square root of the batch sizepMB and increasethe momentum to adjust for thepMB factor in the half-life decay.5.1 C OMPATIBLE WITH STATE OF THE ARTMETHODSWe show CP is compatible with state of the art normalization techniques. We incorporate Normal-ization Propagation Arpit et al. (2016) which is an adaptation of the popular Batch NormalizationIoffe & Szegedy (2015) technique that works at batch size 1.We compare validation accuracy on SVHN, CIFAR-10 and CIFAR-100 using continuous propaga-tion with normalization propagation with the results obtained by Arpit et al. (2016) using the samenetwork architecture as the comparison study.Figure 4 shows validation accuracy in these experiments. 
Figure 4 shows validation accuracy in these experiments. In all cases continuous propagation is able to match the validation accuracy in many fewer training epochs.

[Figure 4: Validation-sample accuracy during training. (a) CIFAR-10: CP (\eta = 0.014, half-life 12.5) vs. MB-50 (\eta = 0.050, half-life 25). (b) CIFAR-100: CP (\eta = 0.014) vs. MB-50 (\eta = 0.050). (c) SVHN: CP (\eta = 0.014, half-life 1.03) vs. MB-100 (\eta = 0.080, half-life 5).]

5.2 ANCHORED-DELTA VERSUS IMMEDIATE-DELTA RULES
We train the convolution network under two conditions, following the discussion from Section 4: the anchored-delta rule, which uses the original parameters; and the immediate-delta rule, which uses the evolved parameters. The goal of this substudy is to compare the learning performance of the two methods. The immediate rule allows us to avoid using extra memory to store the model parameter values for multiple input vectors.

To facilitate comparison, we disable stability-inducing features of the network. For instance, we have no bias addition or normalization of the hidden features. We performed a search over the learning hyperparameter \eta with the momentum half-life fixed at \beta_{1/2} = 0.0008.

[Figure 5: Comparison of immediate- and anchored-delta rules. (a) Immediate and anchored rules achieve similar accuracy; the plot shows accuracy at epoch 20 for a sweep of learning rates \eta = 0.6^x/100. (b) Immediate is better than anchored for the first epoch; the plot shows average performance over 100 random initializations.]

We observe a negligible difference in accuracy between these methods after 20 epochs of training (Fig. 5(a)). We noticed a trend that early in training the immediate rule seemed to outperform the anchored rule but became slightly worse during the second epoch. Because this difference was small, we decided to run 200 randomized trials to confirm the existence of this effect (Fig. 5(b)).

6 DISCUSSION
The computing model we have posited is generic and capable of implementing traditional MBGD-style algorithms exactly and efficiently. As in other batch schemes, our framework enjoys the property that for large batch sizes computational utilization asymptotically approaches 100%.

We realized that instead of idly waiting for batch-boundary synchronization, we could also immediately start processing the subsequent batch. This results in a strategy that is similar to ASGD. Namely, the parameter inconsistency within the pipeline is limited to no more than one batch boundary. This is also true in ASGD if the worker nodes are balanced on vector throughput. In that case, workers should never be more than one time step removed from their peers.

In considering the optimal batch size to use, we were led down a path of differential equations and rules for the learning dynamics of each layer. Research in feedback alignment shows the importance of feedback dynamics in learning. To implement local dynamics it is natural for weights to reside in physical proximity to their use.
This is true in the case of biological neurons (weight encoded in synaptic strength), as it is in our model-parallel algorithm.

When considering what an optimal strategy may look like, we realize that we always have the ability to specify that a layer remain idle at a time step in order to create a global synchronization boundary. Likewise, we also have the ability to allow a weight to remain fixed instead of adopting a new value. Why would either of these strategies be ideal? As we allow data to stream through the neural network, each input from the environment and each measurement of the cost function contains some amount of useful information not yet extracted from the environment. The purpose of any learning equation is to allow the network to respond to this information. In this light, both the strategy of "idly waiting" and the strategy of "keeping fixed" are rejections of the utility of this information. See the step-function specification of the MBGD algorithm (Alg. 2). These are indications of suboptimality.

We demonstrated that choosing a lower learning rate dominates using larger batch sizes. Continuous propagation allows statistically efficient learning while keeping all the cores of a multiprocessor system busy. It permits explicit regularization terms such as an L1 penalty on parameters or activations. In gradient descent, explicit regularization works by biasing the gradient in favor of regularization. Regularization's contribution to the gradient is available immediately and does not require traversal through the entire network. Therefore, in CP the parameter update based on the regularization penalty is applied before the corresponding loss gradient is applied.

We note that the hidden-activity and delta arcs in our pipeline diagrams (Fig. 1) can be considered as individual vectors or batch matrices. Interpreting them as batch matrices allows investigating these techniques directly on contemporary GPU hardware. This is also a demonstration that continuous propagation can be combined with data parallelism in case network depth is exhausted.
H1YbqHUBG
Promising idea, but needs more demonstration of practicality
5: Marginally below acceptance threshold
Pros: * asynchronous model-parallel training of deep learning models would potentially help in further scaling of deep learning models * the paper is clearly written and easy to understand Cons: * weak experiments: performance of algorithms is not analyzed in terms of wall-clock time, and important baselines are not compared against, making it difficult to judge the practical usefulness of the proposed algorithm * weak theory: although the algorithm is claimed to be motivated by a continuous-time formulation of gradient descent, neither the convergence proof nor the algorithm design really uses the continuous-time formulation, and the discrete-time formulation seems to suffice; the proof is a straightforward corollary of Lian et al. Summary: This paper proposes to update the parameters of each layer in a deep neural network asynchronously, instead of updating all layers simultaneously and synchronously. Authors derive this algorithm by first formulating gradient descent in continuous-time form and then modifying time dependencies between layers. While asynchronous updates of parameters in stochastic gradient descent have been explored (dating back to [1] in 1986, and authors should also be referring to [2]), to my knowledge the application of these ideas to layer-by-layer model parallelism for deep neural networks has not been studied. Since model-parallel training across machines has not been very successful, and model-parallelism has been only exploited within machines, asynchronous model-parallel optimization is an important topic of research which has the promise of scaling deep learning models beyond the memory capacity of a single machine. Unfortunately, the practical usefulness of the algorithm has not been demonstrated. It remains unanswered whether this algorithm can be implemented efficiently in modern hardware architectures, or in which situations this algorithm will be more useful than existing algorithms. Experiments are all reported in terms of the number of updates (epochs), but this is not useful in judging the practical advantage of the proposed algorithm. What matters in practice is how fast the algorithm is in improving the performance as a function of _wall-clock time_, and I would expect that synchronous algorithms would be much faster than the proposed algorithm in terms of wall-clock time, as they can better exploit optimized tensor arithmetic on CPUs and GPUs. Also, authors should compare against mini-batch gradient descent, because this is the most popular way of training deep neural networks; the authors have the burden of proof that the proposed algorithm is practically more useful than the existing standard method. Authors argue their algorithm is motivated by a continuous-time formulation of stochastic gradient descent, but it is unclear to me whether the continuous-time formulation was really necessary to derive the proposed algorithm. The algorithm operates on a discrete time horizon, and continuous time is not used anywhere. Authors rely mostly on Lian et al. for the convergence proof, which is also based on a discrete time horizon. Authors argue on page 1 that Continuous Propagation is statistically superior to mini-batch gradient descent, but I cannot find statistical superiority of the method. Also, the upper bound of the time-delay T slows down the convergence rate (Proposition in the appendix), so it is unclear whether the asynchronous update is theoretically faster than synchronous mini-batch gradient descent. I think which algorithm is faster depends on the values of L, T and M.
Authors do not provide enough citations. Continuous-time characterization of gradient descent has a long history, and the authors should provide citations for it, for example when (5) is introduced. Authors should provide more discussion of the history of model-parallel asynchronous SGD (such as [1] and [2]), and when mentioning alternatives like Czarnecki et al. (2017), authors should discuss what advantages and disadvantages the proposed algorithm has against these alternatives. [1] Distributed Asynchronous Deterministic and Stochastic Gradient Optimization Algorithms (Tsitsiklis, Bertsekas and Athans, 1986) [2] Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent (Niu et al., 2011)
4: The reviewer is confident but not absolutely certain that the evaluation is correct
SR2L__h9q9p
ICML.cc/2020/Workshop/SAS
2020
Investigating Self-supervised Pre-training for End-to-end Speech Translation
["Ha Nguyen", "Fethi Bougares", "Natalia Tomashenko", "Yannick Est\u00e8ve", "laurent besacier"]
Self-supervised learning from raw speech has been proven beneficial to improve automatic speech recognition (ASR). We investigate here its impact on end-to-end automatic speech translation (AST) performance. We use a contrastive predictive coding (CPC) model pre-trained from unlabeled speech as a feature extractor for a downstream AST task. We show that self-supervised pre-training is particularly efficient in low resource settings and that fine-tuning CPC models on the AST training data further improves performance. Even in higher resource settings, ensembling AST models trained with filter-bank and CPC representations leads to near state-of-the-art models without using any ASR pre-training. This might be particularly beneficial when one needs to develop a system that translates from speech in a language with poorly standardized orthography or even from speech in an unwritten language.
["self-supervised learning from speech", "automatic speech translation", "end-to-end models", "low resource settings."]
Investigating Self-supervised Pre-training for End-to-end Speech Translation
Ha Nguyen (1, 2), Fethi Bougares (3), Natalia Tomashenko (2), Yannick Estève (2), Laurent Besacier (1)

Abstract
Self-supervised learning from raw speech has been proven beneficial to improve automatic speech recognition (ASR). We investigate here its impact on end-to-end automatic speech translation (AST) performance. We use a contrastive predictive coding (CPC) model pre-trained from unlabeled speech as a feature extractor for a downstream AST task. We show that self-supervised pre-training is particularly efficient in low resource settings and that fine-tuning CPC models on the AST training data further improves performance. Even in higher resource settings, ensembling AST models trained with filter-bank and CPC representations leads to near state-of-the-art models without using any ASR pre-training. This might be particularly beneficial when one needs to develop a system that translates from speech in a language with poorly standardized orthography or even from speech in an unwritten language.

(1) LIG - Université Grenoble Alpes, France; (2) LIA - Avignon Université, France; (3) LIUM - Le Mans Université, France. Correspondence to: Ha Nguyen <manh-ha.nguyen@univ-grenoble-alpes.fr>.
Published at the workshop on Self-supervision in Audio and Speech at the 37th International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).

1. Introduction
Self-supervised learning using huge unlabeled data has been explored with very promising results for image processing (Chen et al., 2020) and natural language processing (Devlin et al., 2018). Recent works investigated self-supervised representation learning from speech (Baevski et al., 2019; Kawakami et al., 2020; Chung & Glass, 2019). They were successful in improving performance on downstream tasks such as speech recognition. These recent works suggest that it is possible to reduce dependence on labeled data for building speech systems through acoustic representation learning. We investigate the possibility to leverage unlabeled speech for end-to-end automatic speech translation (AST). We focus on scenarios where (a) recordings in the source language are not transcribed[1] (no ASR pre-training is possible), (b) only a small-to-medium amount of training data (speech aligned to translations) is available, and (c) a larger amount of unlabeled speech can be used. This scenario is typical of situations when one builds a system that translates from speech in a language with poorly standardized orthography or even from an unwritten language.

In summary, our contributions are: (1) we propose an in-depth study on the impact of self-supervised pre-training for AST, (2) we show that fine-tuning pre-trained representations on the AST training data is beneficial and that self-supervised pre-training is particularly efficient in low resource settings, (3) even in high resource settings, ensembling models trained with filter-bank and self-supervised representations leads to near state-of-the-art models without using ASR pre-training, and (4) we analyze the representations learnt and show that they better discriminate phones, better align source and target sequences, and are more robust to speaker variability.

[Footnote 1: Transcription not available or language poorly written.]

2. Related Works
2.1. Self-supervised learning from speech
Self-supervised learning from speech consists in resolving pseudo-tasks not requiring human annotations as a pre-training to the real tasks to solve. These pseudo-tasks target predicting next samples or solving ordering problems.
Autoregressive predictive coding (APC) (Chung et al., 2019; Chung & Glass, 2020) considers the sequential structure of speech and predicts information about a future frame. An easier learning objective is introduced in Contrastive Predictive Coding (CPC), which consists in distinguishing a true future audio frame from negatives (Baevski et al., 2019; Schneider et al., 2019; Kahn et al., 2019). (Chung & Glass, 2019) shows that such representations are useful to improve several speech tasks, while (Kawakami et al., 2020) extends those works by looking at the representations' robustness to domain and language shifts. In the same vein, (Rivière et al., 2020) compares self-supervised and supervised pre-training for ASR and shows that CPC pre-training extracts features that transfer well to other languages, being on par or even outperforming supervised pre-training. Another promising way is to use speech enhancement as a task for feature representation learning (Ravanelli et al., 2020; Engel et al., 2020). Finally, several self-supervised tasks can be jointly tackled to discover better speech representations (Pascual et al., 2019).

2.2. End-to-end Automatic Speech Translation
Previous automatic speech-to-text translation (AST) systems operate in two steps: source language automatic speech recognition (ASR) and source-to-target text machine translation (MT). However, recent works have attempted to build end-to-end AST without using source language transcription during learning or decoding (Bérard et al., 2016; Weiss et al., 2017) or using it at training time only (Bérard et al., 2018). Recently several extensions of these pioneering works were introduced: low resource AST (Bansal et al., 2018), unsupervised AST (Chung et al., 2018), end-to-end speech-to-speech translation (Translatotron) (Jia et al., 2019), multilingual AST (Di Gangi et al., 2019). Improvements of end-to-end AST were also proposed using weakly supervised data (Jia et al., 2018) or adding a second attention mechanism (Sperber et al., 2019). While supervised pre-training for AST was investigated (see for instance (Bérard et al., 2018)), we are aware of a single research group (Chung & Glass, 2019; 2020) that investigated self-supervised pre-training for AST. However, their experiments were done in a high resource setting and AST (for which only marginal gains were displayed) was solely investigated among other tasks, without an in-depth analysis of the representations learnt.

3. Self-supervised Pre-training from Speech
3.1. Contrastive predictive coding model
We use the self-supervised pre-training model introduced in (Schneider et al., 2019) (wav2vec), which is based on contrastive predictive coding. The model uses (1) an encoder network that converts the audio signal in a latent representation (from raw speech samples x into a feature representation z), and (2) a context network that aggregates multiple time steps to build contextualized representations (from a sequence z_{i-v}, ..., z_i into a context vector c_i). Practically, each z_i encodes 30 ms of speech every 10 ms and, as for c_i, the total receptive field of the context network is 210 ms. The full model (encoder+context) is trained end-to-end to distinguish a sample z_{i+k} that is k steps in the future from negative samples z̃ uniformly chosen from the same audio sequence. A contrastive loss is minimized for each step k = 1, ..., K, and the overall loss is summed over different step sizes (more details in (Schneider et al., 2019)).
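The contrastive objective is only summarized in the text above. As a rough, self-contained sketch (not the exact wav2vec implementation; the function name, the per-step linear predictors, and the hyper-parameter values below are our own illustrative choices), one can write the per-step scoring of the true future latent against uniformly sampled negatives as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cpc_contrastive_loss(z, c, step_predictors, num_negatives=10):
    """CPC-style objective: for each step size k, score the true future
    latent z_{i+k} against negatives drawn uniformly from the same sequence.

    z: (T, D) latents from the encoder network
    c: (T, D) context vectors from the context network
    step_predictors: K linear maps, one per step size k = 1..K
    """
    T, _ = z.shape
    total = z.new_zeros(())
    for k, predictor in enumerate(step_predictors, start=1):
        if T <= k:
            break
        pred = predictor(c[: T - k])                        # (T-k, D) predictions of z_{i+k}
        pos = (pred * z[k:]).sum(dim=-1, keepdim=True)      # (T-k, 1) positive scores
        neg_idx = torch.randint(0, T, (T - k, num_negatives))
        neg = torch.einsum("td,tnd->tn", pred, z[neg_idx])  # (T-k, num_negatives)
        logits = torch.cat([pos, neg], dim=1)               # true frame at index 0
        target = logits.new_zeros(T - k, dtype=torch.long)
        total = total + F.cross_entropy(logits, target)     # summed over step sizes
    return total

# Illustrative values only (not necessarily the wav2vec hyper-parameters):
# step_predictors = nn.ModuleList(nn.Linear(512, 512) for _ in range(12))
# loss = cpc_contrastive_loss(z, c, step_predictors)
```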
3.2. Pre-trained models for English
We use an off-the-shelf model provided for English (https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/). It is trained on the Librispeech corpus (Panayotov et al., 2015). We also investigate if fine-tuning the model on our task specific data is beneficial. For this, we fine-tune wav2vec on the full speech corpora used for our AST experiments (see next section). It is important to note that no transcripts nor translations are needed for this step, which requires only raw speech. After fine-tuning wav2vec, we input the representations produced by the context network c_i to the AST encoder instead of filter-bank features (see Figure 1).

4. End-to-end Speech Translation Experiments
4.1. Experimental setup
4.1.1. Data
The How2 corpus (Sanabria et al., 2018) is used for our main experiments. This corpus contains about 297.6 hours of speech, which is transcribed and translated into 3.3 million English words and 3.1 million Portuguese words respectively. (As shown by (Nguyen et al., 2019), How2 is sensitive to the downloading moment; our version was downloaded in July, 2019.) From this version of data, we first filter out too long sentences (sentences longer than 30 seconds or 400 characters). Then, in order to simulate lower resource scenarios, we randomly split the corpus into four sub-corpora of roughly 10%, 20%, 30%, and 60% of the filtered full corpus. Our splits guarantee that smaller partitions are fully included in the bigger ones. The statistics of all the partitions and the filtered version of the full corpus can be found in Table 1.

Table 1. Statistics of different How2 data partitions
Partition | #segments | #hours | #src w | #tgt w
10%  | 17,751  | 28  | 313K  | 295K
20%  | 35,858  | 56  | 626K  | 591K
30%  | 53,698  | 84  | 887K  | 940K
60%  | 107,676 | 169 | 1778K | 1883K
full | 179,438 | 281 | 2963K | 3139K

4.1.2. Speech features and data augmentation
As shown in Figure 1, we extract either wav2vec features or filter-bank+pitch features (later denoted as fbanks) from speech input. (Our preliminary experiments on How2 10% with MFCC features, which lead to similar performance as filter-bank, are not presented here.) Depending on the experiments, mean and variance normalization (MVN) is optionally applied to the generated features. For wav2vec feature extraction, we either use an off-the-shelf model trained on LibriSpeech (Panayotov et al., 2015) or a model fine-tuned on the How2 training set. MVN parameters are estimated on the speech translation training set and then applied to all train/dev/test sets. Overall, we have 4 different self-supervised representations named wav2vec, wav2vec + norm, wav2vec + FT (fine-tuned wav2vec) and wav2vec + FT + norm. All those wav2vec features are of dimension 512. We compare the above representations to conventional filter-bank features. Similar to (Nguyen et al., 2019), we extract 80-dimensional Mel filter-bank features, concatenated with 3-dimensional pitch features, from windows of 25 ms with a frame shift of 10 ms. MVN is used in the same manner as for wav2vec features. This gives us 2 additional speech representations named fbanks and fbanks + norm respectively (their dimension is 83; for the rest of the paper, fbanks will actually mean filter-bank+pitch). Data augmentation through speed perturbation is also applied with factors of 0.9, 1.0, and 1.1 to the training data. Our development set is made of 1,984 sentences randomly excluded from the training set. The How2 val set is used as our test data.
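MVN as described here is just per-dimension standardization with statistics estimated on the AST training set only and then reused on dev and test data. The minimal sketch below is our own illustration (function names are ours, not from the paper's code):

```python
import numpy as np

def estimate_mvn(train_utterances):
    """Estimate MVN statistics on the speech translation training set only."""
    frames = np.concatenate(train_utterances, axis=0)   # (total_frames, dim)
    return frames.mean(axis=0), frames.std(axis=0) + 1e-8

def apply_mvn(utterance, mean, std):
    """Apply the training-set statistics to any train/dev/test utterance."""
    return (utterance - mean) / std

# mean, std = estimate_mvn(train_utts)   # (frames, 512) wav2vec or (frames, 83) fbanks arrays
# normalized = [apply_mvn(u, mean, std) for u in all_utts]
```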
4.2. Speech-to-text translation model
4.2.1. Architecture
We use an attention-based encoder-decoder architecture, whose encoder is illustrated in Figure 1. The encoder is a stack of two VGG-like (Simonyan & Zisserman, 2015) CNN blocks followed by five 1024-dimensional BLSTM layers. Each VGG block contains two 2D-convolution layers just before a 2D-maxpooling layer, which aims to reduce both the time (T) and frequency dimension (D) of the input speech features by a factor of 2. These two VGG blocks transform the input speech features' shape from (T × D) to (T/4 × D/4). Bahdanau's attention mechanism (Bahdanau et al., 2015) is used in all our experiments. The decoder is a stack of two 1024-dimensional LSTM layers. As proven effective in (Nguyen et al., 2019), this model is consistently used for all the experiments with fbanks features presented throughout this paper. However, wav2vec features have higher dimension (512) than fbanks (83). In order to compare both input representations with a similar parameter budget in the architecture (and also because training an architecture with input features of dimension 512 would be substantially more computationally expensive), we add a projection block at the bottom of the encoder. This block (containing a linear layer followed by a ReLU) reduces the wav2vec feature size from 512 to 83 (see Figure 1). (Our implementation of the wav2vec speech encoder, as well as the detailed recipes for our experiments, can be found online: https://github.com/mhn226/espnet/tree/interspeech2020.)

Figure 1. Architecture of the speech encoder: a stack of two VGG blocks followed by 5 BLSTM layers. We use as input (1) wav2vec features (that pass through an additional projection layer to reduce their dimension from 512 to 83), or (2) filter-bank+pitch features. The input features are optionally normalized (MVN).

4.2.2. Hyperparameters' details
Models are trained in maximum 20 epochs, with early stopping after 3 epochs if the accuracy on the dev set does not improve. Adadelta is chosen as optimizer and dropout is set to 0.3 on the encoder side. We decode all our models with beam size of 10.
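The text fixes the BLSTM width (1024), the stack depth (two VGG blocks, five BLSTM layers) and the Linear+ReLU projection from 512 to 83, but not the CNN channel counts; the values below (64, 128) are assumptions. The following is a minimal PyTorch sketch of the wav2vec branch of the encoder, not the authors' ESPnet implementation:

```python
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Sketch of the encoder in Figure 1: a 512->83 projection for wav2vec
    features, two VGG-like blocks, then 5 bidirectional LSTM layers."""
    def __init__(self, input_dim=512, proj_dim=83, hidden=1024):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(input_dim, proj_dim), nn.ReLU())
        self.vgg = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # (T, D) -> (T/2, D/2)
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> (T/4, D/4)
        )
        # 83 -> 41 -> 20 after the two poolings, hence 128 * 20 input features
        self.blstm = nn.LSTM(128 * (proj_dim // 4), hidden, num_layers=5,
                             bidirectional=True, batch_first=True)

    def forward(self, x):                          # x: (batch, T, 512)
        x = self.proj(x)                           # (batch, T, 83)
        x = self.vgg(x.unsqueeze(1))               # (batch, 128, T/4, 20)
        b, ch, t, d = x.shape
        x = x.transpose(1, 2).reshape(b, t, ch * d)
        out, _ = self.blstm(x)
        return out                                 # (batch, T/4, 2*hidden)
```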
4.3. Experimental results on How2
On each partition of the How2 corpus, we train 6 models which take as input the different speech representations presented in section 4.1.2, thus in total 30 models shown in Table 2. We evaluate on the How2 val set, which contains 2,022 segments (about 3.2 hours of speech), in the same conditions as (Nguyen et al., 2019).

Table 2. Detokenized case-sensitive BLEU scores measured on How2 val set of different models trained on different partitions of How2 corpus (EN-PT) with different speech features. FT means fine-tuned and norm stands for MVN normalization.
No. | Feature                | 10% (28h) | 20% (56h) | 30% (84h) | 60% (169h) | 100% (281h)
1   | wav2vec                | 11.33     | 26.75     | 30.83     | 36.33      | 41.02
2   | wav2vec + FT           | 12.52     | 27.30     | 32.11     | 37.78      | 42.32
3   | wav2vec + norm         | 16.52     | 27.33     | 31.27     | 37.62      | 41.08
4   | wav2vec + FT + norm    | 18.50     | 27.68     | 32.17     | 37.75      | 41.30
5   | fbanks                 | 1.03      | 18.61     | 27.32     | 37.23      | 41.63
6   | fbanks + norm          | 2.11      | 24.58     | 30.21     | 37.56      | 42.51
7   | Ensemble [5, 6]        | -         | 25.28     | 31.90     | 40.39      | 44.35
8   | Ensemble [4, 6]        | -         | 29.87     | 34.67     | 41.22      | 45.02
9   | Ensemble [1,2,3,4,5,6] | -         | 31.88     | 36.80     | 42.62      | 46.16

Figure 2. Learning curves (accuracy) of models trained on different partitions of How2: (a) How2 10% (28 hours); (b) How2 20% (56 hours).

It is clear from the table that in low resource settings (28 and 56 hours), self-supervised representations (wav2vec) significantly outperform fbanks. Figure 2a confirms this and shows that models trained with wav2vec representations converge better and faster. The impact of normalization and fine-tuning is also notable from both Table 2 and Figure 2a. In very low resource settings (like 28 hours), fine-tuning wav2vec can greatly help, and with normalization, the performance further improves. In higher resource settings (169 and 281 hours of translated speech), differences between wav2vec and fbanks fade away (and so does the impact of fine-tuning and normalization). However, our ensembling experiments of lines 7 and 8 on 100% of How2 show that it is beneficial to ensemble the best system (fbanks+norm, line 6) with a system trained with wav2vec (wav2vec+FT+norm, line 4) rather than with a better model (fbanks, line 5) also based on filter-bank features, even though wav2vec+FT+norm underperforms fbanks on this partition. Ensembling all our models (line 9) leads to BLEU > 30 even in very low resource training conditions (56 hours). Finally, in order to compare ourselves with the state-of-the-art (Inaguma et al., 2020), we decode How2 dev5 (a.k.a. How2 test), which consists of 2,305 segments (about 3.7 hours of speech), using the ensemble of all our models trained on the full corpus (line 9). This gives us near state-of-the-art BLEU: we obtain 46.16 on How2 val and 47.17 on How2 dev5. This latter score on dev5 is to be compared with 48.04, reported with an ensemble model in (Inaguma et al., 2020) where ASR and MT pre-training were used, as well as data augmentation with SpecAugment.

4.4. Validation on two other language pairs
To validate our results in low resource settings (56 hours), we train our models on two subsets of the MuST-C (Di Gangi et al., 2019) English-to-German and English-to-French training data (56 hours each, a training size similar to How2 20%). As illustrated by Table 3, MuST-C is more challenging than How2 (as confirmed by official IWSLT 2019 evaluation results (Niehues et al., 2019)), but for both language pairs, wav2vec significantly outperforms fbanks. This confirms that self-supervised pre-training is useful in low resource scenarios.

Table 3. AST BLEU on MuST-C 56h for EN-DE and EN-FR.
Lang  | Features       | tst-COMMON | tst-HE
EN-DE | wav2vec        | 7.56       | 7.21
EN-DE | wav2vec + norm | 7.83       | 8.12
EN-DE | fbanks         | 1.50       | 1.09
EN-DE | fbanks + norm  | 4.89       | 4.87
EN-FR | wav2vec        | 12.08      | 12.41
EN-FR | wav2vec + norm | 12.58      | 12.58
EN-FR | fbanks         | 0.54       | 0.00
EN-FR | fbanks + norm  | 7.10       | 6.37
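The ensembling used in lines 7-9 of Table 2 is not spelled out in the text. As a rough illustration only (and not the authors' actual ESPnet recipe), score-level ensembling of encoder-decoder models typically averages the per-step output log-probabilities inside beam search; `decoder_step` below is a hypothetical per-model interface, not a real library call:

```python
import torch

def ensemble_log_probs(models, enc_outs, prefix):
    """Average the next-token log-probabilities of several AST models
    (uniform weights), as done at every step of an ensembled beam search."""
    per_model = [m.decoder_step(enc, prefix)   # (vocab,) log-probs, hypothetical API
                 for m, enc in zip(models, enc_outs)]
    return torch.stack(per_model, dim=0).mean(dim=0)
```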
5. Analysis of Learnt Representations
This section tries to answer the question why the wav2vec representation performs better than filter-bank features in low resource settings. The following subsections present the experiments which show that wav2vec might be (1) better at discriminating phones, (2) better at aligning source and target sequences, and (3) more robust to speaker variability.

5.1. Better phone discrimination
We first replicate an experiment from (Schneider et al., 2019) for phoneme recognition on TIMIT (Garofolo et al., 1993). Speech representations are extracted from the train, dev and test splits of TIMIT. A simple attentional encoder-decoder model is used: an encoder with 4 BLSTM layers of hidden size 320, a decoder with 1 LSTM layer and location-based attention (Luong et al., 2015). The results of Table 4 confirm that wav2vec representations (normalized or not) are much better at recognizing phones than fbanks.

Table 4. Phone error rate (PER %) on TIMIT dev and test set.
No. | Feature        | TIMIT dev | TIMIT test
1   | wav2vec        | 13.0      | 15.0
2   | wav2vec + norm | 13.9      | 15.8
3   | fbanks         | 22.2      | 24.9
4   | fbanks + norm  | 20.7      | 23.5

5.2. Better source-target alignments
We evaluate the entropies of the soft alignments obtained with different speech representations in teacher forcing mode. Let $\alpha_{tj}$ be the alignment score between target token $y_t$ and source speech frame $x_j$; we evaluate the entropy of the probability distribution $\alpha_t$, $H_t = -\sum_{j=1}^{|x|} \alpha_{tj} \log \alpha_{tj}$, for every target token. This measure is then averaged for all tokens at the corpus level (How2 10%). A low entropy means the attention mechanism is confident in its source-target alignments (see example in Figure 3). Table 5 shows clearly that, in our low resource setting, wav2vec leads to better alignments (lower entropy) than fbanks. Fine-tuning and normalization of self-supervised representations also improve the soft alignments.

Table 5. Averaged entropies of soft-alignments on How2 dev and val set. AST models trained on the 10% partition of How2.
No. | Feature             | How2 dev | How2 val
1   | wav2vec             | 0.66     | 0.66
2   | wav2vec + FT        | 0.65     | 0.65
3   | wav2vec + norm      | 0.57     | 0.57
4   | wav2vec + FT + norm | 0.51     | 0.51
5   | fbanks              | 0.89     | 0.90
6   | fbanks + norm       | 0.93     | 0.93

Figure 3. Soft alignments between source speech features and target text for the sentence "A outra pessoa perde.": (a) wav2vec, entropy = 0.67; (b) fbanks, entropy = 0.81.
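The entropy measure of Section 5.2 can be checked on a toy example. This small sketch only illustrates the formula above with made-up attention weights; it is not taken from the paper's code:

```python
import numpy as np

def alignment_entropy(alpha):
    """H_t = -sum_j alpha_tj * log(alpha_tj) for one target token t."""
    alpha = np.asarray(alpha, dtype=float)
    return float(-(alpha * np.log(alpha + 1e-12)).sum())

# A peaky (confident) alignment over 4 source frames vs. a diffuse one:
print(alignment_entropy([0.85, 0.05, 0.05, 0.05]))  # ~0.59, low entropy
print(alignment_entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.39 = log(4), high entropy
```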
5.3. Better robustness to speaker variability
To investigate robustness to speaker variability, we trained several automatic speaker verification (ASV) systems using wav2vec or fbanks features. Models are trained on the LibriSpeech train-clean-360 dataset (Panayotov et al., 2015) using Kaldi (Povey et al., 2011). ASV systems are based on x-vectors and probabilistic linear discriminant analysis (PLDA) (Snyder et al., 2018). To extract x-vectors, we used a time delay neural network (TDNN) model topology similar to the one described in (Snyder et al., 2018). Input features are fbanks or wav2vec (optionally normalized) while the output corresponds to the 921 speakers of the training corpus. ASV experiments are conducted on the VoxCeleb1 test (Nagrani et al., 2017) and LibriSpeech test-clean (Panayotov et al., 2015) sets. (The trial and enrollment subsets of the LibriSpeech test-clean for the ASV task are described in more detail in (Tomashenko et al., 2020).) ASV results (equal error rate - EER) are presented in Table 6.

Table 6. Equal error rate (EER %) on the VoxCeleb1 test and LibriSpeech test sets for female (f) and male (m) speakers.
No. | Feature        | VoxCeleb | Libri (f) | Libri (m)
1   | wav2vec        | 22.75    | 11.22     | 2.23
2   | wav2vec + norm | 20.93    | 10.54     | 1.79
3   | fbanks         | 15.78    | 5.47      | 0.89
4   | fbanks + norm  | 16.25    | 3.47      | 0.67

We observe that in all experiments, models trained on wav2vec features provide significantly higher EER in comparison with fbanks. This confirms our hypothesis that wav2vec representations remove speaker information from the speech signal. (We would also expect that mean and variance normalization increase EER, but this is not the case. One explanation might be that normalization also removes channel variability and thus improves ASV.)

6. Conclusion
We investigated the impact of self-supervised learning for end-to-end AST. It was shown that representations based on contrastive predictive coding (CPC) improve results significantly compared to baseline filter-bank features, in low-medium resource conditions (train < 100h). Our explanation is that self-supervised representations show better phone discrimination, source-target alignments and speaker robustness.

References
Baevski, A., Auli, M., and Mohamed, A. Effectiveness of self-supervised pre-training for speech recognition, 2019.
Bahdanau, D., Cho, K., and Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. of ICLR, 2015.
Bansal, S., Kamper, H., Livescu, K., Lopez, A., and Goldwater, S. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. CoRR, abs/1809.01431, 2018. URL http://arxiv.org/abs/1809.01431.
Bérard, A., Pietquin, O., Servan, C., and Besacier, L. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In NIPS Workshop on End-to-end Learning for Speech and Audio Processing, 2016.
Bérard, A., Besacier, L., Kocabiyikoglu, A. C., and Pietquin, O. End-to-end automatic speech translation of audiobooks. CoRR, abs/1802.04200, 2018. URL http://arxiv.org/abs/1802.04200.
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations, 2020.
Chung, Y., Weng, W., Tong, S., and Glass, J. Towards unsupervised speech-to-text translation. CoRR, abs/1811.01307, 2018. URL http://arxiv.org/abs/1811.01307.
Chung, Y., Hsu, W., Tang, H., and Glass, J. R. An unsupervised autoregressive model for speech representation learning. CoRR, abs/1904.03240, 2019. URL http://arxiv.org/abs/1904.03240.
Chung, Y.-A. and Glass, J. Generative pre-training for speech with autoregressive predictive coding, 2019.
Chung, Y.-A. and Glass, J. Improved speech representations with multi-target autoregressive predictive coding, 2020.
Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
Di Gangi, M. A., Cattoni, R., Bentivogli, L., Negri, M., and Turchi, M. MuST-C: a Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2012-2017, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1202. URL https://www.aclweb.org/anthology/N19-1202.
Engel, J., Hantrakul, L., Gu, C., and Roberts, A. DDSP: Differentiable digital signal processing, 2020.
Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., Pallett, D. S., and Dahlgren, N. L. DARPA TIMIT acoustic phonetic continuous speech corpus CDROM, 1993.
Inaguma, H., Kiyono, S., Duh, K., Karita, S., Soplin, N. E. Y., Hayashi, T., and Watanabe, S. ESPnet-ST: All-in-one speech translation toolkit. arXiv preprint arXiv:2004.10234, 2020.
Jia, Y., Johnson, M., Macherey, W., Weiss, R. J., Cao, Y., Chiu, C., Ari, N., Laurenzo, S., and Wu, Y. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. CoRR, abs/1811.02050, 2018. URL http://arxiv.org/abs/1811.02050.
Jia, Y., Weiss, R. J., Biadsy, F., Macherey, W., Johnson, M., Chen, Z., and Wu, Y. Direct speech-to-speech translation with a sequence-to-sequence model. CoRR, abs/1904.06037, 2019. URL http://arxiv.org/abs/1904.06037.
Kahn, J., Rivière, M., Zheng, W., Kharitonov, E., Xu, Q., Mazaré, P.-E., Karadayi, J., Liptchinsky, V., Collobert, R., Fuegen, C., Likhomanenko, T., Synnaeve, G., Joulin, A., Mohamed, A., and Dupoux, E. Libri-light: A benchmark for ASR with limited or no supervision, 2019.
Kawakami, K., Wang, L., Dyer, C., Blunsom, P., and van den Oord, A. Learning robust and multilingual speech representations, 2020.
Luong, N.-Q., Besacier, L., and Lecouteux, B. Towards accurate predictors of word quality for machine translation: Lessons learned on French-English and English-Spanish systems. Data and Knowledge Engineering, 2015.
Nagrani, A., Chung, J. S., and Zisserman, A. VoxCeleb: a large-scale speaker identification dataset. In Interspeech, pp. 2616-2620, 2017.
Nguyen, H., Tomashenko, N., Boito, M. Z., Caubriere, A., Bougares, F., Rouvier, M., Besacier, L., and Esteve, Y. ON-TRAC consortium end-to-end speech translation systems for the IWSLT 2019 shared task. In Proc. of IWSLT, 2019.
Niehues, J., Cattoni, R., Stüker, S., Negri, M., Turchi, M., Salesky, E., Sanabria, R., Barrault, L., Specia, L., and Federico, M. The IWSLT 2019 evaluation campaign. In Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT 2019), 2019.
Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. LibriSpeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210, 2015.
Pascual, S., Ravanelli, M., Serrà, J., Bonafonte, A., and Bengio, Y. Learning problem-agnostic speech representations from multiple self-supervised tasks. 2019.
Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., et al. The Kaldi speech recognition toolkit. Technical report, 2011.
Ravanelli, M., Zhong, J., Pascual, S., Swietojanski, P., Monteiro, J., Trmal, J., and Bengio, Y. Multi-task self-supervised learning for robust speech recognition, 2020.
Rivière, M., Joulin, A., Mazaré, P.-E., and Dupoux, E. Unsupervised pretraining transfers well across languages, 2020.
Sanabria, R., Caglayan, O., Palaskar, S., Elliott, D., Barrault, L., Specia, L., and Metze, F. How2: a large-scale dataset for multimodal language understanding. In ViGIL Workshop, NeurIPS, 2018.
Schneider, S., Baevski, A., Collobert, R., and Auli, M. wav2vec: Unsupervised Pre-Training for Speech Recognition. In Proc. Interspeech 2019, pp. 3465-3469, 2019. doi: 10.21437/Interspeech.2019-1873. URL http://dx.doi.org/10.21437/Interspeech.2019-1873.
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proc. of ICLR, 2015.
Snyder, D., Garcia-Romero, D., Sell, G., Povey, D., and Khudanpur, S. X-vectors: Robust DNN embeddings for speaker recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5329-5333, 2018.
Sperber, M., Neubig, G., Niehues, J., and Waibel, A. Attention-passing models for robust and data-efficient end-to-end speech translation. CoRR, abs/1904.07209, 2019. URL http://arxiv.org/abs/1904.07209.
Tomashenko, N., Srivastava, B. M. L., Wang, X., Vincent, E., Nautsch, A., Yamagishi, J., Evans, N., et al. The VoicePrivacy 2020 Challenge evaluation plan. 2020.
Weiss, R. J., Chorowski, J., Jaitly, N., Wu, Y., and Chen, Z. Sequence-to-sequence models can directly transcribe foreign speech. In Proc. of INTERSPEECH, 2017.
JpqRX8tdMGK
Good paper and detailed analysis on the use of CPC model in end2end speech translation
8: Top 50% of accepted papers, clear accept
This paper investigates the application of self-supervised pre-training to end-to-end speech translation. In particular, it presents an application of the contrastive predictive coding (CPC) model that gives the possibility of leveraging massive audio data without the need for textual transcriptions. An extensive set of experiments and analysis is reported showing that i) the CPC model is particularly beneficial when limited quantities of training data are available and ii) the use of wav2vec allows the speech translation model to have a better representation of the input audio compared to filter banks. The paper is clear and well-written. The addressed topic is interesting for the ST community because the paucity of speech-translation training data calls for different approaches to use alternative sources of information (e.g. only audio data). The motivation and the experimental parts are robust and appropriate. The analysis of the learnt representation tries to shed light on the improvements in performance obtained in low-resource settings. The main limitation of this paper could be its novelty, because the paper extends the work by Chung & Glass (2019 and 2020). However, the use of different training data sizes and the investigation of the learnt representation make the paper interesting and quite distinguishable from the previous literature. Comments and questions:
-) Why did the authors choose the CPC model instead of autoregressive predictive coding (APC)? Is it only a matter of simplicity of the former, or is there evidence that the former outperforms the latter? The motivation should be mentioned in the paper.
-) It is not clear if the (Inaguma et al., 2020) models resulting in the best BLEU score on How2 are trained only using How2 or with more speech-translation data. This should be specified at the end of section 4.3.
-) The best systems at the IWSLT workshop last year used much more ST data and different data augmentation techniques such as knowledge distillation. The ensemble of wav2vec and fbanks approaches the best performing system on the How2 dataset. Although it is clear that the CPC model has fewer requirements in terms of task-specific data, it would be interesting and useful for the community to have a proper comparison/combination of the CPC model with these methods both in low and high-resource scenarios.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Investigating Self-supervised Pre-training for End-to-end Speech Translation ### Paper Abstract Self-supervised learning from raw speech has been proven beneficial to improve automatic speech recognition (ASR). We investigate here its impact on end-to-end automatic speech translation (AST) performance. We use a contrastive predictive coding (CPC) model pre-trained from unlabeled speech as a feature extractor for a downstream AST task. We show that self-supervised pre-training is particularly efficient in low resource settings and that fine-tuning CPC models on the AST training data further improves performance. Even in higher resource settings, ensembling AST models trained with filter-bank and CPC representations leads to near state-of-the-art models without using any ASR pre-training. This might be particularly beneficial when one needs to develop a system that translates from speech in a language with poorly standardized orthography or even from speech in an unwritten language. ### Paper Keywords ["self-supervised learning from speech", "automatic speech translation", "end-to-end models", "low resource settings."] ### Paper Content Investigating Self-supervised Pre-training for End-to-end Speech TranslationHa Nguyen1 2Fethi Bougares3Natalia Tomashenko2Yannick Est `eve2Laurent Besacier1AbstractSelf-supervised learning from raw speech hasbeen proven beneficial to improve automaticspeech recognition (ASR). We investigate hereits impact on end-to-end automatic speech trans-lation (AST) performance. We use a contrastivepredictive coding (CPC) model pre-trained fromunlabeled speech as a feature extractor for a down-stream AST task. We show that self-supervisedpre-training is particularly efficient in low re-source settings and that fine-tuning CPC modelson the AST training data further improves per-formance. Even in higher resource settings, en-sembling AST models trained with filter-bank andCPC representations leads to near state-of-the-artmodels without using any ASR pre-training. Thismight be particularly beneficial when one needsto develop a system that translates from speech ina language with poorly standardized orthographyor even from speech in an unwritten language.1. IntroductionSelf-supervised learning using huge unlabeled data has beenexplored with very promising results for image processing(Chen et al., 2020) and natural language processing (De-vlin et al., 2018). Recent works investigated self-supervisedrepresentation learning from speech (Baevski et al., 2019;Kawakami et al., 2020; Chung & Glass, 2019). They weresuccessful to improve performance on downstream taskssuch as speech recognition. These recent works suggest thatit is possible to reduce dependence on labeled data for build-ing speech systems through acoustic representation learning.We investigate the possibility to leverage unlabeled speechfor end-to-end automatic speech translation (AST). We fo-cus on scenarios where (a) recordings in source language are1LIG - Universit ́e Grenoble Alpes, France2LIA - AvignonUniversit ́e, France3LIUM - Le Mans Universit ́e, France. Cor-respondence to: Ha Nguyen <manh-ha.nguyen@univ-grenoble-alpes.fr >.Published at the workshop on Self-supervision in Audio and Speechat the 37thInternational Conference on Machine Learning , Vi-enna, Austria. 
Copyright 2020 by the author(s).not transcribed1(no ASR pre-training is possible), (b) onlya small-medium amount of training data (speech aligned totranslations) is available, (c) a larger amount of unlabeledspeech can be used. This scenario is typical of situationswhen one builds a system that translates from speech ina language with poorly standardized orthography or evenfrom an unwritten language.In summary, our contributions are: (1) we propose an in-depth study on the impact of self-supervised pre-trainingfor AST, (2) we show that fine-tuning pre-trained repre-sentations on the AST training data is beneficial and thatself-supervised pre-training is particularly efficient in lowresource settings, (3) even in high resource settings, ensem-bling models trained with filter-bank and self-supervisedrepresentations leads to near state-of-the-art models withoutusing ASR pre-training, (4) we analyze the representationslearnt and show that they allow to better discriminate phones,better align source and target sequences, and are more robustto speaker variability.2. Related Works2.1. Self-supervised learning from speechSelf-supervised learning from speech consists in resolv-ing pseudo-tasks not requiring human annotations as a pre-training to the real tasks to solve. These pseudo-tasks targetpredicting next samples or solving ordering problems. Au-toregressive predictive coding (APC) (Chung et al., 2019;Chung & Glass, 2020) considers the sequential structureof speech and predicts information about a future frame.An easier learning objective is introduced in ContrastivePredictive Coding (CPC) which consists in distinguishing atrue future audio frame from negatives (Baevski et al., 2019;Schneider et al., 2019; Kahn et al., 2019). (Chung & Glass,2019) shows that such representations are useful to improveseveral speech tasks while (Kawakami et al., 2020) extendsthose works by looking at the representations’ robustness todomain and language shifts. In the same vein, (Rivi `ere et al.,2020) compares self-supervised and supervised pre-trainingfor ASR and shows that CPC pre-training extracts featuresthat transfer well to other languages, being on par or evenoutperforming supervised pre-training. Another promising1Transcription not available or language poorly writtenInvestigating Self-supervised Pre-training for End-to-end Speech Translationway is to use speech enhancement as a task for feature rep-resentation learning (Ravanelli et al., 2020; Engel et al.,2020). Finally, several self-supervised tasks can be jointlytackled to discover better speech representations (Pascualet al., 2019).2.2. End-to-end Automatic Speech TranslationPrevious automatic speech-to-text translation (AST) sys-tems operate in two steps: source language automatic speechrecognition (ASR) and source-to-target text machine trans-lation (MT). However, recent works have attempted to buildend-to-end AST without using source language transcriptionduring learning or decoding (B ́erard et al., 2016; Weiss et al.,2017) or using it at training time only (B ́erard et al., 2018).Recently several extensions of these pioneering works wereintroduced: low resource AST (Bansal et al., 2018), unsu-pervised AST (Chung et al., 2018), end-to-end speech-to-speech translation ( Translatotron ) (Jia et al., 2019), multi-lingual AST (Di Gangi et al., 2019). Improvements of end-to-end AST were also proposed using weakly superviseddata (Jia et al., 2018) or adding a second attention mecha-nism (Sperber et al., 2019). 
While supervised pre-trainingfor AST was investigated (see for instance (B ́erard et al.,2018)), we are aware of a single research group (Chung& Glass, 2019; 2020) that investigated self-supervised pre-training for AST. However their experiments were done ina high resource setting and AST (for which only marginalgains were displayed) was solely investigated among othertasks, without an in-depth analysis of the representationslearnt.3. Self-supervised Pre-training from Speech3.1. Contrastive predictive coding modelWe use the self-supervised pre-training model introducedin (Schneider et al., 2019) ( wav2vec ) which is based oncontrastive predictive coding. The model uses (1) an en-coder network that converts the audio signal in a latentrepresentation (from raw speech samples xinto a featurerepresentation z), and (2) a context network that aggregatesmultiple time steps to build contextualized representations(from a sequence ziv;:::;z iinto a context vector ci).2Thefull model (encoder+context) is trained end-to-end to dis-tinguish a sample zi+kthat is k steps in the future fromnegative samples ~zuniformly chosen from the same audiosequence. A contrastive loss is minimized for each stepk= 1;:::;K and the overall loss is summed over differentstep sizes (more details in (Schneider et al., 2019)).2Practically, each ziencodes 30msof speech every 10ms. Asforci, the total receptive field of the context network is 210ms.Table 1. Statistics of different How2 data partitionsPartition #segments #hours #src w #tgt w10% 17,751 28 313K 295K20% 35,858 56 626K 591K30% 53,698 84 887K 940K60% 107,676 169 1778K 1883Kfull 179,438 281 2963K 3139K3.2. Pre-trained models for EnglishWe use an off-the-shelf model provided for English.3It istrained on Librispeech corpus (Panayotov et al., 2015). Wealso investigate if fine-tuning the model on our task spe-cific data is beneficial. For this, we fine-tune wav2vec onthe full speech corpora used for our AST experiments (seenext section). It is important to note that no transcripts nortranslations are needed for this step which requires onlyraw speech. After fine-tuning wav2vec , we input the repre-sentations produced by the context network cito the ASTencoder instead of filter-bank features (see Figure 1).4. End-to-end Speech TranslationExperiments4.1. Experimental setup4.1.1. D ATAHow2 corpus (Sanabria et al., 2018) is used for our mainexperiments. This corpus contains about 297:6hours ofspeech, which is transcribed and translated into 3:3millionof English words and 3:1million of Portuguese words re-spectively.4From this version of data, we first filter out toolong sentences (sentences longer than 30seconds or 400characters). Then, in order to simulate lower resource sce-narios, we randomly split the corpus into four sub-corporaof roughly 10%,20%,30%, and 60% of the filtered fullcorpus. Our splits guarantee that smaller partitions are fullyincluded in the bigger ones. The statistics of all the parti-tions and the filtered version of full corpora can be found inTable 1.4.1.2. S PEECH FEATURES AND DATA AUGMENTATIONAs shown in Figure 1, we extract either wav2vec featuresor filter-bank+pitch features (later denoted as fbanks ) fromspeech input.5Depending on the experiments, mean and3https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/4As shown by (Nguyen et al., 2019), How2 is sensitive to thedownloading moment. 
Our version was downloaded in July, 2019.5Our preliminary experiments on How2 10% with MFCC fea-tures which lead to similar performance as filter-bank are notInvestigating Self-supervised Pre-training for End-to-end Speech Translationvariance normalization ( MVN ) is optionally applied to thegenerated features. For wav2vec feature extraction, weeither use an off-the-shelf model trained on LibriSpeech(Panayotov et al., 2015) or a model fine-tuned on How2training set. MVN parameters are estimated on the speechtranslation training set and then applied to all train/dev/testsets. Overall, we have 4 different self-supervised represen-tations named wav2vec ,wav2vec + norm ,wav2vec + FT(fined-tuned wav2vec ) and wav2vec + FT + norm . All thosewav2vec features are of dimension 512. We compare theabove representations to conventional filter-bank features.Similar to (Nguyen et al., 2019), we extract 80-dimensionalMel filter-bank features, concatenated with 3-dimensionalpitch features from windows of 25ms, and a frame shift of10ms.MVN is used in the same manner as for wav2vecfeatures. This gives us 2 additional speech representationsnamed fbanks andfbanks + norm respectively (their dimen-sion is 83).6Data augmentation through speed perturbationis also applied with factors of 0:9,1:0, and 1:1to the train-ing data. Our development set is made of 1;984sentencesrandomly excluded from the training set. How2 val set isused as our test data.4.2. Speech-to-text translation model4.2.1. A RCHITECTURE .We use an attention-based encoder-decoder architecture,whose encoder is illustrated in Figure 1. The encoder is astack of two VGG-like (Simonyan & Zisserman, 2015) CNNblocks followed by five 1024-dimensional BLSTM layers.Each VGG block contains two 2D-convolution layers justbefore a 2D-maxpooling layer, which aims to reduce bothtime (T) and frequency dimension ( D) of the input speechfeatures by a factor of 2. These two VGG blocks transforminput speech features’ shape from (TD)to(T=4D=4).Bahdanau’s attention mechanism (Bahdanau et al., 2015)is used in all our experiments. The decoder is a stack oftwo 1024-dimensional LSTM layers. As proven effective in(Nguyen et al., 2019), this model is consistently used for allthe experiments with fbanks features presented throughoutthis paper. However wav2vec features have higher dimen-sion (512) than fbanks (83). In order to compare both inputrepresentations with a similar parameter budget in the ar-chitecture (and also because training an architecture withinput features of dimension 512 would be substantially morecomputationally expensive), we add a projection block atthe bottom of the encoder.7This block (containing a linearpresented here.6For the rest of the paper fbanks will actually mean filter-bank+pitch7Our implementation of the wav2vec speech encoder, aswell as the detailed recipes for our experiments can be foundonline: https://github.com/mhn226/espnet/tree/interspeech2020 .VGG blockMaxPoolCNNCNNVGG blockBLSTM -2BLSTM -1Linear + ReLU1 2wav2vec fbanksMVN MVNBLSTM -5512 8383Figure 1. Architecture of the speech encoder: a stack of two VGGblocks followed by 5 BLSTM layers. We use as input (1) wav2vecfeatures (that pass through an additional projection layer to reducetheir dimension from 512 to 83), or (2) filter-bank+pitch features.The input features are optionally normalized (MVN).layer followed by a ReLU) reduces the wav2vec ’s featuresize from 512to83(see Figure 1).4.2.2. 
H YPERPARAMETERS ’DETAILSModels are trained in maximum 20 epochs with early stop-ping after 3epochs if the accuracy on the dev set does notimprove. Adadelta is chosen as optimizer and dropout is setto0:3on the encoder side. We decode all our models withbeam size of 10.4.3. Experimental results on How2On each partition of How2 corpus, we train 6 models whichtake as input different speech representations presented insection 4.1.2, thus in total 30 models shown in Table 2.We evaluate on How2 val set, which contains 2;022seg-ments (about 3:2hours of speech), in the same conditionsas (Nguyen et al., 2019). It is clear from the table that inlow resource settings (28 and 56 hours), self-supervisedrepresentations ( wav2vec ) significantly outperform fbanks .Figure 2a confirms this and shows that models trained withwav2vec representations converge better and faster. Theimpact of normalization and fine-tuning is also notable fromboth Table 2 and Figure 2a. In very low resource settings(like 28hours), fine-tuning wav2vec can greatly help, andwith normalization, the performance further improves. Inhigher resource settings ( 169and281hours of translatedspeech), differences between wav2vec andfbanks fade away(and so does the impact of fine-tuning and normalization).However, our ensembling experiments of lines 7 and 8 onInvestigating Self-supervised Pre-training for End-to-end Speech TranslationTable 2. Detokenized case-sensitive BLEU scores measured on How2 val set of different models trained on different partitions of How2corpus (EN-PT) with different speech features. FTmeans fine-tuned and norm stands for MVN normalization.No. Feature 10% (28h) 20% (56h) 30% (84h) 60% (169h) 100% (281h)1 wav2vec 11.33 26.75 30.83 36.33 41.022 wav2vec + FT 12.52 27.30 32.11 37.78 42.323 wav2vec + norm 16.52 27.33 31.27 37.62 41.084 wav2vec + FT + norm 18.50 27.68 32.17 37.75 41.305 fbanks 1.03 18.61 27.32 37.23 41.636 fbanks + norm 2.11 24.58 30.21 37.56 42.517 Ensemble [5, 6] 25.28 31.90 40.39 44.358 Ensemble [4, 6] 29.87 34.67 41.22 45.029 Ensemble [1,2,3,4,5,6] 31.88 36.80 42.62 46.16(a) How2 10% (28 hours)(b) How2 20% (56 hours)Figure 2. Learning curves (accuracy) of models trained on differentpartitions of How2100% of How2 show that it is beneficial to ensemble the bestsystem ( fbanks+norm , line 6) with a system trained withwav2vec (wav2vec+FT+norm , line 4) rather than a bettermodel ( fbanks , line 5) also based on filter-bank features,even though wav2vec+FT+norm underperforms fbanks onthis partition. Ensembling all our models (line 9) leads toBLEU > 30even in very low resource training conditions(56 hours). Finally, in order to compare ourselves with thestate-of-the-art (Inaguma et al., 2020), we decode How2dev5 (a.k.a How2 test), which consists of 2;305segments(about 3.7 hours of speech), using the ensemble of all ourmodels trained on the full corpus (line 9). This gives usnear state-of-the-art BLEU: we obtain 46:16on How2 valand47:17on How2 dev5. This latter score on dev5 is tobe compared with 48:04reported with an ensemble modelin (Inaguma et al., 2020) where ASR and MT pre-trainingwere used, as well as data augmentation with SpecAugment .4.4. Validation on two other language pairsTo validate our results in low resource settings (56 hours),we train our models on two subsets of MuST-C (Di Gangiet al., 2019) English-to-German and English-to-French train-ing data (56 hours each, a training size similar to How220%). 
As illustrated by Table 3, MuST-C is more chal-lenging than How2 (as confirmed by official IWSLT 2019evaluation results (Niehues et al., 2019)), but for both lan-guage pairs, wav2vec significantly outperform fbanks . Thisconfirms that self-supervised pre-training is useful in lowresource scenarios.5. Analysis of Learnt RepresentationsThis section tries to answer the question why wav2vec rep-resentation performs better than filter-bank features in lowresource settings. The following subsections present theexperiments which show that wav2vec might be (1) betterat discriminating phones, (2) better at aligning source andInvestigating Self-supervised Pre-training for End-to-end Speech TranslationTable 3. AST BLEU on MuST-C 56h for EN-DE andEN-FR .Lang Features tst-COMMON tst-HEEN-DEwav2vec 7.56 7.21wav2vec+norm 7.83 8.12fbanks 1.50 1.09fbanks+norm 4.89 4.87EN-FRwav2vec 12.08 12.41wav2vec+norm 12.58 12.58fbanks 0.54 0.00fbanks+norm 7.10 6.37Table 4. Phone error rate (PER %) on TIMIT dev and test set.No. Feature TIMIT dev TMIT test1 wav2vec 13.0 15.02 wav2vec + norm 13.9 15.83 fbanks 22.2 24.94 fbanks + norm 20.7 23.5target sequences, and (3) more robust to speaker variability.5.1. Better phone discriminationWe first replicate an experiment from (Schneider et al., 2019)for phoneme recognition on TIMIT (Garofolo et al., 1993).Speech representations are extracted from train, dev and testsplit of TIMIT. A simple attentional encoder-decoder modelis used: encoder with 4 BLSTM layers of hidden size 320,decoder with 1 LSTM layer and location-based attention(Luong et al., 2015). The results of Table 4 confirm thatwav2vec representations (normalized or not) are much betterat recognizing phones than fbanks .5.2. Better source-target alignmentsWe evaluate the entropies of the soft alignments obtainedwith different speech representations in teacher forcingmode. Lettjbe the alignment score between target tokenytand source speech frame xj, we evaluate the entropyTable 5. Averaged entropies of soft-alignments on How2 dev andval set. AST models trained on 10% partition of How2.No. Feature How2 dev How2 val1 wav2vec 0.66 0.662 wav2vec + FT 0.65 0.653 wav2vec + norm 0.57 0.574 wav2vec + FT + norm 0.51 0.515 fbanks 0.89 0.906 fbanks + norm 0.93 0.93(a) wav2vec - entropy = 0:67(b) fbanks - entropy = 0:81Figure 3. Soft alignments between source speech features and tar-get text for sentence “A outra pessoa perde.”of the probability distribution t,Ht=Pjxjj=1tjlogtjfor every target token. This measure is then averaged forall tokens at the corpus level (How 10%). A low entropymeans the attention mechanism is confident in its source-target alignments (see example in Figure 3). Table 5 showsclearly that, in our low resource setting, wav2vec leads tobetter alignments (lower entropy) than fbanks . Fine-tuningand normalization of self-supervised representations alsoimprove the soft alignments.5.3. Better robustness to speaker variabilityTable 6. Equal error rate (EER %) on the V oxCeleb1 test and Lib-riSpeech test sets for female (f) and male (m) speakers.No. Feature VoxCeleb Libri (f) Libri (m)1 wav2vec 22.75 11.22 2.232wav2vec + norm 20.93 10.54 1.793 fbanks 15.78 5.47 0.894 fbanks + norm 16.25 3.47 0.67To investigate robustness to speaker variability, we trainedseveral automatic speaker verification (ASV) systems usingwav2vec orfbanks features. 
Models are trained on Lib-riSpeech train-clean-360 dataset (Panayotov et al., 2015)Investigating Self-supervised Pre-training for End-to-end Speech Translationusing Kaldi (Povey et al., 2011). ASV systems are basedon x-vectors and probabilistic linear discriminant analysis(PLDA) (Snyder et al., 2018). To extract x-vectors, weused a time delay neural network (TDNN) model topologysimilar to the one described in (Snyder et al., 2018). In-put features are fbanks orwav2vec (optionally normalized)while output corresponds to 921 speakers of the trainingcorpus. ASV experiments are conducted on the VoxCeleb1test(Nagrani et al., 2017) and LibriSpeech test-clean (Panay-otov et al., 2015) sets.8ASV results (equal error rate - EER)are presented in Table 6. We observe that in all experiments,models trained on wav2vec features provide significantlyhigher EER in comparison with fbanks . This confirms ourhypothesis that wav2vec representations remove speakerinformation from speech signal.96. ConclusionWe investigated the impact of self-supervised learning forend-to-end AST. It was shown that representations basedon contrastive predicting coding (CPC) improve results sig-nificantly compared to baseline filter-bank, in low-mediumresource conditions ( train< 100h). Our explanation is thatself-supervised representations show better phone discrimi-nation, source-target alignments and speaker robustness.ReferencesBaevski, A., Auli, M., and Mohamed, A. Effectiveness ofself-supervised pre-training for speech recognition, 2019.Bahdanau, D., Cho, K., and Bengio, Y . Neural MachineTranslation by Jointly Learning to Align and Translate.InProc. of ICLR , 2015.Bansal, S., Kamper, H., Livescu, K., Lopez, A., and Goldwa-ter, S. Pre-training on high-resource speech recognitionimproves low-resource speech-to-text translation. CoRR ,abs/1809.01431, 2018. URL http://arxiv.org/abs/1809.01431 .B ́erard, A., Pietquin, O., Servan, C., and Besacier, L. Listenand translate: A proof of concept for end-to-end speech-to-text translation. In NIPS Workshop on End-to-endLearning for Speech and Audio Processing , 2016.B ́erard, A., Besacier, L., Kocabiyikoglu, A. C., and Pietquin,O. End-to-end automatic speech translation of audio-8The trial and enrollment subsets of the LibriSpeech test-cleanfor the ASV task are described in more details in (Tomashenkoet al., 2020).9We would also expect that mean and variance normalizationincrease EER but this is not the case. One explanation mightbe that normalization also removes channel variability and thusimproves ASV .books. CoRR , abs/1802.04200, 2018. URL http://arxiv.org/abs/1802.04200 .Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. Asimple framework for contrastive learning of visual rep-resentations, 2020.Chung, Y ., Weng, W., Tong, S., and Glass, J. To-wards unsupervised speech-to-text translation. CoRR ,abs/1811.01307, 2018. URL http://arxiv.org/abs/1811.01307 .Chung, Y ., Hsu, W., Tang, H., and Glass, J. R. An un-supervised autoregressive model for speech represen-tation learning. CoRR , abs/1904.03240, 2019. URLhttp://arxiv.org/abs/1904.03240 .Chung, Y .-A. and Glass, J. Generative pre-training forspeech with autoregressive predictive coding, 2019.Chung, Y .-A. and Glass, J. Improved speech representationswith multi-target autoregressive predictive coding, 2020.Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT:pre-training of deep bidirectional transformers for lan-guage understanding. CoRR , abs/1810.04805, 2018. URLhttp://arxiv.org/abs/1810.04805 .Di Gangi, M. 
A., Cattoni, R., Bentivogli, L., Negri, M.,and Turchi, M. MuST-C: a Multilingual Speech Trans-lation Corpus. In Proceedings of the 2019 Confer-ence of the North American Chapter of the Associa-tion for Computational Linguistics: Human LanguageTechnologies, Volume 1 (Long and Short Papers) , pp.2012–2017, Minneapolis, Minnesota, June 2019. Asso-ciation for Computational Linguistics. doi: 10.18653/v1/N19-1202. URL https://www.aclweb.org/anthology/N19-1202 .Engel, J., Hantrakul, L., Gu, C., and Roberts, A. Ddsp:Differentiable digital signal processing, 2020.Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G.,Pallett, D. S., and Dahlgren, N. L. DARPA TIMIT acous-tic phonetic continuous speech corpus cdrom, 1993.Inaguma, H., Kiyono, S., Duh, K., Karita, S., Soplin,N. E. Y ., Hayashi, T., and Watanabe, S. ESPnet-ST:All-in-one speech translation toolkit. arXiv preprintarXiv:2004.10234 , 2020.Jia, Y ., Johnson, M., Macherey, W., Weiss, R. J., Cao, Y .,Chiu, C., Ari, N., Laurenzo, S., and Wu, Y . Leveragingweakly supervised data to improve end-to-end speech-to-text translation. CoRR , abs/1811.02050, 2018. URLhttp://arxiv.org/abs/1811.02050 .Investigating Self-supervised Pre-training for End-to-end Speech TranslationJia, Y ., Weiss, R. J., Biadsy, F., Macherey, W., Johnson,M., Chen, Z., and Wu, Y . Direct speech-to-speechtranslation with a sequence-to-sequence model. CoRR ,abs/1904.06037, 2019. URL http://arxiv.org/abs/1904.06037 .Kahn, J., Rivi `ere, M., Zheng, W., Kharitonov, E., Xu, Q.,Mazar ́e, P.-E., Karadayi, J., Liptchinsky, V ., Collobert, R.,Fuegen, C., Likhomanenko, T., Synnaeve, G., Joulin, A.,Mohamed, A., and Dupoux, E. Libri-light: A benchmarkfor asr with limited or no supervision, 2019.Kawakami, K., Wang, L., Dyer, C., Blunsom, P., andvan den Oord, A. Learning robust and multilingual speechrepresentations, 2020.Luong, N.-Q., Besacier, L., and Lecouteux, B. Towards ac-curate predictors of word quality for machine translation:Lessons learned on french - english and english - spanishsystems. Data and Knowledge Engineering , 2015.Nagrani, A., Chung, J. S., and Zisserman, A. V oxCeleb: alarge-scale speaker identification dataset. In Interspeech ,pp. 2616–2620, 2017.Nguyen, H., Tomashenko, N., Boito, M. Z., Caubriere, A.,Bougares, F., Rouvier, M., Besacier, L., and Esteve, Y .ON-TRAC consortium end-to-end speech translation sys-tems for the IWSLT 2019 shared task. In Proc. of IWSLT ,2019.Niehues, J., Cattoni, R., St ̈uker, S., Negri, M., Turchi, M.,Salesky, E., Sanabria, R., Barrault, L., Specia, L., andFederico, M. The iwslt 2019 evaluation campaign. In Pro-ceedings of the 16th International Workshop on SpokenLanguage Translation (IWSLT 2019) , 2019.Panayotov, V ., Chen, G., Povey, D., and Khudanpur, S.LibriSpeech: an ASR corpus based on public domainaudio books. In 2015 IEEE International Conference onAcoustics, Speech and Signal Processing (ICASSP) , pp.5206–5210, 2015.Pascual, S., Ravanelli, M., Serr `a, J., Bonafonte, A., and Ben-gio, Y . Learning problem-agnostic speech representationsfrom multiple self-supervised tasks. 2019.Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek,O., Goel, N., Hannemann, M., et al. The Kaldi speechrecognition toolkit. Technical report, 2011.Ravanelli, M., Zhong, J., Pascual, S., Swietojanski, P.,Monteiro, J., Trmal, J., and Bengio, Y . Multi-task self-supervised learning for robust speech recognition, 2020.Rivi`ere, M., Joulin, A., Mazar ́e, P.-E., and Dupoux, E. 
Un-supervised pretraining transfers well across languages,2020.Sanabria, R., Caglayan, O., Palaskar, S., Elliott, D., Bar-rault, L., Specia, L., and Metze, F. How2: a large-scaledataset for multimodal language understanding. In ViGILWorkshop, NeurIPS , 2018.Schneider, S., Baevski, A., Collobert, R., and Auli,M. wav2vec: Unsupervised Pre-Training for SpeechRecognition. In Proc. Interspeech 2019 , pp. 3465–3469, 2019. doi: 10.21437/Interspeech.2019-1873.URL http://dx.doi.org/10.21437/Interspeech.2019-1873 .Simonyan, K. and Zisserman, A. Very deep convolutionalnetworks for large-scale image recognition. In Proc. ofICLR , 2015.Snyder, D., Garcia-Romero, D., Sell, G., Povey, D., andKhudanpur, S. X-vectors: Robust DNN embeddingsfor speaker recognition. In 2018 IEEE InternationalConference on Acoustics, Speech and Signal Processing(ICASSP) , pp. 5329–5333, 2018.Sperber, M., Neubig, G., Niehues, J., and Waibel, A.Attention-passing models for robust and data-efficientend-to-end speech translation. CoRR , abs/1904.07209,2019. URL http://arxiv.org/abs/1904.07209 .Tomashenko, N., Srivastava, B. M. L., Wang, X., Vincent,E., Nautsch, A., Yamagishi, J., Evans, N., et al. TheV oicePrivacy 2020 Challenge evaluation plan. 2020.Weiss, R. J., Chorowski, J., Jaitly, N., Wu, Y ., and Chen,Z. Sequence-to-sequence models can directly transcribeforeign speech. In Proc. of INTERSPEECH , 2017.<|im_end|> <|im_start|>assistant ### Review Title Good paper and detailed analysis on the use of CPC model in end2end speech translation ### Review Text This paper investigates the application of self-supervised pre-training to end-to-end speech translation. In particular, it presents an application of the contrastive predictive coding (CPC) model that gives the possibility of leveraging massive audio data without the need of the textual transcriptions. An extensive set of experiments and analysis is reported showing that i) the CPC model is particularly beneficial when limited quantities of training data is available and ii) the use of wav2Vec allows the speech translation model to have a better representation of the input audio compared to filter banks. The paper is clear and well-written. The addressed topic is interesting for the ST community because the paucity of speech-translation training data calls for different approaches to use alternative sources of information (e.g. only audio data). The motivation and the experimental parts are robust and appropriate. The analysis of the learnt representation tries to shed light on the improvements in performance obtained in low-resource settings. The main limitation of this paper could be the novelty because the paper extends the work by Chung & Glass (2019 and 2020). However, the use of different training data sizes and the investigation of the learnt representation make the paper interesting and quite distinguishable from the previous literature. Comments and questions: -) Why did the authors choose CPC model instead of the autoregressive predictive coding (ACPC)? Is it only a matter of simplicity of the former? or there is evidence that the former outperforms the latter? The motivation should be mentioned in the paper. -) It is not clear if the (Inaguma et al., 2020) models resulting in the best BLEU score on How2 are trained only using How2 or with more speech-translation data. This should be specified at the end of section 4.3. -) The best systems at the IWSLT workshop last year used much more ST data and different data augmentation techniques such as knowledge distillation. 
The ensemble of wav2vec and fbanks approaches the best-performing system on the How2 dataset. Although it is clear that the CPC model has fewer requirements in terms of task-specific data, it would be interesting and useful for the community to have a proper comparison/combination of the CPC model with these methods in both low- and high-resource scenarios.
### Review Rating
8: Top 50% of accepted papers, clear accept
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
zM6fevLxIhI
ICLR.cc/2021/Conference
2021
Variational Structured Attention Networks for Dense Pixel-Wise Prediction
["Guanglei Yang", "Paolo Rota", "Xavier Alameda-Pineda", "Dan Xu", "Mingli Ding", "Elisa Ricci"]
State-of-the-art performances in dense pixel-wise prediction tasks are obtained with specifically designed convolutional networks. These models often benefit from attention mechanisms that allow better learning of deep representations. Recent works showed the importance of estimating both spatial- and channel-wise attention tensors. In this paper, we propose a unified approach to jointly estimate spatial attention maps and channel attention vectors so as to structure the resulting attention tensor. Moreover, we integrate the estimation of the attention within a probabilistic framework, leading to VarIational STructured Attention networks (VISTA). We implement the inference rules within the neural network, thus allowing for joint learning of the probabilistic and the CNN front-end parameters. Importantly, as demonstrated by our extensive empirical evaluation on six large-scale datasets, VISTA outperforms the state-of-the-art in multiple continuous and discrete pixel-level prediction tasks, thus confirming the benefit of structuring the attention tensor and of inferring it within a probabilistic formulation.
["attention network", "pixel-wise prediction"]
ABSTRACT
State-of-the-art performances in dense pixel-wise prediction tasks are obtained with specifically designed convolutional networks. These models often benefit from attention mechanisms that allow better learning of deep representations. Recent works showed the importance of estimating both spatial- and channel-wise attention tensors. In this paper we propose a unified approach to jointly estimate spatial attention maps and channel attention vectors so as to structure the resulting attention tensor. Moreover, we integrate the estimation of the attention within a probabilistic framework, leading to VarIational STructured Attention networks (VISTA-Net). We implement the inference rules within the neural network, thus allowing for joint learning of the probabilistic and the CNN front-end parameters. Importantly, as demonstrated by our extensive empirical evaluation on six large-scale datasets, VISTA-Net outperforms the state-of-the-art in multiple continuous and discrete pixel-level prediction tasks, thus confirming the benefit of structuring the attention tensor and of inferring it within a probabilistic formulation.

1 INTRODUCTION
Over the past decade, convolutional neural networks (CNNs) have become the privileged methodology to address computer vision tasks requiring dense pixel-wise prediction, such as semantic segmentation (Chen et al., 2016b; Fu et al., 2019), monocular depth prediction (Liu et al., 2015; Roy & Todorovic, 2016), contour detection (Xu et al., 2017a) and normal surface computation (Eigen et al., 2014). Recent studies provided clear evidence that attention mechanisms (Mnih et al., 2014) within deep networks are undoubtedly a crucial factor in improving performance (Chen et al., 2016b; Xu et al., 2017a; Fu et al., 2019; Zhan et al., 2018). In particular, previous works demonstrated that deeply learned attention, acting as soft weights that interact with deep features at each channel (Zhong et al., 2020; Zhang et al., 2018; Song et al., 2020) and at each pixel location (Li et al., 2020a; Johnston & Carneiro, 2020; Tay et al., 2019), improves pixel-wise prediction accuracy (see Fig. 1.a and Fig. 1.b). Recently, Fu et al. (2019) proposed the Dual Attention Network (DANet), embedding in a fully convolutional network (FCN) two complementary attention modules, specifically conceived to separately model the semantic dependencies associated to the spatial and to the channel dimensions (Fig. 1.c).

Concurrently, other approaches have considered the use of structured attention models integrated within a graph network framework (Zhang et al., 2020; Chen et al., 2019; Xu et al., 2017a), showing the empirical advantage of adopting a graphical model to effectively capture the structured information present in the hidden layers of the neural network, thus enabling the learning of better deep feature representations. Notably, Xu et al. (2017a) first introduced attention-gated conditional random fields (AG-CRFs), a convolutional neural network implementing a probabilistic graphical model that considers attention variables as gates (Minka & Winn, 2009) in order to learn improved deep features and effectively fuse multi-scale information.
However, their structured attention model is only learned at the spatial level, while channel-wise dependencies are not considered.

This paper advances the state of the art in dense pixel-wise prediction by proposing a novel approach to learn more effective deep representations by integrating a structured attention model which jointly accounts for spatial- and channel-level dependencies using an attention tensor (Fig. 1.d) within a CRF framework. More precisely, inspired by Xu et al. (2017a), we model the attention as gates. Crucially, we address the question of how to enforce structure within these latent gates, in order to jointly model spatial- and channel-level dependencies while learning deep features.

[Figure 1: Different attention mechanisms. (a) and (b) correspond to channel-only and spatial-only attention, respectively. (c) corresponds to previous works (Fu et al., 2019) adding a channel and a spatial attention tensor. (d) shows the attention mechanism of VISTA-Net: a channel-wise vector and a spatial map are estimated and then tensor-multiplied, yielding a structured attention tensor. The attention tensor acts as a structured latent gate, producing a probabilistically enhanced feature map.]

To do so, we hypothesize that the attention tensor is nothing but the sum of T rank-1 tensors, each of them being the tensor product of a spatial attention map and a channel attention vector. This attention tensor is used as a structured latent attention gate, enhancing the feature maps. We cast the inference problem into a maximum-likelihood estimation formulation that is made computationally tractable thanks to a variational approximation. Furthermore, we implement the maximum-likelihood update rules within a neural network, so that they can be jointly learned with the preferred CNN front-end. We call our approach, based on structured attention and variational inference, VarIational STructured Attention Networks, or VISTA-Net. We evaluate our method on multiple pixel-wise prediction problems, i.e. monocular depth estimation, semantic segmentation and surface normal prediction, considering six publicly available datasets, i.e. NYUD-V2 (Silberman et al., 2012), KITTI (Geiger et al., 2013), Pascal-Context (Mottaghi et al., 2014), Pascal VOC2012 (Everingham et al., 2010), Cityscapes (Cordts et al., 2016) and ScanNet (Dai et al., 2017). Our results demonstrate that VISTA-Net is able to learn rich deep representations thanks to the proposed structured attention and our probabilistic formulation, outperforming state-of-the-art methods.

Related Work. Several works have considered integrating attention models within deep architectures to improve performance in several tasks such as image categorization (Xiao et al., 2015), speech recognition (Chorowski et al., 2015) and machine translation (Vaswani et al., 2017; Kim et al., 2017; Luong et al., 2015). Focusing on pixel-wise prediction, Chen et al.
(2016b) first described an attention model to combine multi-scale features learned by an FCN for semantic segmentation. Zhang et al. (2018) designed EncNet, a network equipped with a channel attention mechanism to model global context. Zhao et al. (2018) proposed to account for pixel-wise dependencies by introducing relative position information in the spatial dimension within the convolutional layers. Huang et al. (2019b) described CCNet, a deep architecture that embeds a criss-cross attention module with the idea of modeling contextual dependencies using sparsely-connected graphs, so as to achieve higher computational efficiency. Fu et al. (2019) proposed to model semantic dependencies associated with spatial and channel dimensions by using two separate attention modules. Zhong et al. (2020) introduced a squeeze-and-attention network (SANet) specialized for pixel-wise prediction that takes into account spatial and channel inter-dependencies in an efficient way.

Attention was first adopted within a CRF framework by Xu et al. (2017a), who introduced gates to control the message passing between latent variables and showed that this strategy is effective for contour detection. Our work significantly departs from these previous approaches, as we introduce a novel structured attention mechanism jointly handling spatial- and channel-level dependencies within a probabilistic framework. Notably, we also show that our model can be successfully employed in several challenging dense pixel-level prediction tasks. Our work is also closely related to previous studies on dual graph convolutional networks (Zhang et al., 2019c) and dynamic graph message passing networks (Zhang et al., 2020), which have been successfully used for pixel-level prediction tasks. However, while they also resort to message passing for learning refined deep feature representations, they lack a probabilistic formulation. Finally, previous studies (Xu et al., 2017c; Arnab et al., 2016; Chen et al., 2019) described CRF-based models for pixel-wise estimation, e.g. to learn and optimally fuse deep representations at multiple scales. However, they did not employ structured attention gates.

2 VARIATIONAL STRUCTURED ATTENTION NETWORKS: VISTA-NET
As previously discussed, we aim to enhance the learned representation by structuring the attention within a probabilistic formulation. On the one hand, inducing structure in the attention mechanism has been proven to be successful (Fu et al., 2019; Zhong et al., 2020). On the other hand, probabilistic formulations combined with deep architectures are appealing for pixel-level prediction tasks (Xu et al., 2017b). To the best of our knowledge, we are the first to bring together recent advances in pixel-wise prediction by formulating a novel structured attention mechanism within a probabilistic CRF-like inference framework. Inspired by Fu et al. (2019), where spatial- and channel-wise full-rank tensors are computed, we opt to infer different spatial and channel attention variables. Very differently from Fu et al. (2019), we propose to structure a generic attention tensor $\mathbf{a}$ of dimension $W \times H \times C$ (width, height, channels) as the sum of $T$ rank-1 tensors:

$$\mathbf{a} = \sum_{t=1}^{T} \mathbf{m}_t \otimes \mathbf{v}_t, \qquad \mathbf{m}_t \in \mathbb{R}^{1 \times W \times H},\ \mathbf{v}_t \in \mathbb{R}^{C \times 1 \times 1}, \qquad (1)$$

meaning that $\mathbf{m}_t$ can be understood as an image of $W \times H$ pixels and $\mathbf{v}_t$ as a vector of dimension $C$, and $\otimes$ denotes the tensor product, in the case above leading to a 3-way tensor of dimensions $W \times H \times C$. Each of the tensor products within the sum yields a tensor of rank 1, consequently limiting the rank of $\mathbf{a}$ to be at most $T$. Equation (1) is the algebraic expression of the proposed structured attention mechanism, and is the methodological foundation of VISTA-Net.
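To make Eq. (1) concrete, here is a minimal PyTorch sketch of how such a rank-T structured attention tensor could be assembled from spatial maps and channel vectors. The tensor shapes follow the paper; the function and variable names are our own illustrative choices, not the authors' code.

```python
import torch

def structured_attention(m, v):
    """Builds the rank-T structured attention tensor of Eq. (1).

    m: spatial attention maps, shape (T, W, H)
    v: channel attention vectors, shape (T, C)
    returns: attention tensor a of shape (C, W, H), a sum of T rank-1 tensors.
    """
    # Tensor (outer) product of each channel vector with each spatial map,
    # summed over the T components: a[c, w, h] = sum_t v[t, c] * m[t, w, h].
    return torch.einsum('tc,twh->cwh', v, m)

# Toy usage: T = 3 rank-1 components, 8 channels, 16x16 spatial resolution.
T, C, W, H = 3, 8, 16, 16
m = torch.sigmoid(torch.randn(T, W, H))       # spatial maps in [0, 1]
v = torch.softmax(torch.randn(T, C), dim=1)   # stochastic channel vectors
a = structured_attention(m, v)                # (8, 16, 16), rank <= 3
```

The sigmoid and softmax used here to generate the toy gates anticipate the posterior means derived below in Eqs. (9) and (10).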
Moreover, we take inspiration from the CRF formulation with gating variables proposed in (Xu et al., 2017a), and derive a new energy function and variational approximation to enable efficient learning and inference procedures. Additionally, this formulation allows us to consider the CRF kernels as latent variables and infer them from the data, together with the structured attention variables $\mathbf{m}_t$ and $\mathbf{v}_t$. We believe learning the kernels is important because it allows the CRF to weight the information flow depending on the content of the image, rather than keeping the same weights for all images.

We assume a generic CNN front-end providing a set of $S$ multi-scale feature maps $F = \{f_s\}_{s=1}^S$. To ease notation, we assume that each feature map has $P$ pixels and $C$ channels, but in practice these dimensions depend on the scale $s$. For each scale, we also consider the set of hidden variables $z_s$ corresponding to $f_s$, and $Z = \{z_s\}_{s=1}^S$. These hidden variables correspond to refined convolutional features that incorporate information and attention from other feature maps, so as to better represent the key information for the pixel-level task at hand. Intuitively, the structured attention tensor should help refine the hidden variables to allow better performance on various pixel-level prediction tasks.

As in (Xu et al., 2017a), for every pair of emitting $e$ and receiving $r$ scales, we consider a dedicated attention tensor $\mathbf{a}_{e,r}$. Very importantly, in our case this attention tensor is structured following (1), and so we have a set of hidden spatial attention maps $M = \{\mathbf{m}^t_{e,r}\}_{e,r,t=1}^{S,S,T}$ and hidden channel attention vectors $V = \{\mathbf{v}^t_{e,r}\}_{e,r,t=1}^{S,S,T}$. More precisely, $\mathbf{m}^t_{e,r} \in \{0,1\}^P$ and $\mathbf{v}^t_{e,r} \in \{0,1\}^C$ are a binary spatial map and a stochastic channel-wise vector, hence $\sum_{c=1}^{C} v^{t,c}_{e,r} = 1$. In this way, we reduce ambiguity and ease the learning. This also means that the model is conceived to pay attention to only $T$ channels of the feature map. While this could seem limiting at first glance, we remark that: (i) the model learns which are the optimal $T$ channels among the possible $C$ that have to be used to refine the hidden variables, and (ii) the posterior distribution of $\mathbf{m}^t$ boils down to a convex combination of all channels, as will become clear when discussing the inference procedure.

2.1 ENERGY FUNCTION AND VARIATIONAL APPROXIMATION
Our model consists of three different latent variables: the hidden features $Z$, and the hidden attention maps $M$ and vectors $V$. In addition, we also consider inferring the CRF kernels, denoted by $K$, from the data. More precisely, the energy function associated to the proposed model writes:

$$E(Z,M,V,K,F,\Theta) = \sum_s \sum_{p,c} \phi_z(z^{p,c}_s, f^{p,c}_s) + \sum_{e,r} \sum_{p,c,p',c'} \Big[ \sum_t m^{t,p}_{e,r}\, v^{t,c}_{e,r}\, \psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) + \phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}) \Big], \qquad (2)$$

where $\phi_z$, $\phi_k$ and $\psi$ are potentials to be defined, and $k^{e,p',c'}_{r,p,c}$ denotes the kernel value weighting the information flow from the $(p',c')$-th value of the feature map of scale $e$ to the $(p,c)$-th value of the feature map of scale $r$. Since the exact posterior distribution is not computationally tractable, we opt to approximate it with the following family of separable distributions:

$$p(Z,M,V,K \mid F,\Theta) \approx q(Z,M,V,K) = q_z(Z)\, q_m(M)\, q_v(V)\, q_k(K). \qquad (3)$$

In that case, the optimal solution for each of the factors of the distribution is obtained by taking the expectation with respect to all the others, for instance:
$$q_z(Z) \propto \exp\Big( \mathbb{E}_{q_m(M)\, q_v(V)\, q_k(K)} \big\{ E(Z,M,V,K,F,\Theta) \big\} \Big). \qquad (4)$$

It can be shown that the optimal variational factors write:

$$q_z(z^{p,c}_r) \propto \exp\Big( \phi_z(z^{p,c}_r, f^{p,c}_r) + \sum_{e \neq r} \sum_t \bar{m}^{t,p}_{e,r}\, \bar{v}^{t,c}_{e,r} \sum_{p',c'} \mathbb{E}_{q_z q_k}\big\{ \psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) \big\} \Big),$$
$$q_m(m^{t,p}_{e,r}) \propto \exp\Big( m^{t,p}_{e,r} \sum_c \bar{v}^{t,c}_{e,r} \sum_{p',c'} \mathbb{E}_{q_z, q_k}\big\{ \psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) \big\} \Big),$$
$$q_v(v^{t,c}_{e,r}) \propto \exp\Big( v^{t,c}_{e,r} \sum_p \bar{m}^{t,p}_{e,r} \sum_{p',c'} \mathbb{E}_{q_z, q_k}\big\{ \psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) \big\} \Big),$$
$$q_k(k^{e,p',c'}_{r,p,c}) \propto \exp\Big( \phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}) + \sum_t \bar{m}^{t,p}_{e,r}\, \bar{v}^{t,c}_{e,r}\, \mathbb{E}_{q_z}\big\{ \psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) \big\} \Big), \qquad (5)$$

where $\bar{m}^{t,p}_{e,r} = \mathbb{E}_{q_m}\{m^{t,p}_{e,r}\}$ denotes the posterior mean, and analogously for $\bar{v}^{t,c}_{e,r}$. This result also implies that, thanks to the variational approximation in (3), the posterior distributions factorize in each of the variables above, e.g. $q_z(Z) = \prod_{r,p,c=1}^{S,P,C} q_z(z^{p,c}_r)$. The relation between the various hidden variables, as well as their inference, is shown in Figure 2 (left). In addition, we also show the information flow between the hidden variables using arrows. Finally, in Figure 2 (right) we show the relation between the channel-wise and spatial attention variables and how the final structured attention tensor is computed.

2.2 INFERENCE WITH VISTA-NET
In order to construct an operative model we need to define the potentials $\phi_z$, $\phi_k$ and $\psi$. In our case, the unary potentials correspond to:

$$\phi_z(z^{p,c}_r, f^{p,c}_r) = -\frac{b^{p,c}_r}{2}\big(z^{p,c}_r - f^{p,c}_r\big)^2, \qquad \phi_k(f^{p,c}_r, f^{p',c'}_e, k^{e,p',c'}_{r,p,c}) = -\frac{1}{2}\big(k^{e,p',c'}_{r,p,c} - f^{p,c}_r f^{p',c'}_e\big)^2, \qquad (6)$$

where $b^{p,c}_r > 0$ is a weighting factor. $\psi$ is bilinear in the hidden feature maps:

$$\psi(z^{p,c}_r, z^{p',c'}_e, k^{e,p',c'}_{r,p,c}) = z^{p,c}_r\, k^{e,p',c'}_{r,p,c}\, z^{p',c'}_e. \qquad (7)$$

Using the over-bar notation also for the hidden features and kernels, e.g. $\bar{z}^{p,c}_s = \mathbb{E}_{q_z}\{z^{p,c}_s\}$, and by combining the potential definitions (6) and (7) with the expression of the variational factors (5), we obtain the following update rules for the latent variables.

Z-step. It can be seen that the posterior distribution $q_z$ is Gaussian with mean:

$$\bar{z}^{p,c}_r = \frac{1}{b^{p,c}_r}\Big( b^{p,c}_r f^{p,c}_r + \sum_e \sum_t \bar{m}^{t,p}_{e,r}\, \bar{v}^{t,c}_{e,r} \sum_{p',c'} \bar{k}^{e,p',c'}_{r,p,c}\, \bar{z}^{p',c'}_e \Big). \qquad (8)$$

[Figure 2: (left) Schematic representation of the various hidden variables in VISTA-Net. For each pair of emitting $e$ and receiving $r$ scales, their respective convolutional features $f$ are shown in blue, their hidden variables $z$ in green, the associated learned kernel $k$ in yellow, and the channel-wise and spatial attention vector and matrix $v$ and $m$ in red. Arrows of the corresponding color denote the flow of information when updating the variable. (right) The computational relationships between the channel-wise and spatial attention variables are shown, as well as the operations required to compute the final structured attention tensor $\mathbf{a}$.]

This corresponds to the update rule obtained in (Xu et al., 2017a), with two remarkable differences. First, the posterior of the attention gate corresponds to the posterior of the structured tensor of rank $T$. Second, the impact of the neighboring features is weighted by the expected kernel value $\bar{k}^{e,p',c'}_{r,p,c}$.

M-step. The variational approximation leads to a Bernoulli distribution for $q_m(m^{t,p}_{e,r})$, which boils down to the following posterior mean value using the sigmoid function $\sigma$:

$$\bar{m}^{t,p}_{e,r} = \sigma\Big( \sum_c \bar{v}^{t,c}_{e,r} \sum_{p',c'} \bar{z}^{p,c}_r\, \bar{k}^{e,p',c'}_{r,p,c}\, \bar{z}^{p',c'}_e \Big). \qquad (9)$$
V-step. It can be shown that the approximated posterior distribution is categorical, and that the expected value of each dimension of $\mathbf{v}^t_{e,r}$ can be computed using the softmax operator:

$$\big(\bar{v}^{t,c}_{e,r}\big)_{c=1}^{C} = \operatorname{softmax}\Big( \Big( \sum_p \bar{m}^{t,p}_{e,r} \sum_{p',c'} \bar{z}^{p,c}_r\, \bar{k}^{e,p',c'}_{r,p,c}\, \bar{z}^{p',c'}_e \Big)_{c=1}^{C} \Big). \qquad (10)$$

K-step. Finally, we need to derive the update rules for $K$. By further deriving the corresponding variational posterior distribution, it can be shown that the posterior distribution of the kernels is a Gaussian distribution with the following mean:

$$\bar{k}^{e,p',c'}_{r,p,c} = f^{p,c}_r f^{p',c'}_e + \sum_t \bar{m}^{t,p}_{e,r}\, \bar{v}^{t,c}_{e,r}\, \bar{z}^{p,c}_r\, \bar{z}^{p',c'}_e. \qquad (11)$$

This solution is very straightforward, but since the kernels are estimated independently for each pair of receiving $(r,p,c)$ and emitting $(e,p',c')$ pixels, it has two major drawbacks. First, the kernel values are estimated without any spatial context. Second, given the large number of kernel values, one must find a very efficient way to compute them. We propose to kill two birds with one stone by learning the kernels from the features using convolutional layers. By design, they take spatial context into account, and many popular libraries have efficient implementations of the convolution operation. The estimated kernel corresponding to the input channel $c'$ of scale $e$, $\bar{k}^{e,c'}_r$, is computed via a convolutional operation. The input of the convolution is a concatenation of the tensor $f_r + \bar{z}_r \odot \sum_{t=1}^{T} \bar{\mathbf{m}}^t_{r,e} \otimes \bar{\mathbf{v}}^t_{r,e}$ and the image $\bar{z}^{c'}_e$ resized to the spatial size of $f_r$.

Joint learning. We implement the inference procedure described above within the neural network, on top of the CNN front-end. Indeed, implementing all inference operations using available deep learning operators has two prominent advantages. First, we can perform the inference and learn the CNN front-end at the same time, within the same formalism and for the same aim. Second, this allows direct parallelization of our method, speeding up training and inference.

The precise implementation goes as follows. Regarding $\bar{z}_r$, message passing from the $e$-th scale to the $r$-th scale is performed with $\bar{z}_{e \to r} \leftarrow \bar{k}^e_r \,\tilde{\ast}\, \bar{z}_e$, where $\tilde{\ast}$ denotes the convolutional operation and $\bar{k}^e_r$ denotes the corresponding learned convolution kernel. We then apply an element-wise product with the corresponding structured attention tensor $\sum_{t=1}^{T} \bar{\mathbf{m}}^t_{e,r} \otimes \bar{\mathbf{v}}^t_{e,r}$. Finally, we compute the element-wise sum with the other emitting scales and the feature maps $f_r$, see (8). Regarding $\bar{\mathbf{m}}_{e,r}$, we first compute the element-wise product between $\bar{z}_r$ and $\bar{z}_{e \to r}$. The sum over channels, weighted by $\bar{\mathbf{v}}_{e,r}$, is computed before applying a pixel-wise sigmoid, see (9). Regarding $\bar{\mathbf{v}}_{e,r}$, we operate in a very similar fashion, but weighting each pixel with $\bar{\mathbf{m}}_{e,r}$ and then summing every channel independently, before applying the softmax, see (10). Regarding $\bar{k}^{e,c'}_r$, as discussed before, it is computed via a convolutional operation on the concatenation of the gated features and the image $\bar{z}^{c'}_e$ resized to the spatial size of $f_r$. In terms of initialization, we draw a random guess for $M$ and $V$, and set $Z$ to $F$. This allows us to first update the kernels, then the other variables.

Once the hidden variables are updated, we use them to address several different pixel-wise prediction tasks involving continuous and discrete variables, including monocular depth estimation, surface normal estimation and semantic segmentation. Following previous works, the network optimization losses for these three tasks are a standard L2 loss (Xu et al., 2017c), a cosine similarity loss (Eigen et al., 2014) and a cross-entropy loss (Chen et al., 2016a), respectively.
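The following is a speculative PyTorch sketch of one such inference iteration for a single emitting/receiving scale pair, written from the update rules (8)-(10). All names are our own, and the kernel update of Eq. (11) is replaced by a single learned convolution, so this should be read as a sketch of the computation pattern rather than as the authors' implementation.

```python
import torch
import torch.nn as nn

class StructuredAttentionStep(nn.Module):
    """One mean-field iteration for a single (e, r) scale pair, after Eqs. (8)-(10)."""

    def __init__(self, channels, T, kernel_size=3):
        super().__init__()
        self.T = T
        # Learned convolution standing in for the message-passing kernel k
        # (a simplification of the paper's K-step).
        self.msg_conv = nn.Conv2d(channels, channels, kernel_size,
                                  padding=kernel_size // 2)

    def forward(self, f_r, z_e, m, v):
        # f_r: receiving features (B, C, H, W), also the initial estimate of z_r
        # z_e: emitting hidden features (B, C, H, W)
        # m:   spatial gates (B, T, H, W);  v: channel gates (B, T, C)
        z_e2r = self.msg_conv(z_e)                 # message from scale e to scale r

        # M-step, Eq. (9): sigmoid of channel-weighted feature correlations
        # (f_r stands in for the current z_r estimate, initialized to f).
        corr = f_r * z_e2r                         # (B, C, H, W)
        m = torch.sigmoid(torch.einsum('btc,bchw->bthw', v, corr))

        # V-step, Eq. (10): softmax over channels of spatially-weighted correlations.
        v = torch.softmax(torch.einsum('bthw,bchw->btc', m, corr), dim=-1)

        # Structured attention tensor, Eq. (1): sum of T rank-1 gates.
        a = torch.einsum('bthw,btc->bchw', m, v)   # (B, C, H, W)

        # Z-step, Eq. (8): gated message added to the observed features.
        z_r = f_r + a * z_e2r
        return z_r, m, v
```

In the full model this step would be repeated over all emitting/receiving scale pairs, with the convolutional kernels themselves inferred as in Eq. (11).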
The CNN front-end and VISTA-Net are jointly trained end-to-end.

3 EXPERIMENTAL EVALUATION
3.1 DATASETS AND EXPERIMENTAL PROTOCOL
Tasks and Datasets. We demonstrate the effectiveness of VISTA-Net on two tasks: monocular depth estimation on the NYU-v2 (Silberman et al., 2012) and KITTI (Geiger et al., 2013) datasets, and semantic segmentation on Pascal-Context (Mottaghi et al., 2014), Pascal VOC2012 (Everingham et al., 2010) and Cityscapes (Cordts et al., 2016). We also conducted experiments on the surface normal estimation task on ScanNet (Dai et al., 2017), but due to lack of space the associated results are reported in the Appendix.

For NYU-v2 and KITTI we follow the experimental settings proposed by Eigen et al. (2014). For NYU-v2 we use 120K RGB-Depth pairs with a resolution of 480×640 pixels, acquired with a Microsoft Kinect device from 464 indoor scenes, using 249 scenes for training and 215 scenes (654 images) for test. For KITTI we specifically use 22,600 frames from 32 scenes for training and 697 frames from the remaining 29 scenes for test.

For Pascal-Context we follow the works (Chen et al., 2016a; Zhang et al., 2018) and consider the most frequent 59 classes. The remaining classes are masked during training and test. Pascal VOC2012 contains 20 classes divided into 10582 training, 1449 validation and 1456 test images. Our method is trained using the protocol described in (Zhong et al., 2020; Long et al., 2015). For the Cityscapes dataset, only the 5,000 finely annotated images are used in our experiments, split into 2,975/500/1,525 images for training, validation, and test.

Evaluation Metrics. To evaluate the performance on monocular depth estimation, we consider several metrics as in (Eigen & Fergus, 2015), including mean relative error (rel), root mean squared error (rms), mean log10 error (log10), and accuracy with threshold $t$ ($t \in \{1.25, 1.25^2, 1.25^3\}$). As for semantic segmentation, we consider two metrics following (Zhou et al., 2017; Zhang et al., 2018), i.e. pixel accuracy (pixAcc) and mean intersection over union (mIoU), averaged over classes.

Implementation Details. VISTA-Net is implemented in PyTorch. The experiments are conducted on four Nvidia Quadro RTX 6000 GPUs, each with 24 GB memory. The ResNet-101 architecture pretrained on ImageNet (Deng et al., 2009) is considered in all the experiments for initializing the backbone network of VISTA-Net, except for the experiments on the Cityscapes dataset, where we choose HRNetV2-W48 (whose complexity is comparable to dilated ResNet-101) for fair comparison with previous works. Our model can be used for effective deep feature learning in both single-scale and multi-scale contexts. To boost performance, following previous works (Xie & Tu, 2015; Xu et al., 2017a), we also consider features output by different convolutional blocks of a CNN backbone (e.g. res3c, res4f, res5d of a ResNet-50). For the semantic segmentation task, we use a learning rate of 0.001 on Pascal-Context and Pascal VOC 2012, and 0.01 on Cityscapes, with a momentum of 0.9 and a weight decay of 0.0001, using a polynomial learning rate scheduler as previously done in (Zhang et al., 2018; Chen et al., 2016a). For the monocular depth estimation task, the learning rate is set to $10^{-4}$ with a weight decay of 0.01. The Adam optimizer is used in all our experiments, with a batch size of 8 for monocular depth estimation and 16 for semantic segmentation. The total training epochs are set to 50 for the depth prediction experiments, to 150 for the Pascal-Context and Pascal VOC 2012 datasets, and to 500 for the Cityscapes dataset.

Table 1: Depth estimation on the KITTI dataset. Only monocular estimation methods are reported. Error columns (lower is better): abs-rel, sq-rel, rms, log-rms. Accuracy columns (higher is better): δ<1.25, δ<1.25², δ<1.25³.
CC (Ranjan et al., 2019): 0.140, 1.070, 5.326, 0.217 | 0.826, 0.941, 0.975
Bian et al. (2019): 0.137, 1.089, 5.439, 0.217 | 0.830, 0.942, 0.975
S3Net (Cheng et al., 2020a): 0.124, 0.826, 4.981, 0.200 | 0.846, 0.955, 0.982
MS-CRF (Xu et al., 2017c): 0.125, 0.899, 4.685, – | 0.816, 0.951, 0.983
AG-CRF (Xu et al., 2017a): 0.126, 0.901, 4.689, 0.157 | 0.813, 0.950, 0.982
Monodepth2 (Godard et al., 2019): 0.115, 0.903, 4.863, 0.193 | 0.877, 0.959, 0.981
pRGBD (Tiwari et al., 2020): 0.113, 0.793, 4.655, 0.188 | 0.874, 0.960, 0.983
SGDepth (Klingner et al., 2020): 0.107, 0.768, 4.468, 0.180 | 0.891, 0.963, 0.982
Johnston & Carneiro (2020): 0.106, 0.861, 4.699, 0.185 | 0.889, 0.962, 0.982
Shu et al. (2020): 0.104, 0.729, 4.481, 0.179 | 0.893, 0.965, 0.984
DORN (Fu et al., 2018): 0.072, 0.307, 2.727, 0.120 | 0.932, 0.984, 0.994
Yin et al. (2019): 0.072, –, 3.258, 0.117 | 0.938, 0.990, 0.998
PackNet-SfM (Guizilini et al., 2020): 0.071, 0.359, 3.153, 0.109 | 0.944, 0.990, 0.997
Lee et al. (2019): 0.061, 0.261, 2.834, 0.099 | 0.954, 0.992, 0.998
VISTA-Net (ours): 0.063, 0.255, 2.776, 0.099 | 0.954, 0.993, 0.998

Figure 3 (left): depth estimation on NYU-v2. Error columns (lower is better): rel, log10, rms. Accuracy columns (higher is better): δ<1.25, δ<1.25², δ<1.25³.
PAD-Net (Xu et al., 2018): 0.214, 0.091, 0.792 | 0.643, 0.902, 0.977
Li et al. (2017): 0.152, 0.064, 0.611 | 0.789, 0.955, 0.988
CLIFFNet (Lijun et al., 2020): 0.128, 0.171, 0.493 | 0.844, 0.964, 0.991
Laina et al. (2016): 0.127, 0.055, 0.573 | 0.811, 0.953, 0.988
MS-CRF (Xu et al., 2017c): 0.121, 0.052, 0.586 | 0.811, 0.954, 0.987
Lee & Kim (2020): 0.119, 0.5, – | 0.870, 0.974, 0.993
AG-CRF (Xu et al., 2017a): 0.112, 0.051, 0.526 | 0.818, 0.960, 0.989
DORN (Fu et al., 2018): 0.115, 0.051, 0.509 | 0.828, 0.965, 0.992
Xia et al. (2020): 0.116, –, 0.512 | 0.861, 0.969, 0.991
Yin et al. (2019): 0.108, 0.048, 0.416 | 0.875, 0.976, 0.994
Lee et al. (2019): 0.113, 0.049, 0.407 | 0.871, 0.977, 0.995
VISTA-Net (ours): 0.111, 0.048, 0.393 | 0.881, 0.979, 0.996

[Figure 3: Depth estimation: quantitative (left) and qualitative (right) comparison on NYU-v2. Qualitative examples show, from left to right: original image, ground truth, DORN (Fu et al., 2018), and VISTA-Net.]

3.2 EXPERIMENTAL RESULTS AND ANALYSIS
Monocular Depth Estimation. Comparative results on the KITTI dataset are shown in Table 1. We propose a comparison with state-of-the-art models such as (Eigen et al., 2014; Ranjan et al., 2019; Bian et al., 2019; Godard et al., 2019; Fu et al., 2018; Yin et al., 2019; Lee et al., 2019; Guizilini et al., 2020). In addition, we demonstrate the effectiveness of VISTA-Net by comparing with MS-CRF (Xu et al., 2017c), a previous approach which exploits a probabilistic framework for multi-scale feature learning but does not consider an attention mechanism. Our approach is superior, thus demonstrating the effectiveness of the proposed attention model. We also compare with AG-CRF (Xu et al., 2017a), adapting their model to the monocular depth estimation problem. Also in this case VISTA-Net outperforms the competitor, confirming the importance of a joint structured spatial- and channel-wise attention model. Note that AG-CRF (Xu et al., 2017a) and VISTA-Net are compared using the same backbone. In order to demonstrate the competitiveness of our approach in an indoor scenario, we also report the results on the NYUD-V2 dataset in Fig. 3.
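As a brief aside, the depth metrics reported in these tables follow the standard definitions of Eigen & Fergus (2015) given in Section 3.1; below is a small sketch of how they are typically computed. This is our own illustrative helper, not code from the paper.

```python
import torch

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics: rel, rms, log10, and the
    threshold accuracies delta < 1.25^k (Eigen & Fergus, 2015)."""
    pred, gt = pred.flatten(), gt.flatten()
    rel = torch.mean(torch.abs(pred - gt) / gt)          # mean relative error
    rms = torch.sqrt(torch.mean((pred - gt) ** 2))       # root mean squared error
    log10 = torch.mean(torch.abs(torch.log10(pred) - torch.log10(gt)))
    ratio = torch.maximum(pred / gt, gt / pred)          # per-pixel max ratio
    acc = {f"delta<1.25^{k}": torch.mean((ratio < 1.25 ** k).float())
           for k in (1, 2, 3)}
    return rel.item(), rms.item(), log10.item(), {k: a.item() for k, a in acc.items()}
```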
Similarly to the experiments on KITTI, VISTA-Net outperforms both state-of-the-art approaches and previous methods based on attention gates and CRFs (Xu et al., 2017c;a).

Semantic Segmentation. We first compare VISTA-Net with the most recent methods on the Pascal-Context dataset, including (Zhang et al., 2018; Fu et al., 2019; Zhu et al., 2019; Ding et al., 2019; Zhang et al., 2019b; Wang et al., 2020; He et al., 2019). As for the depth estimation task, also in this case we evaluate the performance of AG-CRF (Xu et al., 2017a), adapting the original code to the semantic segmentation task. VISTA-Net, as shown in Table 2, is 0.6 mIoU points better than the best available method, i.e. AG-CRF. Importantly, VISTA-Net outperforms EncNet (Zhang et al., 2018), which uses only channel-wise attention, as well as DANet (Fu et al., 2019) and SANet (Zhong et al., 2020), which consider inter-dependencies both at the spatial and at the channel level in their attention models. Table 3 (right) shows results on Pascal VOC2012. Again, our method outperforms EncNet (Zhang et al., 2018), SANet (Zhong et al., 2020) and DANet (Fu et al., 2019). In particular, VISTA-Net is 3.7 mIoU points better than the best available method, i.e. SANet. Finally, Table 3 (left) reports the results on Cityscapes. As on the previous two datasets, VISTA-Net outperforms the competitors (by nearly one mIoU point).

Table 2: Semantic segmentation on Pascal-Context. D-ResNet-101 denotes dilated ResNet-101. Columns: backbone, pixAcc%, mIoU%.
CFM (VGG+MCG) (Dai et al., 2015b): VGG-16, –, 34.4
DeepLab-v2 (Chen et al., 2016a): VGG-16, –, 37.6
FCN-8s (Long et al., 2015): VGG-16, 50.7, 37.8
BoxSup (Dai et al., 2015a): VGG-16, –, 40.5
ConvPP-8s (Xie et al., 2016): VGG-16, –, 41.0
PixelNet (Bansal et al., 2017): VGG-16, 51.5, 41.4
HRNetV2 (Wang et al., 2020): –, –, 54.0
EncNet (Zhang et al., 2018): D-ResNet-101, 79.23, 51.7
DANet (Fu et al., 2019): D-ResNet-101, –, 52.6
ANN (Zhu et al., 2019): D-ResNet-101, –, 52.8
SpyGR (Li et al., 2020a): ResNet-101, –, 52.8
SANet (Zhong et al., 2020): ResNet-101, 80.6, 53.0
SVCNet (Ding et al., 2019): ResNet-101, –, 53.2
CFNet (Zhang et al., 2019b): ResNet-101, –, 54.0
APCNet (He et al., 2019): D-ResNet-101, –, 54.7
AG-CRF (Xu et al., 2017a): D-ResNet-101, 80.8, 54.8
VISTA-Net (ours): D-ResNet-101, 81.1, 55.4

Table 3: Semantic segmentation on the Cityscapes validation and test sets, trained on the standard training set (left), and on the Pascal VOC 2012 validation set (right). D-ResNet-101 denotes dilated ResNet-101; (*) indicates COCO-pretrained weights.
Cityscapes — columns: backbone, test set, mIoU.
Dynamic (Li et al., 2020b): Layer33-PSP, val, 79.7
SpyGR (Li et al., 2020a): ResNet-101, val, 80.5
CCNet (Huang et al., 2019b): ResNet-101, val, 81.3
Panoptic-DeepLab (Cheng et al., 2020b): D-ResNet-101, val, 81.5
CDGCNet (Hu et al., 2020): D-ResNet-101, val, 81.9
VISTA-Net: HRNetV2-W48, val, 82.3
PSANet (Zhao et al., 2018): D-ResNet-101, test, 78.6
PAN (Li et al., 2018): D-ResNet-101, test, 78.6
AAF (Ke et al., 2018): D-ResNet-101, test, 79.1
HRNet (Wang et al., 2020): HRNetV2-W48, test, 80.4
Dynamic (Li et al., 2020b): Layer33-PSP, test, 80.7
VISTA-Net: HRNetV2-W48, test, 81.4
Pascal VOC 2012 — columns: backbone, mIoU%.
DeepLabV3 (Chen et al., 2017): D-ResNet-101, 75.7
Dynamic (Li et al., 2020b): Layer33, 79.0
Res2Net (Gao et al., 2019): Res2Net-101, 80.2
DANet (Fu et al., 2019): ResNet-101, 80.4
Auto-Deeplab (Liu et al., 2019): ResNet-101, 82.0
EncNet (Zhang et al., 2018): D-ResNet-101, 85.9
SANet (Zhong et al., 2020): ResNet-101, 86.1
VISTA-Net: D-ResNet-101, 89.8
Additional results are reported in the Appendix.

Ablation Study. We also performed an ablation study on the Pascal-Context dataset to further demonstrate the impact of each proposed component. Figure 5 (left) shows that the performance of VISTA-Net degrades not only when the model does not employ the structured attention mechanism, but also when only channel-wise or spatial-wise attention is used. Moreover, we can also see the advantage of using the proposed probabilistic formulation for jointly modeling both spatial- and channel-wise attention in a principled manner. Interestingly, the performance achieved in each of the variants (spatial, channel, no probabilistic formulation) is similar. This leads us to believe that the proposed method's competitive advantage lies in combining structured attention with a probabilistic formulation. Notably, the feature refinement through message passing seems to be the most crucial contribution to improving the performance. For the sake of completeness, we also report the results of DANet, and of AG-CRF (which corresponds to the multiple-scale/spatial setting). Finally, in Figure 5 (right) we show the performance of VISTA-Net for different values of the tensor rank $T$. It is important to notice that the framework reaches better performance when $T$ is higher. Figure 4 clearly illustrates the perceptual improvement in segmentation masks obtained with higher values of the attention tensor rank. Additional examples are provided in the supplementary material.

[Figure 4: Semantic segmentation maps obtained with VISTA-Net on the Pascal-Context dataset. Columns: (a) original image, (b) ground truth, (c) VISTA-Net (w/o attention), (d) VISTA-Net (rank=1), (e) VISTA-Net (rank=9).]

Figure 5 (left), ablation on Pascal-Context — columns: scales, structured attention, probabilistic, mIoU, pixAcc.
DANet (Fu et al., 2019): separate attention, no, 52.6, –
Single scale: no structure, yes, 51.7, 78.9
Single scale: spatial, yes, 53.0, 79.7
Single scale: channel, yes, 53.1, 79.8
Single scale: low-rank tensor, no, 53.2, 79.9
Single scale: low-rank tensor, yes, 53.9, 80.3
Multiple scale: no structure, yes, 52.8, 79.5
Multiple scale: spatial, yes, 54.8, 80.8
Multiple scale: channel, yes, 54.6, 80.6
Multiple scale: low-rank tensor, no, 54.7, 80.7
Multiple scale: low-rank tensor, yes, 55.4, 81.1
Figure 5 (right), mIoU as a function of the tensor rank, for $T \in \{0, 1, 3, 7, 9\}$: 54.2, 54.6, 54.8, 55.3, 55.4.
Figure 5: Ablation study on the Pascal-Context dataset: performance of VISTA-Net (left) for different attention mechanisms and scales, (right) for different values of the tensor rank $T$.

4 CONCLUSIONS
In this paper we proposed a novel approach to improve the learning of deep feature representations for dense pixel-wise prediction tasks. Our approach seamlessly integrates a novel structured attention model within a probabilistic framework. In particular, we proposed to structure the attention tensors as the sum of $T$ rank-one tensors, each being the tensor product of a spatial attention map and a channel attention vector. These two kinds of variables are jointly learned within the probabilistic formulation, made tractable thanks to the variational approximation. The proposed structured attention is rich enough to capture complex spatial- and channel-level inter-dependencies, while being efficient to compute. The overall optimization of the probabilistic model and of the CNN front-end is performed jointly. Extensive experimental evaluations show that VISTA-Net outperforms state-of-the-art methods on several datasets, thus confirming the importance of structuring the attention variables for dense pixel-level prediction tasks.
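As a back-of-the-envelope check of the efficiency claim above, the sketch below (our own illustration, with hypothetical sizes, not numbers from the paper) compares the number of free values in a full W×H×C attention tensor against its rank-T structured factorization of Eq. (1), which needs only T(WH + C) values.

```python
# Memory footprint: full attention tensor vs. the rank-T factorization of Eq. (1).
W, H, C, T = 60, 80, 512, 9          # hypothetical feature-map size and tensor rank

full = W * H * C                     # one value per (pixel, channel) entry
structured = T * (W * H + C)         # T spatial maps of W*H values + T channel vectors

print(f"full tensor: {full:,} values")           # 2,457,600
print(f"rank-{T} structure: {structured:,}")     # 47,808
print(f"compression: {full / structured:.0f}x")  # ~51x fewer values
```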
S01pt94K-nR
Recommendation to Reject
5: Marginally below acceptance threshold
########################################################################## Summary: This paper proposes the VarIational STructured Attention networks (VISTA-Net), which improve previous SOTA models for dense pixel-wise prediction tasks. The proposed VISTA-Net is featured by two aspects: 1) A new structured attention is proposed, which is able to jointly model spatial-level and channel-level dependencies; 2) It incorporates the proposed structured attention with a CRF-like inference framework, which allows probabilistic inference. Experimental studies are conducted on monocular depth estimation and semantic image segmentation, showing consistently improved performances of VISTA-Net. ########################################################################## Reasons for score: Overall, I vote for rejection. My major concerns lie in three aspects, as detailed in Cons below: 1) This work is highly similar to Xu et al. (2017a) in terms of both methods and presentation. The difference is not significant; 2) While the presentation mainly follows Xu et al. (2017a), it needs some improvement; 3) The experimental studies lack more detailed analysis of the proposed method. ########################################################################## Pros: 1. The work is well-motivated. The aim of the proposed method sounds natural to me. 2. I like the ablation studies. But they could be performed on at least one more dataset. ########################################################################## Cons: 1. This work is highly similar to Xu et al. (2017a) in terms of both methods and presentation. The difference is not significant. Method-wise, as discussed in Related Work, the difference only lies in that VISTA-Net takes channel-level dependencies into consideration. First, this means that "Moreover, we integrate the estimation of the attention within a probabilistic framework" (quoted from abstract) is not a novel contribution. Second, considering channel-level dependencies in attention has limited novelty. As discussed in Related Work, multiple studies have explored several ways. In addition, a key step in the proposed method is Equation (1), where the tensor multiplication operator is not explained. In my understanding, it should be the outer product, or more generally, Kronecker product. Missing the clear definition of this operator hinders the clarity in describing the proposed method. Presentation-wise, the similarity is even higher. The entire section 2 follows the exact organization of section 2 in Xu et al. (2017a). By comparing the equations and presentations, it's more convincing that the novelty of this work is quite limited. 2. While the presentation mainly follows Xu et al. (2017a), it needs some improvement. First, the same notations as in Xu et al. (2017a) are used. However, some key things are not well explained. For example, the set of hidden variables $z_s$ corresponding to $f_s$ comes out from nowhere. I have to resort to Xu et al. (2017a) to know why we need $z_s$. Second, as mentioned above, key steps like Equation (1) lack clear explanations. 3. The experimental studies lack more detailed analysis of the proposed method. It would be more meaningful to visualize those attention maps/gates instead of dense prediction results.
########################################################################## Questions during rebuttal period: Please address and clarify the cons above ########################################################################## Comments after the rebuttal period: Pros: First, it is totally acceptable to follow the notations and organization of Xu et al. (2017a), as long as the statements are clear and self-contained. The original submission failed to provide key details. The authors have made revisions to address this concern. Thanks! Second, I appreciate the extra experimental results and visualizations. Cons: However, the authors' responses do not fully address my concerns about the novelty, especially method-wise. I will raise my score to 5, but still recommend rejection.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Variational Structured Attention Networks for Dense Pixel-Wise Prediction ### Paper Abstract State-of-the-art performances in dense pixel-wise prediction tasks are obtained with specifically designed convolutional networks. These models often benefit from attention mechanisms that allow better learning of deep representations. Recent works showed the importance of estimating both spatial- and channel-wise attention tensors. In this paper, we propose a unified approach to jointly estimate spatial attention maps and channel attention vectors so as to structure the resulting attention tensor. Moreover, we integrate the estimation of the attention within a probabilistic framework, leading to VarIational STructured Attention networks(VISTA). We implement the inference rules within the neural network, thus allowing for joint learning of the probabilistic and the CNN front-end parameters. Importantly, as demonstrated by our extensive empirical evaluation on six large-scale datasets VISTA outperforms the state-of-the-art in multiple continuous and discrete pixel-level prediction tasks, thus confirming the benefit of structuring the attention tensor and of inferring it within a probabilistic formulation. ### Paper Keywords ["attention network", "pixel-wise prediction"] ### Paper Content ABSTRACTState-of-the-art performances in dense pixel-wise prediction tasks are obtainedwith specifically designed convolutional networks. These models often benefitfrom attention mechanisms that allow better learning of deep representations. Re-cent works showed the importance of estimating both spatial- and channel-wiseattention tensors. In this paper we propose a unified approach to jointly estimatespatial attention maps and channel attention vectors so as to structure the result-ing attention tensor. Moreover, we integrate the estimation of the attention withina probabilistic framework, leading to VarIational STructured Attention networks(VISTA-Net). We implement the inference rules within the neural network, thusallowing for joint learning of the probabilistic and the CNN front-end parameters.Importantly, as demonstrated by our extensive empirical evaluation on six large-scale datasets, VISTA-Net outperforms the state-of-the-art in multiple continuousand discrete pixel-level prediction tasks, thus confirming the benefit of structuringthe attention tensor and of inferring it within a probabilistic formulation.1 I NTRODUCTIONOver the past decade, convolutional neural networks (CNNs) have become the privileged method-ology to address computer vision tasks requiring dense pixel-wise prediction, such as semantic seg-mentation (Chen et al., 2016b; Fu et al., 2019), monocular depth prediction (Liu et al., 2015; Roy &Todorovic, 2016), contour detection (Xu et al., 2017a) and normal surface computation (Eigen et al.,2014). Recent studies provided clear evidence that attention mechanisms (Mnih et al., 2014) withindeep networks are undoubtedly a crucial factor in improving the performance (Chen et al., 2016b;Xu et al., 2017a; Fu et al., 2019; Zhan et al., 2018). 
In particular, previous works demonstrated thatdeeply learned attentions acting as soft weights to interact with different deep features at each chan-nel (Zhong et al., 2020; Zhang et al., 2018; Song et al., 2020) and at each pixel location (Li et al.,2020a; Johnston & Carneiro, 2020; Tay et al., 2019) permits to improve the pixel-wise predictionaccuracy (see Fig.1.a and Fig.1.b). Recently, Fu et al. (2019) proposed the Dual Attention Network(DANet), embedding in a fully convolutional network (FCN) two complementary attention modules,specifically conceived to model separately the semantic dependencies associated to the spatial andto the channel dimensions (Fig.1.c).Concurrently, other approaches have considered the use of structured attention models integratedwithin a graph network framework (Zhang et al., 2020; Chen et al., 2019; Xu et al., 2017a), showingthe empirical advantage of adopting a graphical model to effectively capture the structured infor-mation present in the hidden layers of the neural network and thus enabling the learning of betterdeep feature representations. Notably, Xu et al. (2017a) first introduced attention-gated conditionalrandom fields (AG-CRFs), a convolutional neural network implementing a probabilistic graphicalmodel that considers attention variables as gates (Minka & Winn, 2009) in order to learn improveddeep features and effectively fuse multi-scale information. However, their structured attention modelis only learned at the spatial-wise level, while channel-wise dependencies are not considered.This paper advances the state of the art in dense pixel-wise prediction by proposing a novel approachto learn more effective deep representations by integrating a structured attention model which jointlyaccount for spatial- and channel-level dependencies using an attention tensor (Fig.1.d) within a CRFframework. More precisely, inspired from Xu et al. (2017a) we model the attention as gates. Cru-cially, we address the question on how to enforce structure within these latent gates, in order to1Under review as a conference paper at ICLR 2021Feature map Channel-wise attention Channel-wise weighted feature map (a) Channel-wise attention.Feature map Spatial attention Spatially weighted feature map (b) Spatial attention.Channel-wise weighted feature map Spatially-weighted feature map Weighted feature map Spatial attention matrix Channel attention matrix (c) Separate spatial and channelattention.Channel-wiseattention vectorSpatialattention mapStructuredattention tensorStructured latentattention gateProbabilisticallyenhancedfeature map(d) Proposed structured attention.Figure 1: Different attention mechanisms. (a) and (b) correspond to channel-only and spatial-onlyattention, respectively. (c) corresponds to previous works (Fu et al., 2019) adding ( ) a channel anda spatial attention tensor. (d) shows the attention mechanism of VISTA-Net: a channel-wise vectorand a spatial map are estimated then tensor-multiplied ( ) yielding a structured attention tensor. Theattention tensor acts as a structured latent gate, producing a probabilistically enhanced feature map.jointly model spatial- and channel-level dependencies while learning deep features. To do so, wehypothesize that the attention tensor is nothing but the sum of Trank- 1tensors, each of them beingthe tensor product of a spatial attention map and a channel attention vector. This attention tensoris used as a structured latent attention gate, enhancing the feature maps. 
We cast the inferenceproblem into a maximum-likelihood estimation formulation that is made computationally tractablethanks to a variational approximation. Furthermore, we implement the maximum likelihood updaterules within a neural network, so that they can be jointly learned with the preferred CNN front-end.We called our approach based on structured attention and variational inference VarIational STruc-tured Attention Networks or VISTA-Net. We evaluate our method on multiple pixel-wise predictionproblems, i.e.monocular depth estimation, semantic segmentation and surface normale prediction,considering six publicly available datasets, i.e.NYUD-V2 (Silberman et al., 2012), KITTI (Geigeret al., 2013), Pascal-Context (Mottaghi et al., 2014), Pascal VOC2012 (Everingham et al., 2010),Cityscape (Cordts et al., 2016) and ScanNet (Dai et al., 2017). Our results demonstrate that VISTA-Net is able to learn rich deep representations thanks to the proposed structured attention and ourprobabilistic formulation, outperforming state-of-the-art methods.Related Work. Several works have considered integrating attention models within deep archi-tectures to improve performance in several tasks such as image categorization (Xiao et al., 2015),speech recognition (Chorowski et al., 2015) and machine translation (Vaswani et al., 2017; Kimet al., 2017; Luong et al., 2015). Focusing on pixel-wise prediction, Chen et al. (2016b) first de-scribed an attention model to combine multi-scale features learned by a FCN for semantic segmenta-tion. Zhang et al. (2018) designed EncNet, a network equipped with a channel attention mechanismto model global context. Zhao et al. (2018) proposed to account for pixel-wise dependencies in-troducing relative position information in spatial dimension within the convolutional layers. Huanget al. (2019b) described CCNet, a deep architecture that embeds a criss-cross attention module withthe idea of modeling contextual dependencies using sparsely-connected graphs, such as to achievehigher computational efficiency. Fu et al. (2019) proposed to model semantic dependencies asso-ciated with spatial and channel dimensions by using two separate attention modules. Zhong et al.(2020) introduced a squeeze-and-attention network (SANet) specialized to pixel-wise prediction thattakes into account spatial and channel inter-dependencies in an efficient way.Attention was first adopted within a CRF framework by Xu et al. (2017a), which introduced gatesto control the message passing between latent variables and showed that this strategy is effective forcontour detection. Our work significantly departs from these previous approaches, as we introducea novel structured attention mechanism, jointly handling spatial- and channel-level dependencieswithin a probabilistic framework. Notably, we also prove that our model can be successfully em-ployed in case of several challenging dense pixel-level prediction tasks. Our work is also closelyrelated to previous studies on dual graph convolutional network (Zhang et al., 2019c) and dynamicgraph message passing networks (Zhang et al., 2020), which have been successfully used for pixel-level prediction tasks. However, while they also resort on message passing for learning refined deepfeature representations, they lack a probabilistic formulation. 
Finally, previous studies (Xu et al.,2017c; Arnab et al., 2016; Chen et al., 2019) described CRF-based models for pixel-wise estima-2Under review as a conference paper at ICLR 2021tion, e.g.to learn and optimally fuse deep representations at multiple scales. However, they did notemploy structured attention gates.2 V ARIATIONAL STRUCTURED ATTENTION NETWORKS : VISTA-N ETAs previously discussed, we aim to enhance the learned representation by structuring the attentionwithin a probabilistic formulation. One the one side, inducing structure in the attention mechanismshas been proven to be successful (Fu et al., 2019; Zhong et al., 2020). On the other side, probabilisticformulations combined with deep architectures are interesting for pixel-level prediction tasks (Xuet al., 2017b). Up to our knowledge, we are the first to bring together recent advances in pixel-wiseprediction by formulating a novel structured attention mechanism within a probabilistic CRF-likeinference framework. Inspired by Fu et al. (2019), where two spatial- and a channel-wise full-rank tensors are computed, we opt to infer different spatial and channel attention variables.Verydifferently from Fu et al. (2019), we propose to structure a generic attention tensor aof dimensionWHC(widht, height, channels), as the sum of Trank-1 tensors:a=TXt=1mtvt; mt2R1WH;vt2RC11; (1)meaning that mtcan be understood as an image of WHpixels and vtas a vector of dimensionC, anddenotes the tensor product, in the case above leading to a 3-way tensor of dimensionsWHC. Each of the tensor products within the sum yields a tensor of rank-1, consequentlylimiting the rank of ato be at maximum T. Equation (1) is the algebraic expression of the proposedstructured attention mechanism, and is the methodological foundation of VISTA-Net.Moreover, we inspire from the CRF formulation with gating variables proposed in (Xu et al., 2017a),and derive a new energy function and variational approximation to enable efficient learning andinference procedures. Additionally, this formulation allows us to consider the CRF kernels as latentvariables and infer them from the data, together with the structured attention variables mtandvt.We believe learning the kernels is important because it allows the CRF to weight the informationflow depending on the content of the rather than keeping the same weights for all images.We assume a generic CNN front-end providing a set of Smulti-scale feature maps F=ffsgSs=1. Toease notation, we assume that each feature map has Ppixels andCchannels, but in practice thesedimensions depend on the scale s. For each scale, we also consider the set of hidden variables zscorresponding to fs, andZ=fzsgSs=1. These hidden variables correspond to refined convolutionalfutures that incorporate information and attention from other feature maps, so as to better representthe key information for the pixel-level task at hand. Intuitively, the structured attention tensor shouldhelp refining the hidden variables to allow better performance at various pixel-level prediction tasks.As in (Xu et al., 2017a), for every pair of emittingeandreceivingrscales, we consider a dedicatedattention tensor ae;r. Very importantly, in our case this attention tensor is structured following (1),and so we have a set of hidden spatial attention maps M=fmte;rgS;S;Te;r;t =1and hidden channelattention vectors V=fvte;rgS;S;Te;r;t =1. More precisely, mte;r2 f0;1gPandvte;r2 f0;1gCarea binary spatial map and a stochastic channel-wise vector, hencePCc=1vt;ce;r= 1. 
In this way,we reduce ambiguity and ease the learning. This also means that the model is conceived to payattention to only Tchannels of the feature map. While this could seem limiting at first glance weremark that: (i) the model learns which are the optimalTchannels among the possible Cthat haveto be used to refine the hidden variables and (ii) the posterior distribution of mtboils down to aconvex combination of all channels, as it will appear clear when discussing the inference procedure.2.1 E NERGY FUNCTION AND VARIATIONAL APPROXIMATIONOur model consists on three different latent variables: the hidden features Z, and the hidden attentionmapsMand vectors V. In addition, we also consider inferring the CRF kernels, denoted by Kfrom3Under review as a conference paper at ICLR 2021the data. More precisely, the energy function associated to the proposed models writes:E(Z;M;V;K;F;) =XsXp;cz(zp;cr;fp;cr)+Xe;rXp;c;p0;c0Xtmt;pe;rvt;ce;r (zp;cr;zp0;c0e;ke;p0;c0r;p;c) +k(fp;cr;fp0;c0e;ke;p0;c0r;p;c); (2)wherez,kand are potentials to be defined and ke;p0;c0r;p;c denotes the kernel value weighting theinformation flow from the (p0;c0)-th value of the feature map of scale eto the (p;c)-th value of thefeature map of scale r. Since the exact posterior distribution is not computationally tractable, we optto approximate it with the following family of separable distributions:p(Z;M;V;KjF;)q(Z;M;V;K) =qz(Z)qm(M)qv(V)qk(K): (3)In that case, the optimal solution for each of the factors of the distribution is to take the expectationw.r.t. to all the others, for instance:qz(Z)/expEqm(M)qv(V)qk(K)nE(Z;M;V;K;F;)o: (4)It can be shown that the optimal variational factors write:qz(zp;cr)/expz(zp;cr;fp;cr) +Xe6=rXtmt;pe;rvt;ce;rXp0;c0Eqzqkf (zp;cr;zp0;c0e;ke;p0;c0r;p;c)g;qm(mt;pe;r)/expmt;pe;rXcvt;ce;rXp0;c0Eqz;qkf (zp;cs;zp0;c0s0;ke;p0;c0r;p;c)g;qv(vt;ce;r)/expvt;ce;rXpmt;pe;rXp0;c0Eqz;qkf (zp;cs;zp0;c0s0;ke;p0;c0r;p;c)g;qk(ke;p0;c0r;p;c)/expk(fp;cr;fp0;c0e;ke;p0;c0r;p;c) +Xtmt;pe;rvt;ce;rEqzf (zp;cs;zp0;c0s0;ke;p0;c0r;p;c)g;(5)where mt;pe;r=Eqmfmt;pe;rgdenotes the the posterior mean, and analogously for vt;ce;r. This resultalso implies that thanks to the variational approximation in (3), the posterior distributions factorisein each of the variables above, e.g.qz(Z) =QS;P;Cr;p;c =1qz(zp;cr). The relation between the varioushidden variables as for their inference is shown in Figure 2 (left). In addition, we also show theinformation flow between the hidden variables using arrows. Finally, on Figure 2 (right) we showthe relation between the channel-wise and spatial attention variables and how the final structuredattention tensor is computed.2.2 I NFERENCE WITH VISTA-N ETIn order to construct an operative model we need to define the potentials z,kand . In our case,the unary potentials correspond to:z(zp;cr;fp;cr) =bp;cr2(zp;crfp;cr)2; k(fp;cr;fp0;c0e;ke;p0;c0r;p;c) =12(ke;p0;c0r;p;cfp;crfp0;c0e)2;(6)wherebp;cs>0is a weighting factor. is bilinear in the hidden feature maps: (zp;cr;zp0;c0e;ke;p0;c0r;p;c) =zp;crke ;p0;c0r;p;czp0;c0e: (7)Using the over bar notation also for the hidden features and kernels, e.g. zp;cs=Eqzfzp;csg, and bycombining the kernel definitions (6) and (7) with the expression of the variational factors (5), weobtain the following update rules for the latent variables.Z-step. 
Z-step. It can be seen that the posterior distribution $q_z$ is Gaussian, with mean:

$$\bar z^{p,c}_r=\frac{1}{b^{p,c}_r}\Big(b^{p,c}_r f^{p,c}_r+\sum_{e}\sum_t \bar m^{t,p}_{e,r}\bar v^{t,c}_{e,r}\sum_{p',c'}\bar k^{e,p',c'}_{r,p,c}\,\bar z^{p',c'}_{e}\Big).\qquad(8)$$

Figure 2: (left) Schematic representation of the various hidden variables in VISTA-Net. For each pair of emitting $e$ and receiving $r$ scales, the respective convolutional features $f$ are shown in blue, the hidden variables $z$ in green, the associated learned kernel $k$ in yellow, and the channel-wise and spatial attention vector and map $v$ and $m$ in red. Arrows of the corresponding color denote the flow of information when updating the variable. (right) The computational relationships between the channel-wise and spatial attention variables, and the operations required to compute the final structured attention tensor: each $m^t_{e,r}$ (of shape $1\times W\times H$) is multiplied with the corresponding $v^t_{e,r}$ (of shape $C\times 1\times 1$), and the $T$ resulting $C\times W\times H$ rank-1 tensors are summed to give $a_{e,r}$.

This corresponds to the update rule obtained in (Xu et al., 2017a), with two remarkable differences. First, the posterior of the attention gate corresponds to the posterior of the structured tensor of rank $T$. Second, the impact of the neighboring features is weighted by the expected kernel value $\bar k^{e,p',c'}_{r,p,c}$.

M-step. The variational approximation leads to a Bernoulli distribution for $q_m(m^{t,p}_{e,r})$, which boils down to the following posterior mean value obtained with the sigmoid function $\sigma$:

$$\bar m^{t,p}_{e,r}=\sigma\Big(\sum_{c}\bar v^{t,c}_{e,r}\sum_{p',c'}\bar z^{p,c}_r\,\bar k^{e,p',c'}_{r,p,c}\,\bar z^{p',c'}_{e}\Big).\qquad(9)$$

V-step. It can be shown that the approximate posterior distribution is categorical, and that the expected value of each dimension of $v^{t}_{e,r}$ can be computed with the softmax operator:

$$\big(\bar v^{t,c}_{e,r}\big)_{c=1}^{C}=\mathrm{softmax}\Big(\sum_{p}\bar m^{t,p}_{e,r}\sum_{p',c'}\bar z^{p,c}_r\,\bar k^{e,p',c'}_{r,p,c}\,\bar z^{p',c'}_{e}\Big)_{c=1}^{C}.\qquad(10)$$

K-step. Finally, we need to derive the update rules for $K$. By further deriving the corresponding variational posterior distribution, it can be shown that the posterior distribution of the kernels is Gaussian, with mean:

$$\bar k^{e,p',c'}_{r,p,c}=f^{p,c}_r f^{p',c'}_e+\sum_t \bar m^{t,p}_{e,r}\bar v^{t,c}_{e,r}\,\bar z^{p,c}_r\,\bar z^{p',c'}_e.\qquad(11)$$

This solution is very straightforward, but since the kernels are estimated independently for each pair of receiving $(r,p,c)$ and emitting $(e,p',c')$ pixels, it has two major drawbacks. First, the kernel values are estimated without any spatial context. Second, given the large number of kernel values, one must find a very efficient way to compute them. We propose to kill two birds with one stone by learning the kernels from the features using convolutional layers. By design, convolutions take spatial context into account, and many popular libraries have efficient implementations of the convolution operation. The estimated kernel corresponding to the input channel $c'$ of scale $e$, $k^{e,c'}_{r}$, is computed via a convolutional operation whose input is the concatenation of the tensor $f_r+z_r\odot\sum_{t=1}^{T}m^{t}_{r,e}\otimes v^{t}_{r,e}$ and the image $z^{c'}_e$ resized to the spatial size of $f_r$.

Joint learning. We implement the inference procedure described above within the neural network, on top of the CNN front-end. Implementing all inference operations using available deep learning operators has two prominent advantages. First, we can perform the inference and learn the CNN front-end at the same time, within the same formalism and for the same aim. Second, this allows direct parallelization of our method, speeding up training and inference. A minimal sketch of one round of these updates is given below; the precise implementation goes as follows.
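Before the operator-level description, the following hypothetical PyTorch sketch shows how one round of the updates (8)-(10) can be realized for a single emitting/receiving pair of scales, using convolutional message passing as in the implementation described next (all names are ours and purely illustrative, and the single-pair setting omits the sum over emitting scales in Eq. 8):

```python
import torch
import torch.nn.functional as F

def vista_attention_step(f_r, z_r, z_e, conv_kernel, v_bar, m_bar):
    """Mean-field updates for one emitting -> receiving pair of scales.

    f_r  : observed features at the receiving scale, (1, C, W, H)
    z_r  : current hidden features at the receiving scale, (1, C, W, H)
    z_e  : hidden features at the emitting scale, resized to (1, C, W, H)
    conv_kernel : learned message-passing kernel, (C, C, k, k), k odd
    v_bar: posterior-mean channel attention, (T, C)
    m_bar: posterior-mean spatial attention, (T, W, H)
    """
    # Message passing z_{e->r} = k * z_e (the neighborhood term of Eq. 8).
    z_etor = F.conv2d(z_e, conv_kernel, padding=conv_kernel.shape[-1] // 2)

    # Compatibility term z_r . z_{e->r}, shared by the M- and V-steps.
    s = z_r * z_etor                                    # (1, C, W, H)

    # M-step (Eq. 9): channel sum weighted by v_bar, then pixel-wise sigmoid.
    m_new = torch.sigmoid(torch.einsum('tc,bcwh->twh', v_bar, s))

    # V-step (Eq. 10): pixel sum weighted by m_bar, then channel softmax.
    v_new = torch.softmax(torch.einsum('twh,bcwh->tc', m_bar, s), dim=-1)

    # Z-step (Eq. 8): refine z_r with the rank-T structured attention tensor.
    a = torch.einsum('tc,twh->cwh', v_new, m_new)       # (C, W, H)
    z_new = f_r + a.unsqueeze(0) * z_etor
    return z_new, m_new, v_new
```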
Regarding $z_r$, message passing from the $e$-th scale to the $r$-th scale is first performed with $z_{e\to r}\leftarrow k_{e,r}\circledast z_e$, where $\circledast$ denotes the convolution operation and $k_{e,r}$ denotes the corresponding learned convolution kernel. We then apply an element-wise product with the corresponding structured attention tensor $\sum_{t=1}^{T}m^{t}_{e,r}\otimes v^{t}_{e,r}$. Finally, we compute the element-wise sum with the other emitting scales and the feature maps $f_r$, see (8). Regarding $m_{e,r}$, we first compute the element-wise product between $z_r$ and $z_{e\to r}$. The sum over channels weighted by $v_{e,r}$ is computed before applying a pixel-wise sigmoid, see (9). Regarding $v_{e,r}$, we operate in a very similar fashion, but weighting each pixel with $m_{e,r}$ and then summing every channel independently, before applying the softmax, see (10). Regarding $k^{e,c'}_{r}$, as discussed before, it is computed via a convolutional operation on the concatenation of $f_r+z_r\odot\sum_{t=1}^{T}m^{t}_{r,e}\otimes v^{t}_{r,e}$ and the image $z^{c'}_e$ resized to the spatial size of $f_r$. In terms of initialization, we draw a random guess for $M$ and $V$, and set $Z$ to $F$. This allows us to first update the kernels, and then the other variables.

Once the hidden variables are updated, we use them to address several different pixel-wise prediction tasks involving continuous and discrete variables, including monocular depth estimation, surface normal estimation and semantic segmentation. Following previous works, the network optimization losses for these three tasks are a standard L2 loss (Xu et al., 2017c), a cosine similarity loss (Eigen et al., 2014) and a cross-entropy loss (Chen et al., 2016a), respectively. The CNN front-end and VISTA-Net are jointly trained end-to-end.

3 EXPERIMENTAL EVALUATION
3.1 DATASETS AND EXPERIMENTAL PROTOCOL
Tasks and Datasets. We demonstrate the effectiveness of VISTA-Net on two tasks: monocular depth estimation, on the NYU-v2 (Silberman et al., 2012) and KITTI (Geiger et al., 2013) datasets, and semantic segmentation, on Pascal-Context (Mottaghi et al., 2014), Pascal VOC2012 (Everingham et al., 2010) and Cityscapes (Cordts et al., 2016). We also conducted experiments on the surface normal estimation task on ScanNet (Dai et al., 2017), but due to lack of space the associated results are reported in the Appendix.

For NYU-v2 and KITTI we follow the experimental settings proposed by Eigen et al. (2014). For NYU-v2 we use 120K RGB-Depth pairs with a resolution of 480x640 pixels, acquired with a Microsoft Kinect device from 464 indoor scenes, using 249 scenes for training and 215 scenes (654 images) for test. For KITTI we specifically use 22,600 frames from 32 scenes for training and 697 frames from the remaining 29 scenes for test.

For Pascal-Context we follow the works (Chen et al., 2016a; Zhang et al., 2018) and consider the 59 most frequent classes. The remaining classes are masked during training and test. Pascal VOC2012 contains 20 classes, divided into 10,582 training, 1,449 validation and 1,456 test images. Our method is trained using the protocol described in (Zhong et al., 2020; Long et al., 2015). For the Cityscapes dataset, only the 5,000 finely annotated images are used in our experiments, split into 2,975/500/1,525 images for training, validation, and test.

Evaluation Metrics.
To evaluate the performance on monocular depth estimation, we consider several metrics as in (Eigen & Fergus, 2015), including mean relative error (rel), root mean squared error (rms), mean log10 error (log10), and accuracy with threshold $t$ ($t\in\{1.25, 1.25^2, 1.25^3\}$). For semantic segmentation, we consider two metrics following (Zhou et al., 2017; Zhang et al., 2018), i.e. pixel accuracy (pixAcc) and mean intersection over union (mIoU), averaged over classes.

Implementation Details. VISTA-Net is implemented in PyTorch. The experiments are conducted on four Nvidia Quadro RTX 6000 GPUs, each with 24 GB of memory. The ResNet-101 architecture pretrained on ImageNet (Deng et al., 2009) is used to initialize the backbone network of VISTA-Net in all experiments, except for those on the Cityscapes dataset, where we choose HRNetV2-W48 (whose complexity is comparable to dilated ResNet-101) for a fair comparison with previous works. Our model can be used for effective deep feature learning in both single-scale and multi-scale contexts. To boost performance, following previous works (Xie & Tu, 2015; Xu et al., 2017a), we also consider features output by different convolutional blocks of a CNN backbone (e.g. res3c, res4f, res5d of a ResNet-50). For the semantic segmentation task, we use a learning rate of 0.001 on Pascal-Context and Pascal VOC2012 and of 0.01 on Cityscapes, with a momentum of 0.9 and a weight decay of 0.0001, using a polynomial learning rate scheduler as previously done in (Zhang et al., 2018; Chen et al., 2016a). For the monocular depth estimation task, the learning rate is set to $10^{-4}$ with a weight decay of 0.01. The Adam optimizer is used in all our experiments, with a batch size of 8 for monocular depth estimation and 16 for semantic segmentation. The total number of training epochs is set to 50 for the depth prediction experiments, to 150 for the Pascal-Context and Pascal VOC2012 datasets, and to 500 for the Cityscapes dataset.

Table 1: Depth estimation on the KITTI dataset. Only monocular estimation methods are reported. Errors (abs-rel, sq-rel, rms, log-rms): lower is better; accuracies (<1.25, <1.25^2, <1.25^3): higher is better.

Method | abs-rel | sq-rel | rms | log-rms | <1.25 | <1.25^2 | <1.25^3
CC (Ranjan et al., 2019) | 0.140 | 1.070 | 5.326 | 0.217 | 0.826 | 0.941 | 0.975
Bian et al. (2019) | 0.137 | 1.089 | 5.439 | 0.217 | 0.830 | 0.942 | 0.975
S3Net (Cheng et al., 2020a) | 0.124 | 0.826 | 4.981 | 0.200 | 0.846 | 0.955 | 0.982
MS-CRF (Xu et al., 2017c) | 0.125 | 0.899 | 4.685 | - | 0.816 | 0.951 | 0.983
AG-CRF (Xu et al., 2017a) | 0.126 | 0.901 | 4.689 | 0.157 | 0.813 | 0.950 | 0.982
Monodepth2 (Godard et al., 2019) | 0.115 | 0.903 | 4.863 | 0.193 | 0.877 | 0.959 | 0.981
pRGBD (Tiwari et al., 2020) | 0.113 | 0.793 | 4.655 | 0.188 | 0.874 | 0.96 | 0.983
SGDepth (Klingner et al., 2020) | 0.107 | 0.768 | 4.468 | 0.180 | 0.891 | 0.963 | 0.982
Johnston & Carneiro (2020) | 0.106 | 0.861 | 4.699 | 0.185 | 0.889 | 0.962 | 0.982
Shu et al. (2020) | 0.104 | 0.729 | 4.481 | 0.179 | 0.893 | 0.965 | 0.984
DORN (Fu et al., 2018) | 0.072 | 0.307 | 2.727 | 0.120 | 0.932 | 0.984 | 0.994
Yin et al. (2019) | 0.072 | - | 3.258 | 0.117 | 0.938 | 0.990 | 0.998
PackNet-SfM (Guizilini et al., 2020) | 0.071 | 0.359 | 3.153 | 0.109 | 0.944 | 0.990 | 0.997
Lee et al. (2019) | 0.061 | 0.261 | 2.834 | 0.099 | 0.954 | 0.992 | 0.998
VISTA-Net (ours) | 0.063 | 0.255 | 2.776 | 0.099 | 0.954 | 0.993 | 0.998

NYU-v2 results (quantitative part of Figure 3; errors: lower is better, accuracies: higher is better):

Method | rel | log10 | rms | <1.25 | <1.25^2 | <1.25^3
PAD-Net (Xu et al., 2018) | 0.214 | 0.091 | 0.792 | 0.643 | 0.902 | 0.977
Li et al. (2017) | 0.152 | 0.064 | 0.611 | 0.789 | 0.955 | 0.988
CLIFFNet (Lijun et al., 2020) | 0.128 | 0.171 | 0.493 | 0.844 | 0.964 | 0.991
Laina et al. (2016) | 0.127 | 0.055 | 0.573 | 0.811 | 0.953 | 0.988
MS-CRF (Xu et al., 2017c) | 0.121 | 0.052 | 0.586 | 0.811 | 0.954 | 0.987
Lee & Kim (2020) | 0.119 | 0.5 | - | 0.87 | 0.974 | 0.993
AG-CRF (Xu et al., 2017a) | 0.112 | 0.051 | 0.526 | 0.818 | 0.960 | 0.989
DORN (Fu et al., 2018) | 0.115 | 0.051 | 0.509 | 0.828 | 0.965 | 0.992
Xia et al. (2020) | 0.116 | - | 0.512 | 0.861 | 0.969 | 0.991
Yin et al. (2019) | 0.108 | 0.048 | 0.416 | 0.875 | 0.976 | 0.994
Lee et al. (2019) | 0.113 | 0.049 | 0.407 | 0.871 | 0.977 | 0.995
VISTA-Net (ours) | 0.111 | 0.048 | 0.393 | 0.881 | 0.979 | 0.996

Figure 3: Depth estimation: quantitative (left, table above) and qualitative (right) comparison on NYU-v2. The qualitative panels show, from left to right: original image, GT, DORN (Fu et al., 2018), VISTA-Net.

3.2 EXPERIMENTAL RESULTS AND ANALYSIS
Monocular Depth Estimation. Comparative results on the KITTI dataset are shown in Table 1. We propose a comparison with state-of-the-art models such as (Eigen et al., 2014; Ranjan et al., 2019; Bian et al., 2019; Godard et al., 2019; Fu et al., 2018; Yin et al., 2019; Lee et al., 2019; Guizilini et al., 2020). In addition, we demonstrate the effectiveness of our VISTA-Net by comparing it with MS-CRF (Xu et al., 2017c), a previous approach that exploits a probabilistic framework for multi-scale feature learning but does not consider an attention mechanism. Our approach is superior, thus demonstrating the effectiveness of the proposed attention model. We also compare with AG-CRF (Xu et al., 2017a), adapting their model to the monocular depth estimation problem. Also in this case VISTA-Net outperforms the competitor, confirming the importance of having a joint structured spatial- and channel-wise attention model. Note that AG-CRF (Xu et al., 2017a) and VISTA-Net are compared using the same backbone. In order to demonstrate the competitiveness of our approach in an indoor scenario, we also report results on the NYU-v2 dataset in Fig. 3. Similarly to the experiments on KITTI, VISTA-Net outperforms both state-of-the-art approaches and previous methods based on attention gates and CRFs (Xu et al., 2017c;a).

Semantic Segmentation. We first compare VISTA-Net with the most recent methods on the Pascal-Context dataset, including (Zhang et al., 2018; Fu et al., 2019; Zhu et al., 2019; Ding et al., 2019; Zhang et al., 2019b; Wang et al., 2020; He et al., 2019). As for the depth estimation task, also in this case we evaluate the performance of AG-CRF (Xu et al., 2017a), adapting the original code to the semantic segmentation task. VISTA-Net, as shown in Table 2, is 0.6 points better according to the mIoU metric than the best available method, i.e. AG-CRF. Importantly, VISTA-Net outperforms EncNet (Zhang et al., 2018), which uses only channel-wise attention, as well as DANet (Fu et al., 2019) and SANet (Zhong et al., 2020), which consider inter-dependencies at both the spatial and channel level in their attention models.
Table 2: Semantic segmentation on PASCAL-Context. D-ResNet-101 denotes Dilated ResNet-101.

Method | Backbone | pixAcc% | mIoU%
CFM (VGG+MCG) (Dai et al., 2015b) | VGG-16 | - | 34.4
DeepLab-v2 (Chen et al., 2016a) | VGG-16 | - | 37.6
FCN-8s (Long et al., 2015) | VGG-16 | 50.7 | 37.8
BoxSup (Dai et al., 2015a) | VGG-16 | - | 40.5
ConvPP-8s (Xie et al., 2016) | VGG-16 | - | 41.0
PixelNet (Bansal et al., 2017) | VGG-16 | 51.5 | 41.4
HRNetV2 (Wang et al., 2020) | - | - | 54.0
EncNet (Zhang et al., 2018) | D-ResNet-101 | 79.23 | 51.7
DANet (Fu et al., 2019) | D-ResNet-101 | - | 52.6
ANN (Zhu et al., 2019) | D-ResNet-101 | - | 52.8
SpyGR (Li et al., 2020a) | ResNet-101 | - | 52.8
SANet (Zhong et al., 2020) | ResNet-101 | 80.6 | 53.0
SVCNet (Ding et al., 2019) | ResNet-101 | - | 53.2
CFNet (Zhang et al., 2019b) | ResNet-101 | - | 54.0
APCNet (He et al., 2019) | D-ResNet-101 | - | 54.7
AG-CRF (Xu et al., 2017a) | D-ResNet-101 | 80.8 | 54.8
VISTA-Net (ours) | D-ResNet-101 | 81.1 | 55.4

Table 3: Semantic segmentation: results on the Cityscapes validation and test sets (trained on the standard training set) (left) and on the PASCAL VOC2012 validation set (right). D-ResNet-101 means Dilated ResNet-101. (*) indicates COCO pretrained weights.

(left) Method | Backbone | Test set | mIoU
Dynamic (Li et al., 2020b) | Layer33-PSP | Val | 79.7
SpyGR (Li et al., 2020a) | ResNet-101 | Val | 80.5
CCNet (Huang et al., 2019b) | ResNet-101 | Val | 81.3
Panoptic-DeepLab (Cheng et al., 2020b) | D-ResNet-101 | Val | 81.5
CDGCNet (Hu et al., 2020) | D-ResNet-101 | Val | 81.9
VISTA-Net | HRNetV2-W48 | Val | 82.3
PSANet (Zhao et al., 2018) | D-ResNet-101 | Test | 78.6
PAN (Li et al., 2018) | D-ResNet-101 | Test | 78.6
AAF (Ke et al., 2018) | D-ResNet-101 | Test | 79.1
HRNet (Wang et al., 2020) | HRNetV2-W48 | Test | 80.4
Dynamic (Li et al., 2020b) | Layer33-PSP | Test | 80.7
VISTA-Net | HRNetV2-W48 | Test | 81.4

(right) Method | Backbone | mIoU%
DeepLabV3 (Chen et al., 2017) | D-ResNet-101 | 75.7
Dynamic (Li et al., 2020b) | Layer33 | 79.0
Res2Net (Gao et al., 2019) | Res2Net-101 | 80.2
DANet (Fu et al., 2019) | ResNet-101 | 80.4
Auto-Deeplab (Liu et al., 2019) | ResNet-101 | 82.0
EncNet (Zhang et al., 2018) | D-ResNet-101 | 85.9
SANet (Zhong et al., 2020) | ResNet-101 | 86.1
VISTA-Net | D-ResNet-101 | 89.8

In Table 3 (right) we show the results on PASCAL VOC2012. Again, our method outperforms EncNet (Zhang et al., 2018), SANet (Zhong et al., 2020) and DANet (Fu et al., 2019). In particular, VISTA-Net is 3.7 points better according to the mIoU metric than the best available method, i.e. SANet. Finally, Table 3 (left) reports the results on Cityscapes. As in the previous two datasets, VISTA-Net outperforms the competitors (by nearly one point of mIoU). Additional results are reported in the Appendix.

Ablation Study. We also performed an ablation study on the Pascal-Context dataset to further demonstrate the impact of each proposed component. Fig. 5 (left) shows that the performance of VISTA-Net degrades not only when the model does not employ the structured attention mechanism, but also when only channel-wise or spatial-wise attention is used. Moreover, we can also see the advantage of using the proposed probabilistic formulation for jointly modeling both spatial- and channel-wise attention in a principled manner. Interestingly, the performance achieved by each of the variants (spatial, channel, no probabilistic formulation) is similar. This leads us to believe that the proposed method's competitive advantage comes from combining structured attention with a probabilistic formulation. Notably, the feature refinement through message passing seems to be the most crucial contribution to improving the performance.
For the sake of completeness, we also report the results of DANet and of AG-CRF (which corresponds to the Multiple-scale/Spatial setting in Fig. 5, left). Finally, in Fig. 5 we show the performance of VISTA-Net for different values of the tensor rank T. It is important to notice that the framework reaches better performance when T is higher. Fig. 4 clearly illustrates the perceptual improvement in the segmentation masks obtained with higher values of the attention tensor rank. Additional examples are provided in the supplementary material.

Figure 4: Semantic segmentation maps obtained with VISTA-Net on the Pascal-Context dataset. Panels: (a) original image, (b) GT, (c) VISTA-Net (w/o Att.), (d) VISTA-Net (rank=1), (e) VISTA-Net (rank=9).

Figure 5: Ablation study on the Pascal-Context dataset: performance of VISTA-Net (left) for different attention mechanisms and scales, (right) for different values of the tensor rank T.

Scales | Structured Attention | Probabilistic | mIoU | PixAcc
DANet (Fu et al., 2019) | Separate attention | No | 52.6 | -
Single scale | No structure | Yes | 51.7 | 78.9
Single scale | Spatial | Yes | 53.0 | 79.7
Single scale | Channel | Yes | 53.1 | 79.8
Single scale | Low-rank tensor | No | 53.2 | 79.9
Single scale | Low-rank tensor | Yes | 53.9 | 80.3
Multiple scale | No structure | Yes | 52.8 | 79.5
Multiple scale | Spatial | Yes | 54.8 | 80.8
Multiple scale | Channel | Yes | 54.6 | 80.6
Multiple scale | Low-rank tensor | No | 54.7 | 80.7
Multiple scale | Low-rank tensor | Yes | 55.4 | 81.1

[Figure 5 (right): plot of mIoU (%) against the tensor rank T, with x-axis ticks at 0, 1, 3, 7 and 9, and mIoU increasing from 54.2 to 55.4.]

4 CONCLUSIONS
In this paper we proposed a novel approach to improve the learning of deep feature representations for dense pixel-wise prediction tasks. Our approach seamlessly integrates a novel structured attention model within a probabilistic framework. In particular, we proposed to structure the attention tensors as the sum of T rank-one tensors, each being the tensor product of a spatial attention map and a channel attention vector. These two kinds of variables are jointly learned within the probabilistic formulation, made tractable thanks to the variational approximation. The proposed structured attention is rich enough to capture complex spatial- and channel-level inter-dependencies, while being efficient to compute. The overall optimization of the probabilistic model and of the CNN front-end is performed jointly. Extensive experimental evaluations show that VISTA-Net outperforms state-of-the-art methods on several datasets, thus confirming the importance of structuring the attention variables for dense pixel-level prediction tasks.<|im_end|> <|im_start|>assistant
### Review Title
Recommendation to Reject
### Review Text
##########################################################################
Summary: This paper proposes the VarIational STructured Attention networks (VISTA-Net), which improve previous SOTA models for dense pixel-wise prediction tasks. The proposed VISTA-Net is featured by two aspects: 1) a new structured attention is proposed, which is able to jointly model spatial-level and channel-level dependencies; 2) it incorporates the proposed structured attention into a CRF-like inference framework, which allows probabilistic inference. Experimental studies are conducted on monocular depth estimation and semantic image segmentation, consistently showing improved performance of VISTA-Net.
##########################################################################
Reasons for score: Overall, I vote for rejection. My major concerns lie in three aspects, as detailed in Cons below: 1) This work is highly similar to Xu et al. (2017a) in terms of both methods and presentation.
The difference is not significant; 2) While the presentation mainly follows Xu et al. (2017a), it needs some improvement; 3) The experimental studies lack more detailed analysis of the proposed method.
##########################################################################
Pros: 1. The work is well-motivated. The aim of the proposed method sounds natural to me. 2. I like the ablation studies. But they could be performed on at least one more dataset.
##########################################################################
Cons: 1. This work is highly similar to Xu et al. (2017a) in terms of both methods and presentation. The difference is not significant. Method-wise, as discussed in Related Work, the difference only lies in that VISTA-Net takes channel-level dependencies into consideration. First, this means that "Moreover, we integrate the estimation of the attention within a probabilistic framework" (quoted from the abstract) is not a novel contribution. Second, considering channel-level dependencies in attention has limited novelty. As discussed in Related Work, multiple studies have explored several ways to do this. In addition, a key step in the proposed method is Equation (1), where the tensor multiplication operator is not explained. In my understanding, it should be the outer product, or more generally, the Kronecker product. The missing clear definition of this operator hinders the clarity of the description of the proposed method. Presentation-wise, the similarity is even higher. The entire section 2 follows the exact organization of section 2 in Xu et al. (2017a). By comparing the equations and presentations, it becomes more convincing that the novelty of this work is quite limited. 2. While the presentation mainly follows Xu et al. (2017a), it needs some improvement. First, the same notations as in Xu et al. (2017a) are used. However, some key things are not well explained. For example, the set of hidden variables $z_s$ corresponding to $f_s$ comes out of nowhere. I had to resort to Xu et al. (2017a) to understand why we need $z_s$. Second, as mentioned above, key steps like Equation (1) lack clear explanations. 3. The experimental studies lack more detailed analysis of the proposed method. It would be more meaningful to visualize those attention maps/gates instead of dense prediction results.
##########################################################################
Questions during rebuttal period: Please address and clarify the cons above.
##########################################################################
Comments after the rebuttal period: Pros: First, it is totally acceptable to follow the notations and organization of Xu et al. (2017a), as long as the statements are clear and self-contained. The original submission failed to provide key details. The authors have made revisions to address this concern. Thanks! Second, I appreciate the extra experimental results and visualizations. Cons: However, the authors' responses do not fully address my concerns about the novelty, especially method-wise. I will raise my score to 5, but still recommend rejection.
### Review Rating
5: Marginally below acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
HyxPx3R9tm
ICLR.cc/2019/Conference
2019
Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
["Xue Bin Peng", "Angjoo Kanazawa", "Sam Toyer", "Pieter Abbeel", "Sergey Levine"]
Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods.
["reinforcement learning", "generative adversarial networks", "imitation learning", "inverse reinforcement learning", "information bottleneck"]
ABSTRACT
Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninformative gradients. In this work, we propose a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck. By enforcing a constraint on the mutual information between the observations and the discriminator's internal representation, we can effectively modulate the discriminator's accuracy and maintain useful and informative gradients. We demonstrate that our proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms. Our primary evaluation studies the applicability of the VDB to imitation learning of dynamic continuous control skills, such as running. We show that our method can learn such skills directly from raw video demonstrations, substantially outperforming prior adversarial imitation learning methods. The VDB can also be combined with adversarial inverse reinforcement learning to learn parsimonious reward functions that can be transferred and re-optimized in new settings. Finally, we demonstrate that the VDB can train GANs more effectively for image generation, improving upon a number of prior stabilization methods. (Video: xbpeng.github.io/projects/VDB/)

1 INTRODUCTION
Adversarial learning methods provide a promising approach to modeling distributions over high-dimensional data with complex internal correlation structures. These methods generally use a discriminator to supervise the training of a generator in order to produce samples that are indistinguishable from the data. A particular instantiation is generative adversarial networks, which can be used for high-fidelity generation of images (Goodfellow et al., 2014; Karras et al., 2017) and other high-dimensional data (Vondrick et al., 2016; Xie et al., 2018; Donahue et al., 2018). Adversarial methods can also be used to learn reward functions in the framework of inverse reinforcement learning (Finn et al., 2016a; Fu et al., 2017), or to directly imitate demonstrations (Ho & Ermon, 2016). However, they suffer from major optimization challenges, one of which is balancing the performance of the generator and discriminator. A discriminator that achieves very high accuracy can produce relatively uninformative gradients, but a weak discriminator can also hamper the generator's ability to learn. These challenges have led to widespread interest in a variety of stabilization methods for adversarial learning algorithms (Arjovsky et al., 2017; Kodali et al., 2017; Berthelot et al., 2017).

In this work, we propose a simple regularization technique for adversarial learning, which constrains the information flow from the inputs to the discriminator using a variational approximation to the information bottleneck.
By enforcing a constraint on the mutual information between the input observations and the discriminator's internal representation, we can encourage the discriminator to learn a representation that has heavy overlap between the data and the generator's distribution, thereby effectively modulating the discriminator's accuracy and maintaining useful and informative gradients for the generator.

Figure 1: Our method is general and can be applied to a broad range of adversarial learning tasks. Left: Motion imitation with adversarial imitation learning. Middle: Image generation. Right: Learning transferable reward functions through adversarial inverse reinforcement learning.

Our approach to stabilizing adversarial learning can be viewed as an adaptive variant of instance noise (Salimans et al., 2016; Sønderby et al., 2016; Arjovsky & Bottou, 2017). However, we show that the adaptive nature of this method is critical. Constraining the mutual information between the discriminator's internal representation and the input allows the regularizer to directly limit the discriminator's accuracy, which automates the choice of noise magnitude and applies this noise to a compressed representation of the input that is specifically optimized to model the most discerning differences between the generator and data distributions.

The main contribution of this work is the variational discriminator bottleneck (VDB), an adaptive stochastic regularization method for adversarial learning that substantially improves performance across a range of different application domains, examples of which are shown in Figure 1. Our method can be easily applied to a variety of tasks and architectures. First, we evaluate our method on a suite of challenging imitation tasks, including learning highly acrobatic skills from mocap data with a simulated humanoid character. Our method also enables characters to learn dynamic continuous control skills directly from raw video demonstrations, and drastically improves upon previous work that uses adversarial imitation learning. We further evaluate the effectiveness of the technique for inverse reinforcement learning, which recovers a reward function from demonstrations in order to train future policies. Finally, we apply our framework to image generation using generative adversarial networks, where employing the VDB improves performance in many cases.

2 RELATED WORK
Recent years have seen an explosion of adversarial learning techniques, spurred by the success of generative adversarial networks (GANs) (Goodfellow et al., 2014). A GAN framework is commonly composed of a discriminator and a generator, where the discriminator's objective is to classify samples as real or fake, while the generator's objective is to produce samples that fool the discriminator. Similar frameworks have also been proposed for inverse reinforcement learning (IRL) (Finn et al., 2016b) and imitation learning (Ho & Ermon, 2016). The training of adversarial models can be extremely unstable, with one of the most prevalent challenges being balancing the interplay between the discriminator and the generator (Berthelot et al., 2017). The discriminator can often overpower the generator, easily differentiating between real and fake samples, thus providing the generator with uninformative gradients for improvement (Che et al., 2016).
Alternative loss functions have been proposed to mitigate this problem (Mao et al., 2016; Zhao et al., 2016; Arjovsky et al., 2017). Regularizers have been incorporated to improve stability and convergence, such as gradient penalties (Kodali et al., 2017; Gulrajani et al., 2017a; Mescheder et al., 2018), reconstruction loss (Che et al., 2016), and a myriad of other heuristics (Sønderby et al., 2016; Salimans et al., 2016; Arjovsky & Bottou, 2017; Berthelot et al., 2017). Task-specific architectural designs can also substantially improve performance (Radford et al., 2015; Karras et al., 2017). Similarly, our method also aims to regularize the discriminator in order to improve the feedback provided to the generator. But instead of explicit regularization of gradients or architecture-specific constraints, we apply a general information bottleneck to the discriminator, which previous works have shown to encourage networks to ignore irrelevant cues (Achille & Soatto, 2017). We hypothesize that this then allows the generator to focus on improving the most discerning differences between real and fake samples.

Adversarial techniques have also been applied to inverse reinforcement learning (Fu et al., 2017), where a reward function is recovered from demonstrations, which can then be used to train policies to reproduce a desired skill. Finn et al. (2016a) showed an equivalence between maximum entropy IRL and GANs. Similar techniques have been developed for adversarial imitation learning (Ho & Ermon, 2016; Merel et al., 2017), where agents learn to imitate demonstrations without explicitly recovering a reward function. One advantage of adversarial methods is that by leveraging a discriminator in place of a reward function, they can be applied to imitate skills where reward functions can be difficult to engineer. However, the performance of policies trained through adversarial methods still falls short of those produced by manually designed reward functions, when such reward functions are available (Rajeswaran et al., 2017; Peng et al., 2018). We show that our method can significantly improve upon previous works that use adversarial techniques, and produces results of comparable quality to those from state-of-the-art approaches that utilize manually engineered reward functions.

Our variational discriminator bottleneck is based on the information bottleneck (Tishby & Zaslavsky, 2015), a technique for regularizing internal representations to minimize the mutual information with the input. Intuitively, a compressed representation can improve generalization by ignoring irrelevant distractors present in the original input. The information bottleneck can be instantiated in practical deep models by leveraging a variational bound and the reparameterization trick, inspired by a similar approach in variational autoencoders (VAEs) (Kingma & Welling, 2013). The resulting variational information bottleneck approximates this compression effect in deep networks (Alemi et al., 2016; Achille & Soatto, 2017). A similar bottleneck has also been applied to learn disentangled representations (Higgins et al., 2017). Building on the success of VAEs and GANs, a number of efforts have been made to combine the two. Makhzani et al. (2016) used adversarial discriminators during the training of VAEs to encourage the marginal distribution of the latent encoding to be similar to the prior distribution; similar techniques include Mescheder et al. (2017) and Chen et al. (2018). Conversely, Larsen et al. (2016) modeled the generator of a GAN using a VAE.
Zhao et al. (2016) used an autoencoder instead of a VAE to model the discriminator, but did not enforce an information bottleneck on the encoding. While instance noise is widely used in modern architectures (Salimans et al., 2016; Sønderby et al., 2016; Arjovsky & Bottou, 2017), we show that explicitly enforcing an information bottleneck leads to improved performance over simply adding noise for a variety of applications.

3 PRELIMINARIES
In this section, we provide a review of the variational information bottleneck proposed by Alemi et al. (2016) in the context of supervised learning. Our variational discriminator bottleneck is based on the same principle, and can be instantiated in the context of GANs, inverse RL, and imitation learning. Given a dataset $\{x_i,y_i\}$, with features $x_i$ and labels $y_i$, the standard maximum likelihood estimate $q(y_i|x_i)$ can be determined according to

$$\min_q\ \mathbb{E}_{x,y\sim p(x,y)}\big[-\log q(y|x)\big].\qquad(1)$$

Unfortunately, this estimate is prone to overfitting, and the resulting model can often exploit idiosyncrasies in the data (Krizhevsky et al., 2012; Srivastava et al., 2014). Alemi et al. (2016) proposed regularizing the model using an information bottleneck to encourage the model to focus only on the most discriminative features. The bottleneck can be incorporated by first introducing an encoder $E(z|x)$ that maps the features $x$ to a latent distribution over $Z$, and then enforcing an upper bound $I_c$ on the mutual information between the encoding and the original features, $I(X;Z)$. This results in the following regularized objective $J(q,E)$:

$$J(q,E)=\min_{q,E}\ \mathbb{E}_{x,y\sim p(x,y)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log q(y|z)\big]\Big]\quad\text{s.t.}\quad I(X;Z)\le I_c.\qquad(2)$$

Note that the model $q(y|z)$ now maps samples from the latent distribution $z$ to the label $y$. The mutual information is defined according to

$$I(X;Z)=\int p(x,z)\log\frac{p(x,z)}{p(x)p(z)}\,dx\,dz=\int p(x)E(z|x)\log\frac{E(z|x)}{p(z)}\,dx\,dz,\qquad(3)$$

where $p(x)$ is the distribution given by the dataset. Computing the marginal distribution $p(z)=\int E(z|x)p(x)\,dx$ can be challenging. Instead, a variational lower bound can be obtained by using an approximation $r(z)$ of the marginal. Since $\mathrm{KL}[p(z)\|r(z)]\ge 0$, we have $\int p(z)\log p(z)\,dz\ge\int p(z)\log r(z)\,dz$, and an upper bound on $I(X;Z)$ can be obtained via the KL divergence:

$$I(X;Z)\le\int p(x)E(z|x)\log\frac{E(z|x)}{r(z)}\,dx\,dz=\mathbb{E}_{x\sim p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big].\qquad(4)$$

Figure 2: Left: Overview of the variational discriminator bottleneck. The encoder first maps samples $x$ to a latent distribution $E(z|x)$. The discriminator is then trained to classify samples $z$ from the latent distribution. An information bottleneck $I(X;Z)\le I_c$ is applied to $Z$. Right: Visualization of discriminators trained to differentiate two Gaussians with different KL bounds $I_c$.

This provides an upper bound on the regularized objective, $\tilde J(q,E)\ge J(q,E)$:

$$\tilde J(q,E)=\min_{q,E}\ \mathbb{E}_{x,y\sim p(x,y)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log q(y|z)\big]\Big]\quad\text{s.t.}\quad \mathbb{E}_{x\sim p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big]\le I_c.\qquad(5)$$

To solve this problem, the constraint can be subsumed into the objective with a coefficient $\beta$:

$$\min_{q,E}\ \mathbb{E}_{x,y\sim p(x,y)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log q(y|z)\big]\Big]+\beta\Big(\mathbb{E}_{x\sim p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big]-I_c\Big).\qquad(6)$$
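For reference, the penalized objective in Equation (6) reduces to a few lines when the encoder is Gaussian and $r(z)=\mathcal{N}(0,I)$, since the KL term is then available in closed form. The sketch below is our own illustration (the names are hypothetical, not the authors' code), and omits the constant $-\beta I_c$ term since it does not affect the gradients with respect to $q$ and $E$:

```python
import torch
import torch.nn.functional as F

def vib_objective(logits, mu, std, labels, beta):
    """Variational information bottleneck loss of Equation (6).

    logits: classifier outputs q(y|z) evaluated at sampled z, (B, num_classes)
    mu, std: Gaussian encoder parameters of E(z|x), each (B, latent_dim)
    """
    nll = F.cross_entropy(logits, labels)  # E[-log q(y|z)]
    # KL[N(mu, diag(std^2)) || N(0, I)] in closed form, averaged over the batch.
    kl = 0.5 * (mu.pow(2) + std.pow(2) - 2.0 * std.log() - 1.0).sum(dim=1).mean()
    return nll + beta * kl

# z itself is drawn with the reparameterization trick:
# z = mu + std * torch.randn_like(std)
```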
Alemi et al. (2016) evaluated the method on supervised learning tasks, and showed that models trained with a VIB can be less prone to overfitting and more robust to adversarial examples.

4 VARIATIONAL DISCRIMINATOR BOTTLENECK
To outline our method, we first consider a standard GAN framework consisting of a discriminator $D$ and a generator $G$, where the goal of the discriminator is to distinguish between samples from the target distribution $p^*(x)$ and samples from the generator $G(x)$:

$$\max_G\min_D\ \mathbb{E}_{x\sim p^*(x)}\big[-\log(D(x))\big]+\mathbb{E}_{x\sim G(x)}\big[-\log(1-D(x))\big].$$

We incorporate a variational information bottleneck by introducing an encoder $E$ into the discriminator that maps a sample $x$ to a stochastic encoding $z\sim E(z|x)$, and then applying a constraint $I_c$ on the mutual information $I(X;Z)$ between the original features and the encoding. $D$ is then trained to classify samples drawn from the encoder distribution. A schematic illustration of the framework is shown in Figure 2. The regularized objective $J(D,E)$ for the discriminator is given by

$$J(D,E)=\min_{D,E}\ \mathbb{E}_{x\sim p^*(x)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log(D(z))\big]\Big]+\mathbb{E}_{x\sim G(x)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log(1-D(z))\big]\Big]\quad\text{s.t.}\quad \mathbb{E}_{x\sim\tilde p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big]\le I_c,\qquad(7)$$

with $\tilde p=\frac{1}{2}p^*+\frac{1}{2}G$ being a mixture of the target distribution and the generator. We refer to this regularizer as the variational discriminator bottleneck (VDB). To optimize this objective, we can introduce a Lagrange multiplier $\beta$:

$$J(D,E)=\min_{D,E}\max_{\beta\ge 0}\ \mathbb{E}_{x\sim p^*(x)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log(D(z))\big]\Big]+\mathbb{E}_{x\sim G(x)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log(1-D(z))\big]\Big]+\beta\Big(\mathbb{E}_{x\sim\tilde p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big]-I_c\Big).\qquad(8)$$

As we will discuss in Section 4.1 and demonstrate in our experiments, enforcing a specific mutual information budget between $x$ and $z$ is critical for good performance. We therefore adaptively update $\beta$ via dual gradient descent to enforce a specific constraint $I_c$ on the mutual information:

$$D,E\leftarrow\arg\min_{D,E}\mathcal{L}(D,E,\beta),\qquad \beta\leftarrow\max\Big(0,\ \beta+\alpha_\beta\big(\mathbb{E}_{x\sim\tilde p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big]-I_c\big)\Big),\qquad(9)$$

where $\mathcal{L}(D,E,\beta)$ is the Lagrangian

$$\mathcal{L}(D,E,\beta)=\mathbb{E}_{x\sim p^*(x)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log(D(z))\big]\Big]+\mathbb{E}_{x\sim G(x)}\Big[\mathbb{E}_{z\sim E(z|x)}\big[-\log(1-D(z))\big]\Big]+\beta\Big(\mathbb{E}_{x\sim\tilde p(x)}\big[\mathrm{KL}[E(z|x)\|r(z)]\big]-I_c\Big),\qquad(10)$$

and $\alpha_\beta$ is the stepsize for the dual variable in dual gradient descent (Boyd & Vandenberghe, 2004). In practice, we perform only one gradient step on $D$ and $E$, followed by an update to $\beta$. We refer to a GAN that incorporates a VDB as a variational generative adversarial network (VGAN).

In our experiments, the prior $r(z)=\mathcal{N}(0,I)$ is modeled with a standard Gaussian. The encoder $E(z|x)=\mathcal{N}(\mu_E(x),\Sigma_E(x))$ models a Gaussian distribution over the latent variables $Z$, with mean $\mu_E(x)$ and diagonal covariance matrix $\Sigma_E(x)$. When computing the KL loss, each batch of data contains an equal number of samples from $p^*(x)$ and $G(x)$. We use a simplified objective for the generator,

$$\max_G\ \mathbb{E}_{x\sim G(x)}\big[-\log(1-D(\mu_E(x)))\big],\qquad(11)$$

where the KL penalty is excluded from the generator's objective. Instead of computing the expectation over $Z$, we found that approximating the expectation by evaluating $D$ at the mean $\mu_E(x)$ of the encoder's distribution was sufficient for our tasks. The discriminator is modeled with a single linear unit followed by a sigmoid, $D(z)=\sigma(w_D^T z+b_D)$, with weights $w_D$ and bias $b_D$.
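To make Equations (9) and (10) concrete, a single VDB discriminator update with the dual step on $\beta$ could look like the following sketch (again a hedged illustration of ours, not the authors' code; `D`, `E` and `opt` are assumed to be the discriminator head, the stochastic encoder, and an optimizer over the parameters of both):

```python
import torch

def vdb_discriminator_step(D, E, opt, beta, x_real, x_fake, i_c, alpha_beta):
    """One step of Eq. (9): minimize the Lagrangian (10) w.r.t. D and E,
    then take a dual gradient step on the multiplier beta."""
    mu_r, std_r = E(x_real)                        # Gaussian encoder E(z|x)
    mu_f, std_f = E(x_fake)
    z_r = mu_r + std_r * torch.randn_like(std_r)   # reparameterized samples
    z_f = mu_f + std_f * torch.randn_like(std_f)

    # The two -log terms of Eq. (10).
    loss = -torch.log(D(z_r) + 1e-8).mean() - torch.log(1.0 - D(z_f) + 1e-8).mean()

    # KL[E(z|x) || N(0, I)], averaged over the mixture of real and fake samples.
    mu = torch.cat([mu_r, mu_f]); std = torch.cat([std_r, std_f])
    kl = 0.5 * (mu.pow(2) + std.pow(2) - 2.0 * std.log() - 1.0).sum(dim=1).mean()

    (loss + beta * (kl - i_c)).backward()
    opt.step(); opt.zero_grad()

    # Dual update of Eq. (9), projected onto beta >= 0.
    return max(0.0, beta + alpha_beta * (kl.item() - i_c))
```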
4.1 DISCUSSION AND ANALYSIS
To interpret the effects of the VDB, we consider the results presented by Arjovsky & Bottou (2017), which show that for two distributions with disjoint support, the optimal discriminator can perfectly classify all samples and its gradients will be zero almost everywhere. Thus, as the discriminator converges to the optimum, the gradients for the generator vanish accordingly. To address this issue, Arjovsky & Bottou (2017) proposed applying continuous noise to the discriminator inputs, thereby ensuring that the distributions have continuous support everywhere. In practice, if the original distributions are sufficiently distant from each other, the added noise will have negligible effects. As shown by Mescheder et al. (2017), the optimal choice for the variance of the noise to ensure convergence can be quite delicate. In our method, by first using a learned encoder to map the inputs to an embedding and then applying an information bottleneck on the embedding, we can dynamically adjust the variance of the noise such that the distributions not only share support in the embedding space, but also have significant overlap. Since the minimum amount of information required for binary classification is 1 bit, by selecting an information constraint $I_c<1$, the discriminator is prevented from perfectly differentiating between the distributions. To illustrate the effects of the VDB, we consider a simple task of training a discriminator to differentiate between two Gaussian distributions. Figure 2 visualizes the decision boundaries learned with different bounds $I_c$ on the mutual information. Without a VDB, the discriminator learns a sharp decision boundary, resulting in vanishing gradients for much of the space. But as $I_c$ decreases and the bound tightens, the decision boundary is smoothed, providing more informative gradients that can be leveraged by the generator.

Taking this analysis further, we can extend Theorem 3.2 from Arjovsky & Bottou (2017) to analyze the VDB, and show that the gradient of the generator will be non-degenerate for a small enough constraint $I_c$, under some additional simplifying assumptions. The result in Arjovsky & Bottou (2017) states that the gradient consists of vectors that point toward samples on the data manifold, multiplied by coefficients that depend on the noise. However, these coefficients may be arbitrarily small if the generated samples are far from real samples and the noise is not large enough. This can still cause the generator gradient to vanish. In the case of the VDB, the constraint ensures that these coefficients are always bounded below. Due to space constraints, this result is presented in Appendix A.

4.2 VAIL: VARIATIONAL ADVERSARIAL IMITATION LEARNING
To extend the VDB to imitation learning, we start with the generative adversarial imitation learning (GAIL) framework (Ho & Ermon, 2016), where the discriminator's objective is to differentiate between the state distribution induced by a target policy $\pi^*(s)$ and the state distribution of the agent's policy $\pi(s)$:

$$\max_\pi\min_D\ \mathbb{E}_{s\sim\pi^*(s)}\big[-\log(D(s))\big]+\mathbb{E}_{s\sim\pi(s)}\big[-\log(1-D(s))\big].$$

Figure 3: Simulated humanoid performing various skills: (a) backflip, (b) cartwheel, (c) dance, (d) run. VAIL is able to closely imitate a broad range of skills from mocap data.

The discriminator is trained to maximize the likelihood assigned to states from the target policy, while minimizing the likelihood assigned to states from the agent's policy. The discriminator also serves as the reward function for the agent, which encourages the policy to visit states that, to the discriminator, appear indistinguishable from the demonstrations. Similar to the GAN framework, we can incorporate a VDB into the discriminator:

$$J(D,E)=\min_{D,E}\max_{\beta\ge 0}\ \mathbb{E}_{s\sim\pi^*(s)}\Big[\mathbb{E}_{z\sim E(z|s)}\big[-\log(D(z))\big]\Big]+\mathbb{E}_{s\sim\pi(s)}\Big[\mathbb{E}_{z\sim E(z|s)}\big[-\log(1-D(z))\big]\Big]+\beta\Big(\mathbb{E}_{s\sim\tilde\pi(s)}\big[\mathrm{KL}[E(z|s)\|r(z)]\big]-I_c\Big),\qquad(12)$$

where $\tilde\pi=\frac{1}{2}\pi^*+\frac{1}{2}\pi$ represents a mixture of the target policy and the agent's policy.
The reward for $\pi$ is then specified by the discriminator, $r_t=-\log(1-D(\mu_E(s)))$. We refer to this method as variational adversarial imitation learning (VAIL).

4.3 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING
The VDB can also be applied to adversarial inverse reinforcement learning (Fu et al., 2017) to yield a new algorithm which we call variational adversarial inverse reinforcement learning (VAIRL). AIRL operates in a similar manner to GAIL, but with a discriminator of the form

$$D(s,a,s')=\frac{\exp(f(s,a,s'))}{\exp(f(s,a,s'))+\pi(a|s)},\qquad(13)$$

where $f(s,a,s')=g(s,a)+\gamma h(s')-h(s)$, with $g$ and $h$ being learned functions. Under certain restrictions on the environment, Fu et al. show that if $g(s,a)$ is defined to depend only on the current state $s$, the optimal $g^*(s)$ recovers the expert's true reward function $r^*(s)$ up to a constant, $g^*(s)=r^*(s)+\text{const}$. In this case, the learned reward can be re-used to train policies in environments with different dynamics, and will yield the same policy as if the policy were trained under the expert's true reward. In contrast, GAIL's discriminator typically cannot be re-optimized in this way (Fu et al., 2017). In VAIRL, we introduce stochastic encoders $E_g(z_g|s)$, $E_h(z_h|s)$, and $g(z_g)$, $h(z_h)$ are modified to be functions of the encoding. We can reformulate Equation 13 as

$$D(s,a,z)=\frac{\exp(f(z_g,z_h,z'_h))}{\exp(f(z_g,z_h,z'_h))+\pi(a|s)},$$

for $z=(z_g,z_h,z'_h)$ and $f(z_g,z_h,z'_h)=D_g(z_g)+\gamma D_h(z'_h)-D_h(z_h)$. We then obtain a modified objective of the form

$$J(D,E)=\min_{D,E}\max_{\beta\ge 0}\ \mathbb{E}_{s,s'\sim\pi^*(s,s')}\Big[\mathbb{E}_{z\sim E(z|s,s')}\big[-\log(D(s,a,z))\big]\Big]+\mathbb{E}_{s,s'\sim\pi(s,s')}\Big[\mathbb{E}_{z\sim E(z|s,s')}\big[-\log(1-D(s,a,z))\big]\Big]+\beta\Big(\mathbb{E}_{s,s'\sim\tilde\pi(s,s')}\big[\mathrm{KL}[E(z|s,s')\|r(z)]\big]-I_c\Big),$$

where $\pi(s,s')$ denotes the joint distribution of successive states from a policy, and $E(z|s,s')=E_g(z_g|s)\,E_h(z_h|s)\,E_h(z'_h|s')$.

Figure 4: Learning curves comparing VAIL to other methods for motion imitation. Performance is measured using the average joint rotation error between the simulated character and the reference motion. Each method is evaluated with 3 random seeds.

Table 1: Average joint rotation error (radians) on humanoid motion imitation tasks. VAIL outperforms the other methods for all skills evaluated, except for policies trained using the manually-designed reward function from (Peng et al., 2018).

Method | Backflip | Cartwheel | Dance | Run | Spinkick
BC | 3.01 | 2.88 | 2.93 | 2.63 | 2.88
Merel et al., 2017 | 1.33±0.03 | 1.47±0.12 | 2.61±0.30 | 0.52±0.04 | 1.82±0.35
GAIL | 0.74±0.15 | 0.84±0.05 | 1.31±0.16 | 0.17±0.03 | 1.07±0.03
GAIL - noise | 0.42±0.02 | 0.92±0.07 | 0.96±0.08 | 0.21±0.05 | 0.95±0.14
GAIL - noise z | 0.67±0.12 | 0.72±0.04 | 1.14±0.08 | 0.14±0.03 | 0.64±0.09
GAIL - GP | 0.62±0.09 | 0.69±0.05 | 0.80±0.32 | 0.12±0.02 | 0.64±0.04
VAIL (ours) | 0.36±0.13 | 0.40±0.08 | 0.40±0.21 | 0.13±0.01 | 0.34±0.05
VAIL - GP (ours) | 0.46±0.17 | 0.31±0.02 | 0.15±0.01 | 0.10±0.01 | 0.31±0.02
Peng et al., 2018 | 0.26 | 0.21 | 0.20 | 0.14 | 0.19

5 EXPERIMENTS
We evaluate our method on adversarial learning problems in imitation learning, inverse reinforcement learning, and image generation. In the case of imitation learning, we show that the VDB enables agents to learn complex motion skills from a single demonstration, including visual demonstrations provided in the form of video clips. We also show that the VDB improves the performance of inverse RL methods. Inverse RL aims to reconstruct a reward function from a set of demonstrations, which can then be used to perform the task in new environments, in contrast to imitation learning, which aims to recover a policy directly.
Our method is also not limited to control tasks, and we demonstrate its effectiveness for unconditional image generation.

5.1 VAIL: VARIATIONAL ADVERSARIAL IMITATION LEARNING
The goal of the motion imitation tasks is to train a simulated character to mimic demonstrations provided by mocap clips recorded from human actors. Each mocap clip provides a sequence of target states $\{s^*_0,s^*_1,\ldots,s^*_T\}$ that the character should track at each timestep. We use a similar experimental setup to Peng et al. (2018), with a 34 degrees-of-freedom humanoid character. We found that the discriminator architecture can greatly affect the performance on complex skills. The particular architecture we employ differs substantially from those used in prior work (Merel et al., 2017), details of which are available in Appendix C. The encoding $Z$ is 128D and an information constraint of $I_c=0.5$ is applied for all skills, with a dual stepsize of $\alpha_\beta=10^{-5}$. All policies are trained using PPO (Schulman et al., 2017).

The motions learned by the policies are best seen in the supplementary video. Snapshots of the character's motions are shown in Figure 3. Each skill is learned from a single demonstration. VAIL is able to closely reproduce a variety of skills, including those that involve highly dynamic flips and complex contacts. We compare VAIL to a number of other techniques, including state-only GAIL (Ho & Ermon, 2016), GAIL with instance noise applied to the discriminator inputs (GAIL - noise), GAIL with instance noise applied to the last hidden layer (GAIL - noise z), and GAIL with a gradient penalty applied to the discriminator (GAIL - GP) (Mescheder et al., 2018). Since the VDB helps to prevent vanishing gradients, while GP mitigates exploding gradients, the two techniques can be seen as complementary. Therefore, we also train a model that combines both VAIL and GP (VAIL - GP).

Figure 5: Left: Snapshots of the video demonstration and the simulated character trained with VAIL. The policy learns to run by directly imitating the video. Right: Saliency maps that visualize the magnitude of the discriminator's gradient with respect to all channels of the RGB input images from both the demonstration and the simulation. Pixel values are normalized between [0, 1].

Figure 6: Left: Learning curves comparing policies for the video imitation task trained using a pixel-wise loss as the reward, GAIL, and VAIL. Only VAIL successfully learns to run from a video demonstration. Middle: Effect of training with fixed values of $\beta$ and adaptive $\beta$ ($I_c=0.5$). Right: KL loss over the course of training with adaptive $\beta$. The dual gradient descent update for $\beta$ effectively enforces the VDB constraint $I_c$.

Implementation details for combining the VDB and GP are available in Appendix B. Learning curves for the various methods are shown in Figure 4 and Table 1 summarizes the performance of the final policies. Performance is measured in terms of the average joint rotation error between the simulated character and the reference motion. We also include a reimplementation of the method described by Merel et al. (2017). For the purpose of our experiments, GAIL denotes policies trained using our particular architecture but without a VDB, and Merel et al. (2017) denotes policies trained using an architecture that closely mirrors those from previous work. Furthermore, we include comparisons to policies trained using the handcrafted reward from Peng et al. (2018), as well as policies trained via behavioral cloning (BC).
Since mocap data does not provide expert actions, we use the policies from Peng et al. (2018) as oracles to provide state-action demonstrations, which are then used to train the BC policies via supervised learning. Each BC policy is trained with 10k samples from the oracle policies, while all other policies are trained from just a single demonstration, the equivalent of approximately 100 samples.

VAIL consistently outperforms previous adversarial methods, and VAIL - GP achieves the best performance overall. Simply adding instance noise to the inputs (Salimans et al., 2016) or to the hidden layer without the KL constraint (Sønderby et al., 2016) leads to worse performance, since the network can learn a latent representation that renders the effects of the noise negligible. Though training with the handcrafted reward still outperforms the adversarial methods, VAIL demonstrates comparable performance to the handcrafted reward without manual reward or feature engineering, and produces motions that closely resemble the original demonstrations. The method from Merel et al. (2017) was able to imitate simple skills such as running, but was unable to reproduce more acrobatic skills such as the backflip and spinkick. In the case of running, our implementation produces more natural gaits than the results reported in Merel et al. (2017). Behavioral cloning is unable to reproduce any of the skills, despite being provided with substantially more demonstration data than the other methods.

Video Imitation: While our method achieves substantially better results on motion imitation when compared to prior work, previous methods can still produce reasonable behaviors. However, if the demonstrations are provided in terms of the raw pixels from video clips, instead of mocap data, the imitation task becomes substantially harder. The goal of the agent is therefore to directly imitate the skill depicted in the video. This is also a setting where manually engineering rewards is impractical, since simple losses like pixel distance do not provide a semantically meaningful measure of similarity. Figure 6 compares learning curves of policies trained with VAIL, GAIL, and policies trained using a reward function defined by the average pixel-wise difference between the frame $M^*_t$ from the video demonstration and a rendered image $M_t$ of the agent at each timestep $t$, $r_t=1-\frac{1}{3\cdot 64^2}\|M^*_t-M_t\|^2$, where each frame is represented by a 64x64 RGB image.

Figure 7: Left: C-Maze and S-Maze. When trained on the training maze on the left, AIRL learns a reward that overfits to the training task, and which cannot be transferred to the mirrored maze on the right. In contrast, VAIRL learns a smoother reward function that enables more-reliable transfer. Right: Performance on flipped test versions of our two training mazes. We report mean return (± std. dev.) over five runs, and the mean return for the expert used to generate demonstrations.

Method | C-maze | S-maze
GAIL | -24.6±7.2 | 1.0±1.3
VAIL | -65.6±18.9 | 20.8±39.7
AIRL | -15.3±7.8 | -0.2±0.1
AIRL - GP | -9.14±0.4 | -0.14±0.3
VAIRL (β = 0) | -25.5±7.2 | 62.3±33.2
VAIRL (ours) | -10.0±2.2 | 74.0±38.7
VAIRL - GP (ours) | -9.18±0.4 | 156.5±5.6
TRPO expert | -5.1 | 153.2

Both GAIL and the pixel-loss are unable to learn the running gait. VAIL is the only method that successfully learns to imitate the skill from the video demonstration. Snapshots of the video demonstration and the simulated motion are available in Figure 5.
To further investigate the effects of the VDB, we visualize the gradient of the discriminator with respect to images from the video demonstration and the simulation. Saliency maps for discriminators trained with VAIL and GAIL are available in Figure 5. The VAIL discriminator learns to attend to spatially coherent image patches around the character, while the GAIL discriminator exhibits less structure. The magnitude of the gradients from VAIL also tends to be significantly larger than that of GAIL, which may suggest that VAIL is able to mitigate the problem of vanishing gradients present in GAIL.

Adaptive Constraint: To evaluate the effects of the adaptive $\beta$ updates, we compare policies trained with different fixed values of $\beta$ and policies where $\beta$ is updated adaptively to enforce a desired information constraint $I_c=0.5$. Figure 6 illustrates the learning curves and the KL loss over the course of training. When $\beta$ is too small, performance reverts to that achieved by GAIL. Large values of $\beta$ help to smooth the discriminator landscape and improve learning speed during the early stages of training, but converge to worse performance. Policies trained using dual gradient descent to adaptively update $\beta$ consistently achieve the best performance overall.

5.2 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING
Next, we use VAIRL to recover reward functions from demonstrations. Unlike the discriminator learned by VAIL, the reward function recovered by VAIRL can be re-optimized to train new policies from scratch in the same environment. In some cases, it can also be used to transfer similar behaviour to different environments. In Figure 7, we show the results of applying VAIRL to the C-maze from Fu et al. (2017), and to a more complex S-maze; the simple 2D observation spaces of these tasks make it easy to interpret the recovered reward functions. In both mazes, the expert is trained to navigate from a start position at the bottom of the maze to a fixed target position at the top. We use each method to obtain an imitation policy and to approximate the expert's reward on the original maze. The recovered reward is then used to train a new policy to solve a left-right flipped version of the training maze. On the C-maze, we found that plain AIRL (without a gradient penalty) would sometimes overfit and fail to transfer to the new environment, as evidenced by the reward visualization in Figure 7 (left) and the higher return variance in Figure 7 (right). In contrast, by incorporating a VDB into AIRL, VAIRL learns a substantially smoother reward function that is more suitable for transfer. Furthermore, we found that in the S-maze with two internal walls, AIRL was too unstable to acquire a meaningful reward function. This was true even with the use of a gradient penalty.
5.2 VAIRL: VARIATIONAL ADVERSARIAL INVERSE REINFORCEMENT LEARNING
Next, we use VAIRL to recover reward functions from demonstrations. Unlike the discriminator learned by VAIL, the reward function recovered by VAIRL can be re-optimized to train new policies from scratch in the same environment. In some cases, it can also be used to transfer similar behaviour to different environments. In Figure 7, we show the results of applying VAIRL to the C-maze from Fu et al. (2017), and a more complex S-maze; the simple 2D observation spaces of these tasks make it easy to interpret the recovered reward functions. In both mazes, the expert is trained to navigate from a start position at the bottom of the maze to a fixed target position at the top. We use each method to obtain an imitation policy and to approximate the expert's reward on the original maze. The recovered reward is then used to train a new policy to solve a left-right flipped version of the training maze. On the C-maze, we found that plain AIRL (without a gradient penalty) would sometimes overfit and fail to transfer to the new environment, as evidenced by the reward visualization in Figure 7 (left) and the higher return variance in Figure 7 (right). In contrast, by incorporating a VDB into AIRL, VAIRL learns a substantially smoother reward function that is more suitable for transfer. Furthermore, we found that in the S-maze with two internal walls, AIRL was too unstable to acquire a meaningful reward function. This was true even with the use of a gradient penalty. In contrast, VAIRL was able to learn a reasonable reward in most cases without a gradient penalty, and its performance improved even further with the addition of a gradient penalty. To evaluate the effects of the VDB, we observe that the performance of VAIRL drops on both tasks when the KL constraint is disabled (β = 0), suggesting that the improvements from the VDB cannot be attributed entirely to the noise introduced by the sampling process for z. Further details of these experiments and illustrations of the recovered reward functions are available in Appendix D.

5.3 VGAN: VARIATIONAL GENERATIVE ADVERSARIAL NETWORKS
Finally, we apply the VDB to image generation with generative adversarial networks, which we refer to as VGAN. Experiments are conducted on the CIFAR-10 (Krizhevsky et al.), CelebA (Liu et al., 2015), and CelebAHQ (Karras et al., 2018) datasets. We compare our approach to recent stabilization techniques: WGAN-GP (Gulrajani et al., 2017b), instance noise (Sønderby et al., 2016; Arjovsky & Bottou, 2017), spectral normalization (SN) (Miyato et al., 2018), and gradient penalty (GP) (Mescheder et al., 2018), as well as the original GAN (Goodfellow et al., 2014) on CIFAR-10. To measure performance, we report the Fréchet Inception Distance (FID) (Heusel et al., 2017), which has been shown to be more consistent with human evaluation. All methods are implemented using the same base model, built on the resnet architecture of Mescheder et al. (2018). Aside from tuning the KL constraint I_c for VGAN, no additional hyperparameter optimization was performed to modify the settings provided by Mescheder et al. (2018). The performance of the various methods on CIFAR-10 is shown in Figure 8. While vanilla GAN and instance noise are prone to diverging as training progresses, VGAN remains stable. Note that instance noise can be seen as a non-adaptive version of VGAN without constraints on I_c. This experiment again highlights that there is a significant improvement from imposing the information bottleneck over simply adding instance noise. Combining both VDB and gradient penalty (VGAN - GP) achieves the best performance overall with an FID of 18.1. We also experimented with combining the VDB with SN, but this combination is prone to diverging. See Figure 9 for samples of images generated with our approach. Please refer to Appendix E for experimental details and more results.

Figure 8: Comparison of VGAN and other methods on CIFAR-10, with performance evaluated using the Fréchet Inception Distance (FID).

Method           FID
GAN              63.6
Inst Noise       30.7
SN               23.9
GP               22.6
WGAN-GP          19.9
VGAN (ours)      24.8
VGAN-SN (ours)   71.8
VGAN-GP (ours)   18.1

Figure 9: VGAN samples on CIFAR-10, CelebA 128×128, and CelebAHQ 1024×1024.
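To make the objective concrete, the sketch below shows one way a bottlenecked discriminator and its loss could be implemented in PyTorch. It is a minimal illustration rather than the released code: the MLP encoder body and its hidden width are assumptions, while the Gaussian encoding E(z|x), the single linear discriminator unit D(z) = sigmoid(w^T z + b), the 128-D default latent size, and the β-weighted KL penalty against the N(0, I) prior follow the paper. The returned mean KL is what the dual update of β shown earlier consumes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VDBDiscriminator(nn.Module):
    """Discriminator with a variational bottleneck: a stochastic encoder
    E(z|x) = N(mu(x), diag(sigma(x)^2)) followed by one linear unit."""

    def __init__(self, in_dim: int, z_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())  # assumed encoder body
        self.mu = nn.Linear(hidden, z_dim)
        self.log_std = nn.Linear(hidden, z_dim)
        self.head = nn.Linear(z_dim, 1)  # D(z) = sigmoid(w^T z + b)

    def forward(self, x: torch.Tensor):
        h = self.body(x)
        mu, log_std = self.mu(h), self.log_std(h)
        z = mu + log_std.exp() * torch.randn_like(mu)  # reparameterization trick
        # KL(N(mu, sigma^2) || N(0, I)) per sample, summed over latent dims
        kl = 0.5 * (mu.pow(2) + (2.0 * log_std).exp() - 2.0 * log_std - 1.0).sum(dim=1)
        return self.head(z).squeeze(1), kl

def discriminator_loss(disc, x_real, x_fake, beta, i_c):
    """Binary GAN loss plus the VDB penalty beta * (KL - I_c), with the KL
    averaged over the mixture of real and generated samples."""
    logit_r, kl_r = disc(x_real)
    logit_f, kl_f = disc(x_fake)
    gan = (F.binary_cross_entropy_with_logits(logit_r, torch.ones_like(logit_r))
           + F.binary_cross_entropy_with_logits(logit_f, torch.zeros_like(logit_f)))
    mean_kl = 0.5 * (kl_r.mean() + kl_f.mean())
    return gan + beta * (mean_kl - i_c), mean_kl.detach()
```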
6 CONCLUSION
We present the variational discriminator bottleneck, a general regularization technique for adversarial learning. Our experiments show that the VDB is broadly applicable to a variety of domains, and yields significant improvements over previous techniques on a number of challenging tasks. While our experiments have produced promising results for video imitation, the results have been primarily with videos of synthetic scenes. We believe that extending the technique to imitating real-world videos is an exciting direction. Another exciting direction for future work is a more in-depth theoretical analysis of the method, to derive convergence and stability results or conditions.

ACKNOWLEDGEMENTS
We would like to thank the anonymous reviewers for their helpful feedback, and AWS and NVIDIA for providing computational resources. This research was funded by an NSERC Postgraduate Scholarship, a Berkeley Fellowship for Graduate Study, BAIR, Huawei, and ONR PECASE N000141612723.
Byl41tz9nX
a constraint on the discriminator of GAN model to maintain informative gradients
6: Marginally above acceptance threshold
This paper proposed a constraint on the discriminator of a GAN model to maintain informative gradients. It is achieved by constraining the mutual information between the observations and the discriminator's internal representation to be no larger than a predefined value. The idea is interesting and the discussions of applications in different areas are useful. However, I still have some concerns about the work:
1. In the experiments on image generation, it seems that the proposed method does not obviously enhance the performance when compared to GP and WGAN-GP. Why can the combination of VGAN and GP enhance the performance greatly (how are they complementary to each other), and what about the performance when combining VGAN with WGAN-GP?
2. How do you combine VGAN and GP? Is there any parameter to balance their effect?
3. The authors stated on page 2 that "the proposed information bottleneck encourages the discriminator to ignore irrelevant cues, which then allows the generator to focus on improving the most discerning differences between real and fake samples"; a theoretical proof or experiments should be used to support this statement.
4. Is it possible to apply GP and WGAN-GP to the motion imitation or adversarial inverse reinforcement learning problems? If so, will they perform better than VGAN?
5. How does VGAN compare with spectral norm GAN?
3: The reviewer is fairly confident that the evaluation is correct
3YdNZD5dMxI
ICLR.cc/2021/Conference
2021
Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck
["Samaneh Azadi", "Michael Tschannen", "Eric Tzeng", "Sylvain Gelly", "Trevor Darrell", "Mario Lucic"]
Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure through an unconditional progressive segmentation generation network. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout through a conditional segmentation-to-image synthesis network. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Fréchet Inception Distance and perceptual evaluations. Moreover, we demonstrate that the end-to-end training significantly improves the segmentation-to-image synthesis sub-network, which results in superior performance over the state-of-the-art when conditioning on real segmentation layouts.
["Unconditional Image Synthesis", "Complex Scene", "GAN", "Semantic Bottleneck"]
ABSTRACT
Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure through an unconditional progressive segmentation generation network. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout through a conditional segmentation-to-image synthesis network. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Fréchet Inception Distance and perceptual evaluations. Moreover, we demonstrate that the end-to-end training significantly improves the segmentation-to-image synthesis sub-network, which results in superior performance over the state-of-the-art when conditioning on real segmentation layouts.

1 INTRODUCTION
Significant strides have been made on generative models for image synthesis, with a variety of methods based on Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) achieving state-of-the-art performance. At lower resolutions or in specialized domains, GAN-based methods are able to synthesize samples which are near-indistinguishable from real samples (Brock et al., 2019). However, generating complex, high-resolution scenes from scratch remains a challenging problem, as shown in Figure 1-(a) and (b). As image resolution and complexity increase, the coherence of synthesized images decreases: samples lack consistent local or global structures.

Stochastic decoder-based models, such as conditional GANs, were recently proposed to alleviate some of these issues. In particular, both Pix2PixHD (Wang et al., 2018) and SPADE (Park et al., 2019) are able to synthesize high-quality scenes using a strong conditioning mechanism based on semantic segmentation labels during the scene generation process. Global structure encoded in the segmentation layout of the scene is what allows these models to focus primarily on generating convincing local content consistent with that structure.

A key practical drawback of such conditional models is that they require full segmentation layouts as input. Thus, unlike unconditional generative approaches which synthesize images from randomly sampled noise, these models are limited to generating images from a set of scenes that is prescribed in advance, typically either through segmentation labels from an existing dataset, or scenes that are hand-crafted by experts.

Contributions To overcome these limitations, we propose a new model, the Semantic Bottleneck GAN (SB-GAN), which couples the high-fidelity generation capabilities of label-conditional models with the flexibility of unconditional image generation. This in turn enables our model to synthesize an unlimited number of novel complex scenes, while still maintaining the high-fidelity output characteristic of image-conditional models.

Our SB-GAN first unconditionally generates a pixel-wise semantic label map of a scene (i.e., for each spatial location it outputs a class label), and then generates a realistic scene image by conditioning on that semantic map (Figure 1-(d)). By factorizing the task into these two steps, we are able
to separately tackle the problems of producing convincing segmentation layouts (i.e., a useful global structure) and filling these layouts with convincing appearances (i.e., local structure). When trained end-to-end, the model yields samples which have a coherent global structure as well as fine local details, e.g., Figure 1-(c). Empirical evaluation shows that our Semantic Bottleneck GAN achieves a new state-of-the-art on two complex datasets with a relatively small number of training images, Cityscapes and ADE-Indoor, as measured both by the Fréchet Inception Distance (FID) and by perceptual evaluations. Additionally, we observe that the conditional segmentation-to-image synthesis component of our SB-GAN, jointly trained with segmentation layout synthesis, significantly improves the state-of-the-art semantic image synthesis network (Park et al., 2019), resulting in higher-quality outputs when conditioning on ground truth segmentation layouts.

Figure 1: (a) Examples of non-complex images from ImageNet synthesized by the state-of-the-art BigGAN model (Brock et al., 2019). Although these samples look decent, the complex scenes synthesized by BigGAN (e.g., from the Cityscapes dataset) are blurry and defective in local structure (e.g., cars are blended together) (b). Zoom in for more detail. (c) A complex scene synthesized by our model respects both local and global structural integrity of the scene. (d) Schematic of our unconditional Semantic Bottleneck GAN. We progressively train the adversarial segmentation synthesis network to generate realistic segmentation maps from scratch (growing from 4×8 up to 256×512 resolution), then synthesize a photo-realistic image using a conditional image synthesis network. End-to-end coupling of these two components results in state-of-the-art unconditional synthesis of complex scenes.

Key Challenges While both unconditional generation and image-to-image translation are well-explored learning problems, fully unconditional generation of the segmentation maps is a notoriously hard task: (i) Semantic categories do not respect any ordering relationships, and the network is therefore required to capture the intricate relationship between segmentation classes, their shapes, and their spatial dependencies. (ii) As opposed to RGB values, semantic categories are discrete, hence non-differentiable, which poses a challenge for end-to-end training (Sec. 3.2). (iii) Naively combining state-of-the-art unconditional generation and image-to-image translation models leads to poor performance. However, by carefully designing an additional discriminator component and a corresponding training protocol, we not only manage to improve the performance of the end-to-end model, but also the performance of each component separately (Sec. 3.3).

We emphasize that despite these challenges our approach scales to 256×256 resolution and 95 semantic categories, whereas existing state-of-the-art GAN models directly generating RGB images at that resolution already suffer from considerable instability (Sec. 4).
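To make the two-stage design concrete, the following is a minimal sketch of SB-GAN inference, i.e., sampling a scene from noise. The module names seg_generator and image_generator, the latent dimension, and the one-hot conditioning format are illustrative assumptions standing in for the trained progressive segmentation generator and the SPADE-based conditional image generator described in Section 3.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_scene(seg_generator, image_generator,
                 z_dim: int = 512, num_classes: int = 95):
    """Two-stage SB-GAN sampling: noise -> semantic layout -> RGB image.

    `seg_generator` and `image_generator` are placeholders for the trained
    progressive segmentation generator and the SPADE-based conditional
    image generator; `z_dim` is an assumed latent size."""
    z = torch.randn(1, z_dim)               # random latent noise
    logits = seg_generator(z)               # (1, K, H, W) per-pixel class scores
    layout = logits.argmax(dim=1)           # (1, H, W) discrete label map
    cond = F.one_hot(layout, num_classes)   # (1, H, W, K)
    cond = cond.permute(0, 3, 1, 2).float() # (1, K, H, W) for conditioning
    image = image_generator(cond)           # (1, 3, H, W) synthesized scene
    return layout, image
```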
2 RELATED WORK
Generative Adversarial Networks (GANs) GANs (Goodfellow et al., 2014) are a powerful class of generative models successfully applied to various image synthesis tasks such as image style transfer (Isola et al., 2017; Zhu et al., 2017), unsupervised representation learning (Chen et al., 2016; Pathak et al., 2016; Radford et al., 2016), image super-resolution (Ledig et al., 2017; Dong et al., 2016), and text-to-image synthesis (Zhang et al., 2017; Xu et al., 2018; Qiao et al., 2019b). Training GANs is notoriously hard and recent efforts focused on improving neural architectures (Wang & Gupta, 2016; Karras et al., 2017; Zhang et al., 2019; Chen et al., 2019a), loss functions (Arjovsky et al., 2017), regularization (Gulrajani et al., 2017; Miyato et al., 2018), large-scale training (Brock et al., 2019), self-supervision (Chen et al., 2019b), and sampling (Brock et al., 2019; Azadi et al., 2019a).

Figure 2: Schematic of Semantic Bottleneck GAN. Starting from random noise, we synthesize a segmentation layout and use a discriminator ("Which segmentation is real?") to bias the segmentation synthesis network towards realistic looking segmentation layouts. The generated layout is then provided as input to a conditional image synthesis network to synthesize the final image. A second discriminator ("Which image is more consistent with the segmentation?") is used to bias the conditional image synthesis network towards realistic images paired with real segmentation layouts. Finally, a third unconditional discriminator ("Which image is real?") is used to bias the conditional image synthesis network towards generating images that match the real image distribution.

Improving the performance of GANs by disentangling structure and style has been studied by Wang & Gupta (2016), where structure is represented by a surface normal map and style is the texture mapped onto the structure. Another compelling approach which enables generation of high-resolution images is based on progressive training: a model is trained to first synthesize lower-resolution images (e.g., 8×8), then the resolution is gradually increased until the desired resolution is achieved (Karras et al., 2017). Recently, Brock et al. (2019) showed that GANs significantly benefit from large-scale training, both in terms of model size and batch size. We note that these models are able to synthesize high-quality images in settings where objects are very prominent and centrally placed or follow some well-defined structure, as the corresponding distribution is easier to capture. In contrast, when the scenes are more complex and the amount of data is limited, the task becomes extremely challenging for these state-of-the-art models. We aim to improve the performance in the context of complex scenes and a small number of training examples by disentangling the image generation problem into learning the structure represented by semantic layouts and filling in the RGB details using a semantic image synthesis model. A similar idea was proposed by a concurrent work (Volokitin et al., 2020) with substantial differences in the model and results.

GANs on discrete domains GANs for discrete domains have been investigated in several works (Yu et al., 2017b; Lin et al., 2017; Bojchevski et al., 2018; Lu et al., 2018).
Training in this domain is even more challenging, as the samples from discrete distributions are not differentiable with respect to the network parameters. This problem can be somewhat alleviated by using the Gumbel-softmax distribution, which is a continuous approximation to a multinomial distribution parameterized in terms of the softmax function (Kusner & Hernández-Lobato, 2016). We will show how to apply a similar principle to learn the distribution of discrete segmentation masks.

Conditional image synthesis. In conditional image synthesis one aims to generate images by conditioning on an input which can be provided in the form of an image (Isola et al., 2017; Zhu et al., 2017; Azadi et al., 2018; 2019b; Liu et al., 2017), a text phrase (Reed et al., 2016; Zhang et al., 2017; Qiao et al., 2019a; Ashual & Wolf, 2019; Hong et al., 2018), a scene graph (Johnson et al., 2018; Ashual & Wolf, 2019), a class label, or a semantic layout (Odena et al., 2017; Chen & Koltun, 2017; Wang et al., 2018; Park et al., 2019). These conditional GAN methods learn a mapping that translates samples from the source distribution into samples from the target domain.

The text-to-image synthesis models proposed in (Hong et al., 2018; Li et al., 2019) decompose the synthesis task into multiple steps. As illustrated in the Appendix, given the text description, a semantic layout is constructed by generating object bounding boxes and refining each box by estimating object shapes. Then, an image is synthesized conditionally on the generated semantic layout from the first step. Our work shares the same high-level idea of decomposing the image generation problem into semantic layout synthesis and conditional semantic-layout-to-image synthesis. However, we note that the above approaches, as opposed to ours, are conditional and require supervision in the form of textual descriptions. Secondly, they are sequential in nature and synthesize masks of a few distinct objects (e.g., person, elephant), but not a fully fine-grained semantic map (e.g., missing sky, grass, etc.). In stark contrast, our approach unconditionally synthesizes the full semantic layout of the entire scene from a noise input in an end-to-end network design. Due to the above distinctions, their segmentation synthesis models differ significantly from ours in terms of architecture and design, as shown in Figure 6 in the Appendix.

3 SEMANTIC BOTTLENECK GAN (SB-GAN)

We propose an unconditional Semantic Bottleneck GAN architecture to learn the distribution of complex scenes. To tackle the problems of learning both the global layout and the local structure, we divide this synthesis problem into two parts: an unconditional segmentation map synthesis network and a conditional segmentation-to-image synthesis model. Our first network is designed to coarsely learn the scene distribution by synthesizing semantic layouts. It generates per-pixel semantic categories following the progressive GAN model architecture (ProGAN) (Karras et al., 2017). This fully unconditional generation of the segmentation maps is novel, very challenging, and a careful design is crucial, as described in Section 3.1. The second network populates the synthesized semantic layouts with texture by predicting RGB pixel values using Spatially-Adaptive Normalization (SPADE), following the architecture of the state-of-the-art semantic synthesis network in (Park et al., 2019). We assume the ground truth segmentation masks are available for all or part of the target scene dataset. In the following sections, we will first discuss our semantic bottleneck synthesis pipeline and summarize the SPADE network for image synthesis. We will then couple these two networks in an end-to-end design which we refer to as Semantic Bottleneck GAN (SB-GAN).
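To make the two-stage design concrete, the following is a minimal PyTorch-style sketch of the inference path just described. The module names (seg_generator and image_generator) are hypothetical placeholders for the two sub-networks, not the authors' released code; the sketch only illustrates the data flow from noise to layout to image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SBGANPipeline(nn.Module):
    """Illustrative two-stage inference path: noise -> semantic layout -> RGB image."""

    def __init__(self, seg_generator: nn.Module, image_generator: nn.Module, num_classes: int):
        super().__init__()
        self.seg_generator = seg_generator      # unconditional layout synthesis (Sec. 3.1)
        self.image_generator = image_generator  # conditional SPADE-style synthesis (Sec. 3.2)
        self.num_classes = num_classes

    @torch.no_grad()
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Stage 1: per-pixel class scores from noise, then a discrete label per pixel.
        # At inference a hard argmax suffices; training instead uses the
        # straight-through relaxation described in Sec. 3.1.
        logits = self.seg_generator(z)                      # (N, K, H, W)
        labels = logits.argmax(dim=1)                       # (N, H, W) discrete layout
        # Stage 2: one-hot encode the layout and render an image conditioned on it.
        layout = F.one_hot(labels, self.num_classes)        # (N, H, W, K)
        layout = layout.permute(0, 3, 1, 2).float()         # (N, K, H, W)
        return self.image_generator(layout)                 # (N, 3, H, W)
```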
3.1 SEMANTIC BOTTLENECK SYNTHESIS

Our goal is to learn a (coarse) estimate of the scene distribution from samples corresponding to real segmentation maps with $K$ semantic categories. Starting from random noise, we generate a tensor $Y \in \llbracket 1, K \rrbracket^{N \times 1 \times H \times W}$ which represents a per-pixel segmentation class, with $H$ and $W$ indicating the height and width, respectively, of the generated map and $N$ the batch size. In practice, we progressively train from a low to a high resolution using the ProGAN architecture (Karras et al., 2017) coupled with the Improved WGAN loss function (Gulrajani et al., 2017) on the ground truth discrete-valued segmentation maps, as illustrated in Figure 1-(d). Similar to ProGAN, to increase the spatial resolution of the generated segmentation maps during training, we incrementally add layers to the generator and the discriminator. In contrast to ProGAN, in which the generator outputs continuous RGB values, we predict per-pixel discrete semantic class labels. This task is extremely challenging as it requires the network to capture the intricate relationship between segmentation classes and their spatial dependencies. To this end, we apply the Gumbel-softmax trick (Jang et al., 2017; Maddison et al., 2016) coupled with a straight-through estimator (Jang et al., 2017), described in detail below.

We synthesize segmentation layouts by first generating per-pixel probability scores of belonging to each of the $K$ semantic classes and then sampling a semantic class per pixel. The per-pixel probability scores are computed by applying a softmax function to the last layer of the generator (i.e., the logits), which results in probability maps $P_{ij} \in [0,1]^K$, with $\sum_{k=1}^{K} P_{ijk} = 1$ for each spatial location $(i,j) \in \llbracket 1,H \rrbracket \times \llbracket 1,W \rrbracket$. To sample a semantic class from this multinomial distribution, we would ideally apply the following well-known procedure at each spatial location: (1) sample $K$ i.i.d. samples, $G_k$, from the standard Gumbel distribution, (2) add these samples to each logit, and (3) take the index of the maximal value. This reparametrization indeed allows for an efficient forward pass, but is not differentiable. Nevertheless, the max can be replaced with the softmax function, and the quality of the approximation can be controlled by varying the temperature hyperparameter $\tau$: the smaller the $\tau$, the closer the approximation is to the categorical distribution (Jang et al., 2017):

$$S_{ijk} = \frac{\exp\{(\log P_{ijk} + G_k)/\tau\}}{\sum_{l=1}^{K} \exp\{(\log P_{ijl} + G_l)/\tau\}}. \quad (1)$$

Similar to the real samples, the synthesized samples fed to the GAN discriminator should still contain discrete category labels. As a result, for the forward pass, we compute $\arg\max_k S_k$, while for the backward pass, we use the soft predicted scores $S_k$ directly, a strategy known as straight-through estimation (Jang et al., 2017).
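As a reference, Eq. (1) plus the straight-through discretization can be sketched in a few lines of PyTorch. Note the Gumbel noise is added to the raw logits rather than to $\log P$; since softmax is shift-invariant, the two are equivalent. PyTorch's built-in torch.nn.functional.gumbel_softmax(..., hard=True) implements essentially the same computation, so this sketch is illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_st(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-through Gumbel-softmax over the class dimension (dim=1).

    logits: (N, K, H, W) unnormalized per-pixel class scores. Returns a
    one-hot tensor in the forward pass whose gradient is that of the soft
    relaxation S of Eq. (1).
    """
    # Standard Gumbel noise: G = -log(-log(U)), U ~ Uniform(0, 1).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    soft = F.softmax((logits + gumbel) / tau, dim=1)      # S_{ijk} of Eq. (1)
    # Discretize for the forward pass (one-hot of the per-pixel argmax) ...
    index = soft.argmax(dim=1, keepdim=True)
    hard = torch.zeros_like(soft).scatter_(1, index, 1.0)
    # ... while letting gradients flow through the soft scores (straight-through).
    return hard + soft - soft.detach()
```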
3.2 SEMANTIC IMAGE SYNTHESIS

Our second sub-network converts the synthesized semantic layouts into photo-realistic images using spatially-adaptive normalization (Park et al., 2019). The segmentation masks are employed to spread the semantic information throughout the generator by modulating the activations with a spatially-adaptive learned transformation. We follow the same generator and discriminator architectures and loss functions used in (Park et al., 2019), where the generator contains a series of SPADE residual blocks with upsampling layers. The loss functions to train SPADE are summarized as:

$$\mathcal{L}_{D}^{\text{SPD}} = -\mathbb{E}_{y,x}\left[\min(0, -1 + D_{\text{SPD}}(y,x))\right] - \mathbb{E}_{y}\left[\min(0, -1 - D_{\text{SPD}}(y, G_{\text{SPD}}(y)))\right]$$
$$\mathcal{L}_{G}^{\text{SPD}} = -\mathbb{E}_{y}\left[D_{\text{SPD}}(y, G_{\text{SPD}}(y))\right] + \lambda_1 \mathcal{L}_1^{\text{VGG}} + \lambda_2 \mathcal{L}_1^{\text{Feat}}, \quad (2)$$

where $G_{\text{SPD}}$ and $D_{\text{SPD}}$ stand for the SPADE generator and discriminator, and $\mathcal{L}_1^{\text{VGG}}$ and $\mathcal{L}_1^{\text{Feat}}$ represent the VGG and discriminator feature-matching $L_1$ loss functions, respectively (Park et al., 2019; Wang et al., 2018). We pre-train this network using pairs of real RGB images, $x$, and their corresponding real segmentation masks, $y$, from the target scene dataset.

In the next section, we will describe how to employ the synthesized segmentation masks in an end-to-end manner to improve the performance of both the semantic bottleneck and the semantic image synthesis sub-networks.

3.3 END-TO-END FRAMEWORK

After training the semantic bottleneck synthesis model to synthesize segmentation masks and the semantic image synthesis model to stochastically map segmentations to photo-realistic images, we adversarially fine-tune the parameters of both networks in an end-to-end approach by introducing an unconditional discriminator network on top of the SPADE generator (see Figure 2).

This second discriminator, $D_2$, has the same architecture as the SPADE discriminator, but is designed to distinguish between real RGB images and the fake ones generated from the synthesized semantic layouts. Unlike the SPADE conditional GAN loss, which examines pairs of input segmentations and output images, $(y, x)$ in Equation 2, the GAN loss on $D_2$, $\mathcal{L}_{D_2}$, is unconditional and only compares real images to synthesized ones, as shown in Equation 3:

$$\mathcal{L}_{D_2} = -\mathbb{E}_{x}\left[\min(0, -1 + D_2(x))\right] - \mathbb{E}_{z}\left[\min(0, -1 - D_2(G(z)))\right] \quad (3)$$
$$\mathcal{L}_{G} = -\mathbb{E}_{z}\left[D_2(G(z))\right] + \mathcal{L}_{G}^{\text{SPD}} + \lambda \mathcal{L}_{G}^{\text{SB}}, \qquad G(z) = G_{\text{SPD}}(G_{\text{SB}}(z)),$$

where $G_{\text{SB}}$ represents the semantic bottleneck synthesis generator, and $\mathcal{L}_{G}^{\text{SB}}$ is the improved WGAN loss used to pretrain $G_{\text{SB}}$, described in Section 3.1. In contrast to the conditional discriminator in SPADE, which enforces consistency between the input semantic map and the output image, $D_2$ is primarily concerned with the overall quality of the final output. The hyperparameter $\lambda$ determines the ratio between the two generators during fine-tuning. The parameters of both generators, $G_{\text{SB}}$ and $G_{\text{SPD}}$, as well as the corresponding discriminators, $D_{\text{SB}}$ and $D_{\text{SPD}}$, are updated in this end-to-end fine-tuning.

We illustrate our final end-to-end network in Figure 2. Jointly fine-tuning the two networks in an end-to-end fashion allows them to reinforce each other, leading to improved performance. The gradients with respect to RGB images synthesized by SPADE are back-propagated to the segmentation synthesis model, thereby encouraging it to synthesize segmentation layouts that lead to higher-quality final images. Hence, SPADE plays the role of a loss function for synthesizing segmentations, but in the RGB space, providing a goal that was absent from the initial training. Similarly, fine-tuning SPADE with synthesized segmentations allows it to adapt to a more diverse set of scene layouts, which improves the quality of generated samples.
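For completeness, the hinge-style terms of Eq. (3) translate directly into code. Below is a minimal PyTorch sketch in which d2 is a placeholder discriminator module; the conditional SPADE terms and the lambda-weighted WGAN term of the full generator objective are omitted.

```python
import torch

def d2_loss(d2, real_images: torch.Tensor, fake_images: torch.Tensor) -> torch.Tensor:
    """Hinge discriminator loss of Eq. (3): push D2(real) above +1, D2(fake) below -1."""
    loss_real = -torch.mean(torch.clamp(d2(real_images) - 1.0, max=0.0))
    loss_fake = -torch.mean(torch.clamp(-d2(fake_images) - 1.0, max=0.0))
    return loss_real + loss_fake

def g_adv_loss(d2, fake_images: torch.Tensor) -> torch.Tensor:
    """Adversarial part of L_G in Eq. (3); the full objective also adds
    L_G^SPD and the lambda-weighted loss for the segmentation generator."""
    return -torch.mean(d2(fake_images))
```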
4 EXPERIMENTS AND RESULTS

We evaluate the performance of the proposed approach on two datasets containing images with complex scenes, where the ground truth segmentation masks are available during training (possibly only for a subset of the images). We also study the role of the two network components, semantic bottleneck and semantic image synthesis, on the final result. We compare the performance of SB-GAN against the state-of-the-art BigGAN model (Brock et al., 2019) as well as a ProGAN (Karras et al., 2017) baseline that has been trained on the RGB images directly. We evaluate our method using the Fréchet Inception Distance (FID) as well as a perceptual evaluation.

Figure 3: Images synthesized on Cityscapes-5K. Best viewed on screen; zoom in for more detail. Although both models capture the general scene layout, SB-GAN (1st row) generates more convincing objects, e.g., buildings and cars.

Figure 4: Images synthesized on Cityscapes-25K. Best viewed on screen; zoom in for more detail. Images synthesized by BigGAN (3rd row) are blurry and sometimes defective in local structures.

Datasets. We study the performance of our model on the Cityscapes and ADE-Indoor datasets, two domains with complex scene images.

Cityscapes-5K (Cordts et al., 2016) contains street scene images in German cities with training and validation set sizes of 3,000 and 500 images, respectively. Ground truth segmentation masks with 33 semantic classes are available for all images in this dataset.

Cityscapes-25K (Cordts et al., 2016) contains street scene images in German cities with training and validation set sizes of 23,000 and 500 images, respectively, with 19 semantic classes. Cityscapes-5K is a subset of this dataset, providing 3,000 of the training images as well as the entire validation set. Fine ground truth annotations are only provided for this subset, with the remaining 20,000 training images containing only coarse annotations. We extract the corresponding fine annotations for the rest of the training images using the state-of-the-art segmentation model (Yu et al., 2017a) trained on the annotated training samples from Cityscapes-5K.

ADE-Indoor is a subset of the ADE20K dataset (Zhou et al., 2017) containing 4,377 challenging training images from indoor scenes and 433 validation images with 95 semantic categories.

Evaluation. We use the Fréchet Inception Distance (FID) (Heusel et al., 2017) as well as a perceptual evaluation of the quality of the generated samples. To compute FID, the real data and generated samples are embedded in a specific layer of a pre-trained Inception network. Then, a multivariate Gaussian is fit to the data, and the distance is computed as

$$\text{FID}(x, g) = \|\mu_x - \mu_g\|_2^2 + \mathrm{Tr}\left(\Sigma_x + \Sigma_g - 2\,(\Sigma_x \Sigma_g)^{\frac{1}{2}}\right),$$

where $\mu$ and $\Sigma$ denote the empirical mean and covariance, and the subscripts $x$ and $g$ denote the real and generated data, respectively. FID is sensitive both to the addition of spurious modes and to mode dropping (Sajjadi et al., 2018; Lucic et al., 2018). On the Cityscapes dataset, we ran five trials where we computed FID on 500 random synthetic images and 500 real validation images, and report the average score. On ADE-Indoor, this is repeated on batches of 433 images.
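The FID formula above is straightforward to evaluate once the Inception activations of real and generated samples are available. Below is a minimal NumPy/SciPy sketch, assuming the activations have already been extracted as (n_samples, dim) arrays; it is a reference implementation of the formula, not the exact evaluation code used here.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_acts: np.ndarray, fake_acts: np.ndarray) -> float:
    """FID between two sets of Inception activations, each of shape (n_samples, dim)."""
    mu_x, mu_g = real_acts.mean(axis=0), fake_acts.mean(axis=0)
    sigma_x = np.cov(real_acts, rowvar=False)
    sigma_g = np.cov(fake_acts, rowvar=False)
    covmean = sqrtm(sigma_x @ sigma_g)
    if np.iscomplexobj(covmean):   # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    diff = mu_x - mu_g
    return float(diff @ diff + np.trace(sigma_x + sigma_g - 2.0 * covmean))
```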
Figure 5: Images synthesized on ADE-Indoor. This dataset is very challenging, causing mode collapse for the BigGAN model (3rd row). In contrast, samples generated by SB-GAN (1st row) are generally of higher quality and much more structured than those of ProGAN (2nd row).

Implementation details. In all our experiments, we set $\lambda_1 = \lambda_2 = 10$ and $\lambda = 10$. The initial generator and discriminator learning rates for training SPADE, both in the pretraining and end-to-end steps, are $10^{-4}$ and $4 \times 10^{-4}$, respectively. The learning rate for the semantic bottleneck synthesis sub-network is set to $10^{-3}$ in the pretraining step, and in the end-to-end fine-tuning to $10^{-5}$ on Cityscapes and $10^{-4}$ on ADE-Indoor. The temperature hyperparameter, $\tau$, is always set to 1. For BigGAN, we followed the setup of Lucic et al. (2019) (configuration as in https://github.com/google/compare_gan/blob/master/example_configs/biggan_imagenet128.gin), where we modified the code to allow for the non-square images of Cityscapes. We used one class label for all images to obtain an unconditional BigGAN model. For both datasets, we varied the batch size (using values in {128, 256, 512, 2048}), the learning rate, and the location of the self-attention block. We trained the final model for 50K iterations.
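For readability, the reported hyperparameters can be restated as a single configuration object; the field names below are our own labels for the values listed above, not identifiers from the released code.

```python
from dataclasses import dataclass

@dataclass
class SBGANConfig:
    # Hypothetical config container restating the reported hyperparameters.
    lambda_vgg: float = 10.0          # lambda_1 in Eq. (2)
    lambda_feat: float = 10.0         # lambda_2 in Eq. (2)
    lambda_sb: float = 10.0           # lambda in Eq. (3)
    spade_lr_g: float = 1e-4          # SPADE generator LR (pretraining and end-to-end)
    spade_lr_d: float = 4e-4          # SPADE discriminator LR
    sb_lr_pretrain: float = 1e-3      # segmentation generator LR, pretraining
    sb_lr_finetune_cityscapes: float = 1e-5
    sb_lr_finetune_ade: float = 1e-4
    gumbel_tau: float = 1.0           # temperature in Eq. (1)
```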
4.1 QUALITATIVE RESULTS

In Figures 3, 4, and 5, we provide qualitative comparisons of the competing methods on the three aforementioned datasets. We observe that both Cityscapes-5K and ADE-Indoor are very challenging for the state-of-the-art ProGAN and BigGAN models, likely due to the complexity of the data and the small number of training instances. Even at a resolution of 128x128 on the ADE-Indoor dataset, BigGAN suffers from mode collapse, as illustrated in Figure 5. In contrast, SB-GAN significantly improves the structure of the scene distribution and provides samples of higher quality. On Cityscapes-25K, the performance improvement of SB-GAN is more modest due to the large number of training images available. It is worth emphasizing that in this case only 3K ground truth segmentations are available to train SB-GAN. Compared to BigGAN, images synthesized by SB-GAN are sharper and contain more structural details (e.g., one can zoom in on the synthesized cars). Additional synthesized semantic layouts and images are illustrated in Figures 7 to 10 in the Appendix.

Table 1: FID of the synthesized samples (lower is better), averaged over 5 random sets of samples. Images were synthesized at a resolution of Xx2X on Cityscapes and XxX on ADE-Indoor.

(a) X = 256
                  ProGAN   SB-GAN w/o FT   SB-GAN
Cityscapes-5K      92.57        83.20       65.49
Cityscapes-25K     63.87        71.13       62.97
ADE-Indoor        104.83        91.80       85.27

(b) X = 128
                  ProGAN   BigGAN   SB-GAN
Cityscapes-5K     178.19      -      57.48
Cityscapes-25K     56.70    64.82    54.92
ADE-Indoor         85.94   156.65    81.39

4.2 QUANTITATIVE EVALUATION

To provide a thorough empirical evaluation of the proposed approach, we generate samples for each dataset and report the FID scores of the resulting images (averaged across 5 sets of generated samples). We evaluate SB-GAN both before and after end-to-end fine-tuning, and compare our method to two strong baselines, ProGAN (Karras et al., 2017) and BigGAN (Brock et al., 2019). The results are detailed in Tables 1a and 1b. First, in the low-data regime, even without fine-tuning, our Semantic Bottleneck GAN produces higher-quality samples and significantly outperforms the baselines on Cityscapes-5K and ADE-Indoor. The advantage of our proposed method is even more striking on smaller datasets. While competing methods are unable to learn a high-quality model of the underlying distribution without having access to a large number of samples, SB-GAN is less sensitive to the number of training data points. Secondly, we observe that by jointly training the two components, SB-GAN produces state-of-the-art results across all three datasets.

We were not able to successfully train BigGAN at a resolution of 256x512 due to instability observed during training and mode collapse. Table 1b shows the results for a lower-resolution setting, for which we were able to successfully train BigGAN. We report the results before the training collapses. BigGAN is, to a certain extent, able to capture the distribution of Cityscapes-25K, but fails completely on ADE-Indoor. Interestingly, BigGAN fails to capture the distribution of Cityscapes-5K even at 128x128 resolution.

Generating by conditioning on real segmentations. To independently assess the impact of end-to-end training on the conditional image synthesis sub-network, we evaluate the quality of generated samples when conditioning on ground truth validation segmentations from each dataset. Comparisons to the baseline network SPADE (Park et al., 2019) are provided in Table 2 and Figures 13 and 14 in the Appendix. We observe that the image synthesis component of SB-GAN consistently outperforms SPADE across all three datasets, indicating that fine-tuning on data sampled from the segmentation generator improves the conditional image generator.

Table 2: FID of the synthesized samples when conditioned on the ground truth labels. For SB-GAN, we train the entire model end-to-end and extract the trained SPADE.

                  SPADE   SB-GAN
Cityscapes-5K     72.12    60.39
Cityscapes-25K    60.83    54.13
ADE-Indoor        50.30    48.15

Fine-tuning ablation study. To dissect the effect of end-to-end training, we perform a study on different components of SB-GAN in the Appendix. In particular, we consider four settings: (1) SB-GAN before end-to-end fine-tuning, (2) fine-tuning only the semantic bottleneck synthesis component, (3) fine-tuning only the conditional image synthesis component, and (4) fine-tuning all jointly.

4.3 PERCEPTUAL EVALUATION

We used Amazon Mechanical Turk (AMT) to assess the performance of each method on each dataset, using 600 (synthesized image, human evaluator) pairs with a total of 200 unique synthesized images. For each image, evaluators were asked to assign a score between 1 and 4, indicating low- to high-quality images, respectively. The results are summarized in Table 3 and are consistent with our FID-based evaluations.

Table 3: Average perceptual evaluation scores, where each evaluator selected a quality score in the range of 1 (terrible quality) to 4 (high quality) for each image.

                  ProGAN   BigGAN   SB-GAN
Cityscapes-5K      2.08      -       2.48
Cityscapes-25K     2.53     2.27     2.61
ADE-Indoor         2.35     1.96     2.49

5 CONCLUSION

We proposed an end-to-end Semantic Bottleneck GAN model that synthesizes semantic layouts from scratch, and then generates photo-realistic scenes conditioned on the synthesized layouts. Through extensive quantitative and qualitative evaluations, we showed that this novel end-to-end training pipeline significantly outperforms state-of-the-art models in the unconditional synthesis of complex scenes. In addition, Semantic Bottleneck GAN strongly improves the performance of the state-of-the-art semantic image synthesis model in synthesizing photo-realistic images from ground truth segmentations. As future work, one could explore novel ways to train GANs with discrete outputs, especially to deal with the non-differentiable nature of the generated outputs.
aF-dixAE0H
Reasonable approach but validation is unconvincing
4: Ok but not good enough - rejection
The paper suggests a new approach for unconditional generation of complex scenes. The approach performs generation in two steps: first a semantic map is generated from noise using a conventional generator architecture, then the semantic map is turned into an RGB image by the SPADE translator.

The paper has several strengths. First, the idea is clear and may make sense (though this has not been shown convincingly). Furthermore, the paper is well written and has a detailed related-work review (though some important papers are missed). The results are also interesting.

Despite the strengths, I think that the paper may not be suitable for ICLR in the current form for the following reasons:

1) Novelty. The idea of two-stage generation of complex images with GANs has been proposed before in a well-known paper [Wang & Gupta, Generative Image Modeling Using Style and Structure Adversarial Networks, ECCV16]. There, an image of normals rather than a semantic segmentation served as the intermediate ("bottleneck") representation; otherwise the idea is very similar. It is likely that the normal map may be a better intermediate representation since it is continuous-valued and does not need to deal with discretization issues. Overall, a comparison and a proper positioning w.r.t. [Wang & Gupta] is needed.

2) Deficient comparisons with 1-step GANs. The authors for some reason chose Progressive GANs and BigGAN as the reference 1-stage GANs. This choice is totally unclear to me. StyleGAN v1 and v2 are improved versions of ProGAN and should have been tried instead. Given the use of SPADE (i.e., a style-based generator) in the authors' architecture, the comparison to StyleGAN would be all the more natural. I find it very likely that the result of StyleGAN can be very similar to or better than the authors' after a similar amount of tuning, especially on Cityscapes-25K. I am therefore not convinced that the proposed idea is actually working.

3) [Minor] There is a published CVPR workshop paper with a very similar idea: https://openaccess.thecvf.com/content_CVPRW_2020/html/w23/Volokitin_Decomposing_Image_Generation_Into_Layout_Prediction_and_Conditional_Synthesis_CVPRW_2020_paper.html. The results are worse and some important differences exist. However, it does undermine the novelty. Still, a CVPR workshop paper may be missed by the community, so I weigh this issue as minor.

4) The results are interesting, but they are not terrific. I wonder if the authors can scale their method to higher resolutions (which would be very useful for complex scenes), or if it would break down in some way.

5) [Minor; suggestion, not a criticism] There is an obvious use for what the authors are doing. The data they generate can be used to train semantic segmentation networks (essentially serving as dataset augmentation). I think having an evaluation of this aspect would be useful and would make the paper stronger.

6) [Minor] The phrase "In fact, due to the missing ordering relationships, generating smooth segmentation maps cannot be enforced by smoothness among values of neighboring pixels." should be reformulated. A large body of work exists (generally associated with MRFs/CRFs in computer vision, and also in statistical physics) on enforcing smoothness (in different senses) in discrete-valued maps, e.g., via the Potts prior.
To sum up, the idea is clear and well described, but the paper does not convince me that the idea is working (improves over StyleGAN; can synthesize high-resolution images), and in particular that the semantic bottleneck works better than other bottlenecks (e.g., the one proposed by [Wang & Gupta]).
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Unconditional Synthesis of Complex Scenes Using a Semantic Bottleneck ### Paper Abstract Coupling the high-fidelity generation capabilities of label-conditional image synthesis methods with the flexibility of unconditional generative models, we propose a semantic bottleneck GAN model for unconditional synthesis of complex scenes. We assume pixel-wise segmentation labels are available during training and use them to learn the scene structure through an unconditional progressive segmentation generation network. During inference, our model first synthesizes a realistic segmentation layout from scratch, then synthesizes a realistic scene conditioned on that layout through a conditional segmentation-to-image synthesis network. When trained end-to-end, the resulting model outperforms state-of-the-art generative models in unsupervised image synthesis on two challenging domains in terms of the Frechet Inception Distance and perceptual evaluations. Moreover, we demonstrate that the end-to-end training significantly improves the segmentation-to-image synthesis sub-network, which results in superior performance over the state-of-the-art when conditioning on real segmentation layouts. ### Paper Keywords ["Unconditional Image Synthesis", "Complex Scene", "GAN", "Semantic Bottleneck"] ### Paper Content ABSTRACTCoupling the high-fidelity generation capabilities of label-conditional image syn-thesis methods with the flexibility of unconditional generative models, we proposea semantic bottleneck GAN model for unconditional synthesis of complex scenes.We assume pixel-wise segmentation labels are available during training and usethem to learn the scene structure through an unconditional progressive segmenta-tion generation network. During inference, our model first synthesizes a realisticsegmentation layout from scratch, then synthesizes a realistic scene conditionedon that layout through a conditional segmentation-to-image synthesis network.When trained end-to-end, the resulting model outperforms state-of-the-art gen-erative models in unsupervised image synthesis on two challenging domains interms of the Fr ́echet Inception Distance and perceptual evaluations. Moreover, wedemonstrate that the end-to-end training significantly improves the segmentation-to-image synthesis sub-network, which results in superior performance over thestate-of-the-art when conditioning on real segmentation layouts.1 I NTRODUCTIONSignificant strides have been made on generative models for image synthesis, with a variety of meth-ods based on Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) achieving state-of-the-art performance. At lower resolutions or in specialized domains, GAN-based methods areable to synthesize samples which are near-indistinguishable from real samples (Brock et al., 2019).However, generating complex, high-resolution scenes from scratch remains a challenging problem,as shown in Figure 1-(a) and (b). As image resolution and complexity increase, the coherence ofsynthesized images decreases — samples lack consistent local or global structures.Stochastic decoder-based models, such as conditional GANs, were recently proposed to alleviatesome of these issues. 
In particular, both Pix2PixHD (Wang et al., 2018) and SPADE (Park et al.,2019) are able to synthesize high-quality scenes using a strong conditioning mechanism based onsemantic segmentation labels during the scene generation process. Global structure encoded inthe segmentation layout of the scene is what allows these models to focus primarily on generatingconvincing local content consistent with that structure.A key practical drawback of such conditional models is that they require full segmentation layoutsas input. Thus, unlike unconditional generative approaches which synthesize images from randomlysampled noise, these models are limited to generating images from a set of scenes that is prescribedin advance, typically either through segmentation labels from an existing dataset, or scenes that arehand-crafted by experts.Contributions To overcome these limitations, we propose a new model, the Semantic BottleneckGAN (SB-GAN), which couples high-fidelity generation capabilities of label-conditional modelswith the flexibility of unconditional image generation. This in turn enables our model to synthesizean unlimited number of novel complex scenes, while still maintaining high-fidelity output charac-teristic of image-conditional models.Our SB-GAN first unconditionally generates a pixel-wise semantic label map of a scene (i.e. foreach spatial location it outputs a class label), and then generates a realistic scene image by condi-tioning on that semantic map, Figure 1-(d). By factorizing the task into these two steps, we are able1Under review as a conference paper at ICLR 2021Z... ... ... ZZ... Conditional Image Synthesis Segmentation Layout Synthesis 4x8 256x512 (d) Semantic Bottleneck GAN: unconditional synthesis of complex scenes (a) Non-complex samples synthesized by BigGAN (from ImageNet) (b) A complex sample synthesized by BigGAN (from Cityscapes) (c) A complex sample synthesized by SB-GAN (from Cityscapes) Figure 1: (a) Examples of non-complex images from ImageNet synthesized by the state-of-the-art BigGAN model (Brock et al., 2019). Although these samples look decent, the complex scenessynthesized by BigGAN (e.g., from the Cityscapes dataset) are blurry and defective in local structure(e.g., cars are blended together) (b). Zoom in for more detail. (c) A complex scene synthesizedby our model respects both local and global structural integrity of the scene. (d) Schematic ofour unconditional Semantic Bottleneck GAN. We progressively train the adversarial segmentationsynthesis network to generate realistic segmentation maps from scratch, then synthesize a photo-realistic image using a conditional image synthesis network. End-to-end coupling of these twocomponents results in state-of-the-art unconditional synthesis of complex scenes.to separately tackle the problems of producing convincing segmentation layouts (i.e. a useful globalstructure) and filling these layouts with convincing appearances (i.e. local structure). When trainedend-to-end, the model yields samples which have a coherent global structure as well as fine localdetails, e.g., Figure 1-(c). Empirical evaluation shows that our Semantic Bottleneck GAN achievesa new state-of-the-art on two complex datasets with relatively small number of training images,Cityscapes and ADE-Indoor, as measured both by the Fr ́echet Inception Distance (FID) and by per-ceptual evaluations. 
Additionally, we observe that the conditional segmentation-to-image synthesiscomponent of our SB-GAN jointly trained with segmentation layout synthesis significantly improvesthe state-of-the-art semantic image synthesis network (Park et al., 2019), resulting in higher-qualityoutputs when conditioning on ground truth segmentation layouts.Key Challenges While both unconditional generation and image-to-image translation are well-explored learning problems, fully unconditional generation of the segmentation maps is a notori-ously hard task: (i) Semantic categories do not respect any ordering relationships and the networkis therefore required to capture the intricate relationship between segmentation classes, their shapes,and their spatial dependencies. (ii) As opposed to RGB values, semantic categories are discrete,hence non-differentiable which poses a challenge for end-to-end training (Sec. 3.2) (iii) Naivelycombining state-of-the-art unconditional generation and image-to-image translation models leads topoor performance. However, by carefully designing an additional discriminator component and acorresponding training protocol, we not only manage to improve the performance of the end-to-endmodel, but also the performance of each component separately (Sec. 3.3).We emphasize that despite these challenges our approach scales to 256256resolution and 95semantic categories, whereas existing state-of-the-art GAN models directly generating RGB imagesat that resolution already suffer from considerable instability (Sec. 4).2 R ELATED WORKGenerative Adversarial Networks (GANs) GANs (Goodfellow et al., 2014) are a powerful classof generative models successfully applied to various image synthesis tasks such as image style trans-fer (Isola et al., 2017; Zhu et al., 2017), unsupervised representation learning (Chen et al., 2016;Pathak et al., 2016; Radford et al., 2016), image super-resolution (Ledig et al., 2017; Dong et al.,2016), and text-to-image synthesis (Zhang et al., 2017; Xu et al., 2018; Qiao et al., 2019b). TrainingGANs is notoriously hard and recent efforts focused on improving neural architectures (Wang &Gupta, 2016; Karras et al., 2017; Zhang et al., 2019; Chen et al., 2019a), loss functions (Arjovskyet al., 2017), regularization (Gulrajani et al., 2017; Miyato et al., 2018), large-scale training (Brock2Under review as a conference paper at ICLR 2021Discriminator “Which image is real?” z~~Random noise Discriminator “Which segmentation is real?” Segmentation Synthesis Network Conditional Image Synthesis Network Discriminator “Which image is more consistent with the segmentation?” Figure 2: Schematic of Semantic Bottleneck GAN. Starting from random noise, we synthesize asegmentation layout and use a discriminator to bias the segmentation synthesis network towards re-alistic looking segmentation layouts. The generated layout is then provided as input to a conditionalimage synthesis network to synthesize the final image. A second discriminator is used to bias theconditional image synthesis network towards realistic images paired with real segmentation layouts.Finally, a third unconditional discriminator is used to bias the conditional image synthesis networktowards generating images that match the real image distribution.et al., 2019), self-supervision (Chen et al., 2019b), and sampling (Brock et al., 2019; Azadi et al.,2019a). 
Improving the performance of GANs by disentangling structure and style has been stud-ied by Wang & Gupta (2016) where structure is represented by a surface normal map and style isthe texture mapped onto the structure. Another compelling approach which enables generation ofhigh-resolution images is based on progressive training: a model is trained to first synthesize lower-resolution images (e.g. 88), then the resolution is gradually increased until the desired resolutionis achieved (Karras et al., 2017). Recently, Brock et al. (2019) showed that GANs significantly ben-efit from large-scale training, both in terms of model size and batch size. We note that these modelsare able to synthesize high-quality images in settings where objects are very prominent and centrallyplaced or follow some well-defined structure, as the corresponding distribution is easier to capture.In contrast, when the scenes are more complex and the amount of data is limited, the task becomesextremely challenging for these state-of-the-art models. We aim to improve the performance in thecontext of complex scenes and a small number of training examples by disentangling the imagegeneration problem into learning the structure represented by semantic layouts and filling in theRGB details using a semantic image synthesis model. A similar idea was proposed by a concurrentwork (V olokitin et al., 2020) with substantial differences in the model and results.GANs on discrete domains GANs for discrete domains have been investigated in severalworks (Yu et al., 2017b; Lin et al., 2017; Bojchevski et al., 2018; Lu et al., 2018). Training inthis domain is even more challenging as the samples from discrete distributions are not differen-tiable with respect to the network parameters. This problem can be somewhat alleviated by usingthe Gumbel-softmax distribution, which is a continuous approximation to a multinomial distributionparameterized in terms of the softmax function (Kusner & Hern ́andez-Lobato, 2016). We will showhow to apply a similar principle to learn the distribution of discrete segmentation masks.Conditional image synthesis In conditional image synthesis one aims to generate images by con-ditioning on an input which can be provided in the form of an image (Isola et al., 2017; Zhu et al.,2017; Azadi et al., 2018; 2019b; Liu et al., 2017), a text phrase (Reed et al., 2016; Zhang et al.,2017; Qiao et al., 2019a; Ashual & Wolf, 2019; Hong et al., 2018), a scene graph (Johnson et al.,2018; Ashual & Wolf, 2019), a class label, or a semantic layout (Odena et al., 2017; Chen & Koltun,2017; Wang et al., 2018; Park et al., 2019). These conditional GAN methods learn a mapping thattranslates samples from the source distribution into samples from the target domain.The text-to-image synthesis models proposed in (Hong et al., 2018; Li et al., 2019) decomposethe synthesis task into multiple steps. As illustrated in the Appendix, given the text description, asemantic layout is constructed by generating object bounding boxes and refining each box by esti-mating object shapes. Then, an image is synthesized conditionally on the generated semantic layoutfrom the first step. Our work shares the same high-level idea of decomposing the image generationproblem into the semantic layout synthesis and the conditional semantic-layout-to-image synthesis.However, we note that the above approaches, as opposed to ours, are conditional and require su-pervision in the form of textual descriptions. 
Secondly, they are sequential in nature and synthesizemasks of a few different objects (e.g. person, elephant), but not a fully fine-grained semantic map(e.g. missing sky, grass, etc.). In stark contrast, our approach unconditionally synthesizes the full3Under review as a conference paper at ICLR 2021semantic layout of the entire scene from a noise input in an end-to-end network design. Due to theabove distinctions, their segmentation synthesis models differ significantly from ours in terms ofarchitecture and design as shown in Figure 6 in the Appendix.3 S EMANTIC BOTTLENECK GAN (SB-GAN)We propose an unconditional Semantic Bottleneck GAN architecture to learn the distribution ofcomplex scenes. To tackle the problems of learning both the global layout and the local structure,we divide this synthesis problem into two parts: an unconditional segmentation map synthesis net-work and a conditional segmentation-to-image synthesis model. Our first network is designed tocoarsely learn the scene distribution by synthesizing semantic layouts. It generates per-pixel seman-tic categories following the progressive GAN model architecture (ProGAN) (Karras et al., 2017).This fully unconditional generation of the segmentation maps is novel, very challenging, and a care-ful design is crucial, as described in Section 3.1. The second network populates the synthesizedsemantic layouts with texture by predicting RGB pixel values using Spatially-Adaptive Normaliza-tion (SPADE), following the architecture of the state-of-the-art semantic synthesis network in (Parket al., 2019). We assume the ground truth segmentation masks are available for all or part of thetarget scene dataset. In the following sections, we will first discuss our semantic bottleneck synthe-sis pipeline and summarize the SPADE network for image synthesis. We will then couple these twonetworks in an end-to-end design which we refer to as Semantic Bottleneck GAN (SB-GAN).3.1 S EMANTIC BOTTLENECK SYNTHESISOur goal is to learn a (coarse) estimate of the scene distribution from samples corresponding to realsegmentation maps with Ksemantic categories. Starting from random noise, we generate a tensorY2J1;KKN1HWwhich represents a per-pixel segmentation class, with HandWindicatingthe height and width, respectively, of the generated map and Nthe batch size. In practice, weprogressively train from a low to a high resolution using the ProGAN architecture (Karras et al.,2017) coupled with the Improved WGAN loss function (Gulrajani et al., 2017) on the ground truthdiscrete-valued segmentation maps, illustrated in Figure 1-(d). Similar to ProGAN, to increasethe spatial resolution of the generated segmentation maps during training, we incrementally addlayers to the generator and the discriminator. In contrast to ProGAN, in which the generator outputscontinuous RGB values, we predict per-pixel discrete semantic class labels. This task is extremelychallenging as it requires the network to capture the intricate relationship between segmentationclasses and their spatial dependencies. To this end, we apply the Gumbel-softmax trick (Jang et al.,2017; Maddison et al., 2016) coupled with a straight-through estimator (Jang et al., 2017), describedin detail below.We synthesize segmentation layouts by first generating per-pixel probability scores of belongingto each of the Ksemantic classes and then sampling a semantic class per pixel. The per-pixelprobability scores are computed by applying a softmax function to the last layer of the generator(i.e. 
logits) which results in probability maps Pij2[0;1]K, withPKk=1Pijk= 1 for each spatiallocation (i;j)2J1;HKJ1;W K. To sample a semantic class from this multinomial distribution, wewould ideally apply the following well-known procedure at each spatial location: (1) sample ki.i.d.samples,Gk, from the standard Gumbel distribution, (2) add these samples to each logit, and (3) takethe index of the maximal value. This reparametrization indeed allows for an efficient forward-pass,but is not differentiable. Nevertheless, the max can be replaced with the softmax function and thequality of the approximation can be controlled by varying the temperature hyperparameter — thesmaller the, the closer the approximation is to the categorical distribution (Jang et al., 2017):Sijk=expf(logPijk+Gk)=gPKi=1expf(logPiji+Gi)=g: (1)Similar to the real samples, the synthesized samples fed to the GAN discriminator should still con-taindiscrete category labels. As a result, for the forward pass, we compute arg maxkSk, while forthe backward pass, we use the soft predicted scores Skdirectly, a strategy known as straight-throughestimation (Jang et al., 2017).4Under review as a conference paper at ICLR 20213.2 S EMANTIC IMAGE SYNTHESISOur second sub-network converts the synthesized semantic layouts into photo-realistic images usingspatially-adaptive normalization (Park et al., 2019). The segmentation masks are employed to spreadthe semantic information throughout the generator by modulating the activations with a spatiallyadaptive learned transformation. We follow the same generator and discriminator architectures andloss functions used in (Park et al., 2019), where the generator contains a series of SPADE residualblocks with upsampling layers. The loss functions to train SPADE are summarized as:LDSPD=Ey;x[min(0;1 +DSPD(y;x))]Ey[min(0;1DSPD(y;G SPD(y)))]LGSPD=Ey[DSPD(y;G SPD(y)))] +1LVGG1+2LFeat1; (2)whereGSPD,DSPDstand for the SPADE generator and discriminator, and LVGG1andLFeat1representthe VGG and discriminator feature matching L1loss functions, respectively (Park et al., 2019; Wanget al., 2018). We pre-train this network using pairs of real RGB images, x, and their correspondingreal segmentation masks, y, from the target scene data set.In the next section, we will describe how to employ the synthesized segmentation masks in an end-to-end manner to improve the performance of both the semantic bottleneck and the semantic imagesynthesis sub-networks.3.3 E ND-TO-END FRAMEWORKAfter training semantic bottleneck synthesis model to synthesize segmentation masks and the se-mantic image synthesis model to stochastically map segmentations to photo-realistic images, weadversarially fine-tune the parameters of both networks in an end-to-end approach by introducing anunconditional discriminator network on top of the SPADE generator (see Figure 2).This second discriminator, D2, has the same architecture as the SPADE discriminator, but is de-signed to distinguish between real RGB images and the fake ones generated from the synthesizedsemantic layouts. Unlike the SPADE conditional GAN loss, which examines pairs of input segmen-tations and output images, (y;x)in equation 2, the GAN loss on D2,LD2, is unconditional and onlycompares real images to synthesized ones, as shown in equation 3:LD2=Ex[min(0;1 +D2(x))]Ez[min(0;1D2(G(z)))] (3)LG=Ez[D2(G(z)))] +LGSPD+LGSB; G (z) =GSPD(GSB(z))whereGSBrepresents the semantic bottleneck synthesis generator, and LGSBis the improved WGANloss to pretrain GSBdescribed in Section 3.1. 
In contrast to the conditional discriminator in SPADE,which enforces consistency between the input semantic map and the output image, D2is primarilyconcerned with the overall quality of the final output. The hyper parameter determines the ratiobetween the two generators during fine-tuning. The parameters of both generators, GSBandGSPD, aswell as the corresponding discriminators, DSBandDSPD, are updated in this end-to-end fine-tuning.We illustrate our final end-to-end network in Figure 2. Jointly fine-tuning the two networks in anend-to-end fashion allows the two networks to reinforce each other, leading to improved perfor-mance. The gradients with respect to RGB images synthesized by SPADE are back-propagated tothe segmentation synthesis model, thereby encouraging it to synthesize segmentation layouts thatlead to higher quality final images. Hence, SPADE plays the role of a loss function for synthesiz-ing segmentations, but in the RGB space, hence providing a goal that was absent from the initialtraining. Similarly, fine-tuning SPADE with synthesized segmentations allows it to adapt to a morediverse set of scene layouts, which improves the quality of generated samples.4 E XPERIMENTS AND RESULTSWe evaluate the performance of the proposed approach on two datasets containing images withcomplex scenes, where the ground truth segmentation masks are available during training (possiblyonly for a subset of the images). We also study the role of the two network components, semanticbottleneck and semantic image synthesis, on the final result. We compare the performance of SB-GAN against the state-of-the-art BigGAN model (Brock et al., 2019) as well as a ProGAN (Karraset al., 2017) baseline that has been trained on the RGB images directly. We evaluate our methodusing Fr ́echet Inception Distance (FID) as well as a perceptual evaluation.5Under review as a conference paper at ICLR 2021SB-GAN ProGAN Figure 3: Images synthesized on Cityscapes-5K. Best viewed on screen; zoom in for more detail.Although both models capture the general scene layout, SB-GAN (1st row) generates more convinc-ing objects, e.g. buildings and cars.SB-GAN ProGAN BigGAN Figure 4: Images synthesized on Cityscapes-25K. Best viewed on screen; zoom in for more detail.Images synthesized by BigGAN (3rd row) are blurry and sometimes defective in local structures.Datasets We study the performance of our model on the Cityscapes and ADE-indoor datasets asthe two domains with complex scene images.Cityscapes-5K (Cordts et al., 2016) contains street scene images in German cities with trainingand validation set sizes of 3,000 and 500 images, respectively. Ground truth segmentation maskswith 33 semantic classes are available for all images in this dataset.Cityscapes-25K (Cordts et al., 2016) contains street scene images in German cities with train-ing and validation set sizes of 23,000 and 500 images, respectively with 19 semantic classes.Cityscapes-5K is a subset of this dataset, providing 3,000 images in the training set here as wellas the entire validation set. Fine ground truth annotations are only provided for this subset, withthe remaining 20,000 training images containing only coarse annotations. 
We extract the corre-sponding fine annotations for the rest of training images using the state-of-the-art segmentationmodel (Yu et al., 2017a) trained on the training annotated samples from Cityscapes-5K.ADE-Indoor is a subset of the ADE20K dataset (Zhou et al., 2017) containing 4,377 challengingtraining images from indoor scenes and 433 validation images with 95 semantic categories.Evaluation We use the Fr ́echet Inception Distance (FID) (Heusel et al., 2017) as well as a percep-tual evaluation of the quality of the generated samples. To compute FID, the real data and generatedsamples are embedded in a specific layer of a pre-trained Inception network. Then, a multivariateGaussian is fit to the data, and the distance is computed as FID(x;g) =jjxgjj22+ Tr( x+g2(xg)12), whereanddenote the empirical mean and covariance, and subscripts xandgdenote the real and generated data respectively. FID is sensitive to both the addition of spuriousmodes and to mode dropping (Sajjadi et al., 2018; Lucic et al., 2018). On the Cityscapes dataset,we ran five trials where we computed FID on 500 random synthetic images and 500 real validationimages, and report the average score. On ADE-Indoor, this is repeated on batches of 433 images.Implementation details In all our experiments, we set 1=2= 10 , and= 10 . The initialgenerator and discriminator learning rates for training SPADE both in the pretraining and end-to-endsteps are 104and4104, respectively. The learning rate for the semantic bottleneck synthesissub-network is set to 103in the pretraining step and to 105in the end-to-end fine-tuning on6Under review as a conference paper at ICLR 2021SB-GAN ProGAN BigGAN Figure 5: Images synthesized on ADE-Indoor. This dataset is very challenging, causing modecollapse for the BigGAN model (3rd row). In contrast, samples generated by SB-GAN (1st row) aregenerally of higher quality and much more structured than those of ProGAN (2nd row).Table 1: FID of the synthesized samples (lower is better), averaged over 5 random sets of samples.Images were synthesized at resolution of X2Xon Cityscapes and XXon ADE-Indoor.(a)X= 256ProGAN SB-GANW/O FTSB-GANCITYSCAPES -5K 92.57 83.20 65.49CITYSCAPES -25K 63.87 71.13 62.97ADE-I NDOOR 104.83 91.80 85.27(b)X= 128ProGAN BigGAN SB-GAN178.19 - 57.4856.7 64.82 54.9285.94 156.65 81.39Cityscapes, and to 104for ADE-Indoor. The temperature hyperparameter, , is always set to 1.For BigGAN, we followed the setup by Lucic et al. (2019)1, where we modified the code to allow fornon-square images of Cityscapes. We used one class label for all images to have an unconditionalBigGAN model. For both datasets, we varied the batch size (using values in f128;256;512;2048g),learning rate, and location of the self-attention block. We trained the final model for 50K iterations.4.1 Q UALITATIVE RESULTSIn Figures 3, 4, and 5, we provide qualitative comparisons of the competing methods on the threeaforementioned datasets. We observe that both Cityscapes-5K and ADE-Indoor are very challeng-ing for the state-of-the-art ProGAN and BigGAN models, likely due to the complexity of the dataand small number of training instances. Even at a resolution of 128128on the ADE-Indoordataset, BigGAN suffers from mode collapse, as illustrated in Figure 5. In contrast, SB-GAN sig-nificantly improves the structure of the scene distribution and provides samples of higher quality.On Cityscapes-25K, the performance improvement of SB-GAN is more modest due to the largenumber of training images available. 
It is worth emphasizing that in this case only 3K groundtruth segmentations are available to train SB-GAN. Compared to BigGAN, images synthesized bySB-GAN are sharper and contain more structural details (e.g., one can zoom-in on the synthesizedcars). Additional synthesized semantic layouts and images are illustrated in Figures 7 to 10 in theAppendix.1Configuration as in https://github.com/google/compare_gan/blob/master/example_configs/biggan_imagenet128.gin7Under review as a conference paper at ICLR 2021Table 2: FID of the synthesized samples whenconditioned on the ground truth labels. ForSB-GAN, we train the entire model end-to-end and extract the trained SPADE.SPADE SB-GANCITYSCAPES -5K 72.12 60.39CITYSCAPES -25K 60.83 54.13ADE-I NDOOR 50.30 48.15Table 3: Average perceptual evaluation scoreswhen each evaluators has selected a qualityscore in the range of 1 (terrible quality) to 4(high quality) for each image.ProGAN BigGAN SB-GAN2.08 - 2.482.53 2.27 2.612.35 1.96 2.494.2 Q UANTITATIVE EVALUATIONTo provide a thorough empirical evaluation of the proposed approach, we generate samples foreach dataset and report the FID scores of the resulting images (averaged across 5 sets of generatedsamples). We evaluate SB-GAN both before and after end-to-end fine-tuning, and compare ourmethod to two strong baselines, ProGAN (Karras et al., 2017) and BigGAN (Brock et al., 2019).The results are detailed in Tables 1a and 1b. First, in the low-data regime, even without fine-tuning,our Semantic Bottleneck GAN produces higher quality samples and significantly outperforms thebaselines on Cityscapes-5K and ADE-Indoor. The advantage of our proposed method is even morestriking on smaller datasets. While competing methods are unable to learn a high-quality model ofthe underlying distribution without having access to a large number of samples, SB-GAN is lesssensitive to the number of training data points. Secondly, we observe that by jointly training the twocomponents, SB-GAN produces state-of-the-art results across all three datasets.We were not able to successfully train BigGAN at a resolution of 256512 due to instabilityobserved during training and mode collapse. Table 1b shows the results for a lower-resolutionsetting, for which we were able to successfully train BigGAN. We report the results before thetraining collapses. BigGAN is, to a certain extent, able to capture the distribution of Cityscapes-25K, but fails completely on ADE-Indoor. Interestingly, BigGAN fails to capture the distribution ofCityscapes-5K even at 128128resolution.Generating by conditioning on real segmentations To independently assess the impact of end-to-end training on the conditional image synthesis sub-network, we evaluate the quality of generatedsamples when conditioning on ground truth validation segmentations from each dataset. Compar-isons to the baseline network SPADE (Park et al., 2019) are provided in Table 2 and Figures 13and 14 in the Appendix. We observe that the image synthesis component of SB-GAN consistentlyoutperforms SPADE across all three datasets, indicating that fine-tuning on data sampled from thesegmentation generator improves the conditional image generator.Fine-tuning ablation study To dissect the effect of end-to-end training, we perform a study ondifferent components of SB-GAN in the Appendix. 
In particular, we consider three settings: (1) SB-GAN before end-to-end fine-tuning, (2) fine-tuning only the semantic bottleneck synthesis compo-nent, (3) fine-tuning only the conditional image synthesis component, and (4) fine-tuning all jointly.4.3 P ERCEPTUAL EVALUATIONWe used Amazon Mechanical Turk (AMT) to assess the performance of each method on each datasetusing600 pairs of (synthesized images, human evaluators) with a total of 200 unique synthesizedimages. For each image, evaluators were asked to assign a score between 1 to 4 to each image,indicating low-to-high quality images, respectively. The results are summarized in Table 3 and areconsistent with our FID-based evaluations.5 C ONCLUSIONWe proposed an end-to-end Semantic Bottleneck GAN model that synthesizes semantic layouts fromscratch, and then generates photo-realistic scenes conditioned on the synthesized layouts. Throughextensive quantitative and qualitative evaluations, we showed that this novel end-to-end trainingpipeline significantly outperforms the state-of-the-art models in unconditional synthesis of complex8Under review as a conference paper at ICLR 2021scenes. In addition, Semantic Bottleneck GAN strongly improves the performance of the state-of-the-art semantic image synthesis model in synthesizing photo-realistic images from ground truthsegmentations. As a future work, one could explore novel ways to train GANs with discrete outputs,especially to deal with the non-differentiable nature of the generated outputs.<|im_end|> <|im_start|>assistant ### Review Title Reasonable approach but validation is unconvincing ### Review Text The paper suggests a new approach for unconditional generation of complex scenes. The approach performs generation in two steps: first a semantic map is generated from noise using a conventional generator architecture, then the semantic image is turner into an RGB image by SPADE translator. The paper has several strengths. First, the idea is clear and may make sense (though this has not been shown convincingly). Furthermore, the paper is well written and has detailed related work review (though some important papers are missed). The results are also interesting. Despite the strenghts, I think that the paper may not be suitable for ICLR in the current form for the following reasons: 1) Novelty. The idea of two stage generation of complex images with GANs has been proposed before in a well-known paper [Wang & Gupta, Generative Image Modeling Using Style and Structure Adversarial Networks, ECCV16] . There, the image of normals rather than semantic segmentation served as an intermediate ("bottleneck") representation, otherwise the idea is very similar. It is likely that the normal map may be a better intermediate representation since it is continuous-valued and does not need to deal with discretization issues. Overall, a comparison and a proper positioning w.r.t. [Wang&Gupta] is needed. 2) Deficient comparisons with 1-step GANs. The authors for some reason chose Progressive GANs and BigGAN as the reference 1-stage GANs. This choice is totally unclear to me. StyleGAN v1 and v2 are improved versions of ProGAN and should have been tried instead. Given the use of SPADE (i.e. style based generator) in the authors' architecture, the comparison to StyleGAN would be all the more natural. I find it very likely that the result of StyleGAN can be very similar or better than the authors after similar amount of tuning, especially on Cityscapes 25K. 
I am therefore not convinced that the proposed idea is actually working.
3) [Minor] There is a published CVPR workshop paper with a very similar idea: https://openaccess.thecvf.com/content_CVPRW_2020/html/w23/Volokitin_Decomposing_Image_Generation_Into_Layout_Prediction_and_Conditional_Synthesis_CVPRW_2020_paper.html. The results are worse and some important differences exist; however, it does undermine the novelty. Still, a CVPR workshop paper may be missed by the community, so I weigh this issue as minor.
4) The results are interesting, but they are not terrific. I wonder if the authors can scale their method to higher resolutions (which would be very useful for complex scenes), or if it would break down in some way.
5) [Minor; suggestion, not a criticism] There is an obvious use for what the authors are doing. The data they generate can be used to train semantic segmentation networks (essentially serving as dataset augmentation). I think having an evaluation of this aspect would be useful and would make the paper stronger.
6) [Minor] The phrase "In fact, due to the missing ordering relationships, generating smooth segmentation maps cannot be enforced by smoothness among values of neighboring pixels." should be reformulated. A large body of work exists (generally associated with MRFs/CRFs in computer vision, and also in statistical physics) on enforcing smoothness (in different senses) in discrete-valued maps, e.g., via a Potts prior.
To sum up, the idea is clear and well described, but the paper does not convince me that the idea is working (improves over StyleGAN; can synthesize high-res images) and in particular that the semantic bottleneck works better than other bottlenecks (e.g., the one proposed by [Wang & Gupta]).
### Review Rating
4: Ok but not good enough - rejection
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct
B1xFVhActm
ICLR.cc/2019/Conference
2019
Fake Sentence Detection as a Training Task for Sentence Encoding
["Viresh Ranjan", "Heeyoung Kwon", "Niranjan Balasubramanian", "Minh Hoai"]
Sentence encoders are typically trained on generative language modeling tasks with large unlabeled datasets. While these encoders achieve strong results on many sentence-level tasks, they are difficult to train, with long training cycles. We introduce fake sentence detection as a new discriminative training task for learning sentence encoders. We automatically generate fake sentences by corrupting original sentences from a source collection and train the encoders to produce representations that are effective at detecting fake sentences. This binary classification task turns out to be quite efficient for training sentence encoders. We compare a basic BiLSTM encoder trained on this task with strong sentence encoding models (Skip-thought and FastSent) trained on a language modeling task. We find that the BiLSTM trains much faster on fake sentence detection (20 hours instead of weeks) using smaller amounts of data (1M instead of 64M sentences). Further analysis shows the learned representations also capture many syntactic and semantic properties expected from good sentence representations.
["fake sentence detection", "sentence encoders", "training task", "sentence", "tasks", "encoders", "fake sentences", "task", "generative language", "large unlabeled datasets"]
ABSTRACT
Sentence encoders are typically trained on generative language modeling tasks with large unlabeled datasets. While these encoders achieve strong results on many sentence-level tasks, they are difficult to train, with long training cycles. We introduce fake sentence detection as a new discriminative training task for learning sentence encoders. We automatically generate fake sentences by corrupting original sentences from a source collection and train the encoders to produce representations that are effective at detecting fake sentences. This binary classification task turns out to be quite efficient for training sentence encoders. We compare a basic BiLSTM encoder trained on this task with strong sentence encoding models (Skip-thought and FastSent) trained on a language modeling task. We find that the BiLSTM trains much faster on fake sentence detection (20 hours instead of weeks) using smaller amounts of data (1M instead of 64M sentences). Further analysis shows the learned representations also capture many syntactic and semantic properties expected from good sentence representations.

1 INTRODUCTION
Universal sentence encoding is a way to scale language processing tasks, especially when the target training data is limited. Solutions to sentence-level language processing tasks can be seen as consisting of two parts: one that creates a generic representation which approximates the meaning of the sentence, and another that uses this representation to make the target classifications. The idea behind universal sentence encoding is to learn generic sentence representations from unlabeled texts, which are often much larger and easier to obtain than the training data for the target tasks. A good universal encoding is one which is effective for training downstream target tasks.

The success of language modeling ideas for learning word representations has inspired similar ideas for learning universal sentence representations. The Skip-thought model (Kiros et al., 2015) uses a Bidirectional LSTM (BiLSTM) encoder, which encodes a given sentence and uses the encoding to generate neighboring sentences. Following up on this general idea, the FastSent model (Hill et al., 2016) simplifies the task and the encoder further by learning to output the average word embeddings so that they can predict words in the neighboring sentences (but not necessarily in a sequence).

This language modeling based training is undesirable in two respects. (1) Predicting neighboring sentences is a difficult generative training task. The task requires a large output space, at least the size of the vocabulary. It also requires a complex decoding model such as an LSTM decoder with millions of parameters. Effective training requires a large amount of training data with long training cycles. Indeed, Skip-thought requires tens of millions of training sentences, and it takes more than a week to train. (2) Even minor changes to a sentence can drastically change its meaning, a phenomenon that needs to be modeled for many NLP tasks. Language model based training essentially relies on the training sample for this kind of generalization, i.e., the training text collection should include many instances of sentences that have only minor lexical differences but are found in completely different contexts. The larger the dataset, the more likely it is for the encoder to learn how to model these fine-grained distinctions.

In this work, we introduce a discriminative training task, fake sentence detection, to address these challenges.
The main idea is to generate fake sentences by corrupting an original sentence. We use two methods to generate fake sentences: word shuffling, where we swap the positions of two words at random, and word dropping, where we drop a word at random from the original sentence. The resulting fake sentences are mostly similar to the original sentences: a fake sentence differs from its source in at most two word positions. We create the training corpus from a source corpus of unlabeled sentences. For each sentence in the source corpus we add multiple fake sentences by corrupting the source sentence.

This training task has three key advantages: (1) This binary classification task can be modeled with fewer parameters in the output layer and can be trained more efficiently compared to language modeling based training tasks. (2) From a language standpoint, the task forces the encoder to track both syntax and semantics. For instance, swapping words within a sentence can break the syntax (e.g., "John landed in Chicago on Friday" versus "John landed Chicago in on Friday"), and break or alter the semantics; it can lead to an incoherent or less plausible sentence (e.g., "John landed in Chicago on Friday" versus "Chicago landed in John on Friday"). (3) The task explicitly forces the encoder to capture big shifts in meaning that arise from small changes to a sentence. In particular, since the discrimination really depends on a small change to an original sentence, encoders cannot get away with simple aggregation; they need to model compositional aspects well enough to be able to detect small but semantically significant shifts in sentence constructions.

In overview, we train a bidirectional LSTM (BiLSTM) as our universal encoder and use a three-layer feed-forward network that uses the encoded sentence to predict if it is fake or real. We then evaluate this trained encoder without any further tuning on multiple sentence-level tasks and use probing tasks (Conneau et al., 2018) to study the syntactic and semantic properties in the encoded representations.

In summary, this paper makes the following main contributions:
1. Introduces fake sentence detection as an unsupervised training task for learning sentence encoders that can distinguish between small changes in mostly similar sentences.
2. Provides an empirical evaluation on multiple sentence-level tasks showing that representations trained on the fake sentence tasks outperform a strong baseline model trained on language modeling tasks, even when training on small amounts of data (1M vs. 64M sentences), reducing training time from weeks to within 20 hours.
3. Demonstrates that the fake sentence training produces encoded representations that are better at capturing many linguistic properties compared to language model training.

2 MOTIVATION
Many sentence-level prediction tasks require learning a function to estimate the probability of a label y given a sentence x. This function can be parameterized as $P(y\,|\,x;\theta)$, and the parameter vector $\theta$ can be learned by minimizing the negative conditional log likelihood of the training data $D$, i.e., $\min_{\theta} \sum_{(x,y)\in D} -\log P(y\,|\,x;\theta)$.
However, the amount of training data is limited in many cases, so learning $\theta$ by optimizing the conditional probability leads to overfitting, especially if one uses a deep neural network with millions of parameters.

One approach to address this problem is to break the task into two sub-tasks, each with its own set of parameters: (1) sentence encoding: produce an effective low-dimensional representation of the sentence, $\mathrm{enc}(x;\theta_1)$; (2) target prediction: estimate the probability of the label y conditioned on the encoding, $P(y\,|\,\mathrm{enc}(x;\theta_1);\theta_2)$. The dimension of $\theta_2$ is much smaller than the dimension of $\theta$, and it can be learned with the provided labeled training data $D$. For the encoder function $\mathrm{enc}(x;\theta_1)$, it is hoped that the parameter vector $\theta_1$ can be learned or pre-trained with a different sentence-level task using other data, which is possibly unlabeled but can be easily collected in large quantity. The encoder can be trained on the auxiliary task as part of a prediction function $f$ which has its own set of parameters $\theta_3$. The encoder parameters $\theta_1$ and the auxiliary task parameters $\theta_3$ are learned to minimize the loss on the auxiliary task: $\sum_{x\in U} L_{aux}(f(\mathrm{enc}(x;\theta_1);\theta_3))$.

In this work, we are interested in learning a universal sentence encoder that can be subsequently used for various downstream sentence-level tasks. One reasonable choice for the auxiliary loss function would be the negative log likelihood of the data, i.e., $-\log P(x\,|\,\theta_1,\theta_3)$, or equivalently $-\log P(\mathrm{enc}(x;\theta_1)\,|\,\theta_1,\theta_3)$. Recall that for the downstream task, we will seek the parameter vector $\theta_2$ to minimize the negative conditional log likelihood $-\log P(y\,|\,\mathrm{enc}(x;\theta_1);\theta_1,\theta_2)$. The combination of the two loss functions is $-\log P(y\,|\,\mathrm{enc}(x;\theta_1);\theta_1,\theta_2) - \log P(\mathrm{enc}(x;\theta_1)\,|\,\theta_1,\theta_3)$. Since the joint probability factorizes as $P(y,e) = P(y\,|\,e)\,P(e)$, this is equivalent to $-\log P(y,\mathrm{enc}(x;\theta_1)\,|\,\theta_1,\theta_2,\theta_3)$. In other words, we are maximizing the joint probability of observing the label y and the encoding vector $\mathrm{enc}(x;\theta_1)$. Optimizing the joint probability of the label and data corresponds to having a generative classifier, as opposed to the discriminative classifier that optimizes a conditional probability. As shown by Ng & Jordan (2002), a generative classifier converges faster to its asymptotic behavior than a discriminative classifier does. A generative approach is particularly useful when there is a limited amount of training data. Thus, we propose to learn a sentence encoder by maximizing the data likelihood $P(\mathrm{enc}(x;\theta_1)\,|\,\theta_1,\theta_3)$, and we expect it to be a useful universal sentence encoder, especially when there is a limited amount of labeled training data for the downstream sentence-level task.

It is surprisingly simple to learn an encoder to maximize the data likelihood $P(\mathrm{enc}(x;\theta_1)\,|\,\theta_1,\theta_3)$, and this paper proposes exactly that. We can parameterize this probability using a simple function such as a log-linear model:

$$P(\mathrm{enc}(x;\theta_1)\,|\,\theta_1,\theta_3) = \frac{1}{Z}\exp\big(\theta_3^{T}\,\mathrm{enc}(x;\theta_1)\big) \qquad (1)$$

where the functional form of the encoder enc is a BiLSTM, and $Z$ is a normalization constant to ensure a valid probability function:

$$Z = \sum_{x} \exp\big(\theta_3^{T}\,\mathrm{enc}(x;\theta_1)\big) = \sum_{x\in U} \exp\big(\theta_3^{T}\,\mathrm{enc}(x;\theta_1)\big) + \sum_{x\in V} \exp\big(\theta_3^{T}\,\mathrm{enc}(x;\theta_1)\big), \qquad (2)$$

where $U$ is the set of real sentences and $V$ is the set of fake sentences. Our objective is to learn $\theta_1, \theta_3$ to maximize the quantity in Equation 1. This is equivalent to maximizing $\sum_{x\in U} \exp(\theta_3^{T}\,\mathrm{enc}(x;\theta_1))$ while minimizing $\sum_{x\in V} \exp(\theta_3^{T}\,\mathrm{enc}(x;\theta_1))$. This can simply be done by training a classifier to separate real sentences from fake sentences. In this work, we approximate the space of fake sentences by randomly swapping or dropping words from real sentences.
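To make the link between Equations 1-2 and binary classification concrete, the following is a minimal PyTorch sketch of one discriminative training step. It is an illustrative reconstruction rather than the authors' code: `encoder`, `optimizer`, and the batch tensors are assumed placeholders, and the optimizer is assumed to cover both the encoder and the linear scorer.

```python
import torch
import torch.nn as nn

ENC_DIM = 4096  # 2 x 2048-d BiLSTM states, matching the implementation details later
score = nn.Linear(ENC_DIM, 1, bias=False)  # theta_3: one linear score per sentence
loss_fn = nn.BCEWithLogitsLoss()           # logistic loss over real/fake labels

def training_step(encoder, optimizer, real_batch, fake_batch):
    """One discriminative step: raise scores on real sentences (the set U)
    and lower them on fake sentences (the set V). Device handling omitted."""
    z_real = encoder(real_batch)                          # (B, ENC_DIM)
    z_fake = encoder(fake_batch)                          # (B, ENC_DIM)
    logits = score(torch.cat([z_real, z_fake], 0)).squeeze(1)
    labels = torch.cat([torch.ones(z_real.size(0)),
                        torch.zeros(z_fake.size(0))])
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With a sigmoid output, maximizing the log-likelihood of the real/fake labels pushes $\theta_3^{T}\,\mathrm{enc}(x;\theta_1)$ up on real sentences and down on fake ones, which is exactly the trade-off described above.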
Our method has some connection to the GAN framework (Goodfellow et al., 2014), where there is a generator that attempts to generate realistic data and a discriminator that distinguishes between real and generated data instances. As shown by Goodfellow et al. (2014), this approach will converge to a solution where the output of the discriminator is the estimated probability that a data point is real. In our case, we do not need to learn the generator and the discriminator in an adversarial manner because we already have a 'strong' generator that generates highly realistic sentences, simply by shuffling or dropping words. Furthermore, our method can be thought of as a generative model, defined and trained using a discriminative function. This aspect is related to the recently proposed Introspective Neural Networks (Lazarow et al., 2017).

There are some similarities between the proposed method and the Skip-thought sentence encoder. The training task of Skip-thought involves predicting the next or previous sentence z that follows or precedes the current encoded sentence x. This sentence language model is realized using an LSTM decoder, which sequentially generates a word at each position i, conditioning on the encoded sentence $\mathrm{enc}(x)$ and the words that have been decoded so far, $(z_1, \ldots, z_{i-1})$. This model can be seen as maximizing the conditional probability:

$$P(z\,|\,\mathrm{enc}(x;\theta_1);\theta_3) = \prod_{i=1}^{|z|} P(z_i\,|\,z_{1:i-1},\mathrm{enc}(x;\theta_1);\theta_3). \qquad (3)$$

Interestingly, both Skip-thought and our proposed method model the unlabeled data $U$, a sequence of sentences $u_1, u_2, \ldots, u_m$, and aim to optimize the joint probability distribution $P(U\,|\,\theta_1,\theta_3)$. In the case of Skip-thought, there is an underlying Markovian assumption that the probability of a sentence only depends on the previous sentence; that is, $P(U\,|\,\theta_1,\theta_3)$ is approximated by $\prod_k P(u_k\,|\,\mathrm{enc}(u_{k-1};\theta_1);\theta_3)$. In the case of the proposed method, we assume the sentences are independent of one another, so $P(U\,|\,\theta_1,\theta_3)$ can be approximated as $\prod_k P(\mathrm{enc}(u_k;\theta_1)\,|\,\theta_1,\theta_3)$. Both methods are generative models of the data, and both are useful sentence encoders for the downstream tasks, as will be seen in the experiment sections.

However, the Skip-thought model has several disadvantages compared to the proposed method: (1) The architecture of the Skip-thought model is much more complex. Skip-thought requires an LSTM decoder with millions of parameters. Furthermore, the need for computing the probability of individual words requires having a large output space, as large as the size of the vocabulary. This further increases the number of parameters and the complexity of the overall architecture. (2) The modeling task of Skip-thought is much harder than that of the proposed method. Skip-thought requires a probability model for both complete and incomplete sentences, and this is much harder than modeling the probability of complete sentences alone (as proposed here). (3) Due to the complexity of the architecture and the difficulty of the modeling task, Skip-thought requires much more training data and longer training time. (4) Skip-thought is a purely generative model and is only trained using the real data. Meanwhile, the proposed method is a generative model defined via a discriminative function. This function can be trained using both real and fake data, where the fake data can be selectively chosen to be very hard examples. Specifically, we use word shuffling and word dropping to generate hard examples that differ only slightly from the original sentences.
This enables the sentence encoder to focus on subtle semantic information.

3 RELATED WORK
Previous sentence encoding approaches can be broadly classified as supervised (Conneau et al., 2017; Cer et al., 2018; Marcheggiani & Titov, 2017; Wieting et al., 2015), unsupervised (Kiros et al., 2015; Hill et al., 2016), or semi-supervised approaches (Peters et al., 2018; Dai & Le, 2015; Socher et al., 2011; Clark et al., 2018).

The unsupervised approaches extend skip-gram (Mikolov et al., 2013) to the sentence level and use the sentence embedding to predict the adjacent sentences. Skip-thought (Kiros et al., 2015) uses a BiLSTM encoder to obtain a fixed-length embedding for a sentence, and uses a BiLSTM decoder to predict adjacent sentences. Training the Skip-thought model is expensive: even one epoch of training on the Toronto BookCorpus (Zhu et al., 2015) dataset takes more than two weeks (Hill et al., 2016) on a single GPU. FastSent (Hill et al., 2016) uses embeddings of a sentence to predict words from the adjacent sentences. A sentence is represented by simply summing up the word representations of all the words in the sentence. FastSent requires less training time than Skip-thought, but FastSent has lower performance on most downstream tasks.

The supervised approaches train the encoders on tasks such as NLI and use transfer learning to adapt the learned encoders to different downstream tasks (Conneau et al., 2017).

Some semi-supervised approaches (Peters et al., 2018; Dai & Le, 2015; Socher et al., 2011) train sentence encoders on large unlabeled datasets and do a task-specific adaptation using labeled data, while other approaches such as Cross-View Training (CVT) (Clark et al., 2018) jointly train the sentence encoder on the labeled and unlabeled data.
We propose two simple methods to generate noisy4Under review as a conference paper at ICLR 2019Figure 1: Figure shows the block diagram of the encoder and fully connected layers. Encoderconsists of a bidirectional LSTM followed by a max pooling layer. For classification, we use a MLPwith two hidden layers.sentences which look mostly similar to real sentences. We describe the noisy sentence generationstrategies in Section 4.1. Thus, we create a labeled dataset of real and fake sentences, and train asequential model to distinguish between real and fake sentences, which results in a model whoseclassification layer has far fewer parameters than previous language model based encoders.Our model architecture is described in Section 4.2.4.1 F AKE SENTENCE GENERATIONFor a sentence X=w1;w2;:::;wncomprising of nwords, we consider two strategies to generate anoisy version of the sentence: 1) WordShuffle : randomly sample two indices iandjcorrespondingto wordswiandwjinX, and shuffle the words to obtain the noisy sentence ^X. Noisy sentence ^Xwould be of the same length as the original sentence X.2) WordDrop : randomly pick one index icorresponding to word wiand drop the word from the sentence to obtain ^X. Note there can be manyvariants for this strategy but here we experiment with this basic choice.4.2 R EAL VERSUS FAKE SENTENCE CLASSIFICATIONFigure 1 shows the proposed architecture of our fake sentence classifier with an encoder and a Multi-layer Perceptron(MLP) with 2 hidden layers. We do not use any nonlinearity after these hiddenlayers, hence, these layers correspond to the single linear layer represented by 3in Equation 1.Another motivation behind not using the nonlinearity is to keep the classifier simple enought so thatthe encoder is forced to learn useful representation while training for the fake sentence classificationtask. The encoder consists of a bidirectional LSTM followed by a max pooling layer. Given asentenceX=w1;w2;:::;wnconsisting of nwords, the forward LSTM of the encoder generates ahidden state vector !ht. Similarly, the backward LSTM generates the hidden state ht. The forwardand backward hidden states are concatenated to get ut= ( !ht, ht). Processing the entire sentenceresults innsuch vectors, and we apply max-pooling to these concatenated hidden states to obtainvectorz.zserves as a fixed length encoding of the sentence X, which we then use as input to aMLP for classifying the sentence into real/fake classes.5 E VALUATION SETUPDownstream Tasks: We compare the sentence encoders trained on a large collection (BookCor-pus (Zhu et al., 2015)) by testing them on multiple sentence level classification tasks (MR, CR,SUBJ, MPQA, TREC, SST) and one NLI task defined over sentence-pairs (SICK). We also eval-5Under review as a conference paper at ICLR 2019Name Size Task No. 
5 EVALUATION SETUP
Downstream Tasks: We compare the sentence encoders trained on a large collection (BookCorpus (Zhu et al., 2015)) by testing them on multiple sentence-level classification tasks (MR, CR, SUBJ, MPQA, TREC, SST) and one NLI task defined over sentence pairs (SICK). We also evaluate the sentence representations for image and caption retrieval tasks on the COCO dataset (Lin et al., 2014). We use the same evaluation protocol and dataset split as (Karpathy & Fei-Fei, 2015; Conneau et al., 2017). Table 1 lists the classification tasks and the datasets. We also compare the sentence representations for how well they capture important syntactic and semantic properties using probing classification tasks (Conneau et al., 2018). For all downstream and probing tasks, we use the encoders to obtain representations for all the sentences, and train logistic regression classifiers on the training split. We tune the L2-norm regularizer using the validation split, and report the results on the test split.

Table 1: Downstream tasks and datasets.

Name   Size   Task              No. of Classes
MR     11K    Sentiment         2
CR     4K     Product Review    2
TREC   11K    Question type     6
SST    70K    Sentiment         2
MPQA   11K    Opinion Polarity  2
SUBJ   10K    Subjectivity      2
SICK   10K    NLI               3
COCO   123K   Retrieval         -

Training Corpus: The FastSent and Skip-thought encoders are trained on the full Toronto BookCorpus of 64M sentences (Zhu et al., 2015). Our models, however, train on a much smaller subset of only 1M sentences.

Table 2: Results on downstream tasks. Bold face indicates the best result, and underlined results show when fake sentence training is better than Skip-thought (full). COCO-Cap and COCO-Img are caption and image retrieval tasks on COCO. We report Recall@5 for the COCO retrieval tasks.

Model                     MR    CR    TREC  SST   MPQA  SUBJ  SICK  COCO-Cap  COCO-Img
NB-SVM                    79.4  81.8  -     -     86.3  93.2  -     -         -
FastSent                  70.8  78.4  80.6  -     80.6  88.7  -     -         -
Skip-thought (full)       76.5  80.1  92.2  82.0  87.1  93.6  82.3  72.2      66.2
Skip-thought (1M)         65.2  70.9  79.2  66.9  81.6  86.1  75.6  51.9      46.7
Skip-thought (full) + NB  80.4  81.3  -     -     87.5  93.6  -     -         -
MC-QT                     80.4  85.2  92.8  -     89.4  93.9  -     -         -
WordDrop                  78.8  82.2  86.6  82.9  89.8  92.7  83.2  73.8      67.3
WordShuffle               79.8  82.4  88.4  82.4  89.8  92.6  82.3  74.2      67.3

Table 3: Probing task accuracies. Tasks: SentLen: predict sentence length; WC: is a given word in the sentence; TreeDepth: depth of the syntactic tree; TopConst: predict the top-level constituent; BShift: is a bigram flipped in the sentence; Tense: predict the tense of the main verb; Subj(Obj)Num: singular or plural subject (object); SOMO: semantic odd man out; CoordInv: is the coordination inverted.

Model                SentLen  WC    TreeDepth  TopConst  BShift  Tense  SubjNum  ObjNum  SOMO  CoordInv
Skip-thought (full)  85.4     79.6  41.1       82.5      69.6    90.4   85.6     83.6    53.9  69.1
Skip-thought (1M)    54.7     33.9  30.0       60.7      58.9    85.3   76.4     70.9    51.9  61.4
WordDrop             86.7     90.1  48.0       81.9      73.2    87.7   87.3     82.7    59.2  70.6
WordShuffle          84.9     91.2  48.8       82.3      79.9    88.2   86.7     83.3    59.8  70.7

Sentence Encoder Implementation: Our sentence encoder architecture is the same as the BiLSTM-max model (Conneau et al., 2017). We represent words using 300-d pretrained GloVe embeddings (Pennington et al., 2014). We use a single-layer BiLSTM model with 2048-d hidden states. The MLP classifier we use for fake sentence detection has two hidden layers with 1024 and 512 neurons. We train separate models for word drop and word shuffle. The models are trained for 15 epochs with a batch size of 64 using the SGD algorithm; training converges with a validation set accuracy of 89 for word shuffle. The entire training completes in less than 20 hours on a single GPU machine.

Table 4: Example illustrating how Skip-thought (ST) and Fake sentence (FS) training allows representations to capture big shifts in meaning even with small changes to the sentence form. The table shows where both models rank a specific fake version of a real sentence. The ranking is obtained using cosine similarity over a random sample of 10000 real and fake sentences when projected via t-SNE.

Original                          Neighbor                          ST Rank  FS Rank
I love soccer.                    Soccer love I.                    2        6
He took her hand and kissed it.   He kissed her hand and took it.   1        4
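As a sketch of the evaluation protocol described above (a frozen encoder feeding a logistic regression probe whose L2 strength is tuned on the validation split), one possible implementation with scikit-learn is shown below. This is an illustrative reconstruction, not the authors' evaluation code: `encode_sentences` is an assumed helper returning the fixed-length encodings, and the regularization search grid is an assumption.

```python
from sklearn.linear_model import LogisticRegression

def evaluate_task(encode_sentences, splits):
    """Train a logistic regression probe on frozen sentence encodings,
    tuning the inverse L2 strength C on the validation split."""
    X_tr, y_tr = encode_sentences(splits["train"][0]), splits["train"][1]
    X_va, y_va = encode_sentences(splits["valid"][0]), splits["valid"][1]
    X_te, y_te = encode_sentences(splits["test"][0]), splits["test"][1]

    best_clf, best_acc = None, -1.0
    for C in [2 ** k for k in range(-4, 5)]:      # assumed search grid
        clf = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
        acc = clf.score(X_va, y_va)
        if acc > best_acc:
            best_clf, best_acc = clf, acc
    return best_clf.score(X_te, y_te)             # report test accuracy
```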
Baseline Approaches: We compare our results with previous unsupervised sentence encoders, Skip-thought (Kiros et al., 2015), FastSent (Hill et al., 2016), Quick-thought (MC-QT) (Logeswaran & Lee, 2018), and the bag-of-words baseline NB-SVM (Wang & Manning, 2012). We use the FastSent and Skip-thought results trained on the full BookCorpus as mentioned in Conneau et al. (2017). We also report Skip-thought results when trained on the smaller 1M sentence subset.

6 RESULTS
Classification and NLI: Results are shown in Table 2. Both fake sentence training tasks yield better performance on five out of the seven language tasks when compared to Skip-thought (full), i.e., even when it is trained on the full BookCorpus. Word drop and word shuffle performances are mostly comparable. The Skip-thought (1M) row shows that training on a sentence-level language modeling task can fare substantially worse when trained on a smaller subset of data. FastSent, while easier to train with faster training cycles, is better than Skip-thought (1M) but worse than the full Skip-thought model.

Further gains in performance can be obtained by validating the performance of the encoder on downstream tasks after each epoch of training and picking the encoder with the best performance on the validation set for the downstream tasks. For the MR and SUBJ tasks, this leads to accuracies of 80.3 and 93.3, respectively. For the remaining tasks, the performance gain is less significant.

Image-Caption Retrieval: On both caption and image retrieval tasks (last 2 columns of Table 2), fake sentence training with word dropping and word shuffling is better than the published Skip-thought results.

Linguistic Properties: Table 3 compares sentence encoders for various linguistic properties using the recently proposed probing tasks (Conneau et al., 2018). The goal of each task is to use the input sentence encoding to predict a particular syntactic or semantic property of the original sentence it encodes (e.g., predict if the sentence contains a specific word). Encodings from fake sentence training score higher on six out of the ten tasks.

A key property of the discriminative training is that it forces the encoder to be sensitive to small changes that can induce large meaning shifts. WordShuffle encodings turn out to perform significantly better on probing tasks that are related to this sensitivity property: semantic odd man out requires identifying word insertions that are incompatible with the context, tracking word content requires knowing whether a particular word is present or not, and bigram shift (BShift) requires knowing if words in adjacent positions are swapped (a subset of the training task for WordShuffle). Table 4 illustrates examples where fake sentence training is able to distinguish sentence meaning changes that result from a small change in surface form.

Sentence Lengths: We analyze the performance of the fake sentence and Skip-thought models to understand how the models behave for sentences of different lengths. Figure 2 compares the performance of fake sentence and Skip-thought encoders binned by length. It turns out that fake sentence training performs better on longer sentences on MR but not on SST, even though both are sentiment tasks.
The analysis for other tasks (not shown here) also indicates that fake sentence training is better for most sentence lengths but does not show a clear trend with respect to increasing sentence lengths.

Figure 2: Comparison between Skip-thought and WordShuffle encoders across different sentence lengths. Classification accuracy is shown for (a) MR and (b) SST sentiment classification tasks. [Two accuracy-vs-sentence-length plots; length bins range from <5 to >=30 words.]

Figure 3: Test accuracies on the WordShuffle task, downstream sentiment classification tasks (CR, SST), and probing tasks (ObjNum, BShift) for different training epochs. Convergence on the fake sentence task roughly corresponds to convergence on downstream tasks. [Accuracy vs. training epoch, epochs 1-10.]

Relation between Fake Sentence Classification and Downstream Tasks: In Figure 3, we compare the test performance on the fake sentence classification task (WordShuffle) with that on multiple downstream tasks. We notice that an encoder with good performance on the fake sentence classification task also gives good performance on the downstream tasks.

7 CONCLUSIONS
The effectiveness of universal sentence encoders depends both on the architecture of the encoders and on the training task itself. Language modeling is a suitable task as it fits the generative goal of modeling the distribution of language (text data). However, tackling this in a purely generative fashion (i.e., actually generating sentences) requires large models and long training times.

Instead, we introduced a discriminative formulation of the generative task called fake sentence detection. The sentence encoders are trained to produce representations which are effective at detecting if a given sentence is an original or a fake. This leads to better performance on downstream tasks and is able to represent semantic and syntactic properties, while also reducing the amount of training needed. As future work, the discriminative setup opens up the possibility for the training to be influenced by a specific downstream target task. In particular, we can create negative samples (fake sentences) that are focused towards those specific phenomena relevant to the target task.
B1e6e0PInX
Method description confusing; empirical comparison against previous work is lacking
3: Clear rejection
This paper proposes a method for learning sentence encoders using artificially generated (fake) sentences. While the idea is interesting, the paper has the following issues:
- There are other methods that aim at generating artificial training data, e.g., Z. Zhao, D. Dua, S. Singh. Generating Natural Adversarial Examples. International Conference on Learning Representations (ICLR), 2018, but no direct comparison is made. Also, InferSent (which is cited as related work) trains sentence encoders on SNLI: https://arxiv.org/pdf/1705.02364.pdf. Again, a comparison is needed, as the encoders it learns perform very well on a variety of tasks. Finally, the proposed idea is very similar to ULMfit (https://arxiv.org/pdf/1801.06146.pdf), which trains a language model on a lot of unlabeled data and then finetunes it discriminatively. There should also be a comparison against a language model without any extra training in order to assess the benefits of the fake sentence classification part of the model.
- It is unclear why the fake sentence construction method proposed, either swapping words or just removing them, produces sentences that are fake and/or useful to train on. Sure, it is simple, but not necessarily fake. A language model would be able to discriminate between them anyway, by assigning high probability to the original ones and low probability to the manipulated ones. I am not sure we need to train a classifier on top of that.
- I found the notation in section 2 confusing. What kind of distribution is P(enc(x, theta1) | theta2, theta3)? I understand that P(x | theta) is the probability of the sentence given a model, but what is the probability of the encoding? It would also be good to see the full derivation to arrive at the expression at the beginning of page 3.
- An argument in favour of the proposed method is training speed; however, given that less data is used to train it, it should indeed be faster. In fact, if we consider the amount of time per million sentences, the previous method considered in the comparison could be faster (20 hours for 1M sentences is 1280 hours for 64M sentences, more than 6 weeks). More importantly, it is unclear from the description whether the same data is used in training both systems or not.
- It is unclear how one can estimate the normalization factor in equation 2; it seems that one needs to enumerate over all fake sentences, which is a rather large number due to the number of possible word swaps in the sentence.
- I am not sure the generator proposed generates realistic sentences only; "Chicago landed in John on Friday" is rather implausible. Also, there is no generation method trained here; it is rule-based as far as I can tell. There is no way to get the trained model to generate a fake sentence, as far as I can tell.
- It is a bit odd to criticise other methods for using LSTMs with "millions of parameters" while the proposed approach also uses them. A comparison should calculate the number of parameters used in either case.
- What is the motivation for having multiple layers without non-linearity instead of a single layer?
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Fake Sentence Detection as a Training Task for Sentence Encoding ### Paper Abstract Sentence encoders are typically trained on generative language modeling tasks with large unlabeled datasets. While these encoders achieve strong results on many sentence-level tasks, they are difficult to train with long training cycles. We introduce fake sentence detection as a new discriminative training task for learning sentence encoders. We automatically generate fake sentences by corrupting original sentences from a source collection and train the encoders to produce representations that are effective at detecting fake sentences. This binary classification task turns to be quite efficient for training sentence encoders. We compare a basic BiLSTM encoder trained on this task with strong sentence encoding models (Skipthought and FastSent) trained on a language modeling task. We find that the BiLSTM trains much faster on fake sentence detection (20 hours instead of weeks) using smaller amounts of data (1M instead of 64M sentences). Further analysis shows the learned representations also capture many syntactic and semantic properties expected from good sentence representations. ### Paper Keywords ["fake sentence detection", "sentence encoders", "training task", "sentence", "tasks", "encoders", "fake sentences", "task", "generative language", "large unlabeled datasets"] ### Paper Content ABSTRACTSentence encoders are typically trained on generative language modeling taskswith large unlabeled datasets. While these encoders achieve strong results onmany sentence-level tasks, they are difficult to train with long training cycles. Weintroduce fake sentence detection as a new discriminative training task for learn-ing sentence encoders. We automatically generate fake sentences by corruptingoriginal sentences from a source collection and train the encoders to produce rep-resentations that are effective at detecting fake sentences. This binary classifica-tion task turns to be quite efficient for training sentence encoders. We compare abasic BiLSTM encoder trained on this task with strong sentence encoding mod-els (Skip-thought and FastSent) trained on a language modeling task. We findthat the BiLSTM trains much faster on fake sentence detection (20 hours insteadof weeks) using smaller amounts of data (1M instead of 64M sentences). Fur-ther analysis shows the learned representations also capture many syntactic andsemantic properties expected from good sentence representations.1 I NTRODUCTIONUniversal sentence encoding is a way to scale language processing tasks, especially when the tar-get training data is limited. Solutions to sentence-level language processing tasks can be seen asconsisting of two parts – one that creates a generic representation which approximates the meaningof the sentence, and another that uses this representations to make the target classifications. Theidea behind universal sentence encoding is to learn generic sentence representations from unlabeledtexts, which are often much larger and easier to obtain than the training data for the target tasks. Agood universal encoding is one which is effective for training downstream target tasks.The success of language modeling ideas for learning word representations has inspired similar ideasfor learning universal sentence representations. 
Skip-thought model (Kiros et al., 2015) uses a Bidi-rectional LSTM (BiLSTM) encoder, which encodes a given sentence, and uses the encoding togenerate neighboring sentences. Following up on this general idea, the FastSent model (Hill et al.,2016) simplifies the task and the encoder further by learning to output the average word embeddingsso that they can predict words in the neighboring sentences (but not necessarily in a sequence).This language modeling based training is undesirable in two respects. (1) Predicting neighboringsentences is a difficult generative training task. The task requires having a large output space,at least the size of the vocabulary. It also requires a complex decoding model such as an LSTMdecoder with millions of parameters. Effective training requires a large amount of training datawith long training cycles. Indeed Skip-thought requires tens of millions of training sentences andit takes more than a week to train. (2) Even minor changes to a sentence can drastically changeits meaning, a phenomenon that needs to be modeled for many NLP tasks. Language model basedtraining essentially relies on the training sample for this kind of generalization, i.e., the training textcollection should include many instances of sentences that have only minor lexical differences butfound in completely different contexts. Larger the dataset the more likely it is for the encoder tolearn how to model these fine grained distinctions.In this work, we introduce a discriminative training task fake sentence detection to address thesechallenges. The main idea is to generate fake sentences by corrupting an original sentence. Weuse two methods to generate fake sentences: word shuffling where we swap the positions of twowords at random and word dropping , where we drop a word at random from the original sentence.1Under review as a conference paper at ICLR 2019The resulting fake sentences are mostly similar to the original sentences—a fake sentence differsfrom its source in at most two word positions. We create the training corpus from a source corpusof unlabeled sentences. For each sentence in the source corpus we add multiple fake sentences bycorrupting the source sentence.This training task has three key advantages: (1) This binary classification task can be modeledwith fewer parameters in the output layer and can be trained more efficiently compared to languagemodeling based training tasks. (2) From a language standpoint, the task forces the encoder to trackboth syntax and semantics. For instance, swapping words within a sentence can break the syntax(e.g., “John landed in Chicago on Friday” versus “John landed Chicago in on Friday”), and breakor alter the semantics; it can lead to an incoherent or less plausible sentence (e.g., “John landed inChicago on Friday” versus “Chicago landed in John on Friday”). (3) The task explicitly forces theencoder to capture big shifts in meaning that arise from small changes to a sentence. In particular,since the discrimination is really dependent on a small change to an original sentence, encoderscannot get away with simple aggregation—they need to model compositional aspects well enoughto be able to detect small but semantically shifts in sentence constructions.In overview, we train a bidirectional LSTM (BiLSTM) as our universal encoder and use a three-layer feed-forward network that uses the encoded sentence to predict if it is fake or real. 
We thenevaluate this trained encoder without any further tuning on multiple sentence-level tasks and useprobing tasks (Conneau et al., 2018) to study the syntactic and semantic properties in the encodedrepresentations.In summary, this paper makes the following main contributions:1. Introduces fake sentence detection as an unsupervised training task for learning sentenceencoders that can distinguish between small changes in mostly similar sentences.2. Provides an empirical evaluation on multiple sentence-level tasks showing representationstrained on the fake sentence tasks outperform a strong baseline model trained on languagemodeling tasks, even when training on small amounts of data (1M vs. 64M sentences)reducing training time from weeks to within 20 hours.3. Demonstrates that the fake sentence training produces encoded representations that arebetter at capturing many linguistic properties compared to language model training.2 M OTIVATIONMany sentence-level prediction tasks require learning a function to estimate the probability of a labelygiven a sentence x. This function can be parameterized as P(yjx;), and the parameter vector can be learned by minimizing the negative conditional log likelihood of the training data D, i.e.,minP(x;y)2Dlog(P(yjx;)). However, the amount of training data is limited in many cases,so learningby optimizing the conditional probability leads to overfitting, especially if one uses adeep neural network with millions of parameters.One approach to address this problem is to break the task into two sub-tasks each with its own setof parameters: (1) sentence encoding—produce an effective low-dimensional representation of thesentenceenc(x; 1), (2) target prediction—estimate probability of the label yconditioned on theencodingP(yjx;enc (x; 1);2). The dimension of 2is much smaller than the dimension of andit can be learned with the provided labeled training data D. For the encoder function enc(x; 1), itis hoped that the parameter vector 1can be learned or pre-trained with a different sentence-leveltask using other data, which is possibly unlabeled, but can be easily collected in large quantity. Theencoder can be trained on the auxiliary task as part of a prediction function fwhich has its ownset of parameters 3. The encoder parameters 1and the auxiliary task parameters 3are learned tominimize the loss on the auxiliary task:Px2ULaux(f(enc(x; 1);3)).In this work, we are interested in learning an universal sentence encoder that can be subsequentlyused for various downstream sentence-level tasks. One reasonable choice for the auxiliary lossfunction would be the negative log likelihood of the data, i.e., logP(xj1;3)or equivalentlylog(P(enc(x; 1)j1;3)). Recall for the downstream task, we will seek the parameter vector 2to minimize the negative conditional log likelihood log(P(yjenc(x; 1);1;2)). The combina-tion of two loss functions is: log(P(yjenc(x; 1);1;2))log(P(enc(x; 1)j1;3)). This is2Under review as a conference paper at ICLR 2019equivalent tolog(P(y;enc (x; 1)j1;2;3)). In other words, we are maximizing the joint prob-ability of observing the label yand the encoding vector enc(x; 1). Optimizing the joint probabilityof the label and data corresponds to having a generative classifier, as opposed to the discriminativeclassifier that optimizes a conditional probability. As shown by (Ng & Jordan, 2002), a generativeclassifier converges faster to its asymptotic behavior than a discriminative classifier does. 
A genera-tive approach is particularly useful when there is limited amount of training data. Thus, we proposeto learn a sentence encoder by maximizing the data likelihood P(enc(x; 1)j1;3), and we expectit to be an useful universal sentence encoder, especially when there is limited amount of labeledtraining data for downstream sentence-level task.It is surprisingly simple to learn an encoder to maximize the data likelihood P(enc(x; 1)j1;3),and this paper proposes exactly that. We can parameterize this probability using a simple functionsuch as a log-linear model:P(enc(x; 1)j1;3) =1Zexp(T3enc(x; 1)) (1)where the functional form of the encoder encis an BiLSTM, and Zis a normalization constant toensure a valid probability function.Z=Xxexp(T3enc(x; 1)) =Xx2Uexp(T3enc(x; 1)) +Xx2Vexp(T3enc(x; 1)): (2)whereUis the set of real sentences, and Vis the set of fake sentences. Our objective isto learn1;3to maximize the quantity in Equation 1. This is equivalent to maximizingPx2Uexp(T3enc(x; 1))while minimizingPx2Vexp(T3enc(x; 1)). This can simply bedone by training a classifier to separate real sentences from fake sentences.In this work, we approximate the space of fake sentences by randomly swapping or dropping wordsfrom real sentences. Our method has some connection to the GAN framework (Goodfellow et al.,2014), where there is a generator that attempts to generate realistic data and a discriminator thatdistinguishes between real and generated data instances. As shown by Goodfellow et al. (2014), thisapproach will converge to a solution where the output of the discriminator is the estimated value fora data point to be real. In our case, we do not need to learn the generator and the discriminator inan adversarial manner because we already have a ‘strong’ generator that generates highly realisticsentences, simply by shuffling or dropping words. Furthermore, our method can be thought as agenerative model, defined and trained using a discriminative function. This aspect is related to therecently proposed Introspective Neural Networks (Lazarow et al., 2017).There are some similarities between the proposed method and Skip-thought sentence encoder. Thetraining task of Skip-thought involves predicting the next or previous sentence zthat follows orprecedes the current encoded sentence x. This sentence language model is realized using an LSTMdecoder, which sequentially generates a word at each position iconditioning on the previous encodedsentenceenc(x), and the words that have been decoded so far ( z1;;zi1). This model can beseen as maximizing the conditional probability:P(zjenc(x; 1);3) =jzj1Yi=1P(zijz1:i1;enc(x; 1);3): (3)Interestingly, both Skip-thought and our proposed method model the unlabeled data U, a sequenceof sentences u1;u2;;um, and aim to optimize for the joint probability distribution: P(Uj1;3).In the case of Skip-thought, there is an underlying Markovian assumption that the probabilityof a sentence only depends on the previous sentence. That is to approximate P(Uj1;3)byQkP(ukjenc(uk1;1);3). In the case of the proposed method, we assume the sentences are in-dependent of one another, so P(Uj1;3)can be approximated asQkP(enc(uk;1)j1;3). Bothmethods are generative models of the data, and both are useful sentence encoders for the downstreamtasks, as will be seen in the experiment sections.However, Skip-thought model has several disadvantages compared to the proposed method: (1)The architecture of Skip-thought model is much more complex. 
Skip-thought model requires anLSTM decoder with millions of parameters. Furthermore, the need for computing the probabilityof individual words requires having a large output space, as large as the size of the vocabulary.3Under review as a conference paper at ICLR 2019This further increases then number of parameters and the complexity of the overall architecture.(2) The modeling task of Skip-thought is much harder than that of the proposed method. Skip-thought requires a probability model for both complete and incomplete sentences, and this muchharder than modeling the probability of complete sentences alone (as proposed here). (3) Due to thecomplexity of the architecture and the difficulty of the modeling task, Skip-thought requires muchmore training data and longer training time. (4) Skip-thought is a purely generative model and itis only trained using the real data. Meanwhile, the proposed method is a generative model definedvia a discriminative function. This function can be trained using both real and fake data, where thefake data can be selectively chosen to be very hard examples. Specifically, we use word shufflingand word dropping to generate hard examples that do not differ from the original sentences. Thisenables the sentence encoder to focus on the subtle semantics information.3 R ELATED WORKPrevious sentence encoding approaches can be broadly classified as supervised (Conneau et al.,2017; Cer et al., 2018; Marcheggiani & Titov, 2017; Wieting et al., 2015), unsupervised (Kiroset al., 2015; Hill et al., 2016) or semi-supervised approaches (Peters et al., 2018; Dai & Le, 2015;Socher et al., 2011; Clark et al., 2018).The unsupervised approaches extend the skip-gram (Mikolov et al., 2013) to the sentence level, anduse the sentence embedding to predict the adjacent sentences. Skip-thought (Kiros et al., 2015) usesa BiLSTM encoder to obtain a fixed length embedding for a sentence, and uses a BiLSTM decoderto predict adjacent sentences. Training Skip-thought model is expensive, and even one epoch oftraining on the Toronto BookCorpus (Zhu et al., 2015) dataset takes more than two weeks (Hillet al., 2016) on a single GPU. FastSent (Hill et al., 2016) uses embeddings of a sentence to predictwords from the adjacent sentences. A sentence is represented by simply summing up the word rep-resentation of all the words in the sentence. FastSent requires less training time than Skip-thought,but FastSent has lower performance on most downstream tasks.The supervised approaches train the encoders on tasks such as NLI and use transfer learning to adaptthe learned encoders to different downstream tasks (Conneau et al., 2017).Some semi-supervised approaches (Peters et al., 2018; Dai & Le, 2015; Socher et al., 2011) trainsentence encoders on large unlabeled datasets, and do a task specific adaptation using labeled data,while some other approaches such as Cross-View Training(CVT) (Clark et al., 2018) jointly trainthe sentence encoder on the labeled and unlabeled data. 
CVT applies regular supervised learning onthe labeled data, and for the unlabeled data, it enforces consistency across the outputs obtained forpartial versions of the input sentence.In this work, we propose an unsupervised sentence encoder that takes around 20 hours to train ona single GPU, and outperforms Skip-thought and FastSent encoders on multiple downstream tasks.Unlike the previous unsupervised approaches, we use the binary task of real versus fake sentenceclassification to train a BiLSTM based sentence encoder.4 T RAINING TASKS FOR ENCODERSWe propose a discriminative task for training sentence encoders. The key bottleneck in trainingsentence encoders is the need for large amounts of labeled data. Prior work use language modelingas a purely generative training task that models the generation of unlabeled text data. The encoderis trained to produce sentence representations which are effective at either generating neighboringsentences (e.g., Skip-thought (Kiros et al., 2015) or to predict the words in the neighboring sen-tences (Hill et al., 2016). As we outlined in Section 2 this purely generative training raises multiplechallenges.Instead, we propose fake sentence detection, a task that targets the same overall generative model,one that generates unlabeled text, but is trained discriminatively. The task requires making a singleprediction over an input sentence. In particular, we propose to learn a sentence encoder by training asequential model to solve the binary classification task of detecting whether a given input sentence isfake or real. This real-fake sentence classification task would perhaps be trivial if the fake sentenceslook very different from the real sentences. We propose two simple methods to generate noisy4Under review as a conference paper at ICLR 2019Figure 1: Figure shows the block diagram of the encoder and fully connected layers. Encoderconsists of a bidirectional LSTM followed by a max pooling layer. For classification, we use a MLPwith two hidden layers.sentences which look mostly similar to real sentences. We describe the noisy sentence generationstrategies in Section 4.1. Thus, we create a labeled dataset of real and fake sentences, and train asequential model to distinguish between real and fake sentences, which results in a model whoseclassification layer has far fewer parameters than previous language model based encoders.Our model architecture is described in Section 4.2.4.1 F AKE SENTENCE GENERATIONFor a sentence X=w1;w2;:::;wncomprising of nwords, we consider two strategies to generate anoisy version of the sentence: 1) WordShuffle : randomly sample two indices iandjcorrespondingto wordswiandwjinX, and shuffle the words to obtain the noisy sentence ^X. Noisy sentence ^Xwould be of the same length as the original sentence X.2) WordDrop : randomly pick one index icorresponding to word wiand drop the word from the sentence to obtain ^X. Note there can be manyvariants for this strategy but here we experiment with this basic choice.4.2 R EAL VERSUS FAKE SENTENCE CLASSIFICATIONFigure 1 shows the proposed architecture of our fake sentence classifier with an encoder and a Multi-layer Perceptron(MLP) with 2 hidden layers. 
We do not use any nonlinearity after these hiddenlayers, hence, these layers correspond to the single linear layer represented by 3in Equation 1.Another motivation behind not using the nonlinearity is to keep the classifier simple enought so thatthe encoder is forced to learn useful representation while training for the fake sentence classificationtask. The encoder consists of a bidirectional LSTM followed by a max pooling layer. Given asentenceX=w1;w2;:::;wnconsisting of nwords, the forward LSTM of the encoder generates ahidden state vector !ht. Similarly, the backward LSTM generates the hidden state ht. The forwardand backward hidden states are concatenated to get ut= ( !ht, ht). Processing the entire sentenceresults innsuch vectors, and we apply max-pooling to these concatenated hidden states to obtainvectorz.zserves as a fixed length encoding of the sentence X, which we then use as input to aMLP for classifying the sentence into real/fake classes.5 E VALUATION SETUPDownstream Tasks: We compare the sentence encoders trained on a large collection (BookCor-pus (Zhu et al., 2015)) by testing them on multiple sentence level classification tasks (MR, CR,SUBJ, MPQA, TREC, SST) and one NLI task defined over sentence-pairs (SICK). We also eval-5Under review as a conference paper at ICLR 2019Name Size Task No. of ClassesMR 11K Sentiment 2CR 4K Product Review 2TREC 11K Question type 6SST 70K Sentiment 2MPQA 11K Opinion Polarity 2SUBJ 10K Subjectivity 2SICK 10K NLI 3COCO 123K Retrieval -Table 1: Downstream tasks and datasets.Model MR CR TREC SST MPQA SUBJ SICK COCO-Cap COCO-ImgNB-SVM 79.4 81.8 - - 86.3 93.2 - - -FastSent 70.8 78.4 80.6 - 80.6 88.7 - - -Skip-thought (full) 76.5 80.1 92.2 82.0 87.1 93.6 82.3 72.2 66.2Skip-thought (1M) 65.2 70.9 79.2 66.9 81.6 86.1 75.6 51.9 46.7Skip-thought (full) + NB 80.4 81.3 - - 87.5 93.6 - - -MC-QT 80.4 85.2 92.8 - 89.4 93.9 - - -WordDrop 78.8 82.2 86.6 82.9 89.8 92.7 83.2 73.8 67.3WordShuffle 79.8 82.4 88.4 82.4 89.8 92.6 82.3 74.2 67.3Table 2: Results on downstream tasks: Bold face indicates best result and underlined results showwhen fake sentence training is better than Skip-thought (full). COCO-Cap and COCO-Img arecaption and image retrieval tasks on COCO. We report Recall@5 for the COCO retrieval tasks.uate the sentence representations for image and caption retrieval tasks on the COCO dataset (Linet al., 2014). We use the same evaluation protocol and dataset split as (Karpathy & Fei-Fei, 2015;Conneau et al., 2017). Table 1 lists the classification tasks and the datasets. We also compare thesentence representations for how well they capture important syntactic and semantic properties usingprobing classification tasks (Conneau et al., 2018). For all downstream and probing tasks, we usethe encoders to obtain representation for all the sentences, and train logistic regression classifiers onthe training split. We tune the L2-norm regularizer using the validation split, and report the resultson the test split.Training Corpus: The FastSent and Skip-thought encoders are trained on the full Toronto Book-Corpus of 64M sentences (Zhu et al., 2015). 
5 EVALUATION SETUP

Downstream Tasks: We compare sentence encoders trained on a large collection (BookCorpus (Zhu et al., 2015)) by testing them on multiple sentence-level classification tasks (MR, CR, SUBJ, MPQA, TREC, SST) and one NLI task defined over sentence pairs (SICK). We also evaluate the sentence representations on image and caption retrieval tasks on the COCO dataset (Lin et al., 2014). We use the same evaluation protocol and dataset split as (Karpathy & Fei-Fei, 2015; Conneau et al., 2017). Table 1 lists the classification tasks and the datasets. We also compare how well the sentence representations capture important syntactic and semantic properties using probing classification tasks (Conneau et al., 2018). For all downstream and probing tasks, we use the encoders to obtain representations for all the sentences, and train logistic regression classifiers on the training split. We tune the L2-norm regularizer using the validation split, and report results on the test split.

Table 1: Downstream tasks and datasets.
Name   Size   Task              No. of Classes
MR     11K    Sentiment         2
CR     4K     Product Review    2
TREC   11K    Question type     6
SST    70K    Sentiment         2
MPQA   11K    Opinion Polarity  2
SUBJ   10K    Subjectivity      2
SICK   10K    NLI               3
COCO   123K   Retrieval         -

Table 2: Results on downstream tasks. Bold face indicates the best result and underlined results show when fake sentence training is better than Skip-thought (full). COCO-Cap and COCO-Img are caption and image retrieval tasks on COCO; we report Recall@5 for the COCO retrieval tasks.
Model                     MR    CR    TREC  SST   MPQA  SUBJ  SICK  COCO-Cap  COCO-Img
NB-SVM                    79.4  81.8  -     -     86.3  93.2  -     -         -
FastSent                  70.8  78.4  80.6  -     80.6  88.7  -     -         -
Skip-thought (full)       76.5  80.1  92.2  82.0  87.1  93.6  82.3  72.2      66.2
Skip-thought (1M)         65.2  70.9  79.2  66.9  81.6  86.1  75.6  51.9      46.7
Skip-thought (full) + NB  80.4  81.3  -     -     87.5  93.6  -     -         -
MC-QT                     80.4  85.2  92.8  -     89.4  93.9  -     -         -
WordDrop                  78.8  82.2  86.6  82.9  89.8  92.7  83.2  73.8      67.3
WordShuffle               79.8  82.4  88.4  82.4  89.8  92.6  82.3  74.2      67.3

Training Corpus: The FastSent and Skip-thought encoders are trained on the full Toronto BookCorpus of 64M sentences (Zhu et al., 2015). Our models, however, train on a much smaller subset of only 1M sentences.

Table 3: Probing task accuracies. Tasks: SentLen: predict sentence length; WC: is a word in the sentence; TreeDepth: depth of the syntactic tree; TopConst: predict the top-level constituent; BShift: is a bigram flipped in the sentence; Tense: predict the tense of a word; Subj(Obj)Num: singular or plural subject (object); SOMO: semantic odd man out; CoordInv: is coordination inverted.
Model                SentLen  WC    TreeDepth  TopConst  BShift  Tense  SubjNum  ObjNum  SOMO  CoordInv
Skip-thought (full)  85.4     79.6  41.1       82.5      69.6    90.4   85.6     83.6    53.9  69.1
Skip-thought (1M)    54.7     33.9  30.0       60.7      58.9    85.3   76.4     70.9    51.9  61.4
WordDrop             86.7     90.1  48.0       81.9      73.2    87.7   87.3     82.7    59.2  70.6
WordShuffle          84.9     91.2  48.8       82.3      79.9    88.2   86.7     83.3    59.8  70.7

Sentence Encoder Implementation: Our sentence encoder architecture is the same as the BiLSTM-max model (Conneau et al., 2017). We represent words using 300-d pretrained GloVe embeddings (Pennington et al., 2014). We use a single-layer BiLSTM model with 2048-d hidden states. The MLP classifier we use for fake sentence detection has two hidden layers with 1024 and 512 neurons. We train separate models for word drop and word shuffle. The models are trained for 15 epochs with a batch size of 64 using the SGD algorithm; training converges with a validation set accuracy of 89 for word shuffle. The entire training completes in less than 20 hours on a single GPU machine.

Table 4: Example illustrating how Skip-thought (ST) and fake sentence (FS) training allows representations to capture big shifts in meaning even with small changes to the sentence form. The table shows where both models rank a specific fake version of a real sentence. The ranking is obtained using cosine similarity over a random sample of 10000 real and fake sentences when projected via t-SNE.
Original                          Neighbor                          ST Rank  FS Rank
I love soccer.                    Soccer love I.                    2        6
He took her hand and kissed it.   He kissed her hand and took it.   1        4

Baseline Approaches: We compare our results with previous unsupervised sentence encoders, Skip-thought (Kiros et al., 2015), FastSent (Hill et al., 2016), Quick-thought (MC-QT) (Logeswaran & Lee, 2018), and the bag-of-words baseline NB-SVM (Wang & Manning, 2012). We use the FastSent and Skip-thought results trained on the full BookCorpus as reported in (Conneau et al., 2017). We also report Skip-thought results when trained on a smaller 1M sentence subset.

6 RESULTS

Classification and NLI: Results are shown in Table 2. Both fake sentence training tasks yield better performance on five out of the seven language tasks when compared to Skip-thought (full), i.e., even when it is trained on the full BookCorpus. Word drop and word shuffle performances are mostly comparable. The Skip-thought (1M) row shows that training on a sentence-level language modeling task can fare substantially worse when trained on a smaller subset of the data. FastSent, while easier to train with faster training cycles, is better than Skip-thought (1M) but worse than the full Skip-thought model.

Further gains in performance can be obtained by validating the performance of the encoder on downstream tasks after each epoch of training, and picking the encoder with the best performance on the validation set for the downstream tasks. For the MR and SUBJ tasks, this leads to accuracies of 80.3 and 93.3, respectively.
For the remaining tasks, the performance gain is less significant.

Image-Caption Retrieval: On both caption and image retrieval tasks (last 2 columns of Table 2), fake sentence training with word dropping and word shuffling is better than the published Skip-thought results.

Linguistic Properties: Table 3 compares sentence encoders for various linguistic properties using the recently proposed probing tasks (Conneau et al., 2018). The goal of each task is to use the input sentence encoding to predict a particular syntactic or semantic property of the original sentence it encodes (e.g., predict if the sentence contains a specific word). Encodings from fake sentence training score higher in six out of the ten tasks.

A key property of the discriminative training is that it forces the encoder to be sensitive to small changes that can induce large meaning shifts. WordShuffle encodings turn out to perform significantly better on probing tasks that are related to this sensitivity property. Semantic odd man out requires identifying word insertions that are incompatible with the context, tracking word content requires knowing whether a particular word is present or not, and bigram shift (BShift) requires knowing whether words in adjacent positions are swapped (a subset of the training task for WordShuffle). Table 4 illustrates examples where fake sentence training is able to distinguish sentence meaning changes that result from a small change in surface form.

Sentence Lengths: We analyze the performance of the fake sentence and Skip-thought models to understand how the models behave for sentences of different lengths. Figure 2 compares the performance of the fake sentence and Skip-thought encoders binned by length. It turns out that fake sentence training performs better on longer sentences on MR but not so on SST, even though both are sentiment tasks. The analysis for other tasks (not shown here) also indicates that fake sentence training is better for most sentence lengths but does not show a clear trend with respect to increasing sentence length.

Figure 2: Comparison between the Skip-thought and WordShuffle encoders across different sentence lengths (binned as <5, <10, <15, <20, <25, <30, >=30). Classification accuracy is shown for (a) the Movie Review (MR) and (b) the SST sentiment classification tasks. (Axis values omitted.)

Figure 3: Test accuracies on the WordShuffle task, downstream sentiment classification tasks (CR, SST) and probing tasks (ObjNum, BShift) over training epochs. Convergence on the fake sentence task roughly corresponds to convergence on the downstream tasks. (Axis values omitted.)

Relation between Fake Sentence Classification and Downstream Tasks: In Figure 3, we compare the test performance on the fake sentence classification task (WordShuffle) with that on multiple downstream tasks. We notice that an encoder with good performance on the fake sentence classification task also yields good performance on the downstream tasks.

7 CONCLUSIONS

The effectiveness of universal sentence encoders depends both on the architecture of the encoders and on the training task itself. Language modeling is a suitable task as it fits the generative goal of modeling the distribution of language (text data).
However, tackling this in a purely generative fashion (i.e., actually generating sentences) requires large models and long training times. Instead, we introduced a discriminative formulation of the generative task called fake sentence detection. The sentence encoders are trained to produce representations which are effective at detecting whether a given sentence is an original or a fake. This leads to better performance on downstream tasks and to representations that capture semantic and syntactic properties, while also reducing the amount of training needed. As future work, the discriminative setup opens up the possibility for the training to be influenced by a specific downstream target task. In particular, we can create negative samples (fake sentences) that are focused on the specific phenomena relevant to the target task.<|im_end|> <|im_start|>assistant ### Review Title Method description confusing; empirical comparison against previous work is lacking ### Review Text This paper proposes a method for learning sentence encoders using artificially generated (fake) sentences. While the idea is interesting, the paper has the following issues: - There are other methods that aim at generating artificial training data, e.g.: Z. Zhao, D. Dua, S. Singh. Generating Natural Adversarial Examples. International Conference on Learning Representations (ICLR). 2018, but no direct comparison is made. Also, InferSent (which is cited as related work) trains sentence encoders on SNLI: https://arxiv.org/pdf/1705.02364.pdf. Again, a comparison is needed, as the encoders it learns perform very well on a variety of tasks. Finally, the proposed idea is very similar to ULMFiT (https://arxiv.org/pdf/1801.06146.pdf), which trains a language model on a lot of unlabeled data and then fine-tunes it discriminatively. Finally, there should be a comparison against a language model without any extra training in order to assess the benefits of the fake sentence classification part of the model. - It is unclear why the proposed fake sentence construction method, by either swapping words or just removing them, produces sentences that are fake and/or useful to train on. Sure, it is simple, but not necessarily fake. A language model would be able to discriminate between them anyway, by assigning high probability to the original ones and low probability to the manipulated ones. Not sure we need to train a classifier on top of that. - I found the notation in Section 2 confusing. What kind of distribution is P(enc(x,theta1)|theta2, theta3)? I understand that P(x|theta) is the probability of the sentence given a model, but what is the probability of the encoding? It would also be good to see the full derivation to arrive at the expression in the beginning of page 3. - An argument in favour of the proposed method is training speed; however, given that less data is used to train it, it should indeed be faster. In fact, if we consider the amount of time per million sentences, the previous method considered in the comparison could be faster (20 hours for 1M sentences is 1280 hours for 64M sentences, more than 6 weeks). More importantly, it is unclear from the description if the same data is used in training both systems or not.
- It is unclear how one can estimate the normalization factor in Equation 2; it seems that one needs to enumerate over all fake sentences, which is a rather large number given the number of possible word swaps in a sentence. - I am not sure the proposed generator generates only realistic sentences: "Chicago landed in John on Friday" is rather implausible. Also, there is no generation method trained here; it is rule-based as far as I can tell. There is no way to train the model to generate a fake sentence, as far as I can tell. - It is a bit odd to criticise other methods for using LSTMs with "millions of parameters" while the proposed approach also uses them. A comparison should report the number of parameters used in either case. - What is the motivation for having multiple layers without a nonlinearity instead of a single layer? ### Review Rating 3: Clear rejection ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
rJZ6RMTIG
ICLR.cc/2018/Workshop
2018
An Empirical Study of Weights in Deep Convolutional Neural Networks and Its Application to Training Convergence
["Haihao Shen", "Jiong Gong", "Jianhui Li", "Xiaoli Liu", "Xinan Lin"]
This paper presents an empirical study of weights in deep neural networks and proposes a quantitative metric, the Logarithmical Geometric Mean of absolute weight parameters (LoGM), to evaluate the impact of weights on training convergence. We develop an automatic tool to measure LoGM and conduct extensive experiments on ImageNet with three well-known deep convolutional neural networks (CNNs). We discover two empirical observations from experiments on the same model: 1) LoGM variance is small between weight snapshots per iteration; and 2) each CNN model has a reasonable divergence region. Preliminary results show our methodology is effective, reducing convergence problem exposure time from weeks to minutes. Three known convergence issues are confirmed and one new problem is detected at an early stage of feature development. To the best of our knowledge, our work is the first attempt to understand the impact of weights on convergence. We believe that our methodology is general and applicable to all deep learning frameworks. The code and training snapshots will be made publicly available.
["Deep Neural Networks", "Quantitative Metric", "Convergence", "Divergence Region"]
ABSTRACT
This paper presents an empirical study of weights in deep neural networks and proposes a quantitative metric, the Logarithmical Geometric Mean of absolute weight parameters (LoGM), to evaluate the impact of weights on training convergence. We develop an automatic tool to measure LoGM and conduct extensive experiments on ImageNet with three well-known deep convolutional neural networks (CNNs). We discover two empirical observations from experiments on the same model: 1) LoGM variance is small between weight snapshots per iteration; and 2) each CNN model has a reasonable divergence region. Preliminary results show our methodology is effective, reducing convergence problem exposure time from weeks to minutes. Three known convergence issues are confirmed and one new problem is detected at an early stage of feature development. To the best of our knowledge, our work is the first attempt to understand the impact of weights on convergence. We believe that our methodology is general and applicable to all deep learning frameworks. The code and training snapshots will be made publicly available.

1 INTRODUCTION
Deep convolutional neural networks (CNNs) have demonstrated great success with breakthrough results on computer vision tasks such as image classification (Krizhevsky et al. (2012); Szegedy et al. (2015); Simonyan & Zisserman (2014); He et al. (2016)), object detection (Ren et al. (2015); Liu et al. (2016)), and semantic segmentation (Long et al. (2015); He et al. (2017)). With the improvement of hardware computation power and software framework optimizations, users have more opportunities to complete training of classical and new models. Recent work shows that multi-node training with large batch sizes is becoming popular to significantly accelerate time to train by leveraging more hardware resources (Goyal et al. (2017); You et al. (2017); Gitman & Ginsburg (2017)).

Regardless of the publication of CNN models, users may encounter convergence problems in their own environments. In general, a convergence problem has two aspects: 1) the training loss is not a number (NaN) or the loss trend is not healthy; and 2) training cannot reach state-of-the-art (SOTA) accuracy. To investigate such issues, users may compare values against a reference implementation (called co-simulation). However, this cannot provide insight into complex convergence problems involving training optimization (e.g., Winograd-based convolution (Lavin & Gray (2016))) or model optimization (e.g., weight quantization (Han et al. (2015))). Without an effective diagnostic tool, users have to observe the training loss from time to time, debug the code or tune the hyper-parameters, and restart a new round of training. Recent research discussed convergent learning on activations during training and proposed aligning neurons between two networks (Li et al. (2015)) to facilitate the training process. Unfortunately, there is no systematic study of convergence problems through weights, although the weight snapshot is the most critical output of training.

In this paper, we propose a quantitative metric, the Logarithmical Geometric Mean of absolute weight parameters (LoGM). LoGM inherits from the standard geometric mean but is tuned to support weight snapshots with millions of learnable weight parameters trained from CNNs. We develop an automatic tool to measure LoGM by weight on top of Intel Caffe (https://github.com/intel/caffe) and conduct extensive experiments on ImageNet with three CNN models.
We discover two empirical observations from the experiments: 1) LoGM variance is small between weight snapshots per iteration on the same model; and 2) each CNN model has a reasonable divergence region. Preliminary results show our methodology is effective, reducing convergence problem exposure time from weeks to minutes. We confirm three known convergence issues and identify one new problem at an early stage of feature development. To the best of our knowledge, our work is the first attempt to understand the impact of weights on convergence. We believe that our methodology is general and applicable to all deep learning frameworks (e.g., TensorFlow, MXNet, and Caffe2). We recommend using our metric tool as a complement to existing co-simulation tools to identify convergence issues more effectively. The code and training snapshots will be made publicly available.

2 QUANTITATIVE METRIC
We define the quantitative metric LoGM in Equation (1) to measure the impact of weights. LoGM inherits from the traditional geometric mean but is tuned to support weight snapshots with millions of learnable weight parameters trained from CNNs. Here n is the number of weight parameters in a weight snapshot, and we use base-10 logarithms to make the value more human-readable:

\mathrm{LoGM} = \frac{\sum_{i=1}^{n} \log_{10} |w_i|}{n}    (1)

The metric reflects the computation flow of model inference, in which input activations are multiplied by weight parameters (under the assumption of negligible additions). Note that at the beginning of our empirical study we also employed another widely-used metric, the standard deviation (STD). However, experimental results show that STD does not generalize across existing CNN models: it leads to unexpectedly large variance for modern CNN models with batch normalization (BN) (Ioffe & Szegedy (2015)), due to the difference in the magnitude of weight counts between convolution and BN layers. It would therefore require additional effort to handle models with and without BN in practice. To keep the metric simple and consistent, we employ LoGM as the only metric in our empirical study.

2.1 IMPLEMENTATION DETAILS
Algorithm 1 gives the pseudocode for computing LoGM on weights. It is a straightforward nested loop: the outer loop traverses the layers of a weight snapshot W, and the inner loop traverses the weight parameters of each layer, accumulating the logarithm of the absolute value of each weight parameter. Finally, the sum is divided by the number of weight parameters.

Algorithm 1: Compute LoGM by weight
  Input: weight snapshot W
  Output: LoGM
  sum = 0; num = 0
  foreach layer L in W do
    foreach parameter w in L do
      sum += log(abs(w))
      num += 1
  result = sum / num
  return result

We implement the algorithm with the Python interface of Intel Caffe and develop a tool to measure LoGM automatically. The algorithm is general and easily applicable to all other deep learning frameworks; a runnable sketch follows below.
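A runnable version of Algorithm 1 is easy to write. The sketch below is ours, not the authors' released tool; it assumes a weight snapshot represented as a mapping from layer names to NumPy arrays, and it skips exact zeros, for which the logarithm is undefined (a detail Algorithm 1 leaves implicit).

```python
import numpy as np

def logm(snapshot):
    """Compute LoGM over a weight snapshot (Equation 1).

    snapshot: dict mapping layer name -> NumPy array of weights.
    Returns the mean base-10 log of absolute weight values.
    """
    total, count = 0.0, 0
    for layer_weights in snapshot.values():
        w = np.abs(np.asarray(layer_weights).ravel())
        w = w[w > 0]                      # log10(0) is undefined; skip zeros
        total += np.log10(w).sum()
        count += w.size
    return total / count
```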
3 EXPERIMENTS
We perform an empirical study on GoogleNet-V1, VGG-16, and ResNet-50. We use the models and hyper-parameters from Intel Caffe and leverage multi-node training on Intel Xeon Phi Processor 7250 with the Omni-Path architecture. We use the standard ImageNet training dataset, consisting of 1,281,167 training images and 50,000 validation images in 1,000 classes.

3.1 EMPIRICAL OBSERVATIONS
We summarize two empirical observations from experiments on the same well-trained model (a well-trained model is one proven to reach SOTA accuracy).

Observation 1: LoGM variance is small between weight snapshots per iteration. We perform the experiments at the iteration level on the three models and measure the LoGM variance, as shown in Figure 1 (Figure 1: LoGM variance; plot omitted).

Observation 2: Each CNN model has a reasonable divergence region. We extend the experiments from the iteration level to the epoch level and report the divergence region per epoch in Table 1.

Table 1: Divergence Region
Topology       Divergence Region
GoogleNet-V1   (-0.09817, 0.09665)
VGG-16         (-0.13456, 0.01279)
ResNet-50      (-0.03719, 0.03691)

3.2 APPLICATIONS
We apply the divergence region in real applications, confirming 3 known issues and detecting 1 new one. We demonstrate two case studies, on weight quantization and on Winograd convolution optimization.

3.2.1 WEIGHT QUANTIZATION
Weight quantization can reduce the amount of data transferred over the network during multi-node training via data compression and decompression. VGG-16 is a typical model with heavy weight parameters in its fully-connected layers. During feature development, weight partitioning was used with different scaling factors. However, a subtle bug in decompression applied the wrong scaling, so the model could not reach SOTA accuracy. We measured the divergence variance on the first two weight snapshots per iteration and found that the value was outside the reasonable region. After the bug fix, weight quantization works well. Compared with regular training taking weeks on a single CPU node, this case shows that our methodology can shorten the debugging cycle significantly, from weeks to minutes.

3.2.2 WINOGRAD-BASED CONVOLUTION OPTIMIZATION
Winograd is a class of fast algorithms for convolutional neural networks that speeds up convolution computation on small filters. The Intel Math Kernel Library for Deep Neural Networks (MKL-DNN, https://github.com/01org/mkl-dnn) is an open source performance library for deep learning applications intended to accelerate deep learning frameworks on Intel architecture. We enabled the Winograd convolution algorithm in Intel Caffe with the MKL-DNN Winograd primitive and trained VGG-16. The divergence variance was 0.16922 on the first two iterations of VGG-16 training, which is outside the valid divergence region. We reported the convergence issue and confirmed it with the MKL-DNN team.

3.3 SUMMARY
We believe the above observations are not limited to the CNN models used in our experiments. We recommend that users measure a reasonable divergence region on their own CNN model as a baseline and apply the same idea in real applications.
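The debugging workflow described in the case studies, comparing the LoGM difference between consecutive snapshots against a per-model divergence region, can be sketched in a few lines on top of the logm function above. The default region bounds below are the VGG-16 values from Table 1; the helper name is ours, not from the authors' tool.

```python
def check_divergence(snapshot_prev, snapshot_curr, region=(-0.13456, 0.01279)):
    """Flag a likely convergence problem if the LoGM difference between two
    consecutive weight snapshots falls outside the model's divergence region.
    The default region is the VGG-16 row of Table 1."""
    diff = logm(snapshot_curr) - logm(snapshot_prev)
    lo, hi = region
    ok = lo <= diff <= hi
    if not ok:
        print(f"Warning: LoGM difference {diff:.5f} is outside {region}")
    return ok
```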
S18Or9lFG
Quantitative metric to detect convergence problems in CNN training
5: Marginally below acceptance threshold
This paper proposes a metric called LoGM to evaluate the evolution of CNN weights during training. The authors claim that LoGM can be used to detect convergence problems in CNN training. The paper is confusing and hard to understand. There are different aspects that are unclear. For example: - The authors state that "the problems in the convergence of CNN training can be due to obtaining 'not a number' in the evaluation of the loss function, or due to the fact of not obtaining state-of-the-art accuracy". I think this is an unfounded simplification. For example, the problem could also be the batch size, among others. - LoGM is defined as the average log of the CNN weights. What is the intuitive idea behind this measure? Algorithm 1 is unnecessary. - Figure 1 is not properly discussed. In my opinion these results (and also the results of Table 1) do not show the potential of the metric. - Section 3.2 states "We apply the divergence region in real applications with 3 issues confirmed and 1 new detected". I do not know what this means in the context of the paper. - The authors often mention the concept of a "reasonable divergence region", but the term is not properly defined or discussed. I think this paper is not clear. The work aims to make an observation but, from my viewpoint, it is not solid enough to be published.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title An Empirical Study of Weights in Deep Convolutional Neural Networks and Its Application to Training Convergence ### Paper Abstract This paper presents an empirical study of weights in deep neural networks and propose a quantitative metric, Logarithmical Geometric Mean of absolute weight parameter (LoGM), to evaluate the impact of weight on training convergence. We develop an automatic tool to measure LoGM and conduct extensive experiments on ImageNet with three well-known deep convolutional neural networks (CNNs). We discover two empirical observations from the experiments on same model: 1) LoGM variance is small between weight snapshots per iteration; and 2) each CNN model has a reasonable divergence region. Preliminary results show our methodology is effective with convergence problem exposure time reduction from weeks to minutes. Three known convergence issues are confirmed and one new problem is detected at early stage of feature development. To the best of our knowledge, our work is first attempt to understand the impact of weight on convergence. We believe that our methodology is general and applicable on all deep learning frameworks. The code and training snapshots will be made publicly available. ### Paper Keywords ["Deep Neural Networks", "Quantitative Metric", "Convergence", "Divergence Region"] ### Paper Content ABSTRACTThis paper presents an empirical study of weights in deep neural networks andpropose a quantitative metric, Logarithmical Geometric Mean of absolute weightparameter (LoGM), to evaluate the impact of weight on training convergence. Wedevelop an automatic tool to measure LoGM and conduct extensive experimentson ImageNet with three well-known deep convolutional neural networks (CNNs).We discover two empirical observations from the experiments on same model: 1)LoGM variance is small between weight snapshots per iteration; and 2) each CNNmodel has a reasonable divergence region. Preliminary results show our method-ology is effective with convergence problem exposure time reduction from weeksto minutes. Three known convergence issues are confirmed and one new problemis detected at early stage of feature development. To the best of our knowledge,our work is first attempt to understand the impact of weight on convergence. Webelieve that our methodology is general and applicable on all deep learning frame-works. The code and training snapshots will be made publicly available.1 I NTRODUCTIONDeep convolutional neural networks (CNNs) have demonstrated great success with break-throughresults on computer vision tasks such as image classification (Krizhevsky et al. (2012);Szegedy et al.(2015);Simonyan & Zisserman (2014);He et al. (2016)), object detection (Ren et al. (2015);Liu et al.(2016)), and semantic segmentation (Long et al. (2015);He et al. (2017)). With the improvement ofhardware computation powers and software framework optimizations, it provides more chances forusers to complete training on classical and new models. Recent work shows multi-node trainingwith large batch size is becoming popular to accelerate the time to train significantly by leveragingmore hardware resources (Goyal et al. (2017);You et al. (2017);Gitman & Ginsburg (2017)).Regardless of the publication of CNN models, users may encounter convergence problem undertheir own environment. 
In general, convergence problem consists of two aspects: 1) training loss isnot a number (NaN) or loss trend is not healthy; and 2) training cannot reach state of the art (SOTA)accuracy. To investigate the issue, users may compare the value with a reference implementation(called co-simulation). However, it cannot provide the insights on complex convergence problemwith training optimization (e.g., Winograd-based convolution (Lavin & Gray (2016))) and modeloptimization (e.g., weight quantization (Han et al. (2015))). Without effective diagnostic tool, usershave to wait and observe the training loss from time to time, debug the code or tune the hyper-parameters, and restart a new round of training. Recent research discussed the convergent learningon activation during training and proposed neuron aligns between two networks (Li et al. (2015)) tofacilitate the training process. Unfortunately, there is no systematic study on convergence problemby weights, although weight snapshot is the most critical output of training.In this paper, we propose a quantitative metric, Logarithmical Geometry Mean of absolute weightparameter (LoGM). LoGM inherits from standard geometric mean but is well-tuned to support theweight snapshot with millions of learnable weight parameters trained from CNNs. We developan automatic tool to measure LoGM by weight on top of Intel Caffe1and conduct extensive ex-1https://github.com/intel/caffe1Workshop track - ICLR 2018periments on ImageNet with three CNN models. We discover two empirical observations from theexperiments: 1) LoGM variance is small between weight snapshots per iteration on same model; and2) each CNN model has a reasonable divergence region. Preliminary results show our methodologyis effective with convergence problem exposure time reduction from weeks to minutes. We confirmthree known convergence issues and identify one new problem at early stage of feature develop-ment. To the best of our knowledge, our work is first attempt to understand the impact of weighton convergence. We believe that our methodology is general and applicable on all deep learningframeworks (e.g., TensorFlow, MXNet, and Caffe2). We recommend our metric tool is complemen-tary to existing co-simulation tool to identify the convergence issues more effectively. The code andtraining snapshots will be made publicly available.2 Q UANTITATIVE METRICWe define a quantitative metric LoGM in Equation (1) to measure the impact of weight. LoGM in-herits from traditional geometric mean but is well-tuned to support the weight snapshot with millionsof learnable weight parameters trained from CNNs. We define n is the number of weight parametersin a weight snapshot and use 10 as logarithm base to make the value more human-readable.LoGM =Pnilogjwijn(1)The metric indicates the similar idea of computation flow with multiplication of input activationsand weight parameters from the perspective of model inference (with the assumption of negligibleadditions). Note that we also employ another widely-used metric standard deviation (STD) in ourempirical study at the beginning. However, experimental result shows STD is not general for existingCNN models. It leads to unexpected big variance for modern CNN models with batch normalization(BN) (Ioffe & Szegedy (2015)) due to magnitude difference of weight number in convolution andBN. Therefore, it requires additional effort to handle the models with or without BN in practice. 
Tomake the metric simple and consistent, we employ LoGM as the only metric in our empirical study.2.1 I MPLEMENTATION DETAILSAlgorithm 1 illustrates the pseudocode on how to compute LoGM on weight. It is straightforwardwith nested loops: traversing layers from a weight snapshot W at outside loop and weight parametersfrom a layer at inside loop. At inside loop, it accumulates the logarithm of absolute of each weightparameter. Finally, it computes the mean value of sum by the number of weight parameters.Algorithm 1 Compute LoGM by weightInput: weight snapshot WOutput: LoGMsum = 0; num = 0foreach layer L in W doforeach parameter w in L dosum += log(abs(w))num += 1result = sum / numreturn resultWe implement the algorithm with Python interface on Intel Caffe and develop the tool to measureautomatically. The algorithm is general and easily applicable on all other deep learning frameworks.3 E XPERIMENTSWe perform empirical study on GoogleNet-V1, VGG-16, and ResNet-50. We use the models withhyper-parameters under Intel Caffe and leverage multi-node training on Intel@Xeon PhiTMProces-sor 7250 with Omni-Path architecture. We employ standard ImageNet as training dataset, consistingof 1,281,167 training images and 50,000 validation images in 1,000 classes.2Workshop track - ICLR 20183.1 E MPIRICAL OBSERVATIONSWe summarize two empirical observations during experiments on the same well-trained model2.Observation 1 : LoGM variance is small between weight snapshots per iteration. We perform theexperiments with iteration level on three models and measure LoGM variance as shown in Figure 1.Figure 1: LoGM VarianceObservation 2 : Each CNN model has a reasonable divergence region. We extend the experimentsfrom iteration to epoch level and show the divergence region per epoch in Table 1.Table 1: Divergence RegionTopology Divergence RegionGoogleNet-V1 (-0.09817, 0.09665)VGG-16 (-0.13456, 0.01279)ResNet-50 (-0.03719, 0.03691)3.2 A PPLICATIONSWe apply the divergence region in real applications with 3 issues confirmed and 1 new detected. Wedemonstrate two case studies on weight quantization and Winograd convolution optimization.3.2.1 W EIGHT QUANTIZATIONWeight quantization can reduce the data size transfer on network under multi-node training by da-ta compression and decompression. VGG-16 is a typical model with heavy weight parameters infull-connected layers. During feature development, weight partition is utilized with different scalingfactor. However, there is a subtle bug in decompression with wrong scaling, which leads the modelcannot reach SOTA accuracy. We measure the divergence variance on the first two weight snapshotsper iteration and find that the value is out of the reasonable region. After bug fixing, weight quanti-zation works well. Comparing with regular training by weeks on single CPU node, the case showsour methodology can shorten the debugging cycle significantly from weeks to minutes.3.2.2 W INOGRAD -BASED CONVOLUTION OPTIMIZATIONWinograd is a new class of fast algorithms for convolutional neural networks to speed up convolutioncomputation on small filters. Intel@math kernel library for deep neural networks (MKL-DNN) isan open source performance library for deep learning applications intended for acceleration of deeplearning frameworks on Intel architecture3. We enable Winograd convolution algorithm on IntelCaffe with MKL-DNN Winograd primitive and perform the training on VGG-16. The divergencevariance is 0.16922 on first two iterations in VGG-16 training, which is out of valid divergenceregion. 
We report the convergence issue and confirm with MKL-DNN team.3.3 S UMMARYWe believe the above observations are not limited to CNN models used in our experiments. Werecommend users measure reasonable divergence region on their own CNN model as baseline andapply the similar idea in real applications.2Well-trained model is proved to reach SOTA accuracy.3https://github.com/01org/mkl-dnn3Workshop track - ICLR 2018<|im_end|> <|im_start|>assistant ### Review Title Quantitative metric to detect convergence problems in CNN training ### Review Text This paper proposes a metric called LoGM to evaluate the evolution of the CNN weight during the training. The authors claim that LoGM can be used to detect convergence problems in the CNN training. The paper is confusing and hard to understand. There are different aspects that are unclear. For example: - Authors state that "the problems in the convergence of CNN training can be due to obtining 'not a number' in the evaluation of the loss function, or due to the fact of non obtaining state-of-the-art accuracy". I think this is an unfounded simplification. For example, the problem could be also the batch size, among others. - LoGM is defined as the average log of the CNN weigths. What is the intuitive idea behind this measure? Algorithm 1 is unnecessary. - Figure 1 is not properly discussed. In my opinion these results (and also the results of Table 1) do not show the potential of the metric. - In section 3.2. states "We apply the divrgence region in real applications with 3 issues confirmed and 1 new detected". I do not know what this means in the context of the paper. - Authors mention often the concept "reasonable divergence region", but the term is not properly defined or discussed. I think this paper is not clear. The work pretends to make an observation but, from my viewpoint, it is not solid enough to be published. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
_ptUyYP19mP
ICLR.cc/2021/Conference
2021
BeBold: Exploration Beyond the Boundary of Explored Regions
["Tianjun Zhang", "Huazhe Xu", "Xiaolong Wang", "Yi Wu", "Kurt Keutzer", "Joseph E. Gonzalez", "Yuandong Tian"]
Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. To guide exploration, previous work makes extensive use of intrinsic reward (IR). There are many heuristics for IR, including visitation counts, curiosity, and state-difference. In this paper, we analyze the pros and cons of each method and propose the regulated difference of inverse visitation counts as a simple but effective criterion for IR. The criterion helps the agent explore Beyond the Boundary of explored regions and mitigates common issues in count-based methods, such as short-sightedness and detachment. The resulting method, BeBold, solves the 12 most challenging procedurally-generated tasks in MiniGrid with just 120M environment steps, without any curriculum learning. In comparison, the previous SoTA only solves 50% of the tasks. BeBold also achieves SoTA on multiple tasks in NetHack, a popular rogue-like game that contains more challenging procedurally-generated environments.
["reinforcement learning", "exploration"]
ABSTRACT
Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. To guide exploration, previous work makes extensive use of intrinsic reward (IR). There are many heuristics for IR, including visitation counts, curiosity, and state-difference. In this paper, we analyze the pros and cons of each method and propose the regulated difference of inverse visitation counts as a simple but effective criterion for IR. The criterion helps the agent explore Beyond the Boundary of explored regions and mitigates common issues in count-based methods, such as short-sightedness and detachment. The resulting method, BeBold, solves the 12 most challenging procedurally-generated tasks in MiniGrid with just 120M environment steps, without any curriculum learning. In comparison, the previous SoTA only solves 50% of the tasks. BeBold also achieves SoTA on multiple tasks in NetHack, a popular rogue-like game that contains more challenging procedurally-generated environments.

1 INTRODUCTION
Deep reinforcement learning (RL) has experienced significant progress over the last several years, with impressive performance in games like Atari (Mnih et al., 2015; Badia et al., 2020a), StarCraft (Vinyals et al., 2019) and Chess (Silver et al., 2016; 2017; 2018). However, most work requires either a manually-designed dense reward (Brockman et al., 2016) or a perfect environment model (Silver et al., 2017; Moravčík et al., 2017). This is impractical for real-world settings, where the reward is sparse; in fact, the proper reward function for a task is often even unknown due to lack of domain knowledge. Random exploration (e.g., ε-greedy) in these environments is often insufficient and leads to poor performance (Bellemare et al., 2016).

Recent approaches have proposed to use intrinsic rewards (IR) (Schmidhuber, 2010) to motivate agents for exploration before any extrinsic rewards are obtained. Various criteria have been proposed, including curiosity/surprise-driven (Pathak et al., 2017), count-based (Bellemare et al., 2016; Burda et al., 2018a;b; Ostrovski et al., 2017; Badia et al., 2020b), and state-diff approaches (Zhang et al., 2019; Marino et al., 2019).

Each approach has its upsides and downsides. Curiosity-driven approaches look for prediction errors in the learned dynamics model and may be misled by the noisy-TV problem (Burda et al., 2018b), where the environment dynamics are inherently stochastic. Count-based approaches favor novel states in the environment but suffer from detachment and derailment (Ecoffet et al., 2019), in which the agent gets trapped in one (long) corridor and fails to try other choices. Count-based approaches are also short-sighted: the agent often settles in local minima, sometimes oscillating between two states that alternately feature lower visitation counts (Burda et al., 2018b). Finally, state-diff approaches offer rewards if, for each trajectory, the representations of consecutive states differ significantly. While these approaches consider the entire trajectory of the agent rather than a local state, they are asymptotically inconsistent: the intrinsic reward remains positive as the visitation counts approach infinity. As a result, the final policy does not necessarily maximize the cumulative extrinsic reward.

In this paper, we propose a novel exploration criterion that combines count-based and state-diff approaches: instead of using the difference of state representations, we use the regulated difference of inverse visitation counts of consecutive states in a trajectory.
The inverse visitation count is approximated by Random Network Distillation (Burda et al., 2018b). Our IR provides two benefits: (1) it addresses the asymptotic inconsistency of state-diff, since the inverse visitation count vanishes with sufficient exploration; (2) our IR is large at the end of a trajectory and at the boundary between the explored and the unexplored regions (Fig. 1). This motivates the agent to move Beyond the Boundary of the explored regions and step into the unknown, mitigating the short-sightedness issue of count-based approaches.

Figure 1: A hypothetical demonstration of how exploration proceeds in BeBold versus Random Network Distillation (Burda et al., 2018b), in terms of the distribution of intrinsic rewards (IR). BeBold reaches the goal by continuously pushing the frontier of exploration while RND gets trapped. BeBold panels: 1. BeBold assigns high IR (dark red) near the start and low IR for the rest (light red); 2. BeBold pushes every direction to the frontier of exploration uniformly (yellow); 3. BeBold continuously pushes the exploration frontier; 4. BeBold reaches the end of exploration. RND panels: 1. RND assigns high IR (dark green) throughout the environment; 2. RND temporarily focuses on the upper right corner (yellow); 3. RND by chance starts exploring the bottom right corner heavily, leaving the IR at the top right higher than at the bottom right; 4. RND re-explores the upper right and forgets the bottom right, getting trapped. Note that IR is defined differently in RND (1/N(s_t)) versus BeBold (max(1/N(s_{t+1}) - 1/N(s_t), 0); see Eqn. 3), so different colors are used.

Following this simple criterion, we propose a novel algorithm, BeBold, and evaluate it on two very challenging procedurally-generated (PG) environments: MiniGrid (Chevalier-Boisvert et al., 2018) and NetHack (Küttler et al., 2020). MiniGrid is a popular benchmark for evaluating exploration algorithms (Raileanu and Rocktäschel, 2020; Campero et al., 2020; Goyal et al., 2019), and NetHack is a much more realistic environment with complex goals and skills. BeBold manages to solve the 12 most challenging environments in MiniGrid within 120M environment steps, without curriculum learning. In contrast, (Campero et al., 2020) solves 50% of the tasks, those categorized as "easy" and "medium", by training a separate goal-generating teacher network for 500M steps. In NetHack, a more challenging procedurally-generated environment, BeBold also outperforms all baselines by a significant margin on various tasks. In addition, we analyze BeBold extensively in MiniGrid. The quantitative results show that BeBold largely mitigates the detachment problem, with a much simpler design than Go-Explore (Ecoffet et al., 2020), which contains multiple hand-tuned stages and hyper-parameters.

Most Related Works. RIDE (Raileanu and Rocktäschel, 2020) also combines multiple criteria. RIDE learns the state representation with curiosity-driven approaches, and then uses the difference of the learned representations along a trajectory as the reward, weighted by pseudo-counts of the state. However, as a two-stage approach, RIDE relies heavily on how well the learned representation generalizes to novel states. As a result, BeBold shows substantially better performance in the same procedurally-generated environments.

Go-Explore (Ecoffet et al., 2020) stores many visited states (including boundaries), reaches these states without exploration, and explores from them.
BeBold focuses on boundaries, performs exploration without human-designed cell representations (e.g., image downsampling) and is end-to-end. Frontier-based exploration (Yamauchi, 1997; 1998; Topiwala et al., 2018) is used to help specific robots explore a map by maximizing the information gain. The "frontier" is defined as the 2D spatial regions beyond the explored parts. No automatic policy optimization with deep models is used. In contrast, BeBold can be applied to more general partially observable MDPs with deep policies.

2 BACKGROUND
Following the single-agent Markov Decision Process (MDP) formulation, we define a state space S, an action space A, and a (non-deterministic) transition function T: S x A -> P(S), where P(S) is the probability distribution over next states given the current state and action. The goal is to maximize the expected reward R = \mathbb{E}[\sum_{k=0}^{T} \gamma^k r_{t+k}], where r_t is the reward, \gamma is the discount factor, and the expectation is taken w.r.t. the policy \pi and the MDP transitions P(S). In this paper, the total reward received at time step t is given by r_t = r^e_t + \beta r^i_t, where r^e_t is the extrinsic reward given by the environment, r^i_t is the intrinsic reward from the exploration criterion, and \beta is a scaling hyperparameter.

3 EXPLORATION BEYOND THE BOUNDARY
Our new exploration criterion combines both count-based and state-diff-based criteria. It does not suffer from short-sightedness and is asymptotically consistent. We first introduce BeBold and then analyze its advantages over existing criteria in Sec. 4.

Exploration Beyond the Boundary. BeBold gives intrinsic reward (IR) to the agent when it explores beyond the boundary of explored regions, i.e., when, along a trajectory, the previous state s_t has been sufficiently explored but s_{t+1} is new:

r^i(s_t, a_t, s_{t+1}) = \max\left(\frac{1}{N(s_{t+1})} - \frac{1}{N(s_t)}, 0\right)    (1)

Here N is the visitation count. We clip the IR because we do not want to give a negative IR to the agent when it transits back from a novel state to a familiar one. From the equation, only crossing the frontier matters to the intrinsic reward; if both N(s_t) and N(s_{t+1}) are high, or both are low, their difference is small. As we will show in Sec. 4, for each trajectory going towards the frontier/boundary, BeBold assigns an approximately equal IR, regardless of its length. As a result, the agent keeps pushing the frontier of exploration in a much more uniform manner than RND and does not suffer from short-sightedness. This motivates the agent to explore different trajectories uniformly. Eq. 1 is also asymptotically consistent, since r^i -> 0 as N -> infinity.

Like RIDE (Raileanu and Rocktäschel, 2020), our implementation uses partial observations o_t instead of the true state s_t when s_t is not available.

Episodic Restriction on Intrinsic Reward (ERIR). In many environments where state transitions are reversible, simply using intrinsic reward to guide exploration results in the agent going back and forth between novel states s_{t+1} and their previous states s_t. RIDE (Raileanu and Rocktäschel, 2020) avoids this by scaling the intrinsic reward r(s) by the inverse of the state visitation counts. BeBold imposes a more aggressive restriction: the agent is rewarded only when it visits the state s for the first time in an episode. The intrinsic reward of BeBold thus becomes:

r^i(s_t, a_t, s_{t+1}) = \max\left(\frac{1}{N(s_{t+1})} - \frac{1}{N(s_t)}, 0\right) \cdot \mathbb{1}\{N_e(s_{t+1}) = 1\}    (2)

Here N_e stands for the episodic state count and is reset every episode; a tabular sketch of Eqs. 1-2 follows below.
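In tabular form, Eqs. 1-2 are only a few lines. The sketch below uses dictionaries of exact visitation counts for illustration; the names are ours, and the paper's actual implementation replaces the counts with an RND approximation (Eq. 3).

```python
from collections import defaultdict

def bebold_ir(s_t, s_next, N, N_ep):
    """Tabular BeBold intrinsic reward for the transition s_t -> s_next (Eqs. 1-2).

    N:    life-long visitation counts, kept across all episodes.
    N_ep: episodic counts, reset at the start of every episode.
    Updating the counts inside the reward call is a design choice of this sketch.
    """
    N[s_next] += 1
    N_ep[s_next] += 1
    diff = 1.0 / N[s_next] - 1.0 / max(N[s_t], 1)   # guard an unvisited start state
    gate = 1.0 if N_ep[s_next] == 1 else 0.0        # ERIR: first visit this episode
    return max(diff, 0.0) * gate

# usage: N = defaultdict(int); N_ep = defaultdict(int)  # reset N_ep each episode
```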
In contrast, the visitation count N is a life-long memory bank counting state visitations across all of training.

Inverse visitation counts as prediction difference. We use the difference between a teacher network \phi and a student network \phi' to approximate visitation counts: N(s_{t+1}) \approx 1/\|\phi(o_{t+1}) - \phi'(o_{t+1})\|^2, where o_{t+1} is the observation of the agent in state s_{t+1}. This yields the following implementation of BeBold:

r^i(s_t, a_t, s_{t+1}) = \max\left(\|\phi(o_{t+1}) - \phi'(o_{t+1})\|^2 - \|\phi(o_t) - \phi'(o_t)\|^2, \; 0\right) \cdot \mathbb{1}\{N_e(o_{t+1}) = 1\}    (3)

Shared visitation counts N(s_t) in the training of Procedurally-Generated (PG) environments. During training, the environment changes constantly (e.g., blue keys become red), while the semantic links of these objects remain the same. We use a shared RND (\phi, \phi') across the different PG environments, and treat these semantically similar states as new without using domain knowledge (e.g., image downsampling as in Go-Explore (Ecoffet et al., 2019)). Partial observability and the generalization of the neural networks handle these differences and lead to count sharing. For the episodic count N_e(o_{t+1}), since it is not shared across episodes (and environments), we use a hash table. A sketch of this prediction-difference form follows below.
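Eq. 3 can be sketched on top of two small networks. The code below is a hedged illustration, not the authors' released implementation: `target` plays the role of the fixed random network \phi, `predictor` the trained network \phi', and the squared prediction error stands in for 1/N(o). The observation size is illustrative (147 would be a flattened 7x7x3 MiniGrid partial view), and the predictor would be trained elsewhere to minimize `pred_error` on visited observations, as in standard RND.

```python
import torch
import torch.nn as nn

def make_net(obs_dim, out_dim=128):
    return nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

obs_dim = 147                   # illustrative flattened observation size
target = make_net(obs_dim)      # phi: fixed, randomly initialized network
for p in target.parameters():
    p.requires_grad_(False)
predictor = make_net(obs_dim)   # phi': trained to match the target on visited obs

def pred_error(obs):
    """Squared prediction error; acts as an estimate of 1/N(obs)."""
    return (target(obs) - predictor(obs)).pow(2).sum(dim=-1)

def bebold_ir_rnd(obs_t, obs_t1, first_visit_mask):
    """Eq. 3: clipped difference of prediction errors, gated by the episodic
    first-visit indicator (1.0 on the first visit this episode, else 0.0)."""
    with torch.no_grad():
        diff = pred_error(obs_t1) - pred_error(obs_t)
    return torch.clamp(diff, min=0.0) * first_visit_mask
```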
Moreover, the IRfocuses on the boundary between explored and unexplored regions, where the two goals (dedicationand exploration) align, yielding a much cleaner, one-stage method.Asymptotic Inconsistency. Approaches that define IR as the difference between state representa-tionsk (s) (s0)k( is a learned embedding network) (Zhang et al., 2019; Marino et al., 2019)suffer from asymptotic inconsistency. In other words, their IR does not vanish even after sufficientexploration: ri6!0whenN!1 . This is because when the embedding network converges aftersufficient exploration, the agent can always obtain non-zero IR if a major change in state represen-tation occurs (e.g., opening a door or picking up a key in MiniGrid). Therefore, the learned policydoes not maximize the extrinsic reward re, deviating from the goal of RL. Automatic curriculumapproaches (Campero et al., 2020)) have similar issues due to an ever-present IR.For this, (Zhang et al., 2019) proposes to learn a separate scheduler to switch between intrinsic andextrinsic rewards, and (Raileanu and Rockt ̈aschel, 2020) divides the state representation differenceby the square root of visitation counts. In comparison, BeBold does not require any extra stage andis a much simpler solution.5 E XPERIMENTSWe evaluate BeBold on challenging procedurally-generated environment MiniGrid (Chevalier-Boisvert et al., 2018) and the hard-exploration environment NetHack (K ̈uttler et al., 2020). Theseenvironments provide a good testbed for exploration in RL since the observations are symbolic ratherthan raw sensor input (e.g., visual input), which decouples perception from exploration. In MiniGrid,we compare BeBold with RND (Burda et al., 2018b), ICM (Pathak et al., 2017), RIDE (Raileanuand Rockt ̈aschel, 2020) and AMIGo (Campero et al., 2020). We only evaluate AMIGo for 120Msteps in our experiments, the algorithm obtains better results when trained for 500M steps as shownin (Campero et al., 2020). For all the other baselines, we follow the exact training paradigm from(Raileanu and Rockt ̈aschel, 2020). Mean and standard deviation across four runs of different seedsare computed. BeBold successfully solves the 12 most challenging environments provided by Min-iGrid. By contrast, all the baselines end up with zero reward on half of the environments we tested.In NetHack, BeBold also achieves SoTA results with a large margin over baselines.5.1 M INIGRIDENVIRONMENTSWe mainly use three challenging environments from MiniGird: Multi-Room (MR),Key Corridor(KC) and Obstructed Maze (OM). We use these abbreviations for the remaining of the paper (e.g.,OM2Dlh stands for ObstructedMaze2Dlh). Fig. 2 shows one example of a rendering on OMFull aswell as all the environments we tested with their relative difficulty.In MiniGrid, all the environments are size NN(Nis environment-specific) where each tilecontains an object: wall, door, key, ball, chest. The action space is defined as turn left, turn right,move forward, pick up an object, drop an object, and toggle an object (e.g., open or close a door).MR consists of a series of rooms connected by doors and the agent must open the door to get to thenext room. Success is achieved when the agent reaches the goal. In KC, the agent has to explore theenvironment to find the key and open the door along the way to achieve success. 
OM is the hardest: the doors are locked, the keys are hidden in boxes, and the doors are obstructed by balls.

Figure 2: MiniGrid environments. Left: a procedurally-generated OMFull environment (legend: Agent, Door, Obstruction, Goal, Box). Right: a solved/unsolved matrix for ICM, RND, RIDE, AMIGO and BeBold over the tasks MRN6, MRN7-S8, MRN12-S10, KCS3R3, KCS4R3, KCS5R3, KCS6R3, OM2Dl-h, OM2Dl-hb, OM1Q, OM2Q and OMFULL (MR is short for MultiRoom, KC for KeyCorridor, OM for ObstructedMaze; entries mark tasks solved within 120M steps versus unsolved). BeBold solves challenging tasks which previous approaches cannot solve. Note that we evaluate all methods for 120M steps; AMIGo gets better results when trained for 500M steps, as shown in (Campero et al., 2020).

Figure 3: Average return versus environment steps for IMPALA, ICM, RND, RIDE, AMIGO and BeBold on twelve MiniGrid tasks: Easy (MultiRoom-N6, MultiRoom-N7-S8, MultiRoom-N12-S10, KeyCorridorS3R3), Medium (KeyCorridorS4R3, KeyCorridorS5R3, KeyCorridorS6R3, ObstructedMaze-2Dlh) and Hard (ObstructedMaze-2Dlhb, ObstructedMaze-1Q, ObstructedMaze-2Q, ObstructedMaze-Full). BeBold successfully solves all the environments while all other baselines only manage to solve two to three relatively easy ones. (Per-panel axis values omitted.)

Results. We test BeBold on all environments from MiniGrid. BeBold manages to solve the 12 most challenging environments. By contrast, all baselines solve only up to medium-level tasks and fail to make any progress on the more difficult ones. Note that some medium-level tasks as we define them here are categorized as hard tasks in RIDE and AMIGo (e.g., KCS4R3 is labeled "KCHard" and KCS5R3 is labeled "KCHarder"). Fig. 3 shows the results of our experiments. Half of the environments (e.g., KCS6R3, OM1Q) are extremely hard, and all the baselines fail on them. In contrast, BeBold easily solves all such environments without any curriculum learning. We also provide the final testing performance for BeBold in Tab. 1. The results are averaged across 4 seeds and 32 randomly initialized environments.

Multi-Room environments are relatively easy in MiniGrid. Nonetheless, all the baselines except RIDE fail. As we increase the room size and number (e.g., MRN12S10), BeBold reaches the goal more quickly than RIDE. Our method easily solves these environments within 20M environment steps.

On Key Corridor environments, RND, AMIGo, RIDE, IMPALA and BeBold successfully solve KCS3R3, while ICM makes reasonable progress. However, when we increase the room size (e.g., KCS5R3), none of the baseline methods work. BeBold manages to solve these environments in 40M environment steps. The agent demonstrates the ability to explore the rooms and find the corresponding key to open each door in a randomized, procedurally-generated environment.

Obstructed Maze environments are also difficult.
Obstructed Maze environments are also difficult. As shown in Fig. 3, RIDE and RND manage to solve the easiest task, OM2Dlh, which does not contain any obstructions. In contrast, BeBold not only rapidly solves OM2Dlh, but also solves four more challenging environments including OMFull. These environments have obstructions blocking the door (as shown in Fig. 2) and are much larger in size than OM2Dlh. In these environments, our agent learns to move the obstruction away from the door to open the door and enter the next room. This "skill" is hard to learn since there is no extrinsic reward assigned to moving the obstruction. However, learning the skill is critical to achieving the goal.

Table 1: Final testing performance for BeBold and all baselines.

                MRN6        MRN7S8      MRN12S10    KCS3R3      KCS4R3      KCS5R3
ICM             0.00±0.0    0.00±0.0    0.00±0.0    0.45±0.052  0.00±0.0    0.00±0.0
RIDE            0.65±0.005  0.67±0.001  0.65±0.002  0.91±0.003  0.93±0.002  0.00±0.0
RND             0.00±0.0    0.00±0.0    0.00±0.0    0.91±0.003  0.00±0.0    0.00±0.0
IMPALA          0.00±0.0    0.00±0.0    0.00±0.0    0.91±0.004  0.00±0.0    0.00±0.0
AMIGO           0.00±0.0    0.00±0.0    0.00±0.0    0.89±0.005  0.00±0.0    0.00±0.0
BeBold          0.64±0.003  0.67±0.001  0.65±0.002  0.92±0.003  0.93±0.003  0.94±0.001

                KCS6R3      OM2Dlh      OM2Dlhb     OM1Q        OM2Q        OMFULL
ICM             0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0
RIDE            0.00±0.0    0.95±0.015  0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0
RND             0.00±0.0    0.95±0.0066 0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0
IMPALA          0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0
AMIGO           0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0    0.00±0.0
BeBold          0.94±0.017  0.96±0.005  0.89±0.063  0.88±0.067  0.93±0.028  0.96±0.058

Figure 4: Normalized visitation counts $N(s_t)/Z$ ($Z$ is a normalization constant) for the location of agents on MultiRoomN7S8 (BeBold shown at 259K to 4.6M steps; RND at 160K to 9.8M steps). BeBold successfully explores all rooms at 4.6M steps while RND gets stuck in the fifth room at 9.8M steps.

5.2 ANALYSIS OF INTRINSIC REWARD USING PURE EXPLORATION

We analyze how BeBold mitigates the short-sightedness issue by only using IR to guide exploration.

Short-Sighted Problem in the Long-Corridor Environment. To verify Sec. 4, we design a toy environment with four disconnected corridors of length 40, 10, 30, and 10 respectively, all starting from the same state. In this example, there is no extrinsic reward and the exploration of the agent is guided by IR. We combine Q-learning with tabular IR (count-based and BeBold tabular) and neural-network-approximated IR (RND and BeBold) respectively for this experiment. We remove clipping from BeBold for a fair comparison. Tab. 2 shows the visitation counts across 4 runs w.r.t. each corridor after 600 episodes of training. It is clear that BeBold tabular explores each corridor in a much more uniform manner. On the other hand, count-based approaches are greatly affected by short-sightedness and focus only on two out of four corridors. BeBold also shows much more stable performance across runs, as its standard deviation is much lower compared with RND. Note that, in contrast to the analysis in Sec. 4, we observe in practice that the preference among corridors for count-based methods can be arbitrary because of the random initialization of the Q-network.
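The toy experiment can be sketched roughly as below; the corridor dynamics and the Q-learning hyperparameters are our assumptions for illustration, not values reported in the paper:

```python
import random
from collections import defaultdict

LENGTHS = [40, 10, 30, 10]                         # corridor lengths from the toy environment
EPS, ALPHA, GAMMA, HORIZON = 0.1, 0.5, 0.99, 60    # assumed hyperparameters

def ir(kind, n, s, s_next):
    if kind == "count":                            # count-based criterion: 1/N(s')
        return 1.0 / n[s_next]
    return 1.0 / n[s_next] - 1.0 / n[s]            # BeBold tabular, clipping removed

def step(s, a):
    c, pos = s
    if c is None:                                  # shared start state: action picks a corridor
        return (a, 1)
    pos = min(pos + 1, LENGTHS[c]) if a == 1 else pos - 1
    return (None, 0) if pos == 0 else (c, pos)

def actions(s):
    return range(len(LENGTHS)) if s[0] is None else (0, 1)

def run(kind, episodes=600, seed=0):
    random.seed(seed)
    q = defaultdict(float)                         # Q[(state, action)]
    n = defaultdict(lambda: 1)                     # lifelong visitation counts (init 1)
    visits = [0] * len(LENGTHS)
    for _ in range(episodes):
        s = (None, 0)
        for _ in range(HORIZON):
            a = (random.choice(list(actions(s))) if random.random() < EPS
                 else max(actions(s), key=lambda b: q[s, b]))
            s2 = step(s, a)
            n[s2] += 1
            if s2[0] is not None:
                visits[s2[0]] += 1
            r = ir(kind, n, s, s2)                 # intrinsic reward is the only reward
            q[s, a] += ALPHA * (r + GAMMA * max(q[s2, b] for b in actions(s2)) - q[s, a])
            s = s2
    return visits

for kind in ("count", "bebold"):
    print(kind, run(kind))
```

Comparing the per-corridor visit totals should reproduce the qualitative trend of Tab. 2: the count-based agent concentrates on a subset of corridors, while the BeBold-style reward spreads visits far more evenly.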
Visitation Counts Analysis in MiniGrid. To study how different intrinsic rewards can affect the exploration of the agent, we test BeBold and RND in a fixed (instead of procedurally-generated, for simplicity) MRN7S8 environment. The environment contains 7 rooms connected by doors. To be a successful exploration strategy, the agent should explore all states and give all states an equal amount of exploration. We define two metrics to measure the effectiveness of an exploration strategy: (1) visitation counts at every state over training, $N(s)$, and (2) the entropy of visitation counts within each room, $H(\rho_r)$, where $\rho_r(s) = N(s)/\sum_{s' \in S_r} N(s')$ and $S_r$ denotes the states in room $r$. We do not calculate entropy across all states, because the agent always starts in the first room and may not visit the last one as frequently. As a result, the visitation counts for states in the first room will be several orders of magnitude larger than those in the last room.

Fig. 4 shows the heatmap of normalized visitation counts $N(s_t)/Z$, where $Z$ is the normalization constant. At first, RND enters the second room faster than BeBold. However, BeBold consistently makes progress by pushing the frontier of exploration and discovers all the rooms in 5M steps, while RND gets stuck in the fifth room even when trained for 10M steps.

In Tab. 3, the entropy $H(\rho_r)$ of the distribution in each room is larger for BeBold than for RND. This suggests that BeBold encourages the agent to explore the states in a much more uniform manner.

Table 2: Visitation counts for the toy corridor environment after 3K episodes. BeBold explores corridors more uniformly than count-based approaches.

                 C1           C2          C3            C4          Entropy
Length           40           10          30            10          –
Count-Based      66K±28K      8K±8K       23K±35K       13K±18K     1.06±0.39
BeBold Tabular   26K±2K       28K±8K      25K±6K        29K±9K      1.97±0.02
RND              0.2K±0.2K    70K±53K     0.2K±0.07K    26K±44K     0.24±0.28
BeBold           27K±6K       23K±3K      31K±12K       26K±8K      1.96±0.05

Table 3: Entropy of the visitation counts of each room. The state distribution of BeBold is much more uniform than that of RND. Results are presented in the order "RND / BeBold"; "–" indicates the room has not been reached yet.

         0.2M          0.5M          2.0M          5.0M
Room1    3.48 / 3.54   3.41 / 3.53   3.51 / 3.56   3.49 / 3.56
Room2    2.87 / –      3.09 / 3.23   3.51 / 3.53   3.35 / 3.56
Room3    – / –         – / –         – / 4.02      3.42 / 4.01
Room4    – / –         – / –         – / 2.74      2.85 / 2.87

Figure 5: IR heatmaps for the location of agents (RND shown at 1.5M to 9.8M steps; BeBold at 1.0M to 4.8M steps). BeBold mitigates the short-sightedness problem.

Figure 6: Ablation study of BeBold compared with RND with episodic intrinsic reward on KeyCorridorS4R3, KeyCorridorS5R3, MultiRoom-N7-S8 and MultiRoom-N12-S10 (average return vs. environment steps; variants: RND, RND with ERIR, BeBold w/o ERIR, BeBold w/o clipping, BeBold). BeBold significantly outperforms RND with episodic intrinsic reward on all the environments.

Intrinsic Reward Heatmap Analysis in MiniGrid. We also plot the heatmap of IR at different training steps. We generate the plot by running the policy from different checkpoints for 2K steps and plotting the IR associated with each state in the trajectory. States that do not receive IR from the sampled trajectories are left blank. For BeBold, the IR is computed as the difference of inverse visitation counts (approximated by RND) between consecutive states $s_t$ and $s_{t+1}$ in the trajectory. From Fig. 5, we can see that BeBold does not suffer from the short-sightedness problem: the areas with high IR are continuously pushed forward from room 1 to room 7, and this holds throughout the whole training process. On the contrary, the IR heatmap for RND bounces between two consecutive rooms. This is due to short-sightedness: when exploring the second room, the IR of that room significantly decreases. Since (a) the policy has assigned the first room non-zero probability and (b) the first room now has a lower visitation count, RND will revisit the first room. This continues indefinitely, as the agent oscillates between the two rooms.
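In implementation terms, a minimal sketch of this RND-based approximation and the resulting BeBold reward might look as follows; this assumes PyTorch, flattened float observations on CPU, placeholder layer sizes, and byte-hashing of observations for the episodic table. All of these are our choices for illustration, not the paper's exact code:

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim=64):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

class BeBoldIR:
    """1/N(o) is approximated by the RND prediction error between a trained
    predictor phi and a frozen, randomly initialized target phi'."""
    def __init__(self, obs_dim):
        self.predictor = mlp(obs_dim)              # phi, trained
        self.target = mlp(obs_dim)                 # phi', frozen random network
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.opt = torch.optim.Adam(self.predictor.parameters(), lr=1e-4)
        self.episodic = set()                      # hash table for episodic counts N_e

    def novelty(self, obs):                        # ~ 1 / N(obs)
        return ((self.predictor(obs) - self.target(obs)) ** 2).mean()

    def reward(self, obs_t, obs_t1):
        with torch.no_grad():
            diff = self.novelty(obs_t1) - self.novelty(obs_t)
        key = obs_t1.numpy().tobytes()
        first_visit = key not in self.episodic     # the 1{N_e(s_{t+1}) = 1} indicator
        self.episodic.add(key)
        return torch.clamp(diff, min=0.0).item() * float(first_visit)

    def update(self, obs_batch):                   # fit the predictor to the target
        loss = ((self.predictor(obs_batch) - self.target(obs_batch)) ** 2).mean()
        self.opt.zero_grad(); loss.backward(); self.opt.step()

    def reset_episode(self):
        self.episodic.clear()                      # N_e resets every episode
```

Here update() would be called on minibatches of recently visited observations, so frequently visited states incur low prediction error; the error therefore acts as a proxy for the inverse count that the reward difference uses.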
5.3 ABLATION STUDY

Episodic Restriction on Intrinsic Reward. We analyze the importance of each component in BeBold. To illustrate the importance of exploring beyond the boundary, we compare BeBold with a slightly modified RND: RND with ERIR. We only give RND an intrinsic reward when it visits a new state for the first time in an episode. We can see in Fig. 6 that although ERIR helps RND solve KCS4R3 and MRN7S8, without the BeBold criterion the method still fails on the more challenging tasks KCS5R3 and MRN12S10. A symmetric experiment of removing ERIR from BeBold is also conducted.

Clipping in Beyond-the-Boundary Exploration. We also study the role of clipping, $\max(\cdot, 0)$, in our method. Fig. 6 shows the effect of removing clipping from BeBold. We conclude that it is suboptimal to design an IR that incurs a negative reward when transitioning from an unfamiliar to a familiar state.
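For clarity, the ablated variants can be written as intrinsic-reward functions over lifelong counts N and episodic counts N_e (count approximation aside); this is our reading of the variants in Fig. 6, not code from the paper:

```python
def rnd_ir(N, s, s1):                  # plain RND / count-based: 1/N(s')
    return 1.0 / N[s1]

def rnd_with_erir(N, Ne, s, s1):       # RND plus the episodic restriction
    return (1.0 / N[s1]) * (Ne[s1] == 1)

def bebold_no_erir(N, s, s1):          # clipped count difference only (Eq. 1)
    return max(1.0 / N[s1] - 1.0 / N[s], 0.0)

def bebold_no_clip(N, Ne, s, s1):      # the difference may go negative
    return (1.0 / N[s1] - 1.0 / N[s]) * (Ne[s1] == 1)

def bebold(N, Ne, s, s1):              # full BeBold (Eq. 2)
    return max(1.0 / N[s1] - 1.0 / N[s], 0.0) * (Ne[s1] == 1)
```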
5.4 THE NETHACK LEARNING ENVIRONMENT

To evaluate BeBold in a more challenging and realistic environment, we choose the NetHack Learning Environment (Küttler et al., 2020). In the game, the player is assigned the role of a hero at the beginning of the game. The player needs to descend over 50 procedurally-generated levels and find the "Amulet of Yendor" at the bottom of the dungeon. The procedure can be described as first retrieving the amulet, then escaping the dungeon, and finally unlocking five challenging levels (the four Elemental Planes and the Astral Plane). We test BeBold on a set of tasks with tractable subgoals in the game: Staircase: navigating to a staircase to the next level; Pet: reaching a staircase and keeping the pet alive; Gold: collecting gold; Score: maximizing the score in the game; Scout: scouting to explore unseen areas in the environment; and Oracle: finding the oracle (an in-game character at levels 5-9).

Figure 7: Results for tasks on NetHack (staircase, pet, gold, score, scout, oracle; average return vs. environment steps for IMPALA, RND and BeBold). BeBold achieves SoTA results compared to RND and IMPALA.

Results in Fig. 7 show that BeBold surpasses RND and IMPALA on all tasks.¹ Especially on Pet, BeBold outperforms the other two by a huge margin. This again illustrates the strong performance of BeBold in an environment with a huge action space and long-horizon rewards (several orders of magnitude longer than StarCraft and Dota 2). Oracle is the hardest task, and no approach is able to find the oracle and obtain a reward of 1000. BeBold still manages to find a policy with less negative reward (i.e., the penalty for taking actions that do not lead to game advancement, like moving towards a wall).

For RIDE, we contacted the authors, who confirmed their method is not functional on NetHack. We further attempted to tune RIDE, but to no avail. So we do not report RIDE performance on NetHack.

¹For the Gold task, a small fix was introduced to the environment recently. We benchmark all methods before the fix and will update the results.

6 RELATED WORK

In addition to the two criteria (count-based and state-diff based) mentioned above, another stream of defining IRs is curiosity-based. The main idea is to encourage agents to explore areas where the prediction of the next state from the currently learned dynamics model is wrong. Dynamic-AE (Stadie et al., 2015) computes the distance between the predicted and the real state on the output of an autoencoder, ICM (Pathak et al., 2017) learns the state representation through a forward and an inverse model, and EMI (Kim et al., 2018) computes the representation by maximizing the mutual information $I([s,a]; s')$ and $I([s,s']; a)$.

Another line of research uses information gain to reward the agent. VIME (Houthooft et al., 2016) uses a Bayesian network to measure the uncertainty of the learned model. Later, to reduce computation, a deterministic approach was adopted (Achiam and Sastry, 2017). Other works also propose to use ensembles of networks for measuring uncertainty (Pathak et al., 2019; Shyam et al., 2019). We can also reward the agent by Empowerment (Klyubin et al., 2005; Gregor et al., 2016; Salge et al., 2014; Mohamed and Rezende, 2015), prioritizing states over which the agent can take control through its actions. It is different from state-diff: if $s_{t+1}$ differs from $s_t$ but not due to the agent's choice of actions, then the empowerment at $s_t$ is zero. Other criteria exist, e.g., diversity (Eysenbach et al., 2018), feature control (Jaderberg et al., 2016; Dilokthanakul et al., 2019) or the KL divergence between the current distribution over states and a target distribution of states (Lee et al., 2019).

Outside of intrinsic reward, researchers have proposed to use randomized value functions to encourage exploration (Osband et al., 2016; Hessel et al., 2017; Osband et al., 2019). Adding noise to the network has also been shown to be effective (Fortunato et al., 2017; Plappert et al., 2017). There has also been effort put into either explicitly or implicitly separating exploration and exploitation (Colas et al., 2018; Forestier et al., 2017; Levine et al., 2016). The recently proposed Go-Explore series (Ecoffet et al., 2019; 2020) also falls in this category. We might also set up different goals for exploration (Guo and Brunskill, 2019; Oh et al., 2018; Andrychowicz et al., 2017).

Curriculum learning (Bengio et al., 2009) has also been used to solve hard-exploration environments. The curriculum can be explicitly generated by searching the space (Schmidhuber, 2013), a teacher-student setting (Matiisen et al., 2019), increasing the distance between the starting point and the goal (Jabri et al., 2019), or using a density model to generate a task distribution for the meta-learner (Florensa et al., 2017). Our work can also be viewed as implicit curriculum learning, as it gradually encourages the agent to expand the area of exploration. However, it never explicitly generates a curriculum.

7 MONTEZUMA'S REVENGE

We also provide initial results of BeBold on Montezuma's Revenge. We use the same paradigm as RND and the same set of hyperparameters, except that we use 128 parallel environments. In Fig. 8, we can see that using a CNN-based model, BeBold achieves approximately 10000 extrinsic reward after two billion frames, while the performance reported in RND (Burda et al., 2018b) is around 6700. When using an RNN-based model, BeBold reaches around 13000 extrinsic reward in 100K updates while RND only achieves 4400.

Figure 8: Results for CNN-based and RNN-based models on Montezuma's Revenge, compared with RND. BeBold achieves good performance.
Please note that these are only initial results, and we will provide the comparison with RND and the average return across multiple seeds in the future.

8 LIMITATIONS AND FUTURE WORK

Noisy TV Problem. One limitation of BeBold is the well-known noisy TV problem. The problem was raised by (Pathak et al., 2017): an agent trained using a count-based IR will be attracted to local sources of entropy in the environment. Thus, it will get high IR due to the randomness in the environment even without making any movements. BeBold suffers from this problem as well, since the difference between consecutive states can be caused by stochasticity in the environment. That could be the reason why BeBold does not perform well on stochastic tasks (e.g., Dynamic-Obstacles-5x5). We leave this problem to future research.

Hash Table for ERIR. The ERIR in BeBold adopts a hash table for episodic visitation counts. This could be problematic when applying BeBold to a continuous-space environment (e.g., some robotics tasks). One simple solution is to discretize the space and still use a hash table for counting (see the sketch below). We leave a more general and elegant fix to this problem to future work.
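As a rough sketch of that simple workaround (our illustration, with an assumed bin width), continuous observations can be binned into hashable tuples before counting:

```python
from collections import defaultdict

BIN_WIDTH = 0.1                       # assumed discretization granularity

def discretize(obs, bin_width=BIN_WIDTH):
    # Map a continuous observation vector to a hashable grid cell.
    return tuple(round(x / bin_width) for x in obs)

episodic_counts = defaultdict(int)    # call episodic_counts.clear() at episode start

def first_visit(obs):
    key = discretize(obs)
    episodic_counts[key] += 1
    return episodic_counts[key] == 1  # the ERIR indicator 1{N_e(s) = 1}
```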
9 CONCLUSION

In this work, we propose a new criterion for intrinsic reward (IR) that encourages exploration beyond the boundary of explored regions, using the regulated difference of inverse visitation counts along a trajectory. Based on this criterion, the proposed algorithm BeBold successfully solves 12 of the most challenging tasks in the procedurally-generated MiniGrid environment. This is a significant improvement over the previous SoTA, overcoming the short-sightedness issues that plague count-based exploration. We also evaluate BeBold on NetHack, a much more challenging environment, where it outperforms all the baselines by a significant margin. In summary, this simple criterion and the ensuing algorithm demonstrate effectiveness in solving the sparse-reward problem in reinforcement learning (RL), opening up new opportunities for many real-world applications.

pQkE-Uw2aIn
This paper is overall well-written, motivated and empirically-supported.
8: Top 50% of accepted papers, clear accept
------------------------------------
**Summary:** This paper proposes BeBold, a new definition of intrinsic reward to guide exploration in sparse reward problems. This intrinsic reward combines the ideas behind count-based approaches and state-diff approaches. They demonstrate the success of BeBold by comparing their algorithm to a set of state-of-the-art exploration methods using intrinsic rewards on a set of tasks from the MiniGrid and NetHack environments.
------------------------------------
**Strong points:** Overall, I believe this is a very good paper. The idea is simple, and the motivations and intuitions are well explained. The empirical study is carefully designed and involves relevant baselines and environments. The paper is also well written. I will list the main strong points:
* The problem of exploration in sparse environments is an important problem in the field of RL. Many exploration problems have been defined (detachment, short-sightedness, etc.). This paper presents these problems and proposes to improve on some of them.
* The paper is clearly written and well organized.
* The related work is relevant and seems complete.
* The experiments are overall well designed. MiniGrid and NetHack are two types of benchmarks adapted to the study of exploration algorithms, and the baselines are relevant. I like that the authors conducted ablation studies and additional experiments to study specific aspects of their strategy (visitation counts analysis and the study of the short-sighted problem).
* The new methods seem to significantly outperform previous ones without being overly complicated. The empirical evidence supports the claim (although I have a problem with the use of "solve" here, see below).
------------------------------------
**Weak points:** I will now list a few weak points of the paper.
* The paper lists various exploration problems (noisy TV problem, detachment, derailment, short-sightedness and asymptotic inconsistency). BeBold is said to solve detachment, short-sightedness and asymptotic inconsistency. Derailment and the noisy TV problem are neither defined nor discussed. I believe BeBold does not solve the noisy TV problem; this should be discussed.
* I am not sure I get the intuition behind the clipping of the reward function. The authors write: "we do not want to give a negative IR to the agent if it transits back from a novel state to a familiar state", but do not really explain why. Can you make this explicit?
* Episodic restriction seems to restrict the use of BeBold to environments with discrete countable states (use of hash table). Do you see a way around that problem?
* I think the main problem is the total absence of discussion w.r.t. the potential limits of BeBold (there is no discussion section). In particular, the authors could discuss the aspects mentioned above: the limitation to discrete states, the noisy TV and derailment problems.
* Comparison to Go-Explore: Talking about BeBold, the paper says: "without using domain knowledge (e.g., image downsampling like in Go-Explore (Ecoffet et al., 2019))". It is true that BeBold does not use image downsampling, but it does not use images at all, which is probably the explanation. Go-Explore and RND can be applied to image-based environments like Montezuma's Revenge. To do so, Go-Explore indeed requires downsampling. However, I believe that if BeBold were to be applied to such environments, it would also need the downsampling trick (or equivalent) to be able to apply episodic restriction.
* 4 seeds is a very small number. I know that very few works present much more than this, but I think it is bad practice. Please consider adding more (>=10). I admit that in your case, the trend is quite clear as competing algorithms show null performance most of the time.
* I think the repeated use of the term "solved" is quite misleading. The traditional definition of "to solve" is probably "to reach maximum performance". This is definitely not the case in several environments. Please consider either defining precisely what you mean by "solved", or not using that term.
* There is no codebase in the supplementary material. Do you plan on releasing it? Please consider doing so.

I believe most of these weak points can be solved during the rebuttal.
------------------------------------
**Recommendation and justification:** I think this paper should be accepted for the reasons listed in the Strong Points section. I'm giving a score of 7. I'd be happy to raise this score to an 8 provided that the authors add a discussion section to discuss the points mentioned above.
------------------------------------
**Feedback to improve the paper (not part of assessment):**
* "in which the agent gets trapped into one (long) corridor and fails to try other choices." → not very clear at this point, although it's explained later.
* The episodic restriction seems to require a policy that has memory. Otherwise the reward would not be Markovian anymore. Maybe this should be said somewhere?
* Short-sightedness paragraph: "but it is often far less than enough in a complex environment": vague and not argued.
* What do the environment codes stand for? S, R, Dl-h, Dl-hb, Q?
* Results could be a little bit bigger; maybe switch the colors so that the two leading algorithms do not have almost identical colors (pink and red)?
* In tables, what does bold indicate? Is it only the best value? Does it assert statistical significance (if yes, which test at which confidence level?)
* I believe the paper does not mention which RL algorithm is used. This seems like an important detail to provide.

**Typos:**
* "In contrast, (Campero et al. 2020) solves 50% of the tasks…" → remove parentheses. Same in the last paragraph before Sec. 5: "For this, (Zhang et al, 2019) ... (Raileanu)...". Other examples throughout.
* Reward definition in Sec 2. Not well defined; we do not know what t and k are, or what the "=1" refers to.
* Do not use contractions: "doesn't", "don't", "won't", "let's", etc.
* Sec. 3, Episodic Restriction paragraph: "back and force" → "back and forth"
* "the observation to the agent" → "the observation of the agent"?
* Table 2 caption: "th" → "the".
* Second-to-last paragraph before the Intrinsic Reward Heatmap paragraph: "discover" → "discovers".
* "BeBold doesn't suffer from short-sighted problem" → "BeBold does not suffer from short-sightedness" or "the short-sighted problem".
* "We only give RND intrinsics" → is "intrinsics" a word?

**Post-rebuttal update** The authors addressed most of my concerns during the rebuttal, added relevant discussions and experiments on Montezuma, and clarified the reported evaluation metrics. I raise the score from 7 to 8.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title BeBold: Exploration Beyond the Boundary of Explored Regions ### Paper Abstract Efficient exploration under sparse rewards remains a key challenge in deep reinforcement learning. To guide exploration, previous work makes extensive use of intrinsic reward (IR). There are many heuristics for IR, including visitation counts, curiosity, and state-difference. In this paper, we analyze the pros and cons of each method and propose the regulated difference of inverse visitation counts as a simple but effective criterion for IR. The criterion helps the agent explore Beyond the Boundary of explored regions and mitigates common issues in count-based methods, such as short-sightedness and detachment. The resulting method, BeBold, solves the 12 most challenging procedurally-generated tasks in MiniGridwith just 120M environment steps, without any curriculum learning. In comparison, previous SoTA only solves 50%of the tasks. BeBold also achieves SoTAon multiple tasks in NetHack, a popular rogue-like game that contains more challenging procedurally-generated environments. ### Paper Keywords ["reinforcement learning", "exploration"] ### Paper Content ABSTRACTEfficient exploration under sparse rewards remains a key challenge in deep rein-forcement learning. To guide exploration, previous work makes extensive use ofintrinsic reward (IR). There are many heuristics for IR, including visitation counts,curiosity, and state-difference. In this paper, we analyze the pros and cons of eachmethod and propose the regulated difference of inverse visitation counts as a sim-ple but effective criterion for IR. The criterion helps the agent explore BeyondtheBoundary of exp loredregions and mitigates common issues in count-basedmethods, such as short-sightedness anddetachment . The resulting method, Be-Bold , solves the 12 most challenging procedurally-generated tasks in MiniGridwith just 120M environment steps, without any curriculum learning. In compar-ison, previous SoTA only solves 50 %of the tasks. BeBold also achieves SoTAon multiple tasks in NetHack, a popular rogue-like game that contains more chal-lenging procedurally-generated environments.1 I NTRODUCTIONDeep reinforcement learning (RL) has experienced significant progress over the last several years,with impressive performance in games like Atari (Mnih et al., 2015; Badia et al., 2020a), Star-Craft (Vinyals et al., 2019) and Chess (Silver et al., 2016; 2017; 2018). However, most work re-quires either a manually-designed dense reward (Brockman et al., 2016) or a perfect environmentmodel (Silver et al., 2017; Morav ˇc ́ık et al., 2017). This is impractical for real-world settings, wherethe reward is sparse; in fact, the proper reward function for a task is often even unknown due tolack of domain knowledge. Random exploration (e.g., -greedy) in these environments is ofteninsufficient and leads to poor performance (Bellemare et al., 2016).Recent approaches have proposed to use intrinsic rewards (IR) (Schmidhuber, 2010) to motivateagents for exploration before any extrinsic rewards are obtained. 
Various criteria have been pro-posed, including curiosity/surprise -driven (Pathak et al., 2017), count -based (Bellemare et al., 2016;Burda et al., 2018a;b; Ostrovski et al., 2017; Badia et al., 2020b), and state-diff approaches (Zhanget al., 2019; Marino et al., 2019).Each approach has its upsides and downsides: Curiosity-driven approaches look for prediction errorsin the learned dynamics model and may be misled by the noisy TV (Burda et al., 2018b) problem,where environment dynamics are inherently stochastic. Count-based approaches favor novel statesin the environment but suffer from detachment andderailment (Ecoffet et al., 2019), in which theagent gets trapped into one (long) corridor and fails to try other choices. Count-based approaches arealsoshort-sighted : the agent often settles in local minima, sometimes oscillating between two statesthat alternately feature lower visitation counts (Burda et al., 2018b). Finally, state-diff approachesoffer rewards if, for each trajectory, representations of consecutive states differ significantly. Whilethese approaches consider the entire trajectory of the agent rather than a local state, it is asymptoti-cally inconsistent: the intrinsic reward remains positive when the visitation counts approach infinity.As a result, the final policy does not necessarily maximize the cumulative extrinsic reward.In this paper, we propose a novel exploration criterion that combines count-based and state-diff ap-proaches: instead of using the difference of state representations, we use the regulated difference ofinverse visitation counts of consecutive states in a trajectory. The inverse visitation count is approx-imated by Random Network Distillation (Burda et al., 2018b). Our IR provides two benefits: (1)This addresses asymptotic inconsistency in the state-diff, since the inverse visitation count vanisheswith sufficient explorations. (2) Our IR is large at the end of a trajectory and at the boundary be-tween the explored and the unexplored regions (Fig. 1). This motivates the agent to move Beyond1Under review as a conference paper at ICLR 2021StartEndRNDBeBold1. BeBoldassigns high IR (dark red) near the start and low IR for the rest (light red)2. BeBoldpushes every direction to the frontier of exploration uniformly (yellow)3. BeBoldcontinuously pushes the exploration frontier4. BeBoldreaches the end of exploration1. RND assigns high IR (dark green) throughout the environment2. RND temporarily focuses on the upper right corner (yellow)3. RND by chance starts exploring the bottom right corner heavily, resulting in the IR at top right higher than bottom right4. RND re-explores the upper right and forgets the bottom right, gets trappedFigure 1: A Hypothetical Demonstration of how exploration is done in BeBold versus Random NetworkDistillation (Burda et al., 2018b), in terms of distribution of intrinsic rewards (IR). BeBold reaches the goal bycontinuously pushing the frontier of exploration while RND got trapped. Note that IR is defined differently inRND ( 1=N(st)) versus BeBold ( max(1 =N(st+1)1=N(st);0), See Eqn. 3), and different color is used.theBoundary of the exp loredregions and step into the unknown, mitigating the short-sighted issuein count-based approaches.Following this simple criterion, we propose a novel algorithm BeBold and evaluate it on two verychallenging procedurally-generated (PG) environments: MiniGrid (Chevalier-Boisvert et al., 2018)and NetHack (K ̈uttler et al., 2020). 
MiniGrid is a popular benchmark for evaluating explorationalgorithms (Raileanu and Rockt ̈aschel, 2020; Campero et al., 2020; Goyal et al., 2019) and NetHackis a much more realistic environment with complex goals and skills. BeBold manages to solve the12 most challenging environments in MiniGrid within 120M environment steps, without curriculumlearning. In contrast, (Campero et al., 2020) solves 50% of the tasks, which were categorizedas “easy” and “medium”, by training a separate goal-generating teacher network in 500M steps.In NetHack, a more challenging procedurally-generated environment, BeBold also outperforms allbaselines with a significant margin on various tasks. In addition, we analyze BeBold extensivelyin MiniGrid. The quantitative results show that BeBold largely mitigates the detachment problem,with a much simpler design than Go-Explore (Ecoffet et al., 2020) which contains multiple hand-tune stages and hyper-parameters.Most Related Works . RIDE (Raileanu and Rockt ̈aschel, 2020) also combines multiple criteria to-gether. RIDE learns the state representation with curiosity-driven approaches, and then uses thedifference of learned representation along a trajectory as the reward, weighted by pseudo countsof the state. However, as a two-stage approach, RIDE heavily relies on the quality of generaliza-tion of the learned representation on novel states. As a result, BeBold shows substantially betterperformance in the same procedurally-generated environments.Go-Explore (Ecoffet et al., 2020) stores many visited states (including boundaries), reaches thesestates without exploration, and explores from them. BeBold focuses on boundaries, perform explo-ration without human-designed cell representation (e.g., image downsampling) and is end-to-end.Frontier-based exploration (Yamauchi, 1997; 1998; Topiwala et al., 2018) is used to help specificrobots explore the map by maximizing the information gain. The “frontier” is defined as the 2Dspatial regions out of the explored parts. No automatic policy optimization with deep models is used.In contrast, BeBold can be applied to more general partial observable MDPs with deep policies.2 B ACKGROUNDFollowing single agent Markov Decision Process (MDP), we define a state space S, an action spaceA, and a (non-deterministic) transition function T:SA!P(S)whereP(S)is the probabilityof next state given the current state and action. The goal is to maximize the expected reward R=E[PTk=0krt+k=1]wherertis the reward, is the discount factor, and the expectation is takenw.r.t. the policy and MDP transition P(S). In this paper, the total reward received at time step tisgiven byrt=ret+rit, whereretis the extrinsic reward given by the environment, ritis the intrinsicreward from the exploration criterion, and is a scaling hyperparameter.2Under review as a conference paper at ICLR 20213 E XPLORATION BEYOND THE BOUNDARYOur new exploration criterion combines both counting-based and state-diff-based criteria. Our crite-rion doesn’t suffer from short-sightedness and is asymptomatically consistent. We’ll first introduceBeBold and then analyse the advantages of BeBold over existing criteria in Sec. 4.Exploration Beyond the Boundary . BeBold gives intrinsic reward (IR) to the agent when it ex-plores beyond the boundary of explored regions, i.e., along a trajectory, the previous state sthasbeen sufficiently explored but st+1is new:ri(st;at;st+1) = max1N(st+1)1N(st);0; (1)HereNis the visitation counts. 
We clip the IR here because we don’t want to give a negative IR tothe agent if it transits back from a novel state to a familiar state. From the equation, only crossing thefrontier matters to the intrinsic reward; if both N(st)andN(st+1)are high or low, their differencewould be small. As we will show in Sec. 4, for each trajectory going towards the frontier/boundary,BeBold assigns an approximately equal IR, regardless of their length. As a result, the agent willcontinue pushing the frontier of exploration in a much more uniform manner than RND and won’tsuffer from short-sightedness. This motivates the agent to explore different trajectories uniformly.Also Eq. 1 is asymptotically consistent as ri!0whenN!1 .Like RIDE (Raileanu and Rockt ̈aschel, 2020), in our implementation, partial observation otare usedinstead of the real state st, when stis not available.Episodic Restriction on Intrinsic Reward (ERIR). In many environments where the state tran-sition is reversible, simply using intrinsic reward to guide exploration would result in the agentgoing back and forth between novel states st+1and their previous states st. RIDE (Raileanu andRockt ̈aschel, 2020) avoids this by scaling the intrinsic reward r(s)by the inverse of the state visita-tion counts. BeBold puts a more aggressive restriction: the agent is only rewarded when it visits thestate sfor the first time in an episode. Thus, the intrinsic reward of BeBold becomes:ri(st;at;st+1) = max1N(st+1)1N(st);01fNe(st+1) = 1g (2)Nehere stands for episodic state count and is reset every episode. In contrast, the visitation countNis a life-long memory bank counting state visitation across all of training.Inverse visitation counts as prediction difference. We use the difference between a teacher anda student network 0to approximate visitation counts: N(st+1)1jj(ot+1)0(ot+1)jj2, here ot+1is the observation of the agent in state st+1. This yields the following implementation of BeBold:ri(st;at;st+1) = max(jj(ot+1)0(ot+1)jj2jj(ot)0(ot)jj2;0)1fNe(ot+1) = 1g)(3)Shared visitation counts N(st)in the training of Procedurally-Generated (PG) Environments.During training, the environment changes constantly (e.g., blue keys becomes red), while the se-mantic links of these objects remain the same. We use a shared RND ( ,0) across different PGenvironments, and treat these semantically similar states as new without using domain knowledge(e.g., image downsampling like in Go-Explore (Ecoffet et al., 2019)). Partial observability and gen-eralization of neural network handles these differences and leads to count-sharing. For episodiccountNe(ot+1), since it is not shared across episodes (and environments), we use a hash table.4 C ONCEPTUAL ADVANTAGES OF BEBOLD OVER EXISTING CRITERIAShort-sightedness and Detachment . One issue in the count-based approach is its short-sightedness.Let’s assume in a simple environment, there are McorridorsfjgMj=1starting at s0and extendingto different parts of the environment. The corridor jhas a length of Tj. The agent starts at s0.For each visited state, the agent receives the reward of1N(s)whereN()is the visitation count, andlearns withQ-learning. Then with some calculation (See Appendix), we see that the agent has astrong preference on exploring the longest corridor first (say 1), and only after a long period doesit start to explore the second longest. 
This is because the agent initially receives high IR in 1due toits length, which makes the policy visit1more often, until it depletes the IR in 1.This behavior of “dedication” could lead to serious issues. If M3and 2 corridors are longenough (say 1and2are long), then before the agent is able to explore other corridors, its policy 3Under review as a conference paper at ICLR 2021has already been trained long enough so that it only remembers how to get into 1and2. When1has depleted its IR, the agent goes to 2following the policy. After that, the IR in 1revives since thevisitation counts in 1is now comparable or even smaller than 2, which lures the agent to explore1again following the policy. This leaves other corridors (e.g., 3) unexplored for a very long time.Note that using a neural-network-approximated IR (RND) instead of tabular IR could potentiallyalleviate this issue, but it is often far less than enough in complex environments.As mentioned in Go-Explore series (Ecoffet et al., 2019; 2020), count-based approaches also sufferfrom detachment : if the agent by chance starts exploring 2after briefly exploring the first fewstates of1, it would not return and explore 1further since 1is now “shorter” than 2and haslower IR than 2for a long period. Go-Explore tries to resolve this dilemma between “dedication”and “exploration” by using a two-stage approach with many hand-tuned parameters.In contrast, IR of BeBold depends on the difference of the visitation counts along the trajectory,and is insensitive to the length of the corridor. This leads to simultaneous exploration of multiplecorridors and yields a diverse policy (See Sec. 5.2 for empirical evidence). Moreover, the IRfocuses on the boundary between explored and unexplored regions, where the two goals (dedicationand exploration) align, yielding a much cleaner, one-stage method.Asymptotic Inconsistency. Approaches that define IR as the difference between state representa-tionsk (s) (s0)k( is a learned embedding network) (Zhang et al., 2019; Marino et al., 2019)suffer from asymptotic inconsistency. In other words, their IR does not vanish even after sufficientexploration: ri6!0whenN!1 . This is because when the embedding network converges aftersufficient exploration, the agent can always obtain non-zero IR if a major change in state represen-tation occurs (e.g., opening a door or picking up a key in MiniGrid). Therefore, the learned policydoes not maximize the extrinsic reward re, deviating from the goal of RL. Automatic curriculumapproaches (Campero et al., 2020)) have similar issues due to an ever-present IR.For this, (Zhang et al., 2019) proposes to learn a separate scheduler to switch between intrinsic andextrinsic rewards, and (Raileanu and Rockt ̈aschel, 2020) divides the state representation differenceby the square root of visitation counts. In comparison, BeBold does not require any extra stage andis a much simpler solution.5 E XPERIMENTSWe evaluate BeBold on challenging procedurally-generated environment MiniGrid (Chevalier-Boisvert et al., 2018) and the hard-exploration environment NetHack (K ̈uttler et al., 2020). Theseenvironments provide a good testbed for exploration in RL since the observations are symbolic ratherthan raw sensor input (e.g., visual input), which decouples perception from exploration. In MiniGrid,we compare BeBold with RND (Burda et al., 2018b), ICM (Pathak et al., 2017), RIDE (Raileanuand Rockt ̈aschel, 2020) and AMIGo (Campero et al., 2020). 
We only evaluate AMIGo for 120Msteps in our experiments, the algorithm obtains better results when trained for 500M steps as shownin (Campero et al., 2020). For all the other baselines, we follow the exact training paradigm from(Raileanu and Rockt ̈aschel, 2020). Mean and standard deviation across four runs of different seedsare computed. BeBold successfully solves the 12 most challenging environments provided by Min-iGrid. By contrast, all the baselines end up with zero reward on half of the environments we tested.In NetHack, BeBold also achieves SoTA results with a large margin over baselines.5.1 M INIGRIDENVIRONMENTSWe mainly use three challenging environments from MiniGird: Multi-Room (MR),Key Corridor(KC) and Obstructed Maze (OM). We use these abbreviations for the remaining of the paper (e.g.,OM2Dlh stands for ObstructedMaze2Dlh). Fig. 2 shows one example of a rendering on OMFull aswell as all the environments we tested with their relative difficulty.In MiniGrid, all the environments are size NN(Nis environment-specific) where each tilecontains an object: wall, door, key, ball, chest. The action space is defined as turn left, turn right,move forward, pick up an object, drop an object, and toggle an object (e.g., open or close a door).MR consists of a series of rooms connected by doors and the agent must open the door to get to thenext room. Success is achieved when the agent reaches the goal. In KC, the agent has to explore theenvironment to find the key and open the door along the way to achieve success. OM is the hardest:the doors are locked, the keys are hidden in boxes, and doors are obstructed by balls.4Under review as a conference paper at ICLR 2021AgentDoorObstructionGoalBoxMRN6MRN7S-8MRN12-S10KCS3R3KCS4R3KCS5R3KCS6R3OM2Dl-hOM2Dl-hbOM1QOM2QOMFULLICMRNDRIDEAMIGOBeBold*MR is short for MultiRoom, KC is for KeyCorridor, OM is for ObstructedMaze: Solved within 120M steps: UnsolvedFigure 2: MiniGrid Environments. Left: a procedurally-generated OMFull environment. Right: BeBoldsolves challenging tasks which previous approaches cannot solve. Note that we evaluate all methods for 120Msteps. AMIGo gets better results when trained for 500M steps as shown in (Campero et al., 2020).0.0 0.5 1.0 1.5 2.0Environment Steps 1e70.00.20.40.60.81.0Average ReturnEasy: MultiRoom-N60.0 0.5 1.0 1.5 2.0Environment Steps 1e70.00.20.40.60.81.0Average ReturnEasy: MultiRoom-N7-S80.0 0.5 1.0 1.5 2.0Environment Steps 1e70.00.20.40.60.81.0Average ReturnEasy: MultiRoom-N12-S100 1 2 3 4Environment Steps 1e70.00.20.40.60.81.0Average ReturnEasy: KeyCorridorS3R30 1 2 3 4Environment Steps 1e70.00.20.40.60.81.0Average ReturnMedium: KeyCorridorS4R30 1 2 3 4Environment Steps 1e70.00.20.40.60.81.0Average ReturnMedium: KeyCorridorS5R30 2 4 6Environment Steps 1e70.00.20.40.60.81.0Average ReturnMedium: KeyCorridorS6R30 1 2 3 4Environment Steps 1e70.00.20.40.60.81.0Average ReturnMedium: ObstructedMaze-2Dlh0 2 4 6Environment Steps 1e70.00.20.40.60.81.0Average ReturnHard: ObstructedMaze-2Dlhb0 2 4 6Environment Steps 1e70.00.20.40.60.81.0Average ReturnHard: ObstructedMaze-1Q0 2 4 6 8Environment Steps 1e70.00.20.40.60.81.0Average ReturnHard: ObstructedMaze-2Q0.0 0.5 1.0Environment Steps 1e80.00.20.40.60.81.0Average ReturnHard: ObstructedMaze-FullIMPALA ICM RND RIDE AMIGO BeBoldFigure 3: Results for various hard exploration environments from MiniGrid. BeBold successfully solves allthe environments while all other baselines only manage to solve two to three relatively easy ones.Results . 
We test BeBold on all environments from MiniGrid. BeBold manages to solve the 12 mostchallenging environments. By contrast, all baselines solve only up to medium-level tasks and fail tomake any progress on more difficult ones. Note that some medium-level tasks we define here arecategorized as hard tasks in RIDE and AMIGo (e.g., KCS4R3 is labeled as “KCHard” and KCS5R3is labeled as “KCHarder”). Fig. 3 shows the results of our experiments. Half of the environments(e.g.,KCS6R3 ,OM1Q ) are extremely hard and all the baselines fail. In contrast, BeBold easily solvesall such environments listed above without any curriculum learning. We also provide the final testingperformance for BeBold in Tab. 1. The results is averaged across 4 seeds and 32 random initializedenvironments.Multi Room environments are relatively easy in MiniGrid. However, all the baselines except RIDEfail. As we increase the room size and number (e.g., MRN12S10 ), BeBold can achieve the goalquicker than RIDE. Our method easily solves these environments within 20M environment steps.On Key Corridor environments, RND, AMIGo, RIDE, IMPALA and BeBold successfully solvesKCS3R3 while ICM makes reasonable progress. However, when we increase the room size (e.g.,KCS5R3 ), none of the baseline methods work. BeBold manages to solve these environments in 40Menvironment steps. The agent demonstrates the ability to explore the room and finds the correspond-ing key to open the door in a randomized, procedurally-generated environment.Obstructed Maze environments are also difficult. As shown in Fig. 3, RIDE and RND manage tosolve the easiest task OM2Dlh which doesn’t contain any obstructions. In contrast, BeBold not onlyrapidly solves OM2Dlh , but also solves four more challenging environments including OMFull .These environments have obstructions blocking the door (as shown in Fig. 2) and are much larger insize than OM2Dlh . In these environments, our agent learns to move the obstruction away from thedoor to open the door and enter the next room. This “skill” is hard to learn since there is no extrinsicreward assigned to moving the obstruction. 
However, learning the skill is critical to achieve the goal.5Under review as a conference paper at ICLR 2021Table 1: Final testing performance for BeBold and all baselines.MRN6 MRN7S8 MRN12S10 KCS3R3 KCS4R3 KCS5R3ICM 0.000.0 0.000.0 0.000.0 0.450.052 0.000.0 0.000.0RIDE 0.650.005 0.670.001 0.650.002 0.910.003 0.930.002 0.000.0RND 0.000.0 0.000.0 0.000.0 0.910.003 0.000.0 0.000.0IMPALA 0.000.0 0.000.0 0.000.0 0.910.004 0.000.0 0.000.0AMIGO 0.000.0 0.000.0 0.000.0 0.890.005 0.000.0 0.000.0BeBold 0.640.003 0.670.001 0.650.002 0.920.003 0.930.003 0.940.001KCS6R3 OM2Dlh OM2Dlhb OM1Q OM2Q OMFULLICM 0.000.0 0.000.0 0.000.0 0.000.0 0.000.0 0.000.0RIDE 0.000.0 0.950.015 0.000.0 0.000.0 0.000.0 0.000.0RND 0.000.0 0.950.0066 0.000.0 0.000.0 0.000.0 0.000.0IMPALA 0.000.0 0.000.0 0.000.0 0.000.0 0.000.0 0.000.0AMIGO 0.000.0 0.000.0 0.000.0 0.000.0 0.000.0 0.000.0BeBold 0.940.017 0.960.005 0.890.063 0.880.067 0.930.028 0.960.058259KMultiRoomN7S8Explore All Rooms!BeBold1.7M2.8M4.6M160K2.5M...9.8MEnvironment StepsRNDStuck in the 5thRoom!Figure 4: Normalized visitation counts N(st)=Z(Zis a normalization constant) for the location of agents.BeBold successfully explores all rooms at 4:6M steps while RND gets stuck in the fifth room at 9:8M steps.5.2 A NALYSIS OF INTRINSIC REWARD USING PURE EXPLORATIONWe analyze how BeBold mitigates the short-sightedness issue by only using IR to guide exploration.Shorted-Sighted Problem in Long-Corridor Environment. To verify Sec. 4, we design a toyenvironment with four disconnected corridors with length 40, 10, 30, and 10 respectively startingfrom the same state. In this example, there is no extrinsic reward and the exploration of the agentis guided by IR. We combine Q-learning with tabular IR (count-based and BeBold tabular) andneural-network-approximated IR (RND and BeBold) respectively for this experiment. We removeclipping from BeBold for a fair comparison. Tab. 2 shows the visitation counts across 4 runs w.r.t.each corridor after 600 episodes of training. It is clear that BeBold tabular explores each corridorin a much more uniform manner. On the other hand, count-based approaches are greatly affectedby short-sightedness and focus only on two out of four corridors. BeBold also shows much morestable performance across runs as the standard deviation is much lower comparing with RND. Notethat comparing to the analysis in Sec. 4, in practical experiments, we perceive that the preference ofthe corridors for count-based methods can be arbitrary because of the random initialization of theQ-network.Visitation Counts Analysis in MiniGrid. To study how different intrinsic rewards can affect theexploration of the agent, we test BeBold and RND in a fixed (instead of procedurally-generated forsimplicity) MRN7S8 environment. The environment contains 7 rooms connected by doors. To be asuccessful exploration strategy, the agent should explore all states and give all states equal amountof exploration. We define two metrics to measure the effectiveness of an exploration strategy: (1)visitation counts at every state over training N(s), and (2) entropy of visitation counts in each room :H(0(s))where0(s) =N(s)Ps2SrN(s). We do not calculate entropy across all states, because theagent always starts in the first room and may not visit the last one as frequently. As a result, thevisitation counts for states in the first room will be several magnitudes larger than the last room.Fig. 4 shows the heatmap of normalizd visitation counts N(st)=Z, whereZis the normalizationconstant. 
At first, RND enters the second room faster than BeBold. However, BeBold consistentlymakes progress by pushing the frontier of exploration and discovers all the rooms in 5M steps, whileRND gets stuck in the fifth room even trained with 10M steps.In Tab. 3, the entropy of distribution in each room H(0(s))for BeBold is larger than that of RND.This suggests that BeBold encourages the agent to explore the state in a much more uniform manner.6Under review as a conference paper at ICLR 2021Table 2: Visitation counts for the toy corridor environmentafter 3K episodes. BeBold explores corridors more uniformlythan count-based approaches.C1 C2 C3 C4 EntropyLength 40 10 30 10 –Count-Based 66K 28K 8K 8K 23K 35K 13K 18K 1.06 0.39BeBold Tabular 26K 2K 28K 8K 25K 6K 29K 9K 1.970.02RND 0.2K 0.2K 70K 53K 0.2K 0.07K 26K 44K 0.24 0.28BeBold 27K 6K 23K 3K 31K 12K 26K 8K 1.960.05Table 3: Entropy of the visitation counts ofeach room. Such state distribution of Be-Bold is much more uniform than RND.0.2M 0.5M 2.0M 5.0MRoom1 3.48 / 3.54 3.41 / 3.53 3.51 / 3.56 3.49 / 3.56Room2 2.87 / 3.09 / 3.23 3.51 / 3.53 3.35 / 3.56Room3 / / /4.02 3.42 / 4.01Room4 / / /2.74 2.85 / 2.87Results are presented in the order of “RND / BeBold”.1.5M Steps3.1M StepsRNDBeBold6.4M Steps4.6M Steps7.5M Steps9.8M Steps1.0M Steps1.4M Steps3.4M Steps2.4M Steps3.9M Steps4.8M StepsFigure 5: IR heatmaps for the location of agents. BeBold mitigates the short-sighted problem.0 1 2 3 4Environment Steps 1e70.00.20.40.60.81.0Average ReturnKeyCorridorS4R30 1 2 3 4Environment Steps 1e70.00.20.40.60.81.0Average ReturnKeyCorridorS5R30.0 0.5 1.0 1.5 2.0Environment Steps 1e70.00.20.40.60.81.0Average ReturnMultiRoom-N7-S80.0 0.5 1.0 1.5 2.0Environment Steps 1e70.00.20.40.60.81.0Average ReturnMultiRoom-N12-S10RND RND with ERIR BeBold w.o. ERIR BeBold w.o. Clipping BeBoldFigure 6: Ablation Study on BeBold comparing with RND with episodic intrinsic reward. BeBold significantlyoutperforms RND with episodic intrinsic reward on all the environments.Intrinsic Reward Heatmap Analysis in MiniGrid . We also plot the heatmap of IR at differenttraining steps. We generate the plot by running the policy from different checkpoints for 2K stepsand plot the IR associated with each state in the trajectory. States that do not receive IR from thesampled trajectories are left blank. For BeBold, the IR is computed as the difference of inversevisitation counts (approximated by RND) between consecutive states standst+1in the trajectory.From Fig. 5, we can see that BeBold doesn’t suffer from short-sighted problem, as we can clearlysee that the areas with high IRs are continuously pushed forward from room1 to room7. This istrue of the whole training process. On the contrary, the IR heatmap for RND bounces betweentwo consecutive rooms. This is due to short-sightedness: when exploring the second room, the IRof that room will significantly decrease. Since (a) the policy has assigned the first room non-zeroprobability and (b) the first room now has lower vistation count, RND will revisit the first room.This continues indefinitely, as the agent oscillates between the two rooms.5.3 A BLATION STUDYEpisodic Restriction on Intrinsic Reward . We analyze the importance of each component inBeBold. To illustrate the importance of exploring beyond the boundary, we compare BeBold with aslightly modified RND: RND with ERIR. We only give RND intrinsic reward when it visits a newstate for the first time in an episode. We can see in Fig. 
6 that although ERIR helps RND to solveKCS4R3 andMRN7S8 , without BeBold, the method still fails on more challenging tasks KCS5R3andMRN12S10 . A symmetric experiment of removing ERIR from BeBold is also conducted.Clipping in Beyond the Boundary Exploration . We also study the role of clipping max(;0)in ourmethod. Fig. 6 shows the effect of removing clipping from BeBold. We conclude it is suboptimal todesign an IR that incurs negative reward when transitioning from an unfamiliar to a familiar state.5.4 T HENETHACK LEARNING ENVIRONMENTTo evaluate BeBold on a more challenging and realistic environment, we choose the NetHack Learn-ing Environment (K ̈uttler et al., 2020). In the game, the player is assigned a role of hero at thebeginning of the game. The player needs to descend over 50 procedurally-generated levels to thebottom and find “Amulet of Yendor” in the dungeon. The procedure can be described as first retrieve7Under review as a conference paper at ICLR 20210.0 0.2 0.4 0.6 0.8 1.0Environment Steps 1e90255075100Average Returnstaircase0.0 0.2 0.4 0.6 0.8 1.0Environment Steps 1e9020406080Average Returnpet0.0 0.2 0.4 0.6 0.8 1.0Environment Steps 1e9101Average Returngold0.0 0.2 0.4 0.6 0.8 1.0Environment Steps 1e90200400600800Average Returnscore0.0 0.2 0.4 0.6 0.8 1.0Environment Steps 1e90500100015002000Average Returnscout0.0 0.2 0.4 0.6 0.8 1.0Environment Steps 1e91.51.00.50.0Average ReturnoracleIMPALA RND BeBoldFigure 7: Results for tasks on NetHack. BeBold achieves the SoTA results comparing to RND and IMPALA.the amulet, then escape the dungeon and finally unlock five challenging levels (the four ElementalPlanes and the Astral Plane). We test BeBold on a set of tasks with tractable subgoals in the game:Staircase : navigating to a staircase to the next level, Pet: reaching a staircase and keeping the petalive, Gold : collecting gold, Score : maximizing the score in the game, Scout : scouting to exploreunseen areas in the environment, and Oracle : finding the oracle (an in-game character at level 5-9).Results in Fig. 7 show that BeBold surpasses RND and IMPALA on all tasks1. Especially on Pet,BeBold outperforms the other two with a huge margin. This again illustrates the strong performanceof BeBold in an environment with huge action spaces and long-horizon reward (several magnitudeslonger than StarCraft and Dota2). Oracle is the hardest task and no approaches are able to find theoracle and obtain a reward of 1000. BeBold still manages to find a policy with less negative rewards(i.e., penalty of taking actions that do not lead to game advancement, like moving towards a wall).For RIDE, we contacted the authors, who confirmed their method is not functional on NetHack. Wefurther attempted to tune RIDE but to no avail. So we do not report RIDE performance on NetHack.6 R ELATED WORKIn addition to the two criteria ( count -based and state-diff based) mentioned above, another streamof defining IRs is curiosity -based. The main idea is to encourage agents to explore areas wherethe prediction of the next state from the current learned dynamical model is wrong. Dynamic-AE (Stadie et al., 2015) computes the distance between the predicted and the real state on the outputof an autoencoder, ICM (Pathak et al., 2017) learns the state representation through a forward andinverse model and EMI (Kim et al., 2018) computes the representation through maximizng mutualinformationI([s;a];s0)andI([s;s0];a).Another line of research is using information gain to reward the agent. 
VIME (Houthooft et al.,2016) uses a Bayesian network to measure the uncertainty of the learned model. Later, to reducecomputation, a deterministic approach has been adopted (Achiam and Sastry, 2017). Other worksalso propose to use ensemble of networks for measuring uncertainty (Pathak et al., 2019; Shyamet al., 2019). We can also reward the agent by Empowerment (Klyubin et al., 2005; Gregor et al.,2016; Salge et al., 2014; Mohamed and Rezende, 2015), prioritizing the states that agent can takecontrol through its actions. It is different from state-diff: if st+1differs from stbutnotdue to agent’schoice of actions, then the empowerment at stis zero. Other criteria exist, e.g., diversity (Eysenbachet al., 2018), feature control (Jaderberg et al., 2016; Dilokthanakul et al., 2019) or the KL divergencebetween current distribution over states and a target distribution of states (Lee et al., 2019).Outside of intrinsic reward, researchers have proposed to use randomized value functions to encour-age exploration (Osband et al., 2016; Hessel et al., 2017; Osband et al., 2019). Adding noise to thenetwork is also shown to be effective (Fortunato et al., 2017; Plappert et al., 2017). There has alsobeen effort putting to either explicitly or implicitly separate exploration and exploitation (Colas et al.,2018; Forestier et al., 2017; Levine et al., 2016). The recently proposed Go-Explore series (Ecoffetet al., 2019; 2020) also fall in this category. We might also set up different goals for exploration (Guoand Brunskill, 2019; Oh et al., 2018; Andrychowicz et al., 2017).Curriculum learning (Bengio et al., 2009) has also been used to solve hard exploration environments.The curriculum can be explicitly generated by: searching the space (Schmidhuber, 2013), teacher-1ForGold task, there is a small fix introduced to the environment recently. We benchmark all methodsbefore the fix and will update the results.8Under review as a conference paper at ICLR 2021RNDRNDFigure 8: Results for CNN-based and RNN-based model on MonteZuma’s Revenge. BeBold achieves goodperformance.student setting (Matiisen et al., 2019), increasing distance between the starting point and goal (Jabriet al., 2019) or using a density model to generate a task distribution for the meta learner (Florensaet al., 2017). Our work can also be viewed as an implicit curriculum learning as gradually encouragesthe agent to expand the area of exploration. However, it never explicitly generates curriculum.7 M ONTE ZUMA ’SREVENGEWe also provide initial result of BeBold on MonteZuma’s Revenge. We same paradigm as RND andthe same set of hyperparameters except we use 128 parallel environments. In Fig. 8, we can see thatusing CNN-based model, BeBold achieves approximately 10000 external reward after two Billionframes while the performance reported in RND (Burda et al., 2018b) is around 6700. When using aRNN-baed model, BeBold reached around 13000 external reward in 100K updates while RND onlyachieves 4400. Please note that these are only initial result and we’ll provide the comparison withRND and average return across multiple seeds in the future.8 L IMITATIONS AND FUTURE WORKNoisy TV Problem . One of the limitations on BeBold is the well-known Noisy TV problem. Theproblem was raised by (Pathak et al., 2017): the agent trained using a count-based IR will getattracted to local sources of entropy in the environment. Thus, it will get high IR due to the random-ness in the environment even without making any movements. 
BeBold suffers from this problem as well, since the difference between consecutive states can be caused by stochasticity in the environment. That could be the reason why BeBold does not achieve good performance on stochastic tasks (e.g., Dynamic-Obstacles-5x5). We leave this problem to future research.

Hash Table for ERIR. The ERIR in BeBold adopts a hash table for the episodic visitation count. This could become a problem when applying BeBold in a continuous-space environment (e.g., some robotics tasks). One simple solution is to discretize the space and still use a hash table for counting; a sketch of this counting scheme is given below. We leave a more general and elegant fix for this problem to future work.
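To make the counting scheme concrete, the following is a minimal sketch (our illustration, not the authors' released code) of an episodic hash-table count combined with the clipped difference of inverse visitation counts described in this paper. All names are hypothetical, and the reward shape follows our reading of the method.

```python
from collections import defaultdict

class BeBoldBonus:
    """Sketch of a BeBold-style intrinsic reward with episodic restriction (ERIR).

    Assumes discrete, hashable observations; continuous observations would first
    need to be discretized, as discussed above.
    """

    def __init__(self):
        self.lifelong_count = defaultdict(int)   # N(s), accumulated over all episodes
        self.episodic_count = defaultdict(int)   # N_e(s), reset at every episode start

    def new_episode(self):
        self.episodic_count.clear()

    def intrinsic_reward(self, s, s_next):
        self.lifelong_count[s_next] += 1
        self.episodic_count[s_next] += 1
        # Clipped difference of inverse visitation counts: rewards crossing the
        # boundary from a familiar to an unfamiliar state, never the reverse.
        diff = 1.0 / self.lifelong_count[s_next] - 1.0 / max(self.lifelong_count[s], 1)
        # Episodic restriction: only the first visit within this episode pays out.
        first_visit = 1.0 if self.episodic_count[s_next] == 1 else 0.0
        return max(diff, 0.0) * first_visit
```

For continuous observations, the keys passed to this table would be discretized states (e.g., rounded coordinates), as suggested in the paragraph above.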
9 CONCLUSION

In this work, we propose a new criterion for intrinsic reward (IR) that encourages exploration beyond the boundary of explored regions, using the regulated difference of inverse visitation counts along a trajectory. Based on this criterion, the proposed algorithm BeBold successfully solves 12 of the most challenging tasks in the procedurally-generated MiniGrid environment. This is a significant improvement over the previous SoTA, overcoming the short-sightedness issues that plague count-based exploration. We also evaluate BeBold on NetHack, a much more challenging environment. BeBold outperforms all the baselines by a significant margin. In summary, this simple criterion and the ensuing algorithm demonstrate effectiveness in solving the sparse-reward problem in reinforcement learning (RL), opening up new opportunities for many real-world applications.<|im_end|> <|im_start|>assistant ### Review Title This paper is overall well-written, motivated and empirically-supported. ### Review Text ------------------------------------ **Summary:** This paper proposes BeBold, a new definition of intrinsic reward to guide exploration in sparse reward problems. This intrinsic reward combines the ideas behind count-based approaches and state-diff approaches. They demonstrate the success of BeBold by comparing their algorithm to a set of state-of-the-art exploration methods using intrinsic rewards on a set of tasks from the MiniGrid and NetHack environments. ------------------------------------ **Strong points:** Overall, I believe this is a very good paper. The idea is simple, and the motivations and intuitions are well explained. The empirical study is carefully designed and involves relevant baselines and environments. The paper is also well written. I will list the main strong points: * The problem of exploration in sparse environments is an important problem in the field of RL. Many exploration problems have been defined (detachment, short-sightedness, etc.). This paper presents these problems and proposes to improve on some of them. * The paper is clearly written and well organized. * The related work is relevant and seems complete. * The experiments are overall well designed. MiniGrid and NetHack are two types of benchmarks adapted to the study of exploration algorithms and the baselines are relevant. I like that the authors conducted ablation studies and additional experiments to study specific aspects of their strategy (visitation counts analysis and the study of the short-sighted problem). * The new method seems to significantly outperform previous ones without being overly complicated. The empirical evidence supports the claim (although I have a problem with the use of "solve" here, see below). ------------------------------------ **Weak points:** I will now list a few weak points of the paper. * The paper lists various exploration problems (noisy TV problem, detachment, derailment, short-sightedness and asymptotic inconsistency). BeBold is said to solve detachment, short-sightedness and asymptotic inconsistency. Derailment and the noisy TV problem are neither defined nor discussed. I believe BeBold does not solve the noisy TV problem; it should be discussed. * I am not sure I get the intuition behind the clipping of the reward function. The authors write: "we do not want to give a negative IR to the agent if it transits back from a novel state to a familiar state", but do not really explain why. Can you make this explicit? * Episodic restriction seems to restrict the use of BeBold to environments with discrete countable states (use of hash table). Do you see a way around that problem? * I think the main problem is the total absence of discussion w.r.t. the potential limits of BeBold (there is no discussion section). In particular, the authors could discuss the aspects mentioned above: the limitation to discrete states, the noisy TV and derailment problems. * Comparison to Go-Explore: Talking about BeBold, the paper says: "without using domain knowledge (e.g., image downsampling like in Go-Explore (Ecoffet et al., 2019)). It is true that BeBold does not use image downsampling, but it does not use images at all, which is probably the explanation. Go-Explore and RND can be applied to image-based environments like Montezuma's Revenge. To do so, Go-Explore indeed requires downsampling. However, I believe that if BeBold were to be applied to such environments, it would also need the downsampling trick (or equivalent) to be able to apply episodic restriction. * 4 seeds is a very small number. I know that very few works present much more than this, but I think it is bad practice. Please consider adding more (>=10). I admit that in your case, the trend is quite clear as competing algorithms show null performance most of the time. * I think the repeated use of the term "solved" is quite misleading. The traditional definition of "to solve" is probably "to reach maximum performance". This is definitely not the case in several environments. Please consider either defining precisely what you mean by "solved", or not using that term. * There is no codebase in the supplementary material. Do you plan on releasing it? Please consider doing so. I believe most of these weak points can be solved during the rebuttal. ------------------------------------ **Recommendation and justification:** I think this paper should be accepted for the reasons listed in the Strong Points section. I'm giving a score of 7. I'd be happy to raise this score to an 8 provided that the authors add a discussion section to discuss the points mentioned above. ------------------------------------ **Feedback to improve the paper (not part of assessment):** * "in which the agent gets trapped into one (long) corridor and fails to try other choices." → not very clear at this point, although it's explained later. * The episodic restriction seems to require a policy that has memory. Otherwise the reward would not be Markovian anymore. Maybe this should be said somewhere? * Short-sightedness paragraph: "but it is often far less than enough in a complex environment": vague and not substantiated. * What do the environment codes stand for? S, R, Dl-h, Dl-hb, Q?
* Results could be a little bit bigger; maybe switch the colors so that the two leading algorithms do not have almost identical colors (pink and red)? * In tables, what does bold indicate? Is it only the best value? Does it assert statistical significance (if yes, which test at which confidence level)? * I believe the paper does not mention which RL algorithm is used. This seems like an important detail to provide. **Typos:** * "In contrast, (Campero et al. 2020) solves 50% of the tasks…" → remove parentheses. Same in the last paragraph before Sec. 5 "For this, (Zhang et al, 2019) ... (Raileanu)...". Other examples throughout. * Reward definition in Sec. 2: not well defined; we don't know what t, k are or what the =1 refers to. * Do not use contractions "does'nt", "don't", "won't", "let's" etc. * Sec. 3, Episodic Restric. paragraph: "back and force" → "back and forth" * "the observation to the agent" → "the observation of the agent" ? * Table 2 caption: "th" → "the". * Second to the last paragraph before the Intrinsic Reward Heatmap paragraph: "discover" → "discovers". * "BeBold doesn't suffer from short-sighted problem" → "BeBold does not suffer from short-sightedness" or "the short-sighted problem". * "We only give RND intrinsics" → is "intrinsics" a word? **Post-rebuttal update** The authors addressed most of my concerns during the rebuttal, added relevant discussions, experiments on Montezuma and clarified the reported evaluation metrics. I raise the score from 7 to 8. ### Review Rating 8: Top 50% of accepted papers, clear accept ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
8YFhXYe1Ps
ICLR.cc/2021/Conference
2021
Interpretability Through Invertibility: A Deep Convolutional Network With Ideal Counterfactuals And Isosurfaces
["Leon Sixt", "Martin Schuessler", "Philipp Wei\u00df", "Tim Landgraf"]
Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model’s decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier’s input, we can also create “isofactuals”– image interpolations with the same outcome but visually meaningful different features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision.
["Interpretable Machine Learning", "Counterfactuals", "Computer Vision", "Human Evaluation", "User Study"]
ABSTRACT

Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model's decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier's input, we can also create "isofactuals" – image interpolations with the same outcome but visually meaningful different features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. For code: https://anonymous.4open.science/r/ae263acc-aad1-42f8-a639-aec20ff31fc3/

1 INTRODUCTION

The lack of interpretability is a significant obstacle for adopting Deep Learning in practice. As deep convolutional neural networks (CNNs) can fail in unforeseeable ways, are susceptible to adversarial perturbations, and may reinforce harmful biases, companies rightly refrain from automating high-risk applications without understanding the underlying algorithms and the patterns used by the model.

Interpretable Machine Learning aims to discover insights into how the model makes its predictions. For image classification with CNNs, a common explanation technique is saliency maps, which estimate the importance of individual image areas for a given output. The underlying assumption, that users studying local explanations can obtain a global understanding of the model (Ribeiro et al., 2016), was, however, refuted. Several user studies demonstrated that saliency explanations did not significantly improve users' task performance, trust calibration, or model understanding (Kaur et al., 2020; Adebayo et al., 2020; Alqaraawi et al., 2020; Chu et al., 2020). Alqaraawi et al. (2020) attributed these shortcomings to the inability to highlight global image features or absent ones, making it difficult to provide counterfactual evidence. Even worse, many saliency methods fail to represent the model's behavior faithfully (Sixt et al., 2020; Adebayo et al., 2018; Nie et al., 2018). While no commonly agreed definition of faithfulness exists, it is often characterized by describing what an unfaithful explanation is (Jacovi & Goldberg, 2020): for example, if the method fails to create the same explanations for identically behaving models.

To ensure faithfulness, previous works have proposed building networks with interpretable components (e.g. ProtoPNet (Chen et al., 2018) or Brendel & Bethge (2018)) or mapping network activations to human-defined concepts (e.g. TCAV (Kim et al., 2018)). However, the interpretable network components mostly rely on fixed-size patches, and concepts have to be defined a priori. Here, we argue that explanations should neither be limited to patches nor rely on a priori knowledge. Instead, users should discover hypotheses in the input space themselves with faithful counterfactuals that are ideal, i.e.
samples that exhibit changes that directly and exclusively correspond to changes in the network's prediction (Wachter et al., 2018). We can guarantee this property by combining an invertible deep neural network z = φ(x) with a linear classifier y = wᵀφ(x) + b. This yields three major advantages: 1) the model is powerful (it can approximate any function (Zhang et al., 2019)), 2) the weight vector w of the classifier directly and faithfully encodes the feature importance of a target class y in the z feature space, and 3) human-interpretable explanations can be obtained by simply inverting explanations for the linear classifier back to input space.

As a local explanation for one sample x, we generate ideal counterfactuals by altering its feature representation z along the direction of the weight vector: z̃ = z + αw. The logit score can be manipulated directly via α. Inverting z̃ back to input space results in a human-understandable counterfactual x̃ = φ⁻¹(z + αw). Any change orthogonal to w will create an "isofactual", a sample that looks different but results in the same prediction. While many vectors are orthogonal to w, we find the directions that explain the highest variance of the features z using PCA. As the principal components explain all variance of the features, they can be used to summarize the model's behavior globally.

We demonstrate the usefulness of our method on a broad range of evaluations. We compared our approach to gradient-based saliency methods and find that gradient-based counterfactuals are not ideal, as they also change irrelevant features. We evaluated our method on three datasets, which allowed us to create hypotheses about potential biases in all three. After statistical evaluation, we confirmed that these biases existed. Finally, we evaluated our method's utility against a strong baseline of example-based explanations in an online user study. We confirmed that participants could identify the patterns relevant to the model's output and reject irrelevant ones. This work demonstrates that invertible neural networks provide interpretability that conceptually stands out against the more commonly used alternatives.

2 METHOD

Throughout this work, we rely on the following definitions, which are based on Wachter et al. (2018):

Definition 2.1 (Counterfactual Example). Given a data point x and its prediction y, a counterfactual example is an alteration of x, defined as x̃ = x + Δx, with an altered prediction ỹ = y + Δy where Δy ≠ 0. Samples x̃ with Δy = 0 are designated "isofactuals".

Almost any Δx will match the counterfactual definition, including those that additionally change aspects which are unrelated to the model's prediction, e.g. removing an object but also changing the background's color. It is desirable to isolate the change most informative about a prediction:

Definition 2.2 (Ideal Counterfactual). Given a set of unrelated properties Ψ(x) = {ψᵢ(x)}, a sample x̃ is called an ideal counterfactual of x if all unrelated properties ψᵢ remain the same.

The following paragraphs describe how we generate explanations using an invertible neural network φ: Rⁿ → Rⁿ. The forward function φ maps a data point x to a feature vector z = φ(x). Since φ is invertible, one can regain x by applying the inverse x = φ⁻¹(z). We used the features z to train a binary classifier f(x) = wᵀz + b that predicts the label y. In addition to the supervised loss, we also trained φ as a generative model (Dinh et al., 2016; 2015) to ensure that the inverted samples are human-understandable.
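As an illustration of this setup, here is a minimal sketch assuming a `flow` object that exposes an exact `forward`/`inverse` pair; the class and attribute names are our own and not from the paper's code.

```python
import torch

class InvertibleClassifier:
    """z = phi(x); y = w^T z + b; x = phi^(-1)(z). A sketch, not the authors' implementation."""

    def __init__(self, flow, n_features):
        self.flow = flow                     # assumed API: flow.forward(x), flow.inverse(z)
        self.w = torch.randn(n_features)     # linear classifier weight (trained jointly)
        self.b = torch.zeros(())

    def features(self, x):
        return self.flow.forward(x)          # z = phi(x)

    def logit(self, x):
        return self.features(x) @ self.w + self.b

    def invert(self, z):
        return self.flow.inverse(z)          # x = phi^(-1)(z)
```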
Counterfactuals. To create a counterfactual example x̃ for a datapoint x, we can exploit that w encodes feature importance in the z-space directly. To change the logit score of the classifier, we simply add the weight vector to the features z and then invert the result back to the input space: x̃ = φ⁻¹(z + αw). Hence, for any sample x, we can create counterfactuals x̃ with an arbitrary change in logit value Δy = αwᵀw by choosing α accordingly. Figure 1a shows several such examples. Since the generation (φ⁻¹) and prediction (φ) are performed by the same model, we know that x̃ will correspond exactly to the logit offset αwᵀw. Consequently, x̃ is a faithful explanation.

To show that our counterfactuals are ideal, we have to verify that no property unrelated to the prediction is changed. For such a property ψ(x) = vᵀz, v has to be orthogonal to w.[1] As the unrelated property does not change for the counterfactual, ψ(x̃) = vᵀ(z + αw) = vᵀz = ψ(x), we know that x̃ = φ⁻¹(z + αw) is indeed an ideal counterfactual.

[Footnote 1: ψ(x) could actually be non-linear in the features z as long as the gradient ∂ψ/∂z is orthogonal to w.]

PCA Isosurface. Since users can only study a limited number of examples, it is desirable to choose samples that summarize the model's behavior well (Ribeiro et al., 2016; Alqaraawi et al., 2020). For counterfactual explanations, the change Δx may vary significantly per example, as φ(x) is a non-linear function. As each x has a unique representation z in the feature space, we want to find examples describing the different directions of the feature distribution. To isolate the effect of w, such examples would have the same prediction and only vary in features unrelated to the prediction. We implement this by first removing the variation along w using a simple projection z⊥ = z − (wᵀz / wᵀw)·w and then applying PCA on z⊥. The resulting principal components e₁ ... e_m are orthogonal to w, except for the last principal component e_m, which has zero variance and can therefore be discarded. The principal components span a hyperplane αw + Σ_{i=1..m−1} βᵢeᵢ. Since all samples on this hyperplane have the same prediction (a logit value of αwᵀw), it is an isosurface.

As a principal component eᵢ is a vector in the z-space, we can create counterfactuals for it, φ⁻¹(eᵢ + αw), and understand how the changes of adding αw differ per location in the z-space. The e₁, ..., e_{m−1} are sorted by the explained variance, allowing to prioritize the most relevant changes in the data. As the principal components cover the whole feature distribution, understanding the effect of w on them allows forming a global understanding of the model's behavior.

Saliency maps. Saliency maps are supposed to draw attention to the features most relevant to a prediction. In our case, it is most reasonable to highlight the difference between x and the counterfactual x̃. We can measure the difference also in an intermediate feature map h. The saliency map of an intermediate layer can be resized to fit the input's resolution, as information remains local in convolutional networks. Per feature map location (i, j), we calculate the similarity measure m(i,j) = |Δh_{ij}| · cos∠(h_{ij}, Δh_{ij}). The sign of the saliency map m depends on the alignment of the change Δh_{ij} with the feature vector h_{ij}, i.e. whether cos∠(h_{ij}, Δh_{ij}) > 0. The magnitude is dominated by the length of the change |Δh_{ij}|. Figure 1b presents saliency maps for the CelebA Attractive label.
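A hedged sketch of the two operations just described, reusing the hypothetical interface from above: the counterfactual x̃ = φ⁻¹(z + αw) and the cosine-similarity saliency measure.

```python
import torch

def counterfactual(model, x, alpha):
    """x_tilde = phi^(-1)(z + alpha*w); the logit changes by exactly alpha * w^T w."""
    z = model.features(x)
    return model.invert(z + alpha * model.w)

def saliency(h, h_cf, eps=1e-8):
    """m(i,j) = |dh_ij| * cos(angle(h_ij, dh_ij)) on a (C, H, W) feature map h,
    where h_cf is the same layer's activation for the counterfactual."""
    dh = h_cf - h
    cos = (h * dh).sum(dim=0) / (h.norm(dim=0) * dh.norm(dim=0) + eps)
    return dh.norm(dim=0) * cos
```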
Model. Our invertible network follows the Glow architecture (Kingma & Dhariwal, 2018). The network is trained to map the data distribution to a standard normal distribution. We reduce the input dimensionality of (3, 128, 128) down to (786) by fading half of the channels out with each downsampling step. When generating a counterfactual, we reuse the z values faded out from the lower layers, as they correspond to small details and noise. We have 7 downsampling steps and 351 flow layers. The network has 158,769,600 parameters in total. An important design decision is that the final layer's output is not the input to the linear classifier: the PCA would fail to discover meaningful directions, as the N(0, I) prior induces equal variance in all directions. The classifier uses the output of layer 321. The layers 322-351 are optimized using the standard unsupervised flow objective. For the first 321 layers, we also train on the classifier's supervised loss (for details see Appendix A.1).

3 EVALUATION

We evaluated the ability to construct hypotheses about the model's behavior on three datasets and with a user study. We focused on these aspects as our method is faithful by construction, needing no empirical confirmation. Instead, we use the strong faithfulness guarantees of our model to evaluate gradient-based attribution methods.

3.1 HYPOTHESIS DISCOVERY

CelebA. A claimed utility of our method is that it allows users to discover hypotheses about the features the model uses for prediction. We choose CelebA (Liu et al., 2015), a popular face dataset, because it is a challenging dataset for feature attribution: how can an abstract concept such as attractiveness be linked to pixels? Additionally, it already contains annotations (e.g. make-up, accessories, hair), which makes it easier for us to accept or reject a given hypothesis about feature importance. We especially focus on the Attractive class, as it is unclear what the relevant features are. The CelebA dataset in general, and the class attractive in particular, are ethically questionable. How can a subjective label, which depends on individual or even cultural preferences, be reduced to a binary label? Unfortunately, Liu et al. (2015) did not state the annotation process (which is considered good practice (Gebru et al., 2020; Geiger et al., 2020)). Furthermore, the dataset was criticized for lacking diversity (Kärkkäinen & Joo, 2019).

[Figure 1: (a) We generate counterfactual images by moving along the direction of the classifier weights w of the attractive class and inverting back to the input; rows correspond to logit values 8.0, 4.0, 0.0, −4.0, −8.0. The last row shows the saliency maps from the center row (logit y = 0) to the top row (y = 8). Blue marks features changed in a different direction and red marks features being enhanced. (b) We extract principal components (columns 1L/1R through 8L/8R, i.e. −eᵢ and +eᵢ) orthogonal to the classifier weight w. All images in a row have the exact same logit score, given on the left. The saliency maps show the change between the bottom (y = −8) and top (y = 8).]

Figure 1b shows the first 8 principal components at different logit values. We base our investigation on them, as they cover the feature distribution well by construction.
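Before turning to the findings, here is a sketch of how such an isosurface grid could be produced from the definitions above; this is our illustration with hypothetical names, assuming the model interface introduced earlier and assumed β values.

```python
import torch

def isosurface_grid(model, X, alpha, n_components=8, betas=(-2.0, -1.0, 1.0, 2.0)):
    """Project w out of the features, run PCA on the remainder, and decode points
    on the hyperplane alpha*w + beta*e_i. Every returned image shares the same
    logit, alpha * w^T w."""
    w = model.w
    Z = torch.stack([model.features(x) for x in X])            # (N, d) feature matrix
    Z_perp = Z - torch.outer(Z @ w, w) / (w @ w)               # remove variation along w
    Z_perp = Z_perp - Z_perp.mean(dim=0)
    # rows of Vt are the principal components e_1 ... e_m, sorted by variance
    _, _, Vt = torch.linalg.svd(Z_perp, full_matrices=False)
    return [[model.invert(beta * e + alpha * w) for beta in betas]
            for e in Vt[:n_components]]
```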
At this point, we invite the reader to study the explanations: what are your hypotheses about the features the model uses?

Studying the counterfactuals in rows 3R, 5L, 6R, and 8R, one might hypothesize that glasses influence the prediction of attractiveness negatively. To validate this, we analyzed our model's predictions on the test set. Since glasses are a labeled feature of CelebA, it is easy to test the hypothesis empirically. Only 3.5% of the portrait photos showing glasses were labeled as attractive by the model. Furthermore, the correlation of the presence of glasses with the logit score was r = −0.35.

Another insight, noticeable in 1L, is that the amount and density of facial hair changes the prediction. The correlation of the absence of facial hair with the attractiveness logit score was r = 0.35. At the same time, less head hair seemed to reduce attractiveness predictions in rows 1L, 2R, and 4R. Row 6L paints the opposite picture, which illustrates the varying effect w can have on different datapoints. We found a correlation of r = 0.30 between hair loss (a combination of baldness or a receding hairline) and attractiveness.

Indicative of higher attractiveness appears to be a more feminine appearance (e.g. 4R in Figure 1). This hints at a gender bias, which we confirmed, as only 20.0% of men are predicted to be attractive, and the label male was negatively correlated with the prediction (r = −0.59). Further, it is noticeable that counterfactuals for higher attractiveness tend to have redder lips (1R, 2R, 4R and 5L). This hypothesis could also be confirmed, as the label Wearing Lipstick is positively correlated (r = 0.64). For age, similar patterns can be found in 1L, 3R, and 8L (r = 0.44). Table 4 in Appendix D lists the correlations of all 40 attributes. Some attributes cannot be found in the principal components because the cropping hides them (double chin, necklace, necktie). Others describe local details such as arched eyebrows or earrings. While earrings do not show up in the counterfactuals, they are correlated with the model's logit score by r = 0.20. This might be because PCA tends to capture global image features while smaller local changes are scattered over many principal components. Another explanation could be that earrings are actually not that relevant: if we control for gender using partial correlation, the earrings are only correlated by r = −0.01.

Darker skin color seems to influence the network negatively, as in the principal components 2R, 3R, and 6L a light skin color suggests high attractiveness. Since CelebA has no labels for skin color, we annotated 3250 randomly selected images: 249 photos matched the Fitzpatrick skin types V-VI and were labeled as dark skin (Fitzpatrick, 1986). For light skin, the percentage of Attractive was 52.0%. The same bias is contained in the model: r = −0.187 (95% CI: −0.22, −0.15).

Two4Two. The TWO4TWO dataset (Anonymous, 2020) is a set of computer-generated images intended to evaluate interpretable ML – to test both humans and algorithms. While the dataset is simple, we control the data generation process and can create arbitrary images to test the model. The dataset contains two abstract animals, Sticky and Stretchy. For Sticky, the right arms are moved inwards, and for Stretchy outwards (see Figure 2b). As the arms sometimes overlap, it is also beneficial to use the color, which is slightly predictive (blue for Stretchy and red for Sticky).
Building blocks (cubes or spheres), bending, rotation, and background are sampled independently.

[Figure 2: (a) The principal components for TWO4TWO (rows 1L/1R through 5L/5R, logit values 10.0 to −10.0), with Sticky at the top and Stretchy below. The saliency maps shown below fail to highlight the object movement well. (b) Overview: the main feature of Stretchy is the outward-moved left arms; for Sticky, they are moved inwards.]

For the TWO4TWO dataset, the invertible neural network φ was trained only on an unsupervised loss, i.e. the gradients of the classifier were detached. Probably due to the dataset's simplicity, we had problems aligning the unsupervised and supervised losses well.

The principal components in Figure 2a suggest that the model indeed learned to use the color bias. We can confirm this by resampling only the color and measuring how the logit score correlates: r = 0.352. For the arms' position, we found a correlation with the model's probability of −0.798. Additionally, Sticky at the top seems to be more rotated, which we can also confirm: changing only the rotation results in a correlation of the logit score with the absolute value of the rotation of r = 0.136 (95% CI: 0.11, 0.16). At high rotations, the model is more certain that it is a Sticky. Although not intended by the dataset, this bias can be well explained by the fact that φ was not trained on the supervised loss.

Black Mice. We wanted to check our method on a dataset which is not already known to have biases, as the CelebA dataset is, and which is harder for a human to understand. The BLACK MICE dataset (Andresen et al., 2020) contains images of laboratory mice after different treatments. The label to predict is related to the amount of pain. For a detailed discussion of the dataset, see Appendix ??. The main take-away is that we find that the yellow bedding material, which is changed by our model's counterfactuals, is indeed predictive of the label.

3.2 COMPARISON OF THE GRADIENT OF x AND THE DIRECTIONAL DERIVATIVE dφ⁻¹/dw

In this evaluation, we propose a simple validity check for attribution methods and apply it to our method and to gradient-based attribution methods. The idea is to relate saliency maps to counterfactuals. As saliency maps should highlight the features most influential for the outcome of a datapoint, amplifying these features should increase the prediction and therefore create a counterfactual. We propose the following test: integrate the raw feature attribution values and then check (1) whether the counterfactual increases the logit score and (2) whether the changes are in the direction of w or rather in the direction of unrelated properties. We measure (2) by calculating the changes in the directions of the principal components: Δβ = E·Δz, where E is the matrix of all eᵢ.

We construct an infinitesimal version of our counterfactuals by lim_{α→0} (φ⁻¹(z + αw) − x)/(α|w|). This gives the directional derivative[2] of the input w.r.t.
the classifier weight: ∇_w x = ∇_w φ⁻¹ = dφ⁻¹(z)/dw. Moving the input x in the direction ∇_w x will result in a move of z in the w direction.[3]

We evaluate the directional derivative against the raw gradient, which serves as a basis for many saliency methods (SmoothGrad, LRP-ε, the LRP α,β-rule, and integrated gradients (Smilkov et al., 2017; Bach et al., 2015; Montavon et al., 2019; Sundararajan et al., 2017)).[4] Additionally, we include SmoothGrad (sm.g.) and build two additional methods by penalizing changes in the unrelated properties Ψ using a mean squared error with the Ψ of the original image (pe.gr. for the gradient and pe.s.g. for SmoothGrad). The integration is done by iterative steps in the direction of the integrated quantity; e.g., for the gradient we would calculate x_{t+1} = x_t + ε·∇_x f(x_t), where ε is a small step (see Appendix A.2 for all technical details).

[Footnote 2: TCAV (Kim et al., 2018) uses the directional derivative of the network's output w.r.t. a concept vector v: df/dv. In contrast to our method, TCAV computes the gradient of the forward model and not of the inverse φ⁻¹.]
[Footnote 3: A reader familiar with differential geometry might recognize this as the pushforward of w using φ⁻¹.]
[Footnote 4: The gradient and the directional derivative have a mathematical similarity, which can be seen from the Jacobians: ∇_x f = J_φ(x)·w and ∇_w x = J_{φ⁻¹}(z)·w.]
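For illustration, both pieces can be sketched with automatic differentiation; this is our own sketch under the assumed model interface from above, with hypothetical step sizes (the authors' actual settings are in Appendix A.2 of the paper).

```python
import torch

def directional_derivative(model, x):
    """The pushforward of w through phi^(-1): the Jacobian-vector product
    J_{phi^(-1)}(z) w, i.e. the infinitesimal counterfactual direction in input space."""
    z = model.features(x).detach()
    _, dx = torch.autograd.functional.jvp(model.invert, (z,), (model.w,))
    return dx

def gradient_direction(model):
    """Raw input gradient of the logit, the basis of the gradient baseline."""
    def fn(x):
        x = x.detach().requires_grad_(True)
        model.logit(x).sum().backward()
        return x.grad
    return fn

def integrate(x0, direction_fn, step=0.05, n_steps=100):
    """Iteratively step in the direction returned by an attribution method:
    x_{t+1} = x_t + step * direction_fn(x_t)."""
    x = x0.clone()
    for _ in range(n_steps):
        x = (x + step * direction_fn(x)).detach()
    return x
```

Passing `gradient_direction(model)` or `lambda x: directional_derivative(model, x)` to `integrate` reproduces the two integration variants compared in the following figure.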
The counterfactual based on the directional derivative keeps the independent factorsalmost unchanged up to numerical imprecision.3.3 H UMAN SUBJECT STUDYOur aim was to evaluate whether counterfactual interpolations can help lay users to form hypothesesabout a models used patterns and potential biases. Evaluating explanation techniques with users isimportant though a challenging endeavor as it requires mimicking a realistic setting, while avoidingoverburdening participants (Doshi-Velez & Kim, 2017; Wortman Vaughan & Wallach, 2020).The choice of the dataset is important for any evaluation. Some datasets introduce participants’domain knowledge as a cofounding factor (e.g. images of dog breeds). While others like CELEB Aintroduce subjectivity. Datasets can have many relevant features, creating an enormous amount ofpossible and valid hypotheses. If participant were allowed to develop hypotheses about them withoutlimitation this would require us to mostly evaluate them manually which would be too labor intensive.Asking participants to reason about pre-selected hypothesis prevents us from assessing their totalunderstanding of the model as there are potentially many relevant features.We chose the TWO4TWOdata set (Section 3.1) as it addresses these issues (Anonymous, 2020).The simple scenario enables us to control the available patterns and limit the number of feasiblehypotheses, allowing for comparable quantitative analysis. Concretely, we assessed a participant’sjudgment about the plausibility of six hypotheses. Three hypotheses were reasonable (sensitivity tospatial compositions, color, and rotation). Two others were not (sensitivity to background and shapeof individuals blocks). We also asked them to reason about the model’s maturity and measured theirperception of the explanations using applicable statements taken from the Explanation SatisfactionScale (Hoffman et al., 2018).6Under review as a conference paper at ICLR 2021Baseline Selection Many studies in machine learning solely demonstrate their methods feasibilitywithout a baseline comparison (e.g. Ribeiro et al. (2016); Singla et al. (2020)). In contrast, wecarefully considered what would be the best alternative method available to allow users to discoverhypotheses about a model. As discussed previously in this work, many feature attribution techniquessuffer from a lack of faithfulness and fail to provide meaningful counterfactuals. If counterfactualsare meaningful and faithful to the model they can be expected to look similar. Hence, comparingour method to other counterfactual generation methods (e.g. to GANs (Singla et al., 2020)) provideslimited insight about their practical usefulness if there are alternative ways of discovering similarhypotheses. As for saliency maps, in addition to concerns about their faithfulness, there are alsogrowing concerns about their practical usefulness. While early works found they can calibrate users’trust in a model (e.g. Ribeiro et al. (2016)), more recent works cast doubts about this claimed utility(Kaur et al., 2020; Chu et al., 2020). Studies found that while they are useful to direct users’ attentiontowards relevant features, they facilitate limited insight (Alqaraawi et al., 2020; Chu et al., 2020).Other studies found they may even harm users’ understanding about errors of the model (Shen &Huang, 2020). 
After all, users often seem to ignore them, relying predominantly on predictions instead when reasoning about a model (Chu et al., 2020; Adebayo et al., 2020). While we introduce a faithful saliency method, we do not claim that it would not suffer from the same usability problems, especially with lay users (see Figure 7 for examples generated for TWO4TWO). After all, our maps would need to be used in conjunction with counterfactuals, potentially adding a dependent variable (presence of saliency maps) to the experiment. For these reasons, we decided against considering saliency maps in this evaluation.

We also did not consider methods based on infilling (e.g. Goyal et al. (2019)), as we expected them to suffer from similar usability problems. For example, as they explain features locally by removing them, paying no attention to overlapping features, they can be expected to remove the entire object from the scene when explaining the model's bias towards the object's color. This would leave the user puzzled about which feature of the object (shape, position, or color) is important.

A simple alternative is to study the system's predictions on exemplary input. Such reasoning on natural images to understand model behavior has surfaced as a strong baseline in another study (Borowski et al., 2020). Hence, we chose example-based explanations as our baseline treatment.

Explanation Presentation. Considering that participants' attention is limited, and to allow for a fair comparison, we wanted to provide the same amount of visual information in both conditions. We chose a 30x5 image grid (3 rows are shown in Figure 4). Each column represented a logit range. Ranges were chosen so that high-confidence predictions for Stretchy were shown in the far-left column and high-confidence predictions for Sticky in the far-right column. Less confident predictions were shown in the directly adjoining columns. The remaining middle column represented borderline cases. This visual design had prevailed throughout numerous iterations and ten pilot studies, as it allows users to quickly scan for similar features in columns and differing features in rows.

The two conditions varied only in the images that were used to populate the grid. In the baseline, the grid was filled with images drawn from the validation set that matched the corresponding logit ranges. In the counterfactual interpolations condition, only the diagonal of the grid was filled randomly with such "original" images, marked with a golden frame. The remaining cells were filled row-wise with counterfactuals of the original images that matched the corresponding column's score range.

Our online study was preregistered (see the supplementary material) and followed a between-group design. Participants (N=60) were recruited from Prolific and needed to hold an academic degree with basic mathematical education. Participants were randomly but equally assigned to view either the counterfactual interpolations or the baseline. Upon commencing the study on the Qualtrics platform, participants were shown handcrafted video instructions. After that, they studied the image grid while rating their agreement with six statements on a 7-point Likert scale. Participants also rated their agreement with four applicable statements taken from the Explanation Satisfaction Scale (Hoffman et al., 2018).

Study Results and Discussion. The significance of rating differences was assessed using a Kruskal-Wallis test. To account for multiple comparisons, we applied Bonferroni correction to all reported p-values. For a detailed assessment of all preregistered hypotheses, please refer to the Appendix (Section E.1).
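For illustration, the reported tests can be reproduced along these lines; this is a sketch with hypothetical inputs, assuming per-participant Likert ratings per statement and scipy.

```python
from scipy.stats import kruskal

def compare_conditions(ratings_baseline, ratings_cf, n_tests=6):
    """Kruskal-Wallis test for one statement, with Bonferroni correction
    over n_tests statements applied to the returned p-value."""
    h, p = kruskal(ratings_baseline, ratings_cf)
    return h, min(p * n_tests, 1.0)
```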
Figure 4a summarizes the responses.

[Figure 4: Left: Participants' agreement with statements about the network's used patterns (Arms, Color, Rotation, Background, Blocks, Maturity), shown as percentages of responses from "strongly disagree" to "strongly agree" for the baseline (BL) and counterfactual (CF) conditions. Right: The study interface (vertically cropped) in the counterfactual interpolation (top) and baseline condition (bottom). Each participant was assigned to only one treatment.]

Counterfactual interpolations allowed users to identify the model's main pattern: the position of the arms of Stretchy and Sticky. They did this with high certainty, as 83.34% strongly agreed with the corresponding statement. They were more certain about this pattern than with the baseline technique (H(1) = 8.86, p = 0.018), even though the baseline technique also performed well for this task. The large majority (70%) also identified the color bias with counterfactual interpolations, while only 43% identified this bias using the baseline explanations. However, the difference in rating between the conditions for the corresponding statement about the color bias was not significant (H(1) = 3.21, p = 0.42). Participants who had missed the color bias using our method were later asked to provide their reasoning. A participant stated: "I would think that the color would be relevant if I saw an example where it went from certain to very certain and only the color, brightness or intensity changed." Such rule-based, rather than probabilistic, cognitive models of the network may have led users to reject the presence of the color bias, even though we instructed them clearly that the interpolation would only change relevant features.

To our surprise, fewer participants noticed the network's more subtle bias towards object rotation in both conditions. As Figure 4 indicates, participants were somewhat undecided about its relevance, leaning rather towards concluding that the network is not sensitive to rotation. As a limitation, we note that participants may not have noticed the rotation bias due to how we had phrased the corresponding statement. When we asked them to explain their reasoning, many explained that they instead focused on the rotation of the individual blocks rather than of the whole animal.

Both explanation techniques allowed participants to confidently reject statements about irrelevant patterns (sensitivity to the background, sensitivity to the type of blocks). We argue this indicates a high quality of the collected responses and a good utility of both explanation techniques. Somewhat worrying is participants' assessment of the system's maturity: they were very confident that the network had learned the right patterns and is ready to use, for both techniques. Such bias towards model deployment has previously surfaced in other studies (Kaur et al., 2020).

Explanation Satisfaction ratings were very high for both techniques (see Figure 10 in the Appendix), underlining that participants perceived both methods very well. While this also means that our method was unable to outperform the baseline, it shows that our careful visual design and our clear instructions on how to use the explanation technique were well received.
As a limitation, we note that participants may have found the introductory videos very informative, as many reported enjoying the study. This may have led them to more favorable ratings and the conclusion that they understand the system very well, regardless of the explanation technique they had used.

4 RELATED WORK

Others have suggested methods for counterfactual generation. Chang et al. (2019) identify relevant regions by optimizing for sufficiency and necessity for the prediction. The classifier is then probed on counterfactuals replacing the relevant regions with heuristic or generative infilling. Goyal et al. (2019) find regions in a distractor image that would change the prediction if present. Both works assume that relevant features are localized, but for many datasets these may cover the entire image, e.g. changes due to gender or age in face images. Singla et al. (2020); Liu et al. (2019); Baumgartner et al. (2018) explain a black-box neural network by generating counterfactuals with GANs, which can generate counterfactuals of similar or even better visual quality. However, the GAN does not have to align with the explained model perfectly, e.g. see Figure 3 in (Singla et al., 2020).

The TCAV method (Kim et al., 2018) estimates how much manually defined concepts influence the final prediction. Recent work has extended TCAV to discover concepts automatically using super-pixels (Ghorbani et al., 2019). Goyal et al. (2020) extend TCAV to causal effects of concepts and use a VAE as the generative model.

Being able to interpolate in feature space and to invert these latent representations is one of the advantages of invertible networks (Jacobsen et al., 2018; Kingma & Dhariwal, 2018). Mackowiak et al. (2020) use invertibility to improve trustworthiness but focus on out-of-distribution and adversarial examples. Rombach et al. (2020) and Esser et al. (2020) employ invertible networks to better understand vanilla convolutional networks.

One example of an interpretable model is ProtoPNet (Chen et al., 2019). The feature maps of image patches that correspond to prototypical samples in the dataset are used for the final prediction. This way, a result can be explained by pointing to labeled patches. The method is limited to a fixed patch size and does not allow counterfactual reasoning. Another patch-based interpretable model is proposed in Brendel & Bethge (2018).

Our combination of PCA and invertible neural networks for interpretability is novel. The finding that the directional derivative corresponds to ideal counterfactuals, whereas the gradient does not, has not been reported before. We are also not aware of a user study that has previously demonstrated that visual counterfactuals can help users identify biases of a neural network.

5 DISCUSSION

A disadvantage of our method is that it requires an invertible network architecture: the weights of an existing CNN cannot be reused. Learning the input distribution entails additional computational costs when training an invertible neural network. For non-image domains such as natural language or graphs, the construction of an inverse is currently more difficult. However, first works have taken on the challenge (MacKay et al., 2018; Madhawa et al., 2019). Furthermore, learning the input distribution requires a larger network.
Given that our method performed similarly to the baseline in the user study in all but one category, an obvious question is whether it is worth the additional effort. However, the same question applies to almost any explanation method and remains largely unanswered. Unfortunately, user evaluations that include reasonable baselines are very rare. An additional finding of this work is that explanation methods should be evaluated for their utility and usability against a reasonable baseline. For image classification, our work shows that studying the raw input and corresponding predictions is such a reasonable baseline. It has the potential to allow lay users to identify many, but not all, high-level features used for prediction. Even though we found a strong baseline, the user study also demonstrated that our method is useful to lay users, as they found two out of three relevant patterns and rejected two more irrelevant patterns. It also highlights that some more subtle patterns may still go unnoticed even when using our method.

We would like to argue that the additional effort required to implement invertibility may well be justified, especially in high-stakes domains. Combining an invertible neural network with a linear classifier enables the use of simple explanation techniques which are otherwise restricted to low-complexity models. Here, we can use them on a deep model with much greater predictive power. Counterfactuals can be created by simply using the weight vector of the classifier. In contrast to many other techniques, they are faithful to the model, changing only features relevant for the prediction. Since they can be inverted back to the input space, the high-level features they encode are human-interpretable. This allows users to discover hypotheses about the model's used patterns largely independent of their preconceptions about feature importance. Using our method, we found biases in three datasets, including some that have not been previously reported. As we have demonstrated in this work, invertibility has major advantages for interpretability.
ZF17IvHJIus
Use of invertible CNNs to construct counterfactuals and isosurfaces
5: Marginally below acceptance threshold
This paper describes a computational method to construct ideal counterfactuals and isosurfaces via invertible CNNs, and uses it to reveal biases in three different datasets. Strengths: 1. The use of the directional derivative to construct ideal counterfactuals is interesting. 2. Leveraging PCA to construct isosurfaces is neat. 3. The human study is a plus, where the stimuli are based on counterfactual interpolations created by the proposed method. Weaknesses: 1. The reviewer finds the manuscript hard to follow, especially Section II. The authors may come up with a clearer presentation. 2. The descriptions about saliency maps are less relevant to the main idea, further confounding the reviewer. 3. The comparison between the simple gradient and the directional derivative is less fair, as the directional derivative makes use of the very information direction w (e.g., the direction of no sunglass -> sunglass). What happens if we visualize $\phi^{-1}(\phi(x)+ \alpha w)$ directly, for different values of $\alpha$? 4. The human study may need to conduct another set of control experiments to show that only original training images (not counterfactual interpolations) are $\textbf{less}$ helpful for users to identify CNN patterns and biases. The reviewer conjectures that for this simple TWO4TWO data, the subjects may spot shortcuts easily even using original training images. Other minor comments: 1. Figure 1: There is no explanation for (a). What is $w$? The reader may not understand it on a first reading. 2. Figure 4: The reviewer believes normalized scores on top of the images would make better sense.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Interpretability Through Invertibility: A Deep Convolutional Network With Ideal Counterfactuals And Isosurfaces ### Paper Abstract Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model's decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier's input, we can also create "isofactuals" – image interpolations with the same outcome but visually meaningful different features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. ### Paper Keywords ["Interpretable Machine Learning", "Counterfactuals", "Computer Vision", "Human Evaluation", "User Study"] ### Paper Content ABSTRACT

Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model's decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier's input, we can also create "isofactuals" – image interpolations with the same outcome but visually meaningful different features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. For code: https://anonymous.4open.science/r/ae263acc-aad1-42f8-a639-aec20ff31fc3/

1 INTRODUCTION

The lack of interpretability is a significant obstacle for adopting Deep Learning in practice. As deep convolutional neural networks (CNNs) can fail in unforeseeable ways, are susceptible to adversarial perturbations, and may reinforce harmful biases, companies rightly refrain from automating high-risk applications without understanding the underlying algorithms and the patterns used by the model.

Interpretable Machine Learning aims to discover insights into how the model makes its predictions. For image classification with CNNs, a common explanation technique is saliency maps, which estimate the importance of individual image areas for a given output. The underlying assumption, that users studying local explanations can obtain a global understanding of the model (Ribeiro et al., 2016), was, however, refuted.
Several user studies demonstrated that saliency explanations did not significantly improve users' task performance, trust calibration, or model understanding (Kaur et al., 2020; Adebayo et al., 2020; Alqaraawi et al., 2020; Chu et al., 2020). Alqaraawi et al. (2020) attributed these shortcomings to the inability to highlight global image features or absent ones, making it difficult to provide counterfactual evidence. Even worse, many saliency methods fail to represent the model's behavior faithfully (Sixt et al., 2020; Adebayo et al., 2018; Nie et al., 2018). While no commonly agreed definition of faithfulness exists, it is often characterized by describing what an unfaithful explanation is (Jacovi & Goldberg, 2020): for example, if the method fails to create the same explanations for identically behaving models.

To ensure faithfulness, previous works have proposed building networks with interpretable components (e.g. ProtoPNet (Chen et al., 2018) or Brendel & Bethge (2018)) or mapping network activations to human-defined concepts (e.g. TCAV (Kim et al., 2018)). However, the interpretable network components mostly rely on fixed-size patches, and concepts have to be defined a priori. Here, we argue that explanations should neither be limited to patches nor rely on a priori knowledge. Instead, users should discover hypotheses in the input space themselves with faithful counterfactuals that are ideal, i.e. samples that exhibit changes that directly and exclusively correspond to changes in the network's prediction (Wachter et al., 2018). We can guarantee this property by combining an invertible deep neural network z = φ(x) with a linear classifier y = wᵀφ(x) + b. This yields three major advantages: 1) the model is powerful (it can approximate any function (Zhang et al., 2019)), 2) the weight vector w of the classifier directly and faithfully encodes the feature importance of a target class y in the z feature space, and 3) human-interpretable explanations can be obtained by simply inverting explanations for the linear classifier back to input space.

As a local explanation for one sample x, we generate ideal counterfactuals by altering its feature representation z along the direction of the weight vector: z̃ = z + αw. The logit score can be manipulated directly via α. Inverting z̃ back to input space results in a human-understandable counterfactual x̃ = φ⁻¹(z + αw). Any change orthogonal to w will create an "isofactual", a sample that looks different but results in the same prediction. While many vectors are orthogonal to w, we find the directions that explain the highest variance of the features z using PCA. As the principal components explain all variance of the features, they can be used to summarize the model's behavior globally.

We demonstrate the usefulness of our method on a broad range of evaluations. We compared our approach to gradient-based saliency methods and find that gradient-based counterfactuals are not ideal, as they also change irrelevant features. We evaluated our method on three datasets, which allowed us to create hypotheses about potential biases in all three. After statistical evaluation, we confirmed that these biases existed. Finally, we evaluated our method's utility against a strong baseline of example-based explanations in an online user study. We confirmed that participants could identify the patterns relevant to the model's output and reject irrelevant ones.
This work demonstrates that invertible neural networks provide interpretability that conceptually stands out against the more commonly used alternatives.

2 METHOD

Throughout this work, we rely on the following definitions, which are based on Wachter et al. (2018):

Definition 2.1 (Counterfactual Example). Given a data point $x$ and its prediction $y$, a counterfactual example is an alteration of $x$, defined as $\tilde{x} = x + \Delta x$, with an altered prediction $\tilde{y} = y + \Delta y$ where $\Delta y \neq 0$. Samples $\Delta x$ with $\Delta y = 0$ are designated "isofactuals".

Almost any $\Delta x$ will match the counterfactual definition, including those that additionally change aspects which are unrelated to the model's prediction, e.g. removing an object but also changing the background's color. It is desirable to isolate the change most informative about a prediction:

Definition 2.2 (Ideal Counterfactual). Given a set of unrelated properties $\Psi(x) = \{\psi_i(x)\}$, a sample $\tilde{x}$ is called an ideal counterfactual of $x$ if all unrelated properties $\psi_i$ remain the same.

The following paragraphs describe how we generate explanations using an invertible neural network $\varphi: \mathbb{R}^n \to \mathbb{R}^n$. The forward function $\varphi$ maps a data point $x$ to a feature vector $z = \varphi(x)$. Since $\varphi$ is invertible, one can regain $x$ by applying the inverse $x = \varphi^{-1}(z)$. We used the features $z$ to train a binary classifier $f(x) = w^T z + b$ that predicts the label $y$. In addition to the supervised loss, we also trained $\varphi$ as a generative model (Dinh et al., 2016; 2015) to ensure that the inverted samples are human-understandable.

Counterfactuals. To create a counterfactual example $\tilde{x}$ for a datapoint $x$, we can exploit that $w$ encodes feature importance in the $z$-space directly. To change the logit score of the classifier, we simply add the weight vector to the features $z$ and then invert the result back to the input space: $\tilde{x} = \varphi^{-1}(z + \alpha w)$. Hence, for any sample $x$, we can create counterfactuals $\tilde{x}$ with an arbitrary change in logit value $\Delta y = \alpha w^T w$ by choosing $\alpha$ accordingly. Figure 1a shows several such examples. Since the generation ($\varphi^{-1}$) and prediction ($\varphi$) are performed by the same model, we know that $\tilde{x}$ will correspond exactly to the logit offset $\alpha w^T w$. Consequently, $\tilde{x}$ is a faithful explanation.

To show that our counterfactuals are ideal, we have to verify that no property unrelated to the prediction is changed. For such a property $\psi(x) = v^T z$, $v$ has to be orthogonal to $w$.¹ As the unrelated property does not change for the counterfactual, $\psi(\tilde{x}) = v^T(z + \alpha w) = v^T z = \psi(x)$, we know that $\tilde{x} = \varphi^{-1}(z + \alpha w)$ is indeed an ideal counterfactual.

¹ $\psi(x)$ could actually be non-linear in the features $z$ as long as the gradient $\partial\psi/\partial z$ is orthogonal to $w$.

PCA Isosurface. Since users can only study a limited number of examples, it is desirable to choose samples that summarize the model's behavior well (Ribeiro et al., 2016; Alqaraawi et al., 2020). For counterfactual explanations, the change $\Delta x$ may vary significantly per example, as $\varphi(x)$ is a non-linear function. As each $x$ has a unique representation $z$ in the feature space, we want to find examples describing the different directions of the feature distribution. To isolate the effect of $w$, such examples would have the same prediction and only vary in features unrelated to the prediction. We implement this by first removing the variation along $w$ using a simple projection $z_\perp = z - (w^T z / w^T w)\, w$ and then applying PCA on $z_\perp$. The resulting principal components $e_1, \ldots, e_m$ are orthogonal to $w$, except for the last principal component $e_m$, which has zero variance and can therefore be discarded. The principal components span a hyperplane $\alpha w + \sum_{i=1}^{m-1} \beta_i e_i$. Since all samples on this hyperplane have the same prediction (a logit value of $\alpha w^T w$), it is an isosurface.
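The two constructions above are straightforward to express in code. The following is a minimal numpy sketch, assuming `phi` and `phi_inv` are callables for the trained network's forward and inverse passes and `Z` is a matrix of feature vectors; these names are our own illustration, not the paper's released code.

    import numpy as np

    def counterfactual(phi, phi_inv, x, w, alpha):
        # Encode, move along the classifier weight, decode. The logit
        # y = w @ z + b changes by exactly alpha * (w @ w).
        z = phi(x)
        return phi_inv(z + alpha * w)

    def isofactual_directions(Z, w):
        # Project the variation along w out of the features, then apply
        # PCA on the remainder; every returned direction is orthogonal to w.
        w_hat = w / np.linalg.norm(w)
        Z_perp = Z - np.outer(Z @ w_hat, w_hat)
        Z_perp = Z_perp - Z_perp.mean(axis=0)
        _, _, Vt = np.linalg.svd(Z_perp, full_matrices=False)
        return Vt[:-1]  # e_1 ... e_{m-1}; the last component has zero variance

Moving along any row of the returned matrix changes the image but not the logit, which is exactly the isosurface property used below.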
As a principal component $e_i$ is a vector in the $z$-space, we can create counterfactuals for it, $\varphi^{-1}(\beta e_i + \alpha w)$, and understand how the changes of adding $\alpha w$ differ per location in the $z$-space. The $e_1, \ldots, e_{m-1}$ are sorted by the explained variance, allowing us to prioritize the most relevant changes in the data. As the principal components cover the whole feature distribution, understanding the effect of $w$ on them allows forming a global understanding of the model's behavior.

Saliency maps. Saliency maps are supposed to draw attention to the features most relevant to a prediction. In our case, it is most reasonable to highlight the difference between $x$ and the counterfactual $\tilde{x}$. We can measure this difference $\Delta h$ in an intermediate feature map $h$. The saliency map of an intermediate layer can be resized to fit the input's resolution, as information remains local in convolutional networks. Per feature map location $(i, j)$, we calculate the similarity measure $m(i,j) = |\Delta h_{ij}|\,\cos\angle(h_{ij}, \Delta h_{ij})$. The sign of the saliency map $m$ depends on the alignment of the change $\Delta h$ with the feature vector $h$, i.e. whether $\cos\angle(h_{ij}, \Delta h_{ij}) > 0$. The magnitude is dominated by the length of the change $|\Delta h_{ij}|$. Figure 1b presents saliency maps for the CELEBA Attractive label.

Model. Our invertible network follows the Glow architecture (Kingma & Dhariwal, 2018). The network is trained to map the data distribution to a standard normal distribution. We reduce the input dimensionality of (3, 128, 128) down to (786) by fading half of the channels out with each downsampling step. When generating a counterfactual, we reuse the $z$ values faded out from the lower layers, as they correspond to small details and noise. We have 7 downsampling steps and 351 flow layers. The network has 158,769,600 parameters in total. An important design decision is that the final layer's output is not the input to the linear classifier. The PCA would fail to discover meaningful directions there, as the $\mathcal{N}(0, I)$ prior induces equal variance in all directions. The classifier uses the output of layer 321. The layers 322-351 are optimized using the standard unsupervised flow objective. For the first 321 layers, we also train on the classifier's supervised loss (for details see Appendix A.1).

3 EVALUATION

We evaluated the ability to construct hypotheses about the model's behavior on three datasets and with a user study. We focused on these aspects as our method is faithful by construction, needing no empirical confirmation. Instead, we use the strong faithfulness guarantees of our model to evaluate gradient-based attribution methods.

3.1 HYPOTHESIS DISCOVERY

CelebA. A claimed utility of our method is that it allows users to discover hypotheses about the features the model uses for prediction. We choose CELEBA (Liu et al., 2015), a popular face dataset, because it is a challenging dataset for feature attribution: how can an abstract concept such as attractiveness be linked to pixels? Additionally, it already contains annotations (e.g. make-up, accessories, hair), which makes it easier for us to accept or reject a given hypothesis about feature importance. We especially focus on the Attractive class, as it is less clear what the relevant features are. The CELEBA dataset in general, and the class attractive in particular, are ethically questionable. How can a subjective label, which depends on individual or even cultural preferences, be reduced to a binary label?
Unfortunately, Liu et al. (2015) did not state the annotation process (stating it is considered good practice (Gebru et al., 2020; Geiger et al., 2020)). Furthermore, the dataset was criticized for lacking diversity (Kärkkäinen & Joo, 2019).

[Figure 1: two grids of face images; rows correspond to logit values 8.0, 4.0, 0.0, -4.0, -8.0; panel (b) has columns 1L/1R through 8L/8R.]
Figure 1: (a) We generate counterfactual images by moving along the direction of the classifier weights $w$ of the attractive class and inverting it back to the input. The last row shows the saliency maps from the center row (logit $y=0$) to the top row ($y=8$). Blue marks features changed into a different direction and red marks features getting enhanced. (b) We extract principal components (L: $-e_i$, R: $+e_i$) orthogonal to the classifier weight $w$. All images in a row have the exact same logit score, given on the left. The saliency maps show the change between the bottom ($y=-8$) and top ($y=8$).

Figure 1b shows the first 8 principal components at different logit values. We base our investigation on them, as they cover the feature distribution well by construction. At this point, we invite the reader to study the explanations: what are your hypotheses about the model's used features?

Studying the counterfactuals in rows (3R, 5L, 6R, 8R), one might hypothesize that glasses influence the prediction of attractiveness negatively. To validate this, we analyzed our model's predictions on the test set. Since glasses are a labeled feature of CELEBA, it is easy to test the hypothesis empirically. Only 3.5% of the portrait photos showing glasses were labeled as attractive by the model. Furthermore, the correlation of the presence of glasses and the logit score was r = -0.35.

Another insight, noticeable in 1L, is that the amount and density of facial hair changes the prediction. The correlation of the absence of facial hair with the attractiveness logit score was r = 0.35. At the same time, less head hair seemed to reduce attractiveness predictions in rows 1L, 2R, 4R. Row 6L paints the opposite picture, which illustrates the varying effect $w$ can have on different datapoints. We found a correlation (r = 0.30) of hair loss (a combination of baldness and receding hairline) with attractiveness.

A more feminine appearance seems to be indicative of higher attractiveness (e.g. 4R in Figure 1). This hints at a gender bias, which we confirmed, as only 20.0% of men are predicted to be attractive and the label male was negatively correlated with the prediction (r = -0.59). Further, it is noticeable that counterfactuals for higher attractiveness tend to have redder lips (1R, 2R, 4R and 5L). This hypothesis could also be confirmed, as the label Wearing Lipstick is positively correlated (r = 0.64). For age, similar patterns can be found in 1L, 3R, 8L (r = 0.44). Table 4 in Appendix D lists the correlations of all 40 attributes. Some attributes cannot be found in the principal components because the cropping hides them (double chin, necklace, necktie). Others describe local details such as arched eyebrows or earrings. While earrings do not show up in the counterfactuals, they are correlated with the model's logit score by r = 0.20.
This might be because PCA tends to capture global image features while smaller local changes are scattered over many principal components. Another explanation could be that earrings are actually not that relevant: if we control for gender using partial correlation, the earrings are only correlated by r = -0.01.

Darker skin color seems to influence the network negatively, as in principal components (2R, 3R, 6L) a light skin color suggests high attractiveness. Since CELEBA has no labels for skin color, we annotated 3250 randomly selected images: 249 photos matched the Fitzpatrick skin type V-VI and were labeled as dark skin (Fitzpatrick, 1986). For light skin, the percentage of Attractive was 52.0%. The same bias is contained in the model: r = -0.187 (-0.22, -0.15) 95%.

Two4Two. The TWO4TWO dataset (Anonymous, 2020) is a set of computer-generated images intended to evaluate interpretable ML – to test both humans and algorithms. While the dataset is simple, we control the data generation process and can create arbitrary images to test the model. The dataset contains two abstract animals, Sticky and Stretchy. For Sticky, the right arms are moved inwards, and for Stretchy outwards (see Figure 2b). As the arms sometimes overlap, it is beneficial to also use the color, which is slightly predictive (blue for Stretchy and red for Sticky). Building blocks (cubes or spheres), bending, rotation, and background are sampled independently. For the TWO4TWO dataset, the invertible neural network $\varphi$ was only trained on an unsupervised loss, i.e. the gradients of the classifier were detached. Probably due to the dataset's simplicity, we had problems aligning the unsupervised and supervised losses well.

[Figure 2: (a) grids of TWO4TWO images for the principal components 1L/1R through 5L/5R at logit values 10.0, 5.0, 0.0, -5.0, -10.0; (b) overview of Sticky (arms inwards) and Stretchy (arms outwards).]
Figure 2: (a) The principal components for TWO4TWO. Sticky is on the top and Stretchy below. The saliency maps shown below fail to highlight the object movement well. (b) The main feature of Stretchy are the outward-moved left arms. For Sticky, they are moved inwards.

The principal components in Figure 2a suggest that the model indeed learned to use the color bias. We can confirm this by resampling only the color and measuring how the logit score is correlated: r = 0.352. For the arms' position, we found a correlation with the model's probability of -0.798. Additionally, Sticky on the top seems to be more rotated, which we can also confirm, as only changing the rotation results in a correlation of the logit score with the absolute value of the rotation of r = 0.136 (0.11, 0.16) 95%. At high rotations, the model is more certain that it is a Sticky. Although not intended by the dataset, this bias can be well explained by the fact that $\varphi$ was not trained on the supervised loss.

Black Mice. We wanted to check our method on a dataset which is not already known to have biases, as the CelebA dataset is, and which is harder for a human to understand. The BLACK MICE dataset (Andresen et al., 2020) contains images of laboratory mice after different treatments. The label to predict is related to the amount of pain. For a detailed discussion of the dataset, see Appendix ??. The main take-away point is that we find that the yellow bedding material, which is changed by our model's counterfactuals, is indeed predictive of the label.
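The bias checks in this section all follow one recipe: hold the generator fixed, vary a single factor, and correlate it with the model's logit. A hedged sketch, where `sample_params`, `render`, and `logit_fn` are placeholder interfaces to the dataset generator and classifier, not the actual TWO4TWO API:

    import numpy as np
    from scipy.stats import pearsonr

    def factor_bias(sample_params, render, logit_fn, factor, n=500):
        # Draw n parameter sets (all factors sampled independently) and
        # correlate the chosen factor with the classifier's logit score.
        values, logits = [], []
        for _ in range(n):
            p = sample_params()
            values.append(float(p[factor]))
            logits.append(float(logit_fn(render(p))))
        return pearsonr(values, logits)  # correlation r and its p-value

    # e.g. r, pval = factor_bias(sample_params, render, logit_fn, "color")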
The maintake-away point is that we find that the yellow bedding material, which is changed by our model’scounterfactuals, is indeed predictive of the label.3.2 C OMPARISON OF THEGRADIENT OF xANDTHEDIRECTIONAL DERIVATIVE d'1=dwIn this evaluation, we propose a simple validity check for attribution methods and apply it to ourmethod and gradient-based attribution methods. The idea is to relate saliency maps to counterfactuals.As saliency maps should highlight features most influential for the outcome of a datapoint, amplifyingthese features should increase the prediction and therefore create a counterfactual. We propose thefollowing test: integrate the raw feature attribution values and then check if (1) the counterfactualincreases the logit score and (2) if the changes are into the direction of wor rather into the directionof unrelated properties. We measure (2) by calculating the changes in the directions of the principalcomponents:=EzwhereEis the matrix of all ei.We construct an infinitesimal version of our counterfactuals by lim!0'1(z+w)jwj. This gives thedirectional derivative2of the input w.r.t. to the classifier weight: rwx=rw'1= d'1(z)=dw.Moving the input xinto the directionrwxwill result in a move of zinto thewdirection.3We evaluate the directional derivative against the raw gradient, which serves as a basis for manysaliency methods (SmoothGrad, LRP , LRP,-rule, and integrated gradients (Smilkov et al.,2017; Bach et al., 2015; Montavon et al., 2019; Sundararajan et al., 2017)).4Additionally, weinclude SmoothGrad ( sm.g) and build two additional methods by penalizing changes in the unrelated2TCA V (Kim et al., 2018) uses the directional derivative of the networks output w.r.t. a concept vector v:dfdv. Different to our method, TCA V computes the gradient of the forward model and not on the inverse '1.3A reader familiar with differential geometry might recognize this as the pushforward of wusing'1.4The gradient and the directional derivative have a mathematical similarity which can be seen on the Jacobian:rxf=J'(x)wandrwx=J'1(z)w.5Under review as a conference paper at ICLR 2021ima.cf.cf.sl.int.g.sm.g.(a) Saliency mapsima.-5.1di.der.6.0grad.46.910pe.g.8.810sm.g.10.010pe.s.g9.810(b) Integrationdi.der.grad.pe.g.sm.g.pe.s.g.-505Change in (c) Change toFigure 3: (a)Saliency maps computed for the Eyeglasses class of our method ( cf.sl. ), integratedgradients ( int.g. ), and SmoothGrad ( sm.g. ).cf.denotes counterfactuals with logit y=6.(b)Integrationof the raw feature attribution values, e.g. gradient w.r.t. to a single neuron. The gradient ( grad )results in a strong logit change (given on top) but fails to create visible changes. Differences withthe original images ( img) are magnified below ( 10). SmoothGrad and the respective penalizedversion ( pe.gr andpe.s.g ). show similar results. The directional derivative d'1=dwadds sunglasses.(c)The distribution of is shown in the first row. All gradient-based methods result in strong andtherefore less interpretable counterfactual. The directional derivative rw'1changeslittle.propertiesusing a mean squared error with the of the original image ( pe.gr. for gradient and forSmoothGrad pe.s.g ). The integration is done by iterative steps into the direction of the integratedquantity, e.g. for the gradient we would calculate xt+1=xt+rxf(xt)whereis a small step(see Appendix A.2 for all technical details).Figure 3b shows exemplary results of the integration for the Eyeglass dimension. 
While the gradient-based counterfactual increases the logit score by an order of magnitude, the resulting image is hardly different from the original. Only noise patterns appear – similar to adversarial examples. SmoothGrad results in both a lower logit score and even smaller changes to the image. Penalizing changes in unrelated properties only yields amplified noise patterns. At the start of the integration, the difference in the unrelated properties $\Psi$ is zero, which probably results in first moving along the unpenalized gradient. In contrast, integrating the directional derivative adds sunglasses to the astronaut – a meaningful counterfactual.

We measure the quality of a counterfactual by measuring how strongly unrelated factors change on 100 random samples and report the results in Figure 3c. Thus, gradient-based counterfactuals do not only explain the increase of the logit score, but all the other changes too. A user studying the gradient counterfactual could not differentiate between changes done to the prediction and the unrelated factors. The counterfactual based on the directional derivative keeps the independent factors almost unchanged, up to numerical imprecision.

3.3 HUMAN SUBJECT STUDY

Our aim was to evaluate whether counterfactual interpolations can help lay users to form hypotheses about a model's used patterns and potential biases. Evaluating explanation techniques with users is important, though a challenging endeavor, as it requires mimicking a realistic setting while avoiding overburdening participants (Doshi-Velez & Kim, 2017; Wortman Vaughan & Wallach, 2020).

The choice of the dataset is important for any evaluation. Some datasets introduce participants' domain knowledge as a confounding factor (e.g. images of dog breeds), while others, like CELEBA, introduce subjectivity. Datasets can have many relevant features, creating an enormous number of possible and valid hypotheses. If participants were allowed to develop hypotheses about them without limitation, this would require us to evaluate them mostly manually, which would be too labor-intensive. Asking participants to reason about pre-selected hypotheses prevents us from assessing their total understanding of the model, as there are potentially many relevant features.

We chose the TWO4TWO data set (Section 3.1) as it addresses these issues (Anonymous, 2020). The simple scenario enables us to control the available patterns and limit the number of feasible hypotheses, allowing for comparable quantitative analysis. Concretely, we assessed a participant's judgment about the plausibility of six hypotheses. Three hypotheses were reasonable (sensitivity to spatial compositions, color, and rotation). Two others were not (sensitivity to background and to the shape of individual blocks). We also asked them to reason about the model's maturity and measured their perception of the explanations using applicable statements taken from the Explanation Satisfaction Scale (Hoffman et al., 2018).

Baseline Selection. Many studies in machine learning solely demonstrate their method's feasibility without a baseline comparison (e.g. Ribeiro et al. (2016); Singla et al. (2020)). In contrast, we carefully considered what would be the best alternative method available to allow users to discover hypotheses about a model. As discussed previously in this work, many feature attribution techniques suffer from a lack of faithfulness and fail to provide meaningful counterfactuals. If counterfactuals are meaningful and faithful to the model, they can be expected to look similar.
Hence, comparingour method to other counterfactual generation methods (e.g. to GANs (Singla et al., 2020)) provideslimited insight about their practical usefulness if there are alternative ways of discovering similarhypotheses. As for saliency maps, in addition to concerns about their faithfulness, there are alsogrowing concerns about their practical usefulness. While early works found they can calibrate users’trust in a model (e.g. Ribeiro et al. (2016)), more recent works cast doubts about this claimed utility(Kaur et al., 2020; Chu et al., 2020). Studies found that while they are useful to direct users’ attentiontowards relevant features, they facilitate limited insight (Alqaraawi et al., 2020; Chu et al., 2020).Other studies found they may even harm users’ understanding about errors of the model (Shen &Huang, 2020). After all, users often seem to ignore them, relying predominantly on predictionsinstead when reasoning about a model (Chu et al., 2020; Adebayo et al., 2020).While we introduce a faithful saliency method, we do not claim that it would not suffer from the sameusability problems, especially with lay users (see Figure 7 for examples generated for TWO4TWO).After all our maps would need to be used in conjunction with counterfactuals, potentially adding adependent variable (presence of saliency map) to experiment. For these reasons, we decided againstconsidering saliency maps in this evaluation.We also did not consider methods based on infilling (e.g. Goyal et al. (2019)), as we expected themto suffer from similar usability problems. For example, as they explain features locally by removingthem, paying no attention to overlapping features, they can be expected to remove the entire objectfrom the scene when explaining the model’s bias towards the object’s color. This would leave theuser puzzled what feature of the object (shape, position or color) is important.A simple alternative is to study the system predictions on exemplary input. Such reasoning on naturalimages to understand model behavior has surfaced as a strong baseline in another study (Borowskiet al., 2020). Hence, we choose example-based explanations as our baseline treatment.Explanation Presentation Considering that participants’ attention is limited and to allow for a faircomparison, we wanted to provide the same amount of visual information in both conditions. Wechoose a 30x5 image grid (3 rows shown in Figure 4). Each column represented a logit range. Rangeswere chosen so that high confidence predictions for Stretchy were shown on the far left column andhigh confidence predictions Sticky on the far right. Less confident predictions were shown in thedirectly adjoining columns. The remaining middle column represented borderline-cases. This visualdesign had prevailed throughout numerous iterations and ten pilot studies, as it allows users to quicklyscan for similar features in columns and differing features in rows.Both conditions only varied in the images that were used to populate the grid. In the baseline, the gridwas filled with images drawn from validation set that matched the corresponding logit ranges. In thecounterfactual interpolations conditions , only the diagonal of the grid was filled randomly with such“original” images. They were marked with a golden frame. The remaining cells were filled row-wisewith counterfactuals of the original images that matched the corresponding columns score range.Our online study was preregistered5and followed a between-group design. 
Participants (N=60) were recruited from Prolific and needed to hold an academic degree with basic mathematical education. Participants were randomly but equally assigned to view either counterfactual interpolations or the baseline. Upon commencing the study on the Qualtrics platform, participants were shown handcrafted video instructions. After that, they studied the image grid while rating their agreement with six statements on a 7-point Likert scale. Participants also rated their agreement with four applicable statements taken from the Explanation Satisfaction Scale (Hoffman et al., 2018).

Study Results and Discussion. The significance of rating differences was assessed using a Kruskal-Wallis test. To account for multiple comparisons, we applied Bonferroni correction to all reported p-values. For a detailed assessment of all preregistered hypotheses, please refer to the Appendix (Section E.1). Figure 4a summarizes the responses.

⁵ See supplementary material.

[Figure 4: (a) diverging stacked bar chart of Likert agreement (strongly disagree ... strongly agree) for the statements Arms, Color, Rotation, Background, Blocks, and Maturity, split by baseline (BL) and counterfactual (CF) conditions; (b) the study interface in the counterfactual treatment; (c) the study interface in the baseline treatment.]
Figure 4: Left: Participants' agreement with statements about the network's used patterns. Right: The study interface (vertically cropped) in the counterfactual interpolation (top) and baseline condition (bottom). Each participant was assigned to only one treatment.

Counterfactual interpolations allowed users to identify the model's main pattern: the position of the arms of Stretchy and Sticky. They did this with high certainty, as 83.34% strongly agreed with the corresponding statement. They were more certain about this pattern than with the baseline technique (H(1) = 8.86, p = 0.018), even though the baseline technique also performed well for this task. A large majority (70%) also identified the color bias with counterfactual interpolations, while only 43% identified this bias using the baseline explanations. However, the difference in rating between conditions for the corresponding statement about the color bias was not significant (H(1) = 3.21, p = 0.42). Participants who had missed the color bias using our method were later asked to provide their reasoning. A participant stated: "I would think that the color would be relevant if I saw an example where it went from certain to very certain and only the color, brightness or intensity changed." Such rule-based, rather than probabilistic, cognitive models of the network may have led users to reject the presence of the color bias, even though we instructed them clearly that the interpolation would only change relevant features.

To our surprise, fewer participants noticed the network's more subtle bias towards object rotation in both conditions. As Figure 4 indicates, participants were somewhat undecided about its relevance, leaning rather towards concluding that the network is not sensitive to rotation. As a limitation, we note that participants may not have noticed the rotation bias due to how we had phrased the corresponding statement.
When we asked them to explain their reasoning, many explained that they instead focusedon the individual blocks’ rotation rather than the whole animal.Both explanation techniques allowed participants to confidently reject statements about irrelevantpatterns (sensitivity to the background, sensitivity to the type of blocks). We argue this indicatesa high quality of collected responses and good utility of both explanation techniques. Somewhatworrying is participants’ assessment of the system’s maturity. They were very confident that thenetwork has learned the right patterns and is ready to use for both techniques. Such bias towardsmodel deployment has previously surfaced in other studies (Kaur et al., 2020).Explanation Satisfaction ratings were very high for both techniques (see Figure 10 in Appendix)underlining that participants perceived both methods very well. While this also means that ourmethod was unable to outperform the baseline, it also shows that our careful visual design and ourclear instructions how to use the explanations technique were well received. As a limitation, we notethat participants may have found the introductory videos very informative as many reported enjoyingthe study. This may have led them to more favorable ratings and the conclusion that they understandthe system very well regardless of the explanation technique they had used.4 R ELATED WORKOthers have suggested methods for counterfactual generation. Chang et al. (2019) identifies relevantregions by optimizing for sufficiency and necessarity for the prediction. The classifier is then probed8Under review as a conference paper at ICLR 2021on the counterfactuals replacing relevant regions with heuristical or generative infilling. Goyal et al.(2019) find regions in a distractor image that would change the prediction if present. Both worksassume that relevant features are localized, but for many datasets these may cover the entire image,e.g. changes due to gender or age in face images. Singla et al. (2020); Liu et al. (2019); Baumgartneret al. (2018) explain a black-box neural network by generating counterfactuals with GANs which cangenerate counterfactuals of similar or even better visual quality. However, the GANs model does nothave to align with the explained model perfectly, e.g. see Figure 3 in (Singla et al., 2020).The TCA V method (Kim et al., 2018) estimates how much manually defined concepts influencethe final prediction. Recent work has extended TCA V to discover concepts using super-pixelsautomatically (Ghorbani et al., 2019). Goyal et al. (2020) extend TCA V to causal effects of conceptsand use a V AE as generative model.Being able to interpolate in feature space and inverting these latent representations is one of theadvantages of invertible networks (Jacobsen et al., 2018; Kingma & Dhariwal, 2018). Mackowiaket al. (2020) use invertibility to improve the trustworthiness but focus on out-of-distribution andadversarial examples. (Rombach et al., 2020; Esser et al., 2020) employ invertible networks tounderstand vanilla convolutional networks better.One example of an interpretable model is ProtoPNet (Chen et al., 2019). The feature maps of imagepatches that correspond to prototypical samples in the dataset are used for the final prediction. Thisway, a result can be explained by pointing to labeled patches. The method is limited to a fixedpatch size and does not allow counterfactual reasoning. 
Another patch-based interpretable model is proposed in Brendel & Bethge (2018).

Our combination of PCA and invertible neural networks for interpretability is novel. The finding that the directional derivative corresponds to ideal counterfactuals, whereas the gradient does not, has not been reported before. We are also not aware of a user study that has previously demonstrated that visual counterfactuals can help users identify biases of a neural network.

5 DISCUSSION

A disadvantage of our method is that it requires an invertible network architecture: the weights of an existing CNN cannot be reused. Learning the input distribution entails additional computational costs when training an invertible neural network. For non-image domains such as natural language or graphs, the construction of an inverse is currently more difficult. However, first works have taken on the challenge (MacKay et al., 2018; Madhawa et al., 2019). Furthermore, learning the input distribution requires a larger network. Given that our method performed similarly to the baseline in the user study in all but one category, an obvious question is whether it is worth the additional effort.

However, the same question applies to almost any explanation method and remains largely unanswered. Unfortunately, user evaluations that include a reasonable baseline are very rare. An additional finding of this work is that explanation methods should be evaluated for their utility and usability against a reasonable baseline. For image classification, our work shows that studying the raw input and corresponding predictions is such a reasonable baseline. It has the potential to allow lay users to identify many, but not all, high-level features used for prediction. Even though we found a strong baseline, the user study also demonstrated that our method is useful to lay users, as they found two out of three relevant patterns and rejected two more irrelevant patterns. It also highlights that some more subtle patterns may still go unnoticed, even when using our method.

We would like to argue that the additional effort required to implement invertibility may well be justified, especially in high-stakes domains. Combining an invertible neural network with a linear classifier enables the use of simple explanation techniques which are otherwise restricted to low-complexity models. Here, we can use them on a deep model with much greater predictive power. Counterfactuals can be created by simply using the weight vector of the classifier. In contrast to many other techniques, they are faithful to the model, changing only features relevant for the prediction. Since they can be inverted back to the input space, the high-level features they encode are human-interpretable. This allows users to discover hypotheses about the model's used patterns largely independent of their preconceptions about feature importance. Using our method, we found biases in three datasets, including some that have not been previously reported. As we have demonstrated in this work, invertibility has major advantages for interpretability.<|im_end|> <|im_start|>assistant ### Review Title Use of invertible CNNs to construct counterfactuals and isosurfaces ### Review Text This paper describes a computational method to construct ideal counterfactuals and isosurfaces via invertible CNNs, and uses it to reveal biases in three different datasets. Strengths: 1. The use of the directional derivative to construct ideal counterfactuals is interesting. 2.
Leveraging PCA to construct isosurfaces is neat. 3. The human study is a plus, where the stimuli are based on counterfactual interpolations created by the proposed method. Weaknesses: 1. The reviewer finds the manuscript hard to follow, especially Section II. The authors may come up with a clearer presentation. 2. The descriptions about saliency maps are less relevant to the main idea, further confounding the reviewer. 3. The comparison between simple gradient and direction derivative is less fair, as the directional derivative makes use of the very information direction w (e.g., the direction of no sunglass -> sunglass). What happens if we visualize $\phi^{-1}(\phi(x)+ \alpha w)$ directly, for different values of $\alpha$. 4. The human study may need to conduct another set of control experiments to show that only original training images (not counterfactual interpolations) are $\textbf{less}$ helpful for uses to identify CNN patterns and biases. The reviewer conjectures that for this simple TWO2TWO data, the subjects may spot shortcuts easily even using original training images. Other minor comments: 1. Figure 1: There is no explanation for (a). What is $w$? The reader may not understand it for the first reading. 2. Figure 4: The reviewer believes normalized scores on the top of the images make better sense. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
6d5El_LENnf
iscaconf.org/ISCA/2023/Workshop/ASSYST
2023
TAP: Efficient Derivation of Tensor Parallel Plans for Large Neural Networks
["Ziji Shi", "Le Jiang", "Ang Wang", "Jie Zhang", "Xianyan Jia", "Yong Li", "Chencan Wu", "Jialin Li", "Wei Lin"]
Model parallelism is essential to train large language models efficiently. However, determining the optimal model parallel schedule for a given neural network can be slow and inefficient due to the vast choice space. To address this challenge, we propose a tensor model parallelism framework called TAP, which automatically searches for the best data and tensor parallel schedules. Our approach is based on the observation that a neural network can be represented as a directed acyclic graph, within which only exists a limited set of frequent subgraphs. With that, we design a graph pruning algorithm that efficiently folds the search space. As a result, TAP runs at sub-linear complexity with respect to model size, which makes it a practical solution for large-scale networks. Experimental results demonstrate that TAP outperforms the state-of-the-art automatic parallelism frameworks by $20-160\times$ in searching time. Moreover, the performance of TAP's discovered schedules is competitive with expert-engineered ones. In summary, TAP provides a powerful and efficient tool for model parallelism that can help alleviate the burden of manual tuning.
["distributed learning", "machine learning system", "model parallelism"]
Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023TAP: Efficient Derivation of Tensor Parallel Plansfor Large Neural NetworksZiji Shi∗†, Le Jiang†, Jie Zhang†, Xianyan Jia†, Yong Li†, Chencan Wu†, Jialin Li∗, Wei Lin†,∗National University of Singapore†Alibaba GroupAbstract —Model parallelism is essential to train large languagemodels efficiently. However, determining the optimal modelparallel schedule for a given neural network can be slow andinefficient due to the vast choice space. To address this challenge,we propose a tensor model parallelism framework called TAP,which automatically searches for the best data and tensor parallelschedules.Our approach is based on the observation that a neuralnetwork can be represented as a directed acyclic graph, withinwhich only exists a limited set of frequent subgraphs. With that,we design a graph pruning algorithm that efficiently folds thesearch space. As a result, TAP runs at sub-linear complexitywith respect to model size, which makes it a practical solutionfor large-scale networks.Experimental results demonstrate that TAP outperforms thestate-of-the-art automatic parallelism frameworks by 20−160×in searching time. Moreover, the performance of TAP’s discov-ered schedules is competitive with expert-engineered ones. Insummary, TAP provides a powerful and efficient tool for modelparallelism that can help alleviate the burden of manual tuning.I. I NTRODUCTIONRecent years have witnessed a burgeoning of large deepneural networks (DNNs) that deliver unprecedented accuracyacross a wide range of AI tasks. The rate of DNN model sizeincrease, however, has far surpassed the growth in acceleratormemory capacity. To address this challenge, model parallelismhas been proposed, where model weights are sharded ontomultiple devices during distributed DNN training.There are two main paradigms in model parallelism:pipeline parallelism and tensor parallelism. Pipeline paral-lelism divides the model into layers. Only activations arecommunicated during the forward pass, while gradient tensorsare exchanged in the backward phase. Pipeline parallelismhas recently drawn much attention, with many proposed al-gorithms aiming to find the optimal pipeline schedule thatminimizes the pipeline idle time (i.e., ”bubble size”). However,pipeline parallelism suffers from two significant drawbacks:1) each layer must fit into a single accelerator’s memory,and 2) interleaving different layers can be challenging formodels with imbalanced architectures. As an alternative, tensorparallelism partitions the model weights and distributes themto multiple devices, thus lifting the restriction on the size ofindividual layers. In this work, we focus on tensor parallelism.Manual specification of tensor parallelism is a daunting task,given that the quality of a partitioning scheme depends onthe neural network architecture and the hardware system. Toaddress this challenge, automatic parallelism approaches havebeen proposed which leverage user hints or guided searchesover the entire partitioning candidate space. We argue that abrute-force search of the space is unnecessary in the majorityof cases. Our research makes two key observations: Firstly,most neural networks include shared subgraphs that can sig-nificantly reduce the search space. Secondly, communicationis the primary bottleneck during tensor parallelism training,and contiguous partitions in a block cannot overlap. 
Therefore,the search process can be accelerated by only searching forunique neural network sub-modules and evaluating candidatestrategies based on their communication cost.Based on those observations, we present TAP , a deeplearning framework that automatically derives tensor-parallelplans for arbitrary neural networks without requiring expertannotations. TAP first constructs a skimmed DAG by removingauxiliary nodes, then it finds all of the shared subgraphsand searches for the optimal sharding schedule for each ofthem. In the end, TAP reconstructs the DAG by applyingthe found solution to the original graph. TAP drasticallyreduces the search space for tensor parallel plans, achieving20×−160×speedup compared with the state-of-the-art auto-parallel framework. Evaluations demonstrate that our approachcan also generate comparable solutions to the tensor parallelschedules designed by an expert [17].Our paper makes the following contributions:•A set of intermediate representations (IRs) of the compu-tational graph that abstract away from low-level imple-mentation details;•A graph pruning algorithm that exploits the shared sub-structure to facilitate efficient searching;•A communication-based cost model that accurately cap-tures the communication requirements for tensor-paralleltraining.II. B ACKGROUNDA. Model ParallelismModel parallelism distributes model weights onto differentdevices and synchronizes the full model through collectivecommunication [6]. Model parallelism can be further dividedinto categories, pipeline parallelism and tensor parallelism.1) Tensor Parallelism: Tensor parallelism splits the modellayer and distributes it across multiple devices, thus dispersingthe computational overhead of the layer [17], [23], [26]. Eachdevice stores only a portion of the input tensors in its local1memory. Therefore, the final result needs to be aggregatedfrom partial results through collective communication. Tensorparallelism can alleviate the challenge of training heteroge-neous models using pipeline parallelism and can achieve betterperformance.B. Automatic ParallelismAutomatic parallelism is a recent line of research on auto-matically distributing a local model from a single device tomultiple devices using the data and model parallel strategies.Existing approaches for automatic parallelism rely on userhints or brute-force searches across the entire space.1) User hint: User-hint-based automatic parallelism scalessingle-device programs to multi-device systems by incorpo-rating user annotations. For instance, GSPMD [26] infers theoperator partitioning scheme based on user annotations, whileWhale [12] allows for the inclusion of user hints for semi-autoparallelization of large models and introduces a hardware-wareload balance algorithm. However, user-hint-based automaticparallelism approaches require users to possess a deep under-standing of both the system and model, and hard-coded userhints may not be transferable when either the model or systemchanges.2) Search algorithm: Recent work has proposed fully auto-matic approaches based on search algorithms to optimize dis-tributed DNN training. For example, Tofu [25] uses a recursivesearch algorithm based on dynamic programming and DNN-specific heuristics to minimize communication for the entiredataflow graph. Flexflow [13] employs randomized search tofind the best parallel strategy in the SOAP (Sample, Operator,Attribute, and Parameter) space. Alpa [28] optimizes largeDL models through two-level optimizations: inter-operator andintra-operator. 
It enables inter-operator parallelism using dynamic programming and intra-operator parallelism with integer linear programming. Unity [24] represents parallelization and algebraic transformations as substitutions on a unified graph representation, uses a novel hierarchical search algorithm to identify an optimized sequence of graph substitutions, and scales to large numbers of GPUs and complex DNNs.

3) Challenge of exploding search space: Search-based approaches face the challenge of an exploding search space as model size scales, resulting in significant time costs. For example, each tensor (assuming 2D) can be partitioned in three ways: not sharded, sharded along the first dimension (row-wise), or sharded along the second dimension (column-wise). Given a neural network $G(E, V)$ with $|V|$ weight tensors, there exist $3^{|V|}$ possible sharding plans. Therefore, finding an optimal sharding plan is an NP-hard problem.

III. APPROACH

In this section, we formulate the problem of searching for an optimal tensor parallel schedule, followed by our observation of the common presence of shared sub-structures in large neural networks, which motivates our design.

A. Problem Formulation

A neural network can be represented as a directed acyclic graph $G(E, V)$ comprised of $L$ layers. The set of vertices $V$ represents the operators, and the set of edges $E$ represents the data flow from producer to consumer operators. Operators can optionally carry a weight tensor. During the forward pass, an edge represents an activation tensor, while in the backward phase, it represents a gradient tensor. A layer $L_i \in L$ is either a layer or a cluster of operators with a similar composition.

Let the physical training system be $S(m, n)$, where $m$ is the number of worker nodes and $n$ is the number of accelerators per worker node. A parallel plan $p$ is a new graph mathematically equivalent to $G$. The cost function $\mathrm{Cost}(p, S)$ measures the training latency for a given plan and training system. The goal is to find an optimal parallel plan $p^*$ that solves

$\min_{p} \mathrm{Cost}(p, S)$ subject to $p(X) = G(X) \;\; \forall X$.

How can an automated system find such a plan? Fig. 1 illustrates the typical workflow of an auto-parallel system. The system first reduces the search space for model splitting using pruning techniques. Next, a search algorithm is employed to generate one or more candidate plans for evaluation. Finally, a cost model evaluates all candidate plans and selects the one with the lowest cost based on predefined evaluation criteria.

[Fig. 1: flow from the search space, via pruning to a smaller space, through the search algorithm to candidate plans, which the cost model ranks to yield the best plan.]
Fig. 1. General recipe of automatic model parallelism frameworks.

The end-to-end duration to produce an optimal schedule is a critical metric for an auto-parallel system. We identify three primary factors that contribute to the overall completion time: the size of the search space, the time complexity of the searching algorithm, and the speed of the evaluation method.

B. Challenges and Observations

As mentioned earlier, a major challenge faced by auto-parallel systems is the search space explosion problem. This exponential increase in candidate space has led to impractical search times for modern large models [28] (§ V-B). This creates a dilemma: while auto-parallel systems aim to accelerate large model training, if the derivation step itself is too slow, it may offset the benefit of using an auto-parallel system.

How can this large candidate search space be reduced effectively? To answer this question, we studied common scaling techniques for popular DNN models and summarized our findings in Table I.
We observe that these techniques can be grouped into two categories: scaling on the width, achieved by increasing the dimension of layers (e.g., adding more classes, attention heads, or convolutional channels), or scaling on the depth by increasing the number of layers. Notably, both techniques start with a base subgraph, a group of layers or operators, and expand from it. For instance, large pre-trained language models like BERT [7] and T5 [19] comprise tens of transformer layers, while multi-class object classification networks like ResNet-50 [11] are built from convolutional layers.

TABLE I. Shared subgraphs exist in many neural network models. "Conv" means convolutional layer, "MoE" means mixture-of-expert layer.

Scaling Technique | Task              | Model                   | # Params | Shared Subgraph (SS) | # of SS
By width          | Vision            | ResNet50 [11]           | 23M      | Conv                 | 50×
By width          | Vision + Language | CLIP-Base [18]          | 63M      | Transformer          | 12×
By width          | Language Model    | WideNet [27]            | 63M      | MoE layer            | 32×
By width          | Vision            | ViT-Huge [8]            | 632M     | Transformer          | 32×
By width          | Vision            | V-MoE [22]              | 15B      | MoE layer            | 24×
By depth          | Speech            | wav2vec 2.0 [3]         | 317M     | Conv, Transformer    | 7×, 24×
By depth          | Language Model    | BERT [7]                | 340M     | Transformer          | 24×
By depth          | Language Model    | T5-Large [19]           | 770M     | Transformer          | 24×
By depth          | Language Model    | GPT-3 [4]               | 175B     | Transformer          | 96×
By depth          | Language Model    | Switch Transformer [10] | 1571B    | MoE layer            | 15×

Furthermore, upon analyzing expert-designed parallel schedules ([17], [20], [21]), we notice that parallel schedules are predominantly similar for layers of the same type. This is due to the fact that similar layers have comparable computational and memory consumption. This finding motivates us to investigate reusing parallel schedules discovered for identical layers, which can reduce the search effort.

IV. DESIGN AND IMPLEMENTATION

A. Overview

Fig. 2 illustrates the workflow of TAP. Given a neural network represented as a graph, TAP first converts the graph into an intermediate representation (§ IV-B) called GraphNode and removes auxiliary nodes. TAP then performs graph pruning (§ IV-C) to restrict the search space from the complete graph to the subgraphs. After pruning, TAP explores the possible sharding opportunities using pre-defined sharding patterns (§ IV-D) and validates the candidate plans (§ IV-E). If a valid plan is found, it is evaluated using the cost model (§ IV-F). TAP takes the overall best plan, performs additional communication-level optimizations, and rewrites the model into a parallel version (§ IV-G). To use TAP, users only need to specify the device mesh as shown in the example below.

Listing 1. Example with TAP on 2 workers, each with 8 GPUs:

    import tensor_auto_parallel as tap
    mesh = [2, 8]
    tap.auto_parallel(tap.split(mesh))
    model_def()

B. Intermediate Representation

TAP defines a family of high-level Intermediate Representations (IRs) to facilitate the derivation of parallel schedules. Compared to MLIR HLO [14], TAP IRs operate at a coarser granularity while preserving the information necessary for sharding.

Upon obtaining the original neural network graph, TAP first trims the graph by deleting the auxiliary operators (Step 1 in Fig. 2). This removes the initialization and checkpoint-related operators, which will be recovered when the IR is converted back to a neural network graph later. As a result, the remaining graph consists of only computing and communication operators.

The TAP IRs consist of:

a) GraphNode: A GraphNode represents a group of computing or communication operators. It can be a layer or a logical group of operators, which is the basic unit for deriving the sharding schedule. The TAP graph is made of GraphNodes while preserving the directed edges from the original DAG. Using the GraphNode IR, we reduce the number of nodes in the T5-large model from 60k to 1015 weight variables.

b) Sharding pattern: A GraphNode can have multiple ways of sharding. For instance, a 2D matrix weight can be split along either dimension or replicated. TAP defines each sharding pattern using the SRC abstraction. TAP also establishes the cost of each sharding pattern based on its communication cost.

c) Sharding plan: A sharding plan is a set of subgraphs (blocks of GraphNodes) with sharding patterns connecting them.
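To make the GraphNode IR concrete, the following is a minimal Python sketch that clusters raw operators into GraphNodes by name scope, assuming TensorFlow-style slash-separated operator names; the class and helper names are our own illustration, not TAP's actual API.

    from dataclasses import dataclass, field

    @dataclass
    class GraphNode:
        scope: str                                  # shared name-scope prefix
        ops: list = field(default_factory=list)     # member operator names
        inputs: set = field(default_factory=set)    # producer GraphNode scopes

    def build_graphnodes(ops, depth=2):
        # ops: iterable of (op_name, [input_op_names]); the scope is the
        # first `depth` segments of the slash-separated operator name.
        scope_of = lambda name: "/".join(name.split("/")[:depth])
        nodes = {}
        for name, op_inputs in ops:
            node = nodes.setdefault(scope_of(name), GraphNode(scope_of(name)))
            node.ops.append(name)
            for inp in op_inputs:
                if scope_of(inp) != node.scope:     # keep cross-scope edges only
                    node.inputs.add(scope_of(inp))
        return nodes

The retained cross-scope edges preserve the producer-consumer structure of the original DAG, which the pattern-routing step later relies on.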
C. Pruning using Shared Subgraphs

It is common for DNN models to contain shared subgraphs. If we can identify the shared subgraphs, we can prune the search space by searching only within the subgraph. We propose a graph pruning algorithm to compress the search space into a shared structure (Step 2).

In deep learning frameworks like TensorFlow [2], each variable is referred to by the operator that produces it. As such, variables under the same layer share the same name scope because they receive input from the same operator. Therefore, it is possible to cluster operators that fall under the same name scope.

[Fig. 2: TAP workflow from an input neural network through the steps (1) convert, (2) prune, (3) search (sharding plan explorer), (4) query (cost model), and (5) rewrite, to the parallelized output network; the legend distinguishes compute/communication ops, auxiliary ops, in/out nodes, and entry points.]
Fig. 2. Overview of the TAP system.

Algorithm 1 Graph Pruning
 1: procedure PRUNEGRAPH(modelDef, minDuplicate)
 2:   nodeTree ← ∅
 3:   maxDepth ← modelDef.depth
 4:   for all depth ∈ maxDepth · · · 1 do
 5:     nodeTree[depth] ← longestCommonPrefix(modelDef.nodes.name)
 6:     opCount ← findSimilarBlk(nodeTree[depth])
 7:     if opCount ≥ minDuplicate then
 8:       subgraphs.append(nodeTree[depth])
 9:     else
10:       break
11:     end if
12:   end for
13:   return subgraphs
14: end procedure

Algorithm 1 starts by constructing a nodeTree, which identifies and groups the GraphNodes on each level by using the longest-common-prefix algorithm on the GraphNode names (lines 2-5). After that, it finds the blocks of GraphNodes with a similar composition of operators and compares the number of operators with the minimum duplicate threshold (line 7). As the depth decreases, we will see a larger subgraph with less homogeneous compositions. Notice that multiple shared subgraphs may exist, since a neural network may have multiple leaf nodes.

D. Sharding Plan Generator

A sharding pattern, defining the way a GraphNode can be sharded, also serves as the directed edge between nodes. According to the SRC abstraction, the communication pattern is determined once the split/replica decision is made. Under the hood, the sharding patterns connect to each other like a chain.

Algorithm 2 Derivation of Optimal Plan
 1: procedure DERIVEPLAN(modelDef, shardingPatterns)
 2:   subgraphs ← PruneGraph(modelDef)
 3:   candidatePlans ← enumerateAllPlans(subgraphs)
 4:   validPlans ← {}
 5:   for all p ∈ candidatePlans do
 6:     validated ← PatternRouting(p)
 7:     if validated then
 8:       validPlans.insert(p)
 9:     end if
10:   end for
11:   bestPlan ← min(QueryCost(validPlans))
12:   return bestPlan
13: end procedure

After pruning, TAP proceeds to derive the optimal plan (Steps 3 and 4) using Algorithm 2. In the first phase, TAP enumerates all possible sharding plans given the subgraphs; thanks to pruning, TAP only needs to work on hundreds of plans. However, not every plan is valid, because we only have weakly connected subgraphs. Therefore, the candidate plans need to be validated by checking the connectivity (lines 5-10). After validation, TAP evaluates the performance of each plan using a cost model and selects the best plan; a compact sketch of this loop follows below.
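The sketch restricts enumeration to a single shared subgraph and assumes three sharding choices per weight tensor (replicate, split row-wise, split column-wise); `is_routable` and `plan_cost` stand in for the pattern-routing check (Algorithm 3) and the cost model of Section IV-F, and are assumptions rather than TAP's code.

    import itertools

    CHOICES = ("replicate", "split_row", "split_col")

    def derive_plan(subgraph_weights, is_routable, plan_cost):
        # Enumerate 3^|V| sharding combinations over the pruned
        # subgraph's weights only, keep the routable ones, and
        # return the cheapest.
        best_plan, best_cost = None, float("inf")
        for combo in itertools.product(CHOICES, repeat=len(subgraph_weights)):
            plan = dict(zip(subgraph_weights, combo))
            if not is_routable(plan):
                continue
            cost = plan_cost(plan)
            if cost < best_cost:
                best_plan, best_cost = plan, cost
        return best_plan

Because pruning shrinks the set of weights from the whole model to one shared subgraph, the 3^|V| enumeration stays in the hundreds of plans rather than growing with model depth.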
E. Pattern Routing

In the pattern routing step (Algorithm 3), TAP tries to assemble the weakly connected GraphNodes into a valid sharding plan by checking the connectivity. This ensures the success of graph rewriting (Step 5). TAP does so using breadth-first search (BFS) starting from the root node, and the goal is to make sure there exists at least one connected path from the root to the leaf, chained using the sharding patterns.

Algorithm 3 Plan Validation
 1: procedure PATTERNROUTING(currPlan)
 2:   TopoSort(currPlan)
 3:   nodesQ ← currPlan.root
 4:   while nodesQ ≠ ∅ do
 5:     currNode ← nodesQ.dequeue()
 6:     for all childNode ∈ currNode.next() do
 7:       sp ← lookUpShrdPatn(currNode, childNode)
 8:       if sp ≠ ∅ then
 9:         if childNode == currPlan.leaf then
10:           return TRUE
11:         else
12:           nodesQ.enqueue(childNode)
13:         end if
14:       end if
15:     end for
16:   end while
17:   return FALSE
18: end procedure

One challenge is that a pair of contracting sharding patterns may have different input and output tensors, and a consumer operator's input is not ready until its producer is ready. In other words, dependencies exist between GraphNodes, but this information was kept in the original edges and could be lost when we perform pruning.

To solve this, we perform a topological sort of the GraphNodes based on the readiness of their input tensors. We leverage the fact that neural networks can be represented as directed acyclic graphs, and reconstruct the edges based on the order of the producer-consumer relationship. This way, TAP avoids checking the order for every pair of GraphNodes.

F. Cost Model

To build a cost model, we first profile different tensor parallel plans to understand the bottleneck. Fig. 3 summarizes the result. Data were collected from two nodes interconnected by 32 Gbps Ethernet, each equipped with 8 GPUs. We observe that inter-node communication is the main bottleneck for tensor parallelism, and the best plan is not necessarily the one that splits every weight tensor, in line with [6].

As the number of devices increases from 8 to 16, the difference between communication time and computation time becomes further pronounced. This is because the bottleneck has shifted from high-speed intra-node communication (PCI-e) to slower inter-node communication (Ethernet).

Furthermore, the best tensor parallel plan for 16 GPUs (16w-FFN) only shards the weight in the feed-forward layer. We conjecture that with more tensors split instead of replicated, there are fewer FLOPs per device and the computation time is lower. However, this comes at the cost of more communication. In the case of training in a data center where nodes are interconnected by Ethernet, the speed bottleneck may shift from computation to communication instead. Therefore, communication cost is the main consideration when we design the cost model.

[Fig. 3: bar chart of seconds per iteration, split into computation and communication time, for the plans 8w-DP, 8w-MHA, 8w-FFN, 8w-Megatron, 16w-DP, 16w-MHA, 16w-FFN, and 16w-Megatron.]
Fig. 3. Time breakdown for tensor parallel plans on the T5-large model on 8 and 16 GPUs (8w/16w). DP means data parallel, MHA means sharding the multi-head attention, FFN means sharding the feed-forward layer, and Megatron refers to the tensor sharding plan described in [17].

TAP addresses these issues using an analytical cost model based on the tensor's communication method, shape, and data format. Each sharding pattern is associated with a cost, and the total cost is calculated by summing all pattern costs along the critical path.
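A minimal sketch of such an analytical cost, assuming a ring all-reduce over n devices and treating network bandwidth as the only resource; the traffic model and names below are our simplifying assumptions, not TAP's exact formulas.

    def ring_allreduce_bytes(numel, dtype_bytes, n):
        # A ring all-reduce moves roughly 2*(n-1)/n of the tensor per device.
        return 2 * (n - 1) / n * numel * dtype_bytes

    def pattern_cost(numel, dtype_bytes, pattern, n, bw_bytes_per_s):
        # Replicated weights all-reduce full gradients; split weights
        # aggregate partial results of roughly 1/n the size.
        size = numel if pattern == "replicate" else numel // n
        return ring_allreduce_bytes(size, dtype_bytes, n) / bw_bytes_per_s

    def plan_cost(critical_path, n, bw_bytes_per_s):
        # Total cost: sum of pattern costs along the critical path (Sec. IV-F).
        return sum(pattern_cost(numel, db, pat, n, bw_bytes_per_s)
                   for numel, db, pat in critical_path)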
As the sketch suggests, each sharding pattern is associated with a cost, and the total cost is calculated by summing all pattern costs along the critical path.

G. Graph Rewriting

After evaluating the cost of each sharding plan, TAP assembles the parallel plan. It does so by first restoring the original order of operators. Then, TAP identifies optimization opportunities that can be performed through gradient packing. In the end, TAP passes the resulting parallelized neural network plan to the deep learning framework runtime.

H. Limitation and Future Work

To further optimize the memory consumption, TAP could leverage other orthogonal techniques such as Auto Mixed Precision (AMP) [1], recomputation [5], and pipeline parallelism. Since both AMP and TAP optimize on the graph representation of the neural network, they can be made into different passes. Also, gradient checkpointing can be used to offload the selected GraphNode onto the main memory. TAP may also be used with pipeline parallelism through automatic [9], [12], [15], [16] or manual placements.

V. PRELIMINARY EVALUATION

A. Setup

We first evaluate the pruning algorithm and the use of Just-In-Time compilation for TAP. Then, for comparison with another auto-parallel framework, we use Alpa version 0.7 running with JAX 0.3.5. Next, we use Megatron running on PyTorch to compare against expert-engineered tensor parallel plans. Finally, we present the training convergence running gigantic neural networks.

The evaluation was performed on Company A's public cloud node with 756GB main memory, 2× Intel 8163 CPUs at 24 cores each, and 8× Nvidia V100 SXM2 32GB GPUs. Additionally, TAP builds on top of TensorFlow 1.12.

B. End-to-End Evaluation

In this section, we compare TAP with the auto-parallel framework Alpa on search time and the performance of the discovered plan.

1) Search time: As explained earlier, TAP has a sub-linear time complexity, which is desirable as the model size scales up. In the experiments with Alpa, we present the end-to-end search time with respect to model scaling, defined by the duration from the start of the experiment till the moment that the training process begins. Due to time constraints, we shortlisted a search space of 16 plans for T5 and 5 plans for ResNet, while we did not restrict the search space for TAP.

[Fig. 4. End-to-end search time (minutes, linear and log scale) when scaling the number of parameters (100M-1.4B) for the dense transformer model, Alpa vs. TAP.]

To scale the model along the depth, we increase the number of transformer layers for T5, an encoder-decoder transformer architecture for language modeling. Increasing the depth of dense transformer models is a common practice to improve performance. Fig. 4 shows that, with rising parameters, TAP can still find a plausible schedule in under 15 mins, which is 21×-67× faster than Alpa.

To scale the model size along the width for the ResNet50 model, we choose to increase the size of the classification layer. The original ResNet50 model has 1024 classes in the classification layer. As we increase the dimensions for the classification layer, the total number of parameters also scales up. As shown in Fig. 5, TAP is two orders of magnitude faster than Alpa in finding the optimal solution.
Our system outperforms it by 103×-162×.

We further analyze the time breakdown during the search. For example, for the 24-layer T5-large (770M parameters), Alpa spent 5 mins profiling the operators and 5 mins constructing the pipeline stages out of the operators. Instead, TAP reduces the architecture to one transformer block and searches for shardable parameters within that only, drastically reducing the search space. As a result, Alpa takes 197 minutes to search for 16 candidate plans, while TAP requires only 6 minutes to examine 729 candidate plans.

[Fig. 5. End-to-end search time (minutes, linear and log scale) when scaling the number of classes (1024-400k) for the large-scale classification model, Alpa vs. TAP.]

[Fig. 6. Training time per iteration for T5 (batch size=16). The blue band represents the standard deviation.]

2) Training speed: We also evaluate the performance of the best plans produced by Alpa and TAP. We observe that Alpa favors pipeline parallel schedules, while the optimal schedule found by TAP is similar to the Megatron-style tensor parallel schedule. Since the plans using pipeline parallelism require less communication, the plans from Alpa have a higher throughput.

We also observe that as the width of the model increases, the performance of TAP plans is better and more consistent. Fig. 7 shows the time to finish one iteration of training for parallel plans of ResNet50. We first observe that TAP consistently outperforms Alpa. Further, the variance (blue band) in plans discovered by Alpa shows that it struggles to find consistently good plans.

[Fig. 7. Training time per iteration for ResNet50 (batch size=1024).]

VI. CONCLUSION

We present TAP, an automatic parallelism framework that efficiently discovers tensor parallel plans for large models. Leveraging the observation that shared subgraphs widely exist in neural networks, we design a pruning algorithm that efficiently reduces the search space with a sub-linear end-to-end complexity. The best plans found by TAP are comparable with the state-of-the-art expert-engineered plans while only taking minutes to discover.
yNq4OOJjcC
Good improvements but not enough technical detail.
5: Marginally below acceptance threshold
I enjoyed reading the paper, and found the approach of finding subgraphs in large neural network graphs and exploring parallelism schemes in these subgraphs to reduce the search space sensible. I was a bit disappointed not to see any detail on the GraphNode IR, especially since this is listed as a main contribution.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user
### Paper Title
TAP: Efficient Derivation of Tensor Parallel Plans for Large Neural Networks
### Paper Abstract
Model parallelism is essential to train large language models efficiently. However, determining the optimal model parallel schedule for a given neural network can be slow and inefficient due to the vast choice space. To address this challenge, we propose a tensor model parallelism framework called TAP, which automatically searches for the best data and tensor parallel schedules. Our approach is based on the observation that a neural network can be represented as a directed acyclic graph, within which only exists a limited set of frequent subgraphs. With that, we design a graph pruning algorithm that efficiently folds the search space. As a result, TAP runs at sub-linear complexity with respect to model size, which makes it a practical solution for large-scale networks. Experimental results demonstrate that TAP outperforms the state-of-the-art automatic parallelism frameworks by $20-160\times$ in searching time. Moreover, the performance of TAP's discovered schedules is competitive with expert-engineered ones. In summary, TAP provides a powerful and efficient tool for model parallelism that can help alleviate the burden of manual tuning.
### Paper Keywords
["distributed learning", "machine learning system", "model parallelism"]
### Paper Content
Architecture and System Support for Transformer Models (ASSYST), ISCA, 2023

TAP: Efficient Derivation of Tensor Parallel Plans for Large Neural Networks
Ziji Shi∗†, Le Jiang†, Jie Zhang†, Xianyan Jia†, Yong Li†, Chencan Wu†, Jialin Li∗, Wei Lin†
∗National University of Singapore, †Alibaba Group

Abstract: Model parallelism is essential to train large language models efficiently. However, determining the optimal model parallel schedule for a given neural network can be slow and inefficient due to the vast choice space. To address this challenge, we propose a tensor model parallelism framework called TAP, which automatically searches for the best data and tensor parallel schedules. Our approach is based on the observation that a neural network can be represented as a directed acyclic graph, within which only exists a limited set of frequent subgraphs. With that, we design a graph pruning algorithm that efficiently folds the search space. As a result, TAP runs at sub-linear complexity with respect to model size, which makes it a practical solution for large-scale networks. Experimental results demonstrate that TAP outperforms the state-of-the-art automatic parallelism frameworks by 20-160× in searching time. Moreover, the performance of TAP's discovered schedules is competitive with expert-engineered ones. In summary, TAP provides a powerful and efficient tool for model parallelism that can help alleviate the burden of manual tuning.

I. INTRODUCTION

Recent years have witnessed a burgeoning of large deep neural networks (DNNs) that deliver unprecedented accuracy across a wide range of AI tasks. The rate of DNN model size increase, however, has far surpassed the growth in accelerator memory capacity. To address this challenge, model parallelism has been proposed, where model weights are sharded onto multiple devices during distributed DNN training.

There are two main paradigms in model parallelism: pipeline parallelism and tensor parallelism. Pipeline parallelism divides the model into layers.
Only activations are communicated during the forward pass, while gradient tensors are exchanged in the backward phase. Pipeline parallelism has recently drawn much attention, with many proposed algorithms aiming to find the optimal pipeline schedule that minimizes the pipeline idle time (i.e., "bubble size"). However, pipeline parallelism suffers from two significant drawbacks: 1) each layer must fit into a single accelerator's memory, and 2) interleaving different layers can be challenging for models with imbalanced architectures. As an alternative, tensor parallelism partitions the model weights and distributes them to multiple devices, thus lifting the restriction on the size of individual layers. In this work, we focus on tensor parallelism.

Manual specification of tensor parallelism is a daunting task, given that the quality of a partitioning scheme depends on the neural network architecture and the hardware system. To address this challenge, automatic parallelism approaches have been proposed which leverage user hints or guided searches over the entire partitioning candidate space. We argue that a brute-force search of the space is unnecessary in the majority of cases. Our research makes two key observations: firstly, most neural networks include shared subgraphs that can significantly reduce the search space; secondly, communication is the primary bottleneck during tensor parallelism training, and contiguous partitions in a block cannot overlap. Therefore, the search process can be accelerated by only searching for unique neural network sub-modules and evaluating candidate strategies based on their communication cost.

Based on those observations, we present TAP, a deep learning framework that automatically derives tensor-parallel plans for arbitrary neural networks without requiring expert annotations. TAP first constructs a skimmed DAG by removing auxiliary nodes, then it finds all of the shared subgraphs and searches for the optimal sharding schedule for each of them. In the end, TAP reconstructs the DAG by applying the found solution to the original graph. TAP drastically reduces the search space for tensor parallel plans, achieving 20×-160× speedup compared with the state-of-the-art auto-parallel framework. Evaluations demonstrate that our approach can also generate comparable solutions to the tensor parallel schedules designed by an expert [17].

Our paper makes the following contributions:
• A set of intermediate representations (IRs) of the computational graph that abstract away from low-level implementation details;
• A graph pruning algorithm that exploits the shared sub-structure to facilitate efficient searching;
• A communication-based cost model that accurately captures the communication requirements for tensor-parallel training.

II. BACKGROUND

A. Model Parallelism

Model parallelism distributes model weights onto different devices and synchronizes the full model through collective communication [6]. Model parallelism can be further divided into two categories: pipeline parallelism and tensor parallelism.

1) Tensor Parallelism: Tensor parallelism splits the model layer and distributes it across multiple devices, thus dispersing the computational overhead of the layer [17], [23], [26]. Each device stores only a portion of the input tensors in its local memory. Therefore, the final result needs to be aggregated from partial results through collective communication. Tensor parallelism can alleviate the challenge of training heterogeneous models using pipeline parallelism and can achieve better performance.
B. Automatic Parallelism

Automatic parallelism is a recent line of research on automatically distributing a local model from a single device to multiple devices using the data and model parallel strategies. Existing approaches for automatic parallelism rely on user hints or brute-force searches across the entire space.

1) User hint: User-hint-based automatic parallelism scales single-device programs to multi-device systems by incorporating user annotations. For instance, GSPMD [26] infers the operator partitioning scheme based on user annotations, while Whale [12] allows for the inclusion of user hints for semi-auto parallelization of large models and introduces a hardware-aware load balance algorithm. However, user-hint-based automatic parallelism approaches require users to possess a deep understanding of both the system and model, and hard-coded user hints may not be transferable when either the model or system changes.

2) Search algorithm: Recent work has proposed fully automatic approaches based on search algorithms to optimize distributed DNN training. For example, Tofu [25] uses a recursive search algorithm based on dynamic programming and DNN-specific heuristics to minimize communication for the entire dataflow graph. FlexFlow [13] employs randomized search to find the best parallel strategy in the SOAP (Sample, Operator, Attribute, and Parameter) space. Alpa [28] optimizes large DL models through two-level optimizations: inter-operator and intra-operator. It enables inter-operator parallelism using dynamic programming and intra-operator parallelism with integer linear programming. Unity [24] represents parallelization and algebraic transformations as substitutions on a unified graph representation, uses a novel hierarchical search algorithm to identify an optimized sequence of graph substitutions, and scales to large numbers of GPUs and complex DNNs.

3) Challenge of exploding search space: Search-based approaches face the challenge of exploding search space as model size scales, resulting in significant time costs. For example, each tensor (assuming 2D) can be partitioned in three ways: not sharded, sharded along the first dimension (row-wise), or sharded along the second dimension (column-wise). Given a neural network G(E, V) with V weight tensors, there exist 3^V possible sharding plans. Therefore, finding an optimal sharding plan is an NP-hard problem.

III. APPROACH

In this section, we formulate the problem of searching for an optimal tensor parallel schedule, followed by our observation of the common presence of shared sub-structures in a large neural network, leading to the motivation of our design.

A. Problem Formulation

A neural network can be represented as a directed acyclic graph G(E, V) comprised of L layers. The set of vertices V represents the operators, and the set of edges E represents the data flow from producer to consumer operators. Operators can optionally carry a weight tensor. During the forward pass, an edge represents an activation tensor, while in the backward phase, it represents a gradient tensor. A layer L_i ∈ L is either a single layer or a cluster of operators with a similar composition.

Let the physical training system be S(m, n), where m is the number of worker nodes and n is the number of accelerators per worker node. A parallel plan p is a new graph mathematically equivalent to G. The cost function Cost(p, S) measures training latency for a given plan and training system. The goal is to find an optimal parallel plan p* where:

  minimize_p  Cost(p, S)
  subject to  p(X) = G(X)  ∀X

How can an automated system find such a plan?
Fig. 1 illustrates the typical workflow of an auto-parallel system. The system first reduces the search space for model splitting using pruning techniques. Next, a search algorithm is employed to generate one or more candidate plans for evaluation. Finally, a cost model evaluates all candidate plans and selects the one with the lowest cost based on predefined evaluation criteria.

[Fig. 1. General recipe of automatic model parallelism frameworks: search space → smaller space → search algorithm → candidate plans → cost model → best plan.]

The end-to-end duration to produce an optimal schedule is a critical metric for an auto-parallel system. We identify three primary factors that contribute to the overall completion time: the size of the search space, the time complexity of the searching algorithm, and the speed of the evaluation method.

B. Challenges and Observations

As mentioned earlier, a major challenge faced by auto-parallel systems is the search space explosion problem. This exponential increase in candidate space has led to impractical search time for modern large models [28] (§ V-B). This creates a dilemma: while auto-parallel systems aim to accelerate large model training, if the derivation step itself is too slow, it may offset the benefit of using an auto-parallel system.

How to effectively reduce this large candidate search space? To answer this question, we studied common scaling techniques for popular DNN models and summarized our findings in Table I.
After pruning, TAP explores thepossible sharding opportunities using pre-defined shardingpatterns(§ IV-D) and validates the candidate plans(§ IV-E).If a valid plan is found, it is evaluated using the costmodel(§ IV-F). TAP takes the overall best plan, performsadditional communication-level optimizations, and rewrites themodel into a parallel version(§ IV-G). To use TAP , users onlyneed to specify the device mesh as shown in the examplebelow.1. Example with TAP on 2 workers each with 8 GPUsimport tensor_auto_parallel as tapmesh = [2, 8]tap.auto_parallel(tap.split(mesh))model_def()B. Intermediate RepresentationTAP defines a family of high-level Intermediate Represen-tations (IRs) to facilitate the derivation of parallel schedules.Compared to MLIR HLO [14], TAP IRs operate on a coarsergranularity while preserving the necessary information forsharding.Upon obtaining the original neural network graph, TAP firsttrims the graph by deleting the auxiliary operators (Step 1in Fig. 2). This will remove the initialization and checkpoint-related operators, which will be recovered when convertedback to a neural network graph later. As a result, the remaininggraph will consist of only computing and communicationoperators.TAP IRs consists of:a) GraphNode.: A GraphNode represents a group ofcomputing or communication operators. It can be a layer or alogical group of operators, which is the basic unit for derivingthe sharding schedule. The TAP graph is made of GraphNodewhile preserving the directed edges from the original DAG.Using the GraphNode IR, we reduce the number of nodes inthe T5-large model from 60k to 1015 weight variables.b) Sharding pattern.: A GraphNode could have multipleways of sharding. For instance, a 2D matrix weight can be spliton either dimension or replicated. TAP defines each shardingpattern using the SRC abstraction. TAP also establishes thecost of each sharding pattern based on communication cost.c) Sharding plan.: A sharding plan is a set of subgraphs(blocks of GraphNodes) with sharding patterns connectingthem.C. Pruning using Shared SubgraphIt is common for DNN models to contain shared subgraphs.If we could identify the shared subgraphs, we could prunethe search space by searching only within the subgraph. Wepropose a graph pruning algorithm to compress the searchspace into a shared structure (Step 2):In deep learning frameworks like TensorFlow [2], eachvariable is referred to by the operator that produces it. As such,variables under the same layer share the same name scopebecause they receive input from the same operator. Therefore,it is possible to cluster operators that fall under the same namescope.3InputOutputInputNeural NetworkShardingPlan ExplorerCost ModelInputParallelized Neural Network1Convert2Prune3Search4Query5RewriteOutputOutputCompute/communication opAuxiliary opIn/OutEntry pointFig. 2. Overview of the TAP system.Algorithm 1 Graph Pruning1:procedure PRUNE GRAPH (modelDef, minDuplicate )2: nodeTree ← ∅3: maxDepth ←modelDef.depth4: for all depth∈maxDepth ···1do5: nodeTree [depth ] ←longestCommonPrefix (modelDef.nodes.name )6: opCount =findSimilarBlk (nodeTree [depth ])7: ifopCount ≥minDuplicate then8: subgraphs.append (nodeTree [depth ])9: else10: break11: end if12: end for13: return subgraphs14:end procedureAlgorithm 1 starts by constructing a nodeTree , which iden-tifies and groups the GraphNodes on each level by using thelongest common prefix algorithm on the GraphNodes names(line 2-5). 
After that, it finds the blocks of GraphNodes witha similar composition of operators and compares the numberof operators with the minimum duplicate threshold (line 7).As the depth decreases, we will see a larger subgraph withless homogeneous compositions. Notice that multiple sharedsubgraphs may exist since a neural network may have multipleleaf nodes.D. Sharding Plan GeneratorA sharding pattern, defining the way a GraphNode canbe sharded, also serves as the directed edge between nodes.According to the SRC abstractions, the communication patternis determined once the split/replica decision is made. UnderAlgorithm 2 Derivation of Optimal Plan1:procedure DERIVE PLAN(modelDef, shardingPatterns )2: subgraphs ←PruneGraph (modelDef )3: candidatePlans ←enumerateAllPlans (subgraphs )4: validPlans ← {}5: for all p∈candidatePlans do6: validated ←PatternRouting (p)7: ifvalidated then8: validPlans.insert (p)9: end if10: end for11: bestPlan ←min(QueryCost (validPlans ))12: return bestPlan13:end procedurethe hood, the sharding patterns connect to each other like achain.After pruning, TAP proceeds to derive the optimal plan(Step 3and 4) using Algorithm 2. In the first phase, TAPenumerates all possible sharding plans given the subgraphs.TAP only needs to work on hundreds of plans thanks topruning. However, not every plan is valid because we onlyhave weekly connected subgraphs. Therefore, the candidateplans need to be validated by checking the connectivity (line5-10). Upon checking, TAP evaluates the performance of eachplan using a cost model and selects the best plan.E. Pattern RoutingIn the pattern routing step (Algorithm 3), TAP tries toassemble the weakly connected GraphNodes into a validsharding plan by checking the connectivity. This is to ensurethe success of graph rewriting (Step 5). TAP does so usingbreadth-first-search (BFS) starting from the root node, and the4Algorithm 3 Plan Validation1:procedure PATTERN ROUTING (currPlan )2: TopoSort (currPlan )3: nodesQ ←currPlan.root4: while nodesQ ̸=∅do5: currNode ←nodesQ.dequeue ()6: for all childNode ∈currNode.next ()do7: sp←lookUpShrdPatn (currNode, childNode )8: ifsp̸=∅then9: ifchildNode ==currPlan.leaf then10: return TRUE11: else12: nodeQ.enqueue (childNode )13: end if14: end if15: end for16: end while17: return FALSE18:end proceduregoal is to make sure there exists at least a connected path fromthe root to the leaf chained using the sharding patterns.One challenge is that a pair of contracting sharding patternsmay have different input and output tensors, and a consumeroperator’s input is not ready until its producer is ready. Inother words, dependencies exist between GraphNodes, but theinformation was kept in the original edges and could be lostwhen we perform pruning.To solve it, we perform a topological search for the GraphN-ode based on the readiness of the input tensor. We leveragethat neural networks can be represented using a directedacyclic graph, and reconstruct the edges based on the orderof the producer-consumer relationship. This way, TAP avoidschecking the order for every pair of GraphNodes.F . Cost ModelTo build a cost model, we first profile different tensorparallel plans to understand the bottleneck. Fig. 3 summarizesthe result. Data were collected from two nodes interconnectedby 32 Gbps Ethernet, each equipped with 8 GPUs. 
We observethat inter-node communication is the main bottleneck fortensor parallelism , and the best plan is not necessarily theone that splits every weight tensor , in line with [6].As the number of devices increases from 8×to16×, thedifference between communication time and computation timeis further pronounced. This is because the bottleneck hasshifted from high-speed intra-node communication (PCI-e) toslower inter-node communication (Ethernet).Furthermore, the best tensor parallel plan for 16 GPUs(16w-FFN ) only shards the weight in the feed-forward layer.We conjecture that with more tensors split instead of repli-cated, there are fewer FLOPs per device and the computationtime is lower. However, this comes at the cost of having morecommunication. In the case of training in the data center wherenodes are interconnected by Ethernet, the speed bottleneck8w-DP8w-MHA 8w-FFN8w-Megatron16w-DP16w-MHA 16w-FFN16w-Megatron02500500075001000012500Sec/iteration (ms)Time breakdown for tensor parallel plansComputationCommunicationFig. 3. Time breakdown for tensor parallel plans on T5-large model on 8 and16 GPUs (8w/16w). DP means data parallel, MHA means sharding the multi-head attention, FFN means sharding the feed-forward layer, and Megatronrefers to the tensor sharding plan described in [17].may shift from computation to communication instead. There-fore, communication cost is the main consideration when wedesign the cost model.TAP addresses these issues using an analytical cost modelbased on the tensor’s communication method, shape, and dataformat. Each sharding pattern is associated with a cost, andthe total cost is calculated by summing all pattern costs alongthe critical path.G. Graph RewritingAfter evaluating the cost of each sharding plan, TAP assem-bles the parallel plan. It does so by first restoring the originalorder of operators. Then, TAP identifies optimization opportu-nities that can be performed through gradient packing. In theend, TAP passes the resulting parallelized neural network planto the deep learning framework runtime.H. Limitation and Future WorkTo further optimize the memory consumption, TAP couldleverage other orthogonal techniques such as Auto MixedPrecision (AMP) [1], recomputation [5], and pipeline par-allelism. Since both AMP and TAP optimize on the graphrepresentation of the neural network, they can be made intodifferent passes. Also, gradient checkpointing can be used tooffload the selected GraphNode onto the main memory. TAPmay also be used with pipeline parallelism through automatic[9], [12], [15], [16] or manual placements.V. P RELIMINARY EVALUATIONA. SetupWe first evaluate the pruning algorithm and the use of Just-In-Time compilation for TAP . Then, for comparison withanother auto-parallel framework, we use Alpa version 0.7running with JAX 0.3.5. Next, we use Megatron running onPyTorch to compare against expert-engineered tensor parallel5plans. Finally, we present the training convergence runninggigantic neural networks.The evaluation was performed on Company A’s public cloudnode with 756GB main memory, 2×Intel 8163 CPUs at24 cores each, and 8×Nvidia V100 SXM2 32GB GPUs.Additionally, TAP builds on top of TensorFlow 1.12.B. End-to-End EvaluationIn this section, we compare TAP with auto-parallel frame-work Alpa on search time and performance of the discoveredplan.1) Search time.: As explained in § ??, TAP has a sub-linear time complexity, which is desirable when the models’size scales up. 
In the experiments with Alpa, we present theend-to-end search time with respect to model scaling, definedby the duration from the start of the experiment till the momentthat the training process begins. Due to time constraints, weshortlisted a search space of 16 plans for T5 and 5 plans forResNet, while we did not restrict the search space for TAP .100M 200M 350M 770M 1.4BNumber of Parameters0200linear scaleSearch time (minutes) - T5 ModelAlpaTAP100M 200M 350M 770M 1.4BNumber of Parameters100101102log scaleFig. 4. End-to-end search time when scaling on the number of parametersfor dense transformer model.To scale the model along the depth, we increase the numberof transformer layers for T5, an encoder-decoder transformerarchitecture for language modeling. Increasing the depth ofdense transformer models is a common practice to improveperformance. Fig. 4 shows that, with rising parameters, TAPcan still find a plausible schedule in under 15 mins, which is21× −67×faster than Alpa.To scale the model size along the width for the ResNet50model, we choose to increase the size of the classificationlayer. The original ResNet50 model has 1024 classes in theclassification layer. As we increase the dimensions for theclassification layer, the total number of parameters also scalesup. As shown in Fig. 5, TAP is two orders of magnitudefaster than Alpa in finding the optimal solution. Our systemoutperforms it by 103× −162×.We further analyze the time breakdown during the search.For example, for 24-layer T5-large (770M parameters), Alpaspent 5 mins profiling the operators and 5 mins constructing1024 10k 100k 250k 400kNumber of Classes050100linear scaleSearch time (minutes) - ResNet50 ModelAlpaTAP1024 10k 100k 250k 400kNumber of Classes100101102log scaleFig. 5. End-to-end search time when scaling on the number of parametersfor the large-scale classification model.the pipeline stages out of the operators. Instead, TAP reducesthe architecture to one transformer block and searches forshardable parameters within that only, drastically reducing thesearch space. As a result, Alpa takes 197 minutes to searchfor 16 candidate plans, while TAP requires only 6 minutes toexamine 729 candidate plans.100M 200M 300M 760MNumber of Parameters0.00.20.40.60.8Iteration time (sec)Iteration time - T5AlpaTAPFig. 6. Training time per iteration for T5 (batch size=16). The blue bandrepresents the standard derivation.2) Training speed.: We also evaluate the performance ofthe best plans produced by Alpa and TAP . We observe thatAlpa favors pipeline parallel schedules, while the optimalschedule found by TAP is similar to the Megatron-style tensorparallel schedule. Since the plans using pipeline parallelismrequire less communication, the plans from Alpa have a higherthroughput.We also observe that as the width of the model increases, theperformance of TAP plans is better and more consistent. Fig. 7shows the time to finish one iteration of training for parallelplans of ResNet50. We first observe that TAP consistently61024 10k 100k 250k 400kNumber of classes0.00.51.01.52.0Iteration time (sec)Iteration time - ResNet50AlpaTAPFig. 7. Training time per iteration for ResNet50 (batch size=1024).outperforms Alpa. Further, the variance (blue band) in plansdiscovered by Alpa shows that it struggles to find consistentlygood plans.VI. 
VI. CONCLUSION

We present TAP, an automatic parallelism framework that efficiently discovers tensor parallel plans for large models. Leveraging the observation that shared subgraphs widely exist in neural networks, we design a pruning algorithm that efficiently reduces the search space with a sub-linear end-to-end complexity. The best plans found by TAP are comparable with the state-of-the-art expert-engineered plans while only taking minutes to discover.<|im_end|> <|im_start|>assistant
### Review Title
Good improvements but not enough technical detail.
### Review Text
I enjoyed reading the paper, and found the approach of finding subgraphs in large neural network graphs and exploring parallelism schemes in these subgraphs to reduce the search space sensible. I was a bit disappointed not to see any detail on the GraphNode IR, especially since this is listed as a main contribution.
### Review Rating
5: Marginally below acceptance threshold
### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
SJi9WOeRb
ICLR.cc/2018/Conference
2018
Gradient Estimators for Implicit Models
["Yingzhen Li", "Richard E. Turner"]
Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions. The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models. This paper alleviates the need for such approximations by proposing the \emph{Stein gradient estimator}, which directly estimates the score function of the implicitly defined distribution. The efficacy of the proposed estimator is empirically demonstrated by examples that include meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity.
["Implicit Models", "Approximate Inference", "Deep Learning"]
ABSTRACT

Implicit models, which allow for the generation of samples but not for point-wise evaluation of probabilities, are omnipresent in real-world problems tackled by machine learning and a hot topic of current research. Some examples include data simulators that are widely used in engineering and scientific research, generative adversarial networks (GANs) for image synthesis, and hot-off-the-press approximate inference techniques relying on implicit distributions. The majority of existing approaches to learning implicit models rely on approximating the intractable distribution or optimisation objective for gradient-based optimisation, which is liable to produce inaccurate updates and thus poor models. This paper alleviates the need for such approximations by proposing the Stein gradient estimator, which directly estimates the score function of the implicitly defined distribution. The efficacy of the proposed estimator is empirically demonstrated by examples that include gradient-free MCMC, meta-learning for approximate inference and entropy regularised GANs that provide improved sample diversity.

1 INTRODUCTION

Modelling is fundamental to the success of technological innovations for artificial intelligence. A powerful model learns a useful representation of the observations for a specified prediction task, and generalises to unknown instances that follow similar generative mechanics. A well established area of machine learning research focuses on developing prescribed probabilistic models (Diggle & Gratton, 1984), where learning is based on evaluating the probability of observations under the model. Implicit probabilistic models, on the other hand, are defined by a stochastic procedure that allows for direct generation of samples, but not for the evaluation of model probabilities. These are omnipresent in scientific and engineering research involving data analysis, for instance ecology, climate science and geography, where simulators are used to fit real-world observations to produce forecasting results. Within the machine learning community there is a recent interest in a specific type of implicit models, generative adversarial networks (GANs) (Goodfellow et al., 2014), which have been shown to be one of the most successful approaches to image and text generation (Radford et al., 2016; Yu et al., 2017; Arjovsky et al., 2017; Berthelot et al., 2017). Very recently, implicit distributions have also been considered as approximate posterior distributions for Bayesian inference, e.g. see Liu & Feng (2016); Wang & Liu (2016); Li & Liu (2016); Karaletsos (2016); Mescheder et al. (2017); Huszár (2017); Li et al. (2017); Tran et al. (2017). These examples demonstrate the superior flexibility of implicit models, which provide highly expressive means of modelling complex data structures.

Whilst prescribed probabilistic models can be learned by standard (approximate) maximum likelihood or Bayesian inference, implicit probabilistic models require substantially more severe approximations due to the intractability of the model distribution. Many existing approaches first approximate the model distribution or optimisation objective function and then use those approximations to learn the associated parameters. However, for any finite number of data points there exists an infinite number of functions, with arbitrarily diverse gradients, that can approximate perfectly the objective function at the training datapoints, and optimising such approximations can lead to unstable training and poor results.
Recent research on GANs, where the issue is highly prevalent, suggests that restricting the representational power of the discriminator is effective in stabilising training (e.g. see Arjovsky et al., 2017; Kodali et al., 2017). However, such restrictions often introduce undesirable biases, responsible for problems such as mode collapse in the context of GANs, and the underestimation of uncertainty in variational inference methods (Turner & Sahani, 2011).

[Figure 1: A comparison between the two approximation schemes, (a) approximate loss function and (b) approximate gradients. Since in practice the optimiser only visits a finite number of locations in the parameter space, it can lead to over-fitting if the neural network based functional approximator is not carefully regularised, and therefore the curvature information of the approximated loss can be very different from that of the original loss (shown in (a)). On the other hand, the gradient approximation scheme (b) can be more accurate since it only involves estimating the sensitivity of the loss function to the parameters in a local region.]

In this paper we explore approximating the derivative of the log density, known as the score function, as an alternative method for training implicit models. An accurate approximation of the score function then allows the application of many well-studied algorithms, such as maximum likelihood, maximum entropy estimation, variational inference and gradient-based MCMC, to implicit models. Concretely, our contributions include:

• the Stein gradient estimator, a novel generalisation of the score matching gradient estimator (Hyvärinen, 2005), that includes both parametric and non-parametric forms;
• a comparison of the proposed estimator with the score matching and the KDE plug-in estimators on performing gradient-free MCMC, meta-learning of approximate posterior samplers for Bayesian neural networks, and entropy based regularisation of GANs.

2 LEARNING IMPLICIT PROBABILISTIC MODELS

Given a dataset $\mathcal{D}$ containing i.i.d. samples we would like to learn a probabilistic model $p(x)$ for the underlying data distribution $p_{\mathcal{D}}(x)$. In the case of implicit models, $p(x)$ is defined by a generative process. For example, to generate images, one might define a generative model $p(x)$ that consists of sampling randomly a latent variable $z \sim p_0(z)$ and then defining $x = f_\theta(z)$. Here $f_\theta$ is a function parametrised by $\theta$, usually a deep neural network or a simulator. We assume $f_\theta$ to be differentiable w.r.t. $\theta$. An extension to this scenario is presented by conditional implicit models, where the addition of a supervision signal $y$, such as an image label, allows us to define a conditional distribution $p(x|y)$ implicitly by the transformation $x = f_\theta(z, y)$. A related methodology, wild variational inference (Liu & Feng, 2016; Li & Liu, 2016) assumes a tractable joint density $p(x, z)$, but uses implicit proposal distributions to approximate an intractable exact posterior $p(z|x)$. Here the approximate posterior $q(z|x)$ can likewise be represented by a deep neural network, but also by a truncated Markov chain, such as that given by Langevin dynamics with learnable step-size.

Whilst providing extreme flexibility and expressive power, the intractability of density evaluation also brings serious optimisation issues for implicit models. This is because many learning algorithms, e.g.
maximum likelihood estimation (MLE), rely on minimising a distance/divergence/discrepancy measure $\mathrm{D}[p \| p_{\mathcal{D}}]$, which often requires evaluating the model density (c.f. Ranganath et al., 2016; Liu & Feng, 2016). Thus good approximations to the optimisation procedure are the key to learning implicit models that can describe complex data structure. In the context of GANs, the Jensen-Shannon divergence is approximated by a variational lower-bound represented by a discriminator (Barber & Agakov, 2003; Goodfellow et al., 2014). Related work for wild variational inference (Li & Liu, 2016; Mescheder et al., 2017; Huszár, 2017; Tran et al., 2017) uses a GAN-based technique to construct a density ratio estimator for $q / p_0$ (Sugiyama et al., 2009; 2012; Uehara et al., 2016; Mohamed & Lakshminarayanan, 2016) and then approximates the KL-divergence term in the variational lower-bound:

$$\mathcal{L}_{VI}(q) = \mathbb{E}_q[\log p(x|z)] - \mathrm{KL}[q(z|x) \| p_0(z)]. \qquad (1)$$

In addition, Li & Liu (2016) and Mescheder et al. (2017) exploit the additive structure of the KL-divergence and suggest discriminating between $q$ and an auxiliary distribution that is close to $q$, making the density ratio estimation more accurate. Nevertheless all these algorithms involve a minimax optimisation, and the current practice of gradient-based optimisation is notoriously unstable. The stabilisation of GAN training is itself a recent trend of related research (e.g. see Salimans et al., 2016; Arjovsky et al., 2017). However, as the gradient-based optimisation only interacts with gradients, there is no need to use a discriminator if an accurate approximation to the intractable gradients could be obtained. As an example, consider a variational inference task with the approximate posterior defined as $z \sim q_\phi(z|x) \Leftrightarrow \epsilon \sim \pi(\epsilon),\ z = f_\phi(\epsilon, x)$. Notice that the variational lower-bound can be rewritten as

$$\mathcal{L}_{VI}(q) = \mathbb{E}_q[\log p(x, z)] + \mathbb{H}[q_\phi(z|x)], \qquad (2)$$

the gradient of the variational parameters $\phi$ can be computed by a sum of the path gradient of the first term (i.e. $\mathbb{E}_\pi[\nabla_f \log p(x, f_\phi(\epsilon, x))^{\mathrm{T}} \nabla_\phi f_\phi(\epsilon, x)]$) and the gradient of the entropy term $\nabla_\phi \mathbb{H}[q_\phi(z|x)]$. Expanding the latter, we have

$$\begin{aligned}
\nabla_\phi \mathbb{H}[q_\phi(z|x)] &= -\nabla_\phi \mathbb{E}_{\pi(\epsilon)}[\log q_\phi(f_\phi(\epsilon, x))] \\
&= -\mathbb{E}_{\pi(\epsilon)}[\nabla_\phi \log q_\phi(f_\phi(\epsilon, x))] \\
&= -\mathbb{E}_{\pi(\epsilon)}[\nabla_\phi \log q_\phi(z|x)|_{z = f_\phi(\epsilon, x)} + \nabla_f \log q_\phi(f_\phi(\epsilon, x)|x) \nabla_\phi f_\phi(\epsilon, x)] \\
&= -\mathbb{E}_{q_\phi(z|x)}[\nabla_\phi \log q_\phi(z|x)] - \mathbb{E}_{\pi(\epsilon)}[\nabla_f \log q_\phi(f_\phi(\epsilon, x)|x) \nabla_\phi f_\phi(\epsilon, x)],
\end{aligned} \qquad (3)$$

in which the first term in the last line is zero (Roeder et al., 2017). As we typically assume the tractability of $\nabla_\phi f_\phi$, an accurate approximation to $\nabla_z \log q_\phi(z|x)$ would remove the requirement of discriminators, speed-up the learning and obtain potentially a better model. Many gradient approximation techniques exist (Stone, 1985; Fan & Gijbels, 1996; Zhou & Wolfe, 2000; De Brabanter et al., 2013), and in particular, in the next section we will review kernel-based methods such as kernel density estimation (Singh, 1977) and score matching (Hyvärinen, 2005) in more detail, and motivate the main contribution of the paper.

3 GRADIENT APPROXIMATION WITH THE STEIN GRADIENT ESTIMATOR

We propose the Stein gradient estimator as a novel generalisation of the score matching gradient estimator. Before presenting it we first set up the notation. Column vectors and matrices are boldfaced. The random variable under consideration is $x \in \mathcal{X}$ with $\mathcal{X} = \mathbb{R}^{d \times 1}$ if not specifically mentioned. To avoid misleading notation we use the distribution $q(x)$ to derive the gradient approximations for general cases. As Monte Carlo methods are heavily used for implicit models, in the rest of the paper we mainly consider approximating the gradient $g(x^k) := \nabla_{x^k} \log q(x^k)$ for $x^k \sim q(x),\ k = 1, \dots, K$. We use $x^i_j$ to denote the $j$-th element of the $i$-th sample $x^i$.
We also denote the matrix form of the collected gradients as $\mathbf{G} := (\nabla_{x^1} \log q(x^1), \cdots, \nabla_{x^K} \log q(x^K))^{\mathrm{T}} \in \mathbb{R}^{K \times d}$, and its approximation $\hat{\mathbf{G}} := (\hat{g}(x^1), \cdots, \hat{g}(x^K))^{\mathrm{T}}$ with $\hat{g}(x^k) = \nabla_{x^k} \log \hat{q}(x^k)$ for some $\hat{q}(x)$.

3.1 STEIN GRADIENT ESTIMATOR: INVERTING STEIN'S IDENTITY

We start from introducing Stein's identity that was first developed for Gaussian random variables (Stein, 1972; 1981) then extended to general cases (Gorham & Mackey, 2015; Liu et al., 2016). Let $h : \mathbb{R}^{d \times 1} \to \mathbb{R}^{d' \times 1}$ be a differentiable multivariate test function which maps $x$ to a column vector $h(x) = [h_1(x), h_2(x), \dots, h_{d'}(x)]^{\mathrm{T}}$. We further assume the boundary condition for $h$:

$$q(x) h(x) \big|_{\partial \mathcal{X}} = \mathbf{0}, \quad \text{or} \quad \lim_{x \to \infty} q(x) h(x) = 0 \ \text{if} \ \mathcal{X} = \mathbb{R}^d. \qquad (4)$$

This condition holds for almost any test function if $q$ has sufficiently fast-decaying tails (e.g. Gaussian tails). Now we introduce Stein's identity (Stein, 1981; Gorham & Mackey, 2015; Liu et al., 2016)

$$\mathbb{E}_q[h(x) \nabla_x \log q(x)^{\mathrm{T}} + \nabla_x h(x)] = \mathbf{0}, \qquad (5)$$

in which the gradient matrix term $\nabla_x h(x) = (\nabla_x h_1(x), \cdots, \nabla_x h_{d'}(x))^{\mathrm{T}} \in \mathbb{R}^{d' \times d}$. This identity can be proved using integration by parts: for the $i$-th row of the matrix $h(x) \nabla_x \log q(x)^{\mathrm{T}}$, we have

$$\mathbb{E}_q[h_i(x) \nabla_x \log q(x)^{\mathrm{T}}] = \int h_i(x) \nabla_x q(x)^{\mathrm{T}} dx = q(x) h_i(x) \big|_{\partial \mathcal{X}} - \int q(x) \nabla_x h_i(x)^{\mathrm{T}} dx = -\mathbb{E}_q[\nabla_x h_i(x)^{\mathrm{T}}]. \qquad (6)$$

Observing that the gradient term $\nabla_x \log q(x)$ of interest appears in Stein's identity (5), we propose the Stein gradient estimator by inverting Stein's identity. As the expectation in (5) is intractable, we further approximate the above with Monte Carlo (MC):

$$\frac{1}{K} \sum_{k=1}^{K} h(x^k) \nabla_{x^k} \log q(x^k)^{\mathrm{T}} + \mathrm{err} = -\frac{1}{K} \sum_{k=1}^{K} \nabla_{x^k} h(x^k), \quad x^k \sim q(x^k), \qquad (7)$$

with $\mathrm{err} \in \mathbb{R}^{d' \times d}$ the random error due to MC approximation, which has mean $\mathbf{0}$ and vanishes as $K \to +\infty$. Now by temporarily denoting $\mathbf{H} = (h(x^1), \cdots, h(x^K)) \in \mathbb{R}^{d' \times K}$ and $\overline{\nabla_x h} = \frac{1}{K} \sum_{k=1}^{K} \nabla_{x^k} h(x^k) \in \mathbb{R}^{d' \times d}$, equation (7) can be rewritten as $\frac{1}{K} \mathbf{H} \mathbf{G} + \mathrm{err} = -\overline{\nabla_x h}$. Thus we consider a ridge regression method (i.e. adding an $\ell_2$ regulariser) to estimate $\mathbf{G}$:

$$\hat{\mathbf{G}}^{\mathrm{Stein}}_V := \arg\min_{\hat{\mathbf{G}} \in \mathbb{R}^{K \times d}} \left\| \overline{\nabla_x h} + \frac{1}{K} \mathbf{H} \hat{\mathbf{G}} \right\|_F^2 + \frac{\eta}{K^2} \| \hat{\mathbf{G}} \|_F^2, \qquad (8)$$

with $\| \cdot \|_F$ the Frobenius norm of a matrix and $\eta \geq 0$. Simple calculation shows that

$$\hat{\mathbf{G}}^{\mathrm{Stein}}_V = -(\mathbf{K} + \eta \mathbf{I})^{-1} \langle \nabla, \mathbf{K} \rangle, \qquad (9)$$

where $\mathbf{K} := \mathbf{H}^{\mathrm{T}} \mathbf{H}$, $\mathbf{K}_{ij} = \mathcal{K}(x^i, x^j) := h(x^i)^{\mathrm{T}} h(x^j)$, $\langle \nabla, \mathbf{K} \rangle := K \mathbf{H}^{\mathrm{T}} \overline{\nabla_x h}$, $\langle \nabla, \mathbf{K} \rangle_{ij} = \sum_{k=1}^{K} \nabla_{x^k_j} \mathcal{K}(x^i, x^k)$. One can show that the RBF kernel satisfies Stein's identity (Liu et al., 2016). In this case $h(x) = \mathcal{K}(x, \cdot)$, $d' = +\infty$ and by the reproducing kernel property (Berlinet & Thomas-Agnan, 2011), $h(x)^{\mathrm{T}} h(x') = \langle \mathcal{K}(x, \cdot), \mathcal{K}(x', \cdot) \rangle_{\mathcal{H}} = \mathcal{K}(x, x')$.

3.2 STEIN GRADIENT ESTIMATOR MINIMISES THE KERNELISED STEIN DISCREPANCY

In this section we derive the Stein gradient estimator again, but from a divergence/discrepancy minimisation perspective. Stein's method also provides a tool for checking if two distributions $q(x)$ and $\hat{q}(x)$ are identical. If the test function set $\mathcal{H}$ is sufficiently rich, then one can define a Stein discrepancy measure by

$$\mathcal{S}(q, \hat{q}) := \sup_{h \in \mathcal{H}} \mathbb{E}_q\left[ \nabla_x \log \hat{q}(x)^{\mathrm{T}} h(x) + \langle \nabla, h \rangle \right], \qquad (10)$$

see Gorham & Mackey (2015) for an example derivation. When $\mathcal{H}$ is defined as a unit ball in an RKHS induced by a kernel $\mathcal{K}(x, \cdot)$, Liu et al. (2016) and Chwialkowski et al. (2016) showed that the supremum in (10) can be analytically obtained as (with $\mathcal{K}_{xx'}$ shorthand for $\mathcal{K}(x, x')$):

$$\mathcal{S}^2(q, \hat{q}) = \mathbb{E}_{x, x' \sim q}\left[ (\hat{g}(x) - g(x))^{\mathrm{T}} \mathcal{K}_{xx'} (\hat{g}(x') - g(x')) \right], \qquad (11)$$

which is also named the kernelised Stein discrepancy (KSD). Chwialkowski et al. (2016) showed that for $C_0$-universal kernels satisfying the boundary condition, KSD is indeed a discrepancy measure: $\mathcal{S}^2(q, \hat{q}) = 0 \Leftrightarrow q = \hat{q}$. Gorham & Mackey (2017) further characterised the power of KSD on detecting non-convergence cases.
Furthermore, if the kernel is twice differentiable, then using the same technique as to derive (16) one can compute KSD by

$$\mathcal{S}^2(q, \hat{q}) = \mathbb{E}_{x, x' \sim q}\left[ \hat{g}(x)^{\mathrm{T}} \mathcal{K}_{xx'} \hat{g}(x') + \hat{g}(x)^{\mathrm{T}} \nabla_{x'} \mathcal{K}_{xx'} + \nabla_x \mathcal{K}_{xx'}^{\mathrm{T}} \hat{g}(x') + \mathrm{Tr}(\nabla_{x, x'} \mathcal{K}_{xx'}) \right]. \qquad (12)$$

In practice KSD is estimated with samples $\{x^k\}_{k=1}^{K} \sim q$, and simple derivations show that the V-statistic of KSD can be reformulated as $\mathcal{S}^2_V(q, \hat{q}) = \frac{1}{K^2} \mathrm{Tr}(\hat{\mathbf{G}}^{\mathrm{T}} \mathbf{K} \hat{\mathbf{G}} + 2 \hat{\mathbf{G}}^{\mathrm{T}} \langle \nabla, \mathbf{K} \rangle) + C$. Thus the $\ell_2$ error in (8) is equivalent to the V-statistic of KSD if $h(x) = \mathcal{K}(x, \cdot)$, and we have the following:

Theorem 1. $\hat{\mathbf{G}}^{\mathrm{Stein}}_V$ is the solution of the following KSD V-statistic minimisation problem

$$\hat{\mathbf{G}}^{\mathrm{Stein}}_V = \arg\min_{\hat{\mathbf{G}} \in \mathbb{R}^{K \times d}} \mathcal{S}^2_V(q, \hat{q}) + \frac{\eta}{K^2} \| \hat{\mathbf{G}} \|_F^2. \qquad (13)$$

One can also minimise the U-statistic of KSD to obtain gradient approximations, and a full derivation, including the optimal solution, can be found in the appendix. In experiments we use V-statistic solutions and leave comparisons between these methods to future work.

3.3 COMPARISONS TO EXISTING KERNEL-BASED GRADIENT ESTIMATORS

There exist other gradient estimators that do not require explicit evaluations of $\nabla_x \log q(x)$, e.g. the denoising auto-encoder (DAE) (Vincent et al., 2008; Vincent, 2011; Alain & Bengio, 2014) which, with infinitesimal noise, also provides an estimate of $\nabla_x \log q(x)$ at convergence. However, applying such gradient estimators results in a double-loop optimisation procedure, since the gradient approximation is repeatedly required for fitting implicit distributions, which can be significantly slower than the proposed approach. Therefore we focus on "quick and dirty" approximations and only include comparisons to kernel-based gradient estimators in the following.

3.3.1 KDE GRADIENT ESTIMATOR: PLUG-IN ESTIMATOR WITH DENSITY ESTIMATION

A naive approach for gradient approximation would first estimate the intractable density $\hat{q}(x) \approx q(x)$ (up to a constant), then approximate the exact gradient by $\nabla_x \log \hat{q}(x) \approx \nabla_x \log q(x)$. Specifically, Singh (1977) considered kernel density estimation (KDE) $\hat{q}(x) = \frac{1}{K} \sum_{k=1}^{K} \mathcal{K}(x, x^k) \cdot C$, then differentiated through the KDE estimate to obtain the gradient estimator:

$$\hat{\mathbf{G}}^{\mathrm{KDE}}_{ij} = \sum_{k=1}^{K} \nabla_{x^i_j} \mathcal{K}(x^i, x^k) \Big/ \sum_{k=1}^{K} \mathcal{K}(x^i, x^k). \qquad (14)$$

Interestingly, for translation invariant kernels $\mathcal{K}(x, x') = \mathcal{K}(x - x')$ the KDE gradient estimator (14) can be rewritten as $\hat{\mathbf{G}}^{\mathrm{KDE}} = -\mathrm{diag}(\mathbf{K} \mathbf{1})^{-1} \langle \nabla, \mathbf{K} \rangle$. Inspecting and comparing it with the Stein gradient estimator (9), one might notice that the Stein method uses the full kernel matrix as the pre-conditioner, while the KDE method computes an averaged "kernel similarity" for the denominator. We conjecture that this difference is key to the superior performance of the Stein gradient estimator when compared to the KDE gradient estimator (see later experiments). The KDE method only collects the similarity information between $x^k$ and other samples $x^j$ to form an estimate of $\nabla_{x^k} \log q(x^k)$, whereas for the Stein gradient estimator, the kernel similarity between $x^i$ and $x^j$ for all $i, j \neq k$ are also incorporated. Thus it is reasonable to conjecture that the Stein method can be more sample efficient, which also implies higher accuracy when the same number of samples are collected.

3.3.2 SCORE MATCHING GRADIENT ESTIMATOR: MINIMISING MSE

The KDE gradient estimator performs indirect approximation of the gradient via density estimation, which can be inaccurate. An alternative approach directly approximates the gradient $\nabla_x \log q(x)$ by minimising the expected $\ell_2$ error w.r.t.
the approximation $\hat{g}(x) = (\hat{g}_1(x), \cdots, \hat{g}_d(x))^{\mathrm{T}}$:

$$\mathcal{F}(\hat{g}) := \mathbb{E}_q\left[ \| \hat{g}(x) - \nabla_x \log q(x) \|_2^2 \right]. \qquad (15)$$

It has been shown in Hyvärinen (2005) that this objective can be reformulated as

$$\mathcal{F}(\hat{g}) = \mathbb{E}_q\left[ \| \hat{g}(x) \|_2^2 + 2 \langle \nabla, \hat{g}(x) \rangle \right] + C, \quad \langle \nabla, \hat{g}(x) \rangle = \sum_{j=1}^{d} \nabla_{x_j} \hat{g}_j(x). \qquad (16)$$

The key insight here is again the usage of integration by parts: after expanding the $\ell_2$ loss objective, the cross term can be rewritten as $\mathbb{E}_q\left[ \hat{g}(x)^{\mathrm{T}} \nabla_x \log q(x) \right] = -\mathbb{E}_q\left[ \langle \nabla, \hat{g}(x) \rangle \right]$, if assuming the boundary condition (4) for $\hat{g}$ (see (6)). The optimum of (16) is referred to as the score matching gradient estimator. The $\ell_2$ objective (15) is also called Fisher divergence (Johnson, 2004), which is a special case of KSD (11) by selecting $\mathcal{K}(x, x') = \delta_{x = x'}$. Thus the Stein gradient estimator can be viewed as a generalisation of the score matching estimator.

The comparison between the two estimators is more complicated. Certainly by the Cauchy-Schwarz inequality the Fisher divergence is stronger than KSD in terms of detecting convergence (Liu et al., 2016). However it is difficult to perform direct gradient estimation by minimising the Fisher divergence, since (i) the Dirac kernel is non-differentiable so that it is impossible to rewrite the divergence in a similar form to (12), and (ii) the transformation to (16) involves computing $\nabla_x \hat{g}(x)$. So one needs to propose a parametric approximation to $\mathbf{G}$ and then optimise the associated parameters accordingly, and indeed Sasaki et al. (2014) and Strathmann et al. (2015) derived a parametric solution by first approximating the log density up to a constant as $\log \hat{q}(x) := \sum_{k=1}^{K} a_k \mathcal{K}(x, x^k) + C$, then minimising (16) to obtain the coefficients $\hat{a}^{\mathrm{score}}_k$ and constructing the gradient estimator as

$$\hat{\mathbf{G}}^{\mathrm{score}}_i = \sum_{k=1}^{K} \hat{a}^{\mathrm{score}}_k \nabla_{x^i} \mathcal{K}(x^i, x^k). \qquad (17)$$

Therefore the usage of parametric estimation can potentially remove the advantage of using a stronger divergence. Conversely, the proposed Stein gradient estimator (9) is non-parametric in that it directly optimises over functions evaluated at locations $\{x^k\}_{k=1}^{K}$. This brings in two key advantages over the score matching gradient estimator: (i) it removes the approximation error due to the use of a restricted family of parametric approximations and thus can be potentially more accurate; (ii) it has a much simpler and ubiquitous form that applies to any kernel satisfying the boundary condition, whereas the score matching estimator requires tedious derivations for different kernels repeatedly (see appendix).

In terms of computation speed, since in most of the cases the computation of the score matching gradient estimator also involves kernel matrix inversions, both estimators are of the same order of complexity, which is $\mathcal{O}(K^3 + K^2 d)$ (kernel matrix computation plus inversion). Low-rank approximations such as the Nyström method (Smola & Schölkopf, 2000; Williams & Seeger, 2001) can enable speed-up, but this is not investigated in the paper. Again we note here that kernel-based gradient estimators can still be faster than e.g. the DAE estimator since no double-loop optimisation is required. Certainly it is possible to apply early-stopping for the inner-loop DAE fitting. However the resulting gradient approximation might be very poor, which leads to unstable training and poorly fitted implicit distributions.

3.4 ADDING PREDICTIVE POWER

Though providing potentially more accurate approximations, the non-parametric estimator (9) has no predictive power as described so far.
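To ground that statement, here is a minimal NumPy sketch of the non-parametric estimator (9) with an RBF kernel. The bandwidth, regulariser and sanity check are illustrative choices, not taken from the released implementation.

```python
import numpy as np

def stein_gradient_estimator(X, sigma=1.0, eta=0.1):
    """Non-parametric Stein gradient estimator, eq. (9), with an RBF kernel.

    X: (K, d) samples from q. Returns a (K, d) estimate of grad log q at X.
    For the RBF kernel, <grad, K>_ij = sum_k K_ik * (x^i_j - x^k_j) / sigma^2.
    """
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    Kxx = np.exp(-sq_dists / (2.0 * sigma**2))
    grad_K = (Kxx.sum(axis=1)[:, None] * X - Kxx @ X) / sigma**2
    return -np.linalg.solve(Kxx + eta * np.eye(len(X)), grad_K)

# Sanity check on a standard Gaussian, where grad log q(x) = -x exactly:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
G_hat = stein_gradient_estimator(X)
print(np.mean((G_hat - (-X)) ** 2))  # small mean-squared error expected
```

As the sketch makes explicit, the estimate is produced only at the $K$ samples themselves.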
Crucially, many tasks in machine learning require predicting gradient functions at samples drawn from distributions other than $q$, for example, in MLE $q(x)$ corresponds to the model distribution which is learned using samples from the data distribution instead. To address this issue, we derive two predictive estimators, one generalised from the non-parametric estimator and the other minimising KSD using parametric approximations.

Predictions using the non-parametric estimator. Let us consider an unseen datum $y$. If $y$ is sampled from $q$, then one can also apply the non-parametric estimator (9) for gradient approximation, given the observed data $\mathbf{X} = \{x^1, \dots, x^K\} \sim q$. Concretely, if writing $\hat{g}(y) \approx \nabla_y \log q(y) \in \mathbb{R}^{d \times 1}$ then the non-parametric Stein gradient estimator computed on $\mathbf{X} \cup \{y\}$ is

$$\begin{pmatrix} \hat{g}(y)^{\mathrm{T}} \\ \hat{\mathbf{G}} \end{pmatrix} = -(\mathbf{K}^* + \eta \mathbf{I})^{-1} \begin{pmatrix} \nabla_y \mathcal{K}(y, y) + \sum_{k=1}^{K} \nabla_{x^k} \mathcal{K}(y, x^k) \\ \langle \nabla, \mathbf{K} \rangle + \nabla_y \mathcal{K}(\cdot, y) \end{pmatrix}, \quad \mathbf{K}^* = \begin{pmatrix} \mathbf{K}_{yy} & \mathbf{K}_{y\mathbf{X}} \\ \mathbf{K}_{\mathbf{X}y} & \mathbf{K} \end{pmatrix},$$

with $\nabla_y \mathcal{K}(\cdot, y)$ denoting a $K \times d$ matrix with rows $\nabla_y \mathcal{K}(x^k, y)$, and $\nabla_y \mathcal{K}(y, y)$ only differentiates through the second argument. Then we demonstrate in the appendix that, by simple matrix calculations and assuming a translation invariant kernel, we have (with column vector $\mathbf{1} \in \mathbb{R}^{K \times 1}$):

$$\nabla_y \log q(y)^{\mathrm{T}} \approx -\left( \mathbf{K}_{yy} + \eta - \mathbf{K}_{y\mathbf{X}} (\mathbf{K} + \eta \mathbf{I})^{-1} \mathbf{K}_{\mathbf{X}y} \right)^{-1} \left( \mathbf{K}_{y\mathbf{X}} \hat{\mathbf{G}}^{\mathrm{Stein}}_V - \left( \mathbf{K}_{y\mathbf{X}} (\mathbf{K} + \eta \mathbf{I})^{-1} + \mathbf{1}^{\mathrm{T}} \right) \nabla_y \mathcal{K}(\cdot, y) \right). \qquad (18)$$

In practice one would store the computed gradient $\hat{\mathbf{G}}^{\mathrm{Stein}}_V$, the kernel matrix inverse $(\mathbf{K} + \eta \mathbf{I})^{-1}$ and $\eta$ as the "parameters" of the predictive estimator. For a new observation $y \sim p$ in general, one can "pretend" $y$ is a sample from $q$ and apply the above estimator as well. The approximation quality depends on the similarity between $q$ and $p$, and we conjecture here that this similarity measure, if it can be described, is closely related to the KSD.

Fitting a parametric estimator using KSD. The non-parametric predictive estimator could be computationally demanding. Setting aside the cost of fitting the "parameters", in prediction the time complexity for the non-parametric estimator is $\mathcal{O}(K^2 + Kd)$. Also storing the "parameters" needs $\mathcal{O}(Kd)$ memory for $\hat{\mathbf{G}}^{\mathrm{Stein}}_V$. These costs make the non-parametric estimator undesirable for high-dimensional data, since in order to obtain accurate predictions it often requires $K$ scaling with $d$ as well. To address this, one can also minimise the KSD using parametric approximations, in a similar way as to derive the score matching estimator in Section 3.3.2. More precisely, we define a parametric approximation in a similar fashion as (17), and in the appendix we show that if the RBF kernel is used for both the KSD and the parametric approximation, then the linear coefficients $\mathbf{a} = (a_1, \dots, a_K)^{\mathrm{T}}$ can be calculated analytically: $\hat{\mathbf{a}}^{\mathrm{Stein}}_V = (\mathbf{\Lambda} + \eta \mathbf{I})^{-1} \mathbf{b}$, where

$$\mathbf{\Lambda} = \mathbb{X} \odot (\mathbf{K}\mathbf{K}\mathbf{K}) + \mathbf{K}(\mathbf{K} \odot \mathbb{X})\mathbf{K} - ((\mathbf{K}\mathbf{K}) \odot \mathbb{X})\mathbf{K} - \mathbf{K}((\mathbf{K}\mathbf{K}) \odot \mathbb{X}),$$
$$\mathbf{b} = \left( \mathbf{K}\,\mathrm{diag}(\mathbb{X})\mathbf{K} + (\mathbf{K} \odot \mathbf{K})\mathbb{X} - \mathbf{K}(\mathbf{K} \odot \mathbb{X}) - (\mathbf{K} \odot \mathbb{X})\mathbf{K} \right) \mathbf{1}, \qquad (19)$$

with $\mathbb{X}$ the "gram matrix" that has elements $\mathbb{X}_{ij} = (x^i)^{\mathrm{T}} x^j$. Then for an unseen observation $y \sim p$ the gradient approximation returns $\nabla_y \log q(y) \approx (\hat{\mathbf{a}}^{\mathrm{Stein}}_V)^{\mathrm{T}} \nabla_y \mathcal{K}(\cdot, y)$. In this case one only maintains the linear coefficients $\hat{\mathbf{a}}^{\mathrm{Stein}}_V$ and computes a linear combination in prediction, which takes $\mathcal{O}(K)$ memory and $\mathcal{O}(Kd)$ time and therefore is computationally cheaper than the non-parametric prediction model (27).

4 APPLICATIONS

We present some case studies that apply the gradient estimators to implicit models. Detailed settings (architecture, learning rate, etc.) are presented in the appendix. Implementation is released at https://github.com/YingzhenLi/SteinGrad.

4.1 SYNTHETIC EXAMPLE: HAMILTONIAN FLOW WITH APPROXIMATE GRADIENTS

We first consider a simple synthetic example to demonstrate the accuracy of the proposed gradient estimator.
4 APPLICATIONS
We present case studies that apply the gradient estimators to implicit models. Detailed settings (architecture, learning rate, etc.) are presented in the appendix. The implementation is released at https://github.com/YingzhenLi/SteinGrad.
4.1 SYNTHETIC EXAMPLE: HAMILTONIAN FLOW WITH APPROXIMATE GRADIENTS
We first consider a simple synthetic example to demonstrate the accuracy of the proposed gradient estimator. More precisely, we consider the kernel-induced Hamiltonian flow (not an exact sampler) (Strathmann et al., 2015) on a 2-dimensional banana-shaped object: $x \sim \mathcal{B}(x; b = 0.03, v = 100)$, where $x_1 \sim \mathcal{N}(x_1; 0, v)$, $x_2 = \epsilon + b(x_1^2 - v)$, $\epsilon \sim \mathcal{N}(\epsilon; 0, 1)$. The approximate Hamiltonian flow is constructed using the same operator as in Hamiltonian Monte Carlo (HMC) (Neal et al., 2011), except that the exact score function $\nabla_x \log \mathcal{B}(x)$ is replaced by the approximate gradients. We still use the exact target density to compute the rejection step, as we mainly focus on testing the accuracy of the gradient estimators. We test both versions of the predictive Stein gradient estimator (see Section 3.4), since we require the particles of parallel chains to be independent of each other. We fit the gradient estimators on $K = 200$ training datapoints from the target density. The bandwidth of the RBF kernel is computed by the median heuristic and scaled up by a scalar in $[1, 5]$. All three methods are simulated for $T = 2{,}000$ iterations, share the same initial locations (constructed as target distribution samples plus Gaussian noise of standard deviation 2.0), and the results are averaged over 200 parallel chains.
We visualise the samples and some MCMC statistics in Figure 2. In general, all the resulting Hamiltonian flows are HMC-like, which gives us confidence that the gradient estimators extrapolate reasonably well to unseen locations. However, all of these methods have trouble exploring the extremes, because at those locations there are very few or even no training datapoints. Indeed, we found it necessary to use large (but not too large) bandwidths, in order both to allow exploration of those extremes and to ensure that the corresponding test function is not too smooth. In terms of quantitative metrics, the acceptance rates are reasonably high for all the gradient estimators, and the KSD estimates (across chains), as a measure of sample quality, are also close to those computed on HMC samples. The returned estimates of $\mathbb{E}[x_1]$ are close to zero, which is the ground-truth value. We found that the non-parametric Stein gradient estimator is more sensitive to hyper-parameters of the dynamics, e.g. the stepsize of each HMC step. We believe a careful selection of the kernel (e.g. those with long tails) and a better search over the hyper-parameters (for both the kernel and the dynamics) can further improve the sample quality and the chain mixing time, but this is not investigated here.
Figure 2: Kernel-induced Hamiltonian flow compared with HMC. Top: samples generated from the dynamics, training data (in cyan), and the trajectory of a particle for $T = 1$ to $200$ starting at the star location (in yellow). Bottom: statistics computed during the simulations. See main text for details.
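For intuition, the leapfrog integrator underlying this experiment touches the target only through its score, so swapping in an estimated score is a one-line change. Below is a minimal sketch of one trajectory; this is our own simplified version, and the Metropolis accept/reject step, which the paper computes with the exact target density, is deliberately omitted.

```python
import numpy as np

def leapfrog(x, p, grad_fn, step=0.1, n_steps=20):
    """One leapfrog trajectory; grad_fn is the (approximate) score function
    standing in for grad_x log B(x). Accept/reject is handled outside."""
    p = p + 0.5 * step * grad_fn(x)
    for _ in range(n_steps - 1):
        x = x + step * p
        p = p + step * grad_fn(x)
    x = x + step * p
    p = p + 0.5 * step * grad_fn(x)
    return x, p
```

Here `grad_fn` could be, for instance, the fitted `predict_score` from above; passing the exact score recovers standard HMC.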
4.2 META-LEARNING OF APPROXIMATE POSTERIOR SAMPLERS FOR BAYESIAN NNS
One of the recent focuses of meta-learning has been on learning optimisers for training deep neural networks, e.g. see Andrychowicz et al. (2016). Could analogous goals be achieved for approximate inference? In this section we attempt to learn an approximate posterior sampler for Bayesian neural networks (Bayesian NNs, BNNs) that generalises to unseen datasets and architectures. A more detailed introduction to Bayesian neural networks is included in the appendix; in a nutshell, we consider a binary classification task: $p(y = 1 | x, \theta) = \mathrm{sigmoid}(\mathrm{NN}_\theta(x))$, $p_0(\theta) = \mathcal{N}(\theta; \mathbf{0}, I)$. After observing the training data $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$, we first obtain the approximate posterior $q(\theta) \approx p(\theta | \mathcal{D}) \propto p_0(\theta) \prod_{n=1}^{N} p(y_n | x_n, \theta)$, then approximate the predictive distribution for a new observation as $p(y^* = 1 | x^*, \mathcal{D}) \approx \frac{1}{K} \sum_{k=1}^{K} p(y^* = 1 | x^*, \theta^k)$, $\theta^k \sim q(\theta)$. In this task we define an implicit approximate posterior distribution $q_\phi(\theta)$ as the following stochastic normalising flow (Rezende & Mohamed, 2015), $\theta_{t+1} = \mathbf{f}(\theta_t, \nabla_t, \epsilon_t)$: given the current location $\theta_t$ and the mini-batch data $\{(x_m, y_m)\}_{m=1}^{M}$, the update for the next step is
$$\theta_{t+1} = \theta_t + \zeta \Delta_\phi(\theta_t, \nabla_t) + \sigma_\phi(\theta_t, \nabla_t) \odot \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(\epsilon; \mathbf{0}, I),$$
$$\nabla_t = \nabla_{\theta_t} \left[ \frac{N}{M} \sum_{m=1}^{M} \log p(y_m | x_m, \theta_t) + \log p_0(\theta_t) \right]. \quad (20)$$
The coordinates of the noise standard deviation $\sigma_\phi(\theta_t, \nabla_t)$ and the moving direction $\Delta_\phi(\theta_t, \nabla_t)$ are parametrised by a coordinate-wise neural network. If properly trained, this neural network will learn the best combination of the current location and the gradient information, and produce approximate posterior samples efficiently on different probabilistic modelling tasks. Here we propose using the variational inference objective (2), computed on the samples $\{\theta_t^k\}$, to learn the variational parameters $\phi$. Since in this case the gradient of the log joint distribution can be computed analytically, we only approximate the gradient of the entropy term $\mathbb{H}[q]$ as in (3), with the exact score function replaced by the presented gradient estimators. We report results using the non-parametric Stein gradient estimator, as we found it works better than the parametric version. The RBF kernel is applied for gradient estimation, with the hyper-parameters determined by a grid search on the bandwidth $\sigma^2 \in \{0.25, 1.0, 4.0, 10.0, \text{median trick}\}$ and $\eta \in \{0.1, 0.5, 1.0, 2.0\}$.
We briefly describe the test protocol. We take from the UCI repository (Lichman, 2013) six binary classification datasets (australian, breast, crabs, ionosphere, pima, sonar), train an approximate sampler on crabs with a small neural network that has one 20-unit hidden layer with ReLU activation, and generalise to the remaining datasets with a bigger network that has 50 hidden units and uses sigmoid activation. We use ionosphere as the validation set to tune $\zeta$. The remaining 4 datasets are further split into a 40% training subset for simulating samples from the approximate sampler, and a 60% test subset for evaluating the sampler's performance.
Figure 3 presents the (negative) test log-likelihood (LL), classification error, and an estimate of the KSD U-statistic $\mathcal{S}^2_U(p(\theta | \mathcal{D}), q(\theta))$ (with data sub-sampling) over 5 splits of each test dataset. Besides the gradient estimators, we also compare with two baselines: an approximate posterior sampler trained by maximum a posteriori (MAP), and stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011) evaluated on the test datasets directly.
Figure 3: Generalisation performance of the trained approximate posterior samplers.
In summary, SGLD returns the best results in the KSD metric. The Stein approach performs equally well or slightly better than SGLD in terms of test LL and test error. The KDE method is slightly worse and close to MAP, indicating that the KDE estimator does not provide a very informative gradient for the entropy term. Surprisingly, the score matching estimator produces considerably worse results (except for the breast dataset), even after carefully tuning the bandwidth and the regularisation parameter $\eta$. Future work should investigate the use of advanced recurrent neural networks such as an LSTM (Hochreiter & Schmidhuber, 1997), which is expected to return better performance.
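Read as code, one step of the learned sampler (20) is a gradient-ascent-like move with a learned direction and learned noise scales. The minimal sketch below is our own reading: `delta_net` and `sigma_net` stand in for the coordinate-wise networks $\Delta_\phi$ and $\sigma_\phi$, and their signatures (mapping per-coordinate feature rows to scalars) are an assumption.

```python
import numpy as np

def sampler_step(theta, grad_logp, delta_net, sigma_net, zeta):
    """One update of the learned sampler (20):
    theta' = theta + zeta * Delta(theta, grad) + sigma(theta, grad) * eps.
    delta_net / sigma_net map each (theta_i, grad_i) feature row to a scalar."""
    eps = np.random.randn(*theta.shape)
    feats = np.stack([theta, grad_logp], axis=-1)   # shape (d, 2), coordinate-wise
    return theta + zeta * delta_net(feats) + sigma_net(feats) * eps
```

During meta-training, the variational objective is computed on samples produced by unrolling such steps; per the text above, only the entropy term's gradient requires the score estimators.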
4.3 TOWARDS ADDRESSING MODE COLLAPSE IN GANS USING ENTROPY REGULARISATION
GANs are notoriously difficult to train in practice. Besides the instability of gradient-based minimax optimisation, which has been partially addressed by many recent proposals (Salimans et al., 2016; Arjovsky et al., 2017; Berthelot et al., 2017), they also suffer from mode collapse. We propose adding an entropy regulariser to the GAN generator loss. Concretely, assume the generative model $p_\theta(x)$ is implicitly defined by $x = \mathbf{f}_\theta(z)$, $z \sim p_0(z)$; then the generator's loss is defined by
$$\tilde{\mathcal{J}}_{\text{gen}}(\theta) = \mathcal{J}_{\text{gen}}(\theta) - \alpha \mathbb{H}[p_\theta(x)], \quad (21)$$
where $\mathcal{J}_{\text{gen}}(\theta)$ is the original loss function for the generator from any GAN algorithm and $\alpha$ is a hyper-parameter. In practice, (the gradient of) (21) is estimated using Monte Carlo.
We empirically investigate the entropy regularisation idea on the recently proposed boundary equilibrium GAN (BEGAN) (Berthelot et al., 2017) using (continuous) MNIST, and we refer to the appendix for the detailed mathematical set-up. In this case the non-parametric V-statistic Stein gradient estimator is used. We use a convolutional generative network and a convolutional auto-encoder, and select the BEGAN hyper-parameters $\gamma \in \{0.3, 0.5, 0.7\}$, $\alpha \in [0, 1]$ and $\lambda = 0.001$. The Epanechnikov kernel $\mathcal{K}(x, x') := \frac{1}{d} \sum_{j=1}^{d} \left(1 - (x_j - x'_j)^2\right)$ is used, as the pixel values lie in a unit interval (see appendix for the expression of the corresponding score matching estimator), and to ensure the boundary condition we clip the pixel values into the range $[10^{-8}, 1 - 10^{-8}]$.
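Since $\nabla_\theta \mathbb{H}[p_\theta] = -\mathbb{E}_z\left[ \nabla_x \log p_\theta(x)^{\mathrm{T}} \nabla_\theta \mathbf{f}_\theta(z) \right]$ by the same argument as (3), the regulariser can be implemented with a surrogate term whose gradient matches $-\alpha \nabla_\theta \mathbb{H}$ once the estimated score is treated as a constant. Below is a PyTorch-style sketch of this step; it is our own phrasing, not the released code.

```python
import torch

def entropy_regularised_loss(gen_loss, x, alpha, score_estimator):
    """Surrogate for (21): minimising this descends J_gen - alpha * H[p].
    score_estimator returns g_hat(x) ~ grad_x log p(x) (e.g. the Stein
    estimator) and is detached, so gradients flow only through samples x."""
    g_hat = score_estimator(x.detach())           # (K, d), treated as constant
    surrogate = (g_hat * x).sum(dim=1).mean()     # grad wrt theta = E[g_hat^T dx/dtheta]
    return gen_loss + alpha * surrogate
```

One can check the sign: the gradient of the surrogate term equals $\alpha \mathbb{E}[\hat{\mathbf{g}}(x)^{\mathrm{T}} \nabla_\theta x] = -\alpha \nabla_\theta \mathbb{H}[p_\theta]$, as required by (21).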
The generated images are visualised in Figure 4. BEGAN without the entropy regularisation fails to generate diverse samples, even when trained with learning-rate decay. The other three images clearly demonstrate the benefit of the entropy regularisation technique, with the Stein approach obtaining the highest diversity without compromising visual quality.
Figure 4: Visualisation of generated images from the trained BEGAN models.
We further consider four metrics to assess the trained models quantitatively. First, 500 samples are generated for each trained model; then we compute their nearest neighbours in the training set using the $\ell_1$ distance, and obtain a probability vector $\bar{p}$ by averaging over these neighbour images' label vectors. In Figure 5 we depict the entropy of $\bar{p}$ (top left), the averaged $\ell_1$ distance to the nearest neighbour (top right), and the difference between the largest and smallest elements of $\bar{p}$ (bottom right). The error bars are obtained over 5 independent runs. These results demonstrate that the Stein approach performs significantly better than the other two, in that it learns a better generative model not only faster but also in a more stable way. Interestingly, the KDE approach achieves the lowest average $\ell_1$ distance to nearest neighbours, possibly because it tends to memorise training examples. We next train a fully connected network $\pi(y|x)$ on MNIST that achieves 98.16% test accuracy, and compute on the generated images an empirical estimate of the inception score (Salimans et al., 2016), $\mathbb{E}_{p(x)}\left[ \mathrm{KL}[\pi(y|x) \,\|\, \pi(y)] \right]$ with $\pi(y) = \mathbb{E}_{p(x)}[\pi(y|x)]$ (bottom left panel). A high inception score indicates that the generated images tend to be both realistic-looking and diverse, and again the Stein approach outperforms the others on this metric by a large margin.
Figure 5: Quantitative evaluation of entropy-regularised BEGAN. Higher is better for the LHS panels and the other way around for the RHS ones. See main text for details.
Concerning computation speed, all three methods are of the same order: 10.20 s/epoch for KDE, 10.85 s/epoch for Score, and 10.30 s/epoch for Stein.¹ This is because $K < d$ (in the experiments $K = 100$ and $d = 784$), so the complexity terms are dominated by the kernel computations ($O(K^2 d)$) required by all three methods. For comparison, the original BEGAN method without entropy regularisation runs at 9.05 s/epoch. Therefore the main computation cost is dominated by the optimisation of the discriminator/generator, and the proposed entropy regularisation can be applied to many GAN frameworks with little computational burden.
¹ All the methods are timed on a machine with an NVIDIA GeForce GTX TITAN X GPU.
5 CONCLUSIONS AND FUTURE WORK
We have presented the Stein gradient estimator as a novel generalisation of the score matching gradient estimator. With a focus on learning implicit models, we have empirically demonstrated the efficacy of the proposed estimator by showing how it opens the door to a range of novel learning tasks: approximating gradient-free MCMC, meta-learning for approximate inference, and unsupervised learning for image generation. Future work will expand the understanding of gradient estimators in both theoretical and practical aspects. Theoretical development will compare the V-statistic and U-statistic Stein gradient estimators and formalise consistency proofs. Practical work will improve the sample efficiency of kernel estimators in high dimensions and develop fast yet accurate approximations to matrix inversion. It is also interesting to investigate applications of gradient approximation methods to training implicit generative models without the help of discriminators. Finally, it remains an open question how to generalise the Stein gradient estimator to non-kernel settings and discrete distributions.
ACKNOWLEDGEMENTS
We thank Marton Havasi, Jiri Hron, David Janz, Qiang Liu, Maria Lomeli, Cuong Viet Nguyen and Mark Rowland for their comments and help with the manuscript. We also acknowledge the anonymous reviewers for their reviews. Yingzhen Li thanks the Schlumberger Foundation FFTF fellowship. Richard E. Turner thanks Google and EPSRC grants EP/M0269571 and EP/L000776/1.
rJ8QfICez
an interesting paper
6: Marginally above acceptance threshold
This paper deals with the estimation of the score function, i.e., the derivative of the log likelihood. Some methods were introduced and a new method using the Stein identity was proposed. The setup of transductive learning was introduced to add predictive power to the proposed method. The method was applied to several applications. This is an interesting approach to estimating the score function for location models in a non-parametric way. I have a couple of minor comments below. - The Stein identity is the formula that holds for the class of ellipsoidal distributions including the Gaussian distribution. I'm not sure the term "Stein identity" is appropriate to express the equation (8). - Some boundary condition should be assumed to ensure that integration by parts works properly. Describing an explicit boundary condition to guarantee proper estimation would be nice.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
rywDjg-RW
ICLR.cc/2018/Conference
2018
Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples
["Ashwin Kalyan", "Abhishek Mohta", "Oleksandr Polozov", "Dhruv Batra", "Prateek Jain", "Sumit Gulwani"]
Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios, synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
["Program synthesis", "deductive search", "deep learning", "program induction", "recurrent neural networks"]
ABSTRACT
Synthesizing user-intended programs from a small number of input-output examples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios, synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems.
1 INTRODUCTION
Automatic synthesis of programs that satisfy a given specification is a classical problem in AI (Waldinger & Lee, 1969), with extensive literature in both the machine learning and programming languages communities. Recently, this area has gathered widespread interest, mainly spurred by the emergence of a sub-area – Programming by Examples (PBE) (Gulwani, 2011). A PBE system synthesizes programs that map a given set of example inputs to their specified example outputs. Such systems make many tasks accessible to a wider audience, as example-based specifications can be easily provided even by end users without programming skills. See Figure 1 for an example.
Input | Output
Yann LeCunn | Y LeCunn
Hugo Larochelle | H Larochelle
Tara Sainath | T Sainath
Yoshua Bengio | ?
Figure 1: An example input-output spec; the goal is to learn a program that maps the given inputs to the corresponding outputs and generalizes well to new inputs. Both programs below satisfy the spec: (i) Concat(1st letter of 1st word, 2nd word), (ii) Concat(4th-last letter of 1st word, 2nd word). However, program (i) clearly generalizes better: for instance, its output on "Yoshua Bengio" is "Y Bengio" while program (ii) produces "s Bengio".
* Work done during an internship at Microsoft Research. † Equal contribution.
PBE systems are usually evaluated on three key criteria: (a) correctness: whether the synthesized program
Symbolic systems also produce the intended program with few input-output examples (often just 1). However, they require significant engineering effort and their underlying search processes struggle with real-time performance, which is critical for user-facing PBE scenarios.

In contrast, statistical systems do not rely on specialized deductive algorithms, which makes their implementation and training easier. However, they lack in two critical aspects. First, they require a lot of training data and so are often trained using randomly generated tasks. As a result, induced programs can be fairly unnatural and fail to generalize to real-world tasks with a small number of examples. Second, purely statistical systems like RobustFill (Devlin et al., 2017) do not guarantee that the generated program satisfies the spec. Thus, solving the synthesis task requires generating multiple programs with a beam search and post-hoc filtering, which defeats real-time performance.

Neural-Guided Deductive Search. Motivated by the shortcomings of both the above approaches, we propose Neural-Guided Deductive Search (NGDS), a hybrid synthesis technique that brings together the desirable aspects of both methods. The symbolic foundation of NGDS is deductive search (Polozov & Gulwani, 2015) and is parameterized by an underlying domain-specific language (DSL) of target programs. Synthesis proceeds by recursively applying production rules of the DSL to decompose the initial synthesis problem into smaller sub-problems and further applying the same search technique on them. Our key observation I is that most of the deduced sub-problems do not contribute to the final best program, and therefore a priori predicting the usefulness of pursuing a particular sub-problem streamlines the search process, resulting in considerable time savings. In NGDS, we use a statistical model trained on real-world data to predict a score that corresponds to the likelihood of finding a generalizable program as a result of exploring a sub-problem branch.

Our key observation II is that speeding up deductive search while retaining its correctness or generalization requires a close integration of symbolic and statistical approaches via an intelligent controller. It is based on the "branch & bound" technique from combinatorial optimization (Clausen, 1999). The overall algorithm integrates (i) deductive search, (ii) a statistical model that predicts, a priori, the generalization score of the best program from a branch, and (iii) a controller that selects sub-problems for further exploration based on the model's predictions.

Since program synthesis is a sequential process wherein a sequence of decisions (here, selections of DSL rules) collectively construct the final program, a reinforcement learning setup seems more natural. However, our key observation III is that deductive search is Markovian – it generates independent sub-problems at every level. In other words, we can reason about a satisfying program for the sub-problem without factoring in the bigger problem from which it was deduced. This brings three benefits enabling a supervised learning formulation: (a) a dataset of search decisions at every level over a relatively small set of PBE tasks contains an exponential amount of information about the DSL, promoting generalization, (b) such search traces can be generated and used for offline training, (c) we can learn separate models for different classes of sub-problems (e.g.
DSL levels or rules), with relatively simpler supervised learning tasks.

Evaluation. We evaluate NGDS on the string transformation domain, building on top of PROSE, a commercially successful deductive synthesis framework for PBE (Polozov & Gulwani, 2015). It represents one of the most widespread and challenging applications of PBE and has shipped in multiple mass-market tools including Microsoft Excel and Azure ML Workbench (https://microsoft.github.io/prose/impact/). We train and validate our method on 375 scenarios obtained from real-world customer tasks (Gulwani, 2011; Devlin et al., 2017). Thanks to the Markovian search properties described above, these scenarios generate a dataset of 400,000+ intermediate search decisions. NGDS produces intended programs on 68% of the scenarios despite using only one input-output example. In contrast, state-of-the-art neural synthesis techniques (Balog et al., 2017; Devlin et al., 2017) learn intended programs from a single example in only 24-36% of scenarios, taking 4× more time. Moreover, NGDS matches the accuracy of baseline PROSE while providing a speed-up of up to 12× over challenging tasks.

Contributions. First, we present a branch-and-bound optimization based controller that exploits deep neural network based score predictions to select grammar rules efficiently (Section 3.2). Second, we propose a program synthesis algorithm that combines key traits of a symbolic and a statistical approach to retain desirable properties like correctness, robust generalization, and real-time performance (Section 3.3). Third, we evaluate NGDS against state-of-the-art baselines on real customer tasks and show significant gains (speed-up of up to 12×) on several critical cases (Section 4).

2 BACKGROUND

In this section, we provide a brief background on PBE and the PROSE framework, using established formalism from the programming languages community.

Domain-Specific Language. A program synthesis problem is defined over a domain-specific language (DSL). A DSL is a restricted programming language that is suitable for expressing tasks in a given domain, but small enough to restrict a search space for program synthesis. For instance, typical real-life DSLs with applications in textual data transformations (Gulwani, 2011) often include conditionals, limited forms of loops, and domain-specific operators such as string concatenation, regular expressions, and date/time formatting. DSLs for tree transformations such as code refactoring (Rolim et al., 2017) and data extraction (Le & Gulwani, 2014) include list/data-type processing operators such as Map and Filter, as well as domain-specific matching operators. Formally, a DSL L is specified as a context-free grammar, with each non-terminal symbol N defined by a set of productions. The right-hand side of each production is an application of some operator F(N1, ..., Nk) to some symbols of L. All symbols and operators are strongly typed. Figure 2 shows a subset of the FlashFill DSL that we use as a running example in this paper.

Inductive Program Synthesis. The task of inductive program synthesis is characterized by a spec. A spec φ is a set of m input-output constraints {σi ⇝ ψi}, i = 1..m, where:

• σ, an input state, is a mapping of free variables of the desired program P to some correspondingly typed values. At the top level of L, a program (and its expected input state) has only one free variable – the input variable of the DSL (e.g., inputs in Figure 2).
Additional local variables are introduced inside L with a let construct.

• ψ is an output constraint on the execution result of the desired program P(σi). At the top level of L, when provided by the user, ψ is usually the output example – precisely the expected result of P(σi). However, other intermediate constraints arise during the synthesis process. For instance, ψ may be a disjunction of multiple allowed outputs.

The overall goal of program synthesis is thus: given a spec φ, find a program P in the underlying DSL L that satisfies φ, i.e., its outputs P(σi) satisfy all the corresponding constraints ψi.

Example 1. Consider the task of formatting a phone number, characterized by the spec φ = {inputs: ["(612) 8729128"]} ⇝ "612-872-9128". It has a single input-output example, with an input state σ containing a single variable inputs and its value, which is a list with a single input string. The output constraint is simply the desired program result.

The program the user is most likely looking for is the one that extracts (a) the part of the input enclosed in the first pair of parentheses, (b) the 7th to 4th characters from the end, and (c) the last 4 characters, and then concatenates all three parts using hyphens. In our DSL, this corresponds to:

Concat(SubStr0(RegexPosition(x, ⟨"(", ε⟩, 0), RegexPosition(x, ⟨ε, ")"⟩, 0)),
       ConstStr("-"),
       SubStr0(AbsolutePosition(x, −8), AbsolutePosition(x, −5)),
       ConstStr("-"),
       SubStr0(AbsolutePosition(x, −5), AbsolutePosition(x, −1)))

where ε is an empty regex, SubStr0(pos1, pos2) is an abbreviation for "let x = std.Kth(inputs, 0) in Substring(x, ⟨pos1, pos2⟩)", and ⟨⟩ is an abbreviation for std.Pair.

However, many other programs in the DSL also satisfy φ. For instance, all occurrences of "8" in the output can be produced via a subprogram that simply extracts the last character. Such a program overfits to φ and is bound to fail for other inputs where the last character and the 4th one differ.

// Nonterminals
@start string transform := atom | Concat(atom, transform);
string atom := ConstStr(s)
    | let string x = std.Kth(inputs, k) in Substring(x, pp);
Tuple<int, int> pp := std.Pair(pos, pos) | RegexOccurrence(x, r, k);
int pos := AbsolutePosition(x, k) | RegexPosition(x, std.Pair(r, r), k);
// Terminals
@input string[] inputs; string s; int k; Regex r;

Figure 2: A subset of the FlashFill DSL (Gulwani, 2011), used as a running example in this paper. Every program takes as input a list of strings inputs, and returns an output string, a concatenation of atoms. Each atom is either a constant or a substring of one of the inputs (x), extracted using some position logic. The RegexOccurrence position logic finds the k-th occurrence of a regex r in x and returns its boundaries. Alternatively, start and end positions can be selected independently either as absolute indices in x from left or right (AbsolutePosition) or as the k-th occurrence of a pair of regexes surrounding the position (RegexPosition). See Gulwani (2011) for an in-depth DSL description.

As Example 1 shows, typical real-life problems are severely underspecified. A DSL like FlashFill may contain up to 10²⁰ programs that satisfy a given spec of 1-3 input-output examples (Polozov & Gulwani, 2015). Therefore, the main challenge lies in finding a program that not only satisfies the provided input-output examples but also generalizes to unseen inputs. (A small executable sketch of the Example 1 program over this DSL subset is given below.)
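To make the DSL semantics above concrete, here is a minimal executable sketch of the Example 1 program in plain Python. It is a hedged illustration, not PROSE's implementation: the position semantics (cut points between characters, counted from the right for negative k) and the regex handling are assumptions consistent with the description of Figure 2.

import re

def absolute_position(x, k):
    # Cut point k from the left for k >= 0; from the right for k < 0
    # (k = -1 denotes the cut point after the last character).
    return k if k >= 0 else len(x) + k + 1

def regex_position(x, rr, k):
    # The k-th cut point preceded by a match of r1 and followed by a match
    # of r2; the empty string "" plays the role of the empty regex epsilon.
    r1, r2 = rr
    cuts = [p for p in range(len(x) + 1)
            if re.search("(?:%s)$" % r1, x[:p])
            and re.match(r2, x[p:]) is not None]
    return cuts[k]

def program(inputs):
    x = inputs[0]                                  # let x = std.Kth(inputs, 0)
    area = x[regex_position(x, (re.escape("("), ""), 0):
             regex_position(x, ("", re.escape(")")), 0)]
    mid = x[absolute_position(x, -8):absolute_position(x, -5)]
    last4 = x[absolute_position(x, -5):absolute_position(x, -1)]
    return area + "-" + mid + "-" + last4          # Concat with ConstStr("-")

print(program(["(612) 8729128"]))                  # prints: 612-872-9128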
Thus, the synthesis process usually interleaves search and ranking: the search phase finds a set of spec-satisfying programs in the DSL, from which the ranking phase selects top programs ordered using a domain-specific ranking function h : L × Σ̃ → ℝ, where Σ is the set of all input states. The ranking function takes as input a candidate program P ∈ L and a set of input states σ̃ ∈ Σ̃ (usually σ̃ = the inputs in the given spec + any available unlabeled inputs), and produces a score for P's generalization.

The implementation of h expresses a subtle balance between program generality, complexity, and behavior on available inputs. For instance, in FlashFill h penalizes overly specific regexes, prefers programs that produce fewer empty outputs, and prioritizes lower Kolmogorov complexity, among other features. In modern PBE systems like PROSE, h is usually learned in a data-driven manner from customer tasks (Singh & Gulwani, 2015; Ellis & Gulwani, 2017). While designing and learning such a ranking is an interesting problem in itself, in this work we assume black-box access to h.

Finally, the problem of inductive program synthesis can be summarized as follows:

Problem 1. Given a DSL L, a ranking function h, a spec φ = {σi ⇝ ψi}, i = 1..m, optionally a set of unlabeled inputs σ̃u, and a target number of programs K, let σ̃ = σ̃u ∪ {σi}, i = 1..m. The goal of inductive program synthesis is to find a program set S = {P1, ..., PK} ⊆ L such that (a) every program in S satisfies φ, and (b) the programs in S generalize best: h(Pi, σ̃) ≥ h(P, σ̃) for any other P ∈ L that satisfies φ.

Search Strategy. The deductive search strategy for program synthesis, employed by PROSE, explores the grammar of L top-down – iteratively unrolling the productions into partial programs starting from the root symbol. Following the divide-and-conquer paradigm, at each step it reduces its synthesis problem to smaller subproblems defined over the parameters of the current production. Formally, given a spec φ and a symbol N, PROSE computes the set Learn(N, φ) of top programs w.r.t. h using two guiding principles:

1. If N is defined through n productions N := F1(...) | ... | Fn(...), PROSE finds a φ-satisfying program set for every Fi, and unites the results, i.e., Learn(N, φ) = ∪i Learn(Fi(...), φ).

2. For a given production N := F(N1, ..., Nk), PROSE spawns off k smaller synthesis problems Learn(Nj, φj), 1 ≤ j ≤ k, wherein PROSE deduces necessary and sufficient specs φj for each Nj such that every program of type F(P1, ..., Pk), where Pj ∈ Learn(Nj, φj), satisfies φ. The deduction logic (called a witness function) is domain-specific for each operator F. PROSE then again recursively solves each subproblem and unites a cross-product of the results. (A schematic sketch of this recursion appears after Example 2.)

Example 2. Consider a spec φ = {"Yann" ⇝ "Y.L"} on a transform program. Via the first production transform := atom, the only φ-satisfying program is ConstStr("Y.L"). The second production on the same level is Concat(atom, transform). A necessary & sufficient spec on the atom sub-program is that it should produce some prefix of the output string. Thus, the witness function for the Concat operator produces a disjunctive spec φa = {"Yann" ⇝ "Y" ∨ "Y."}. Each of these disjuncts, in turn, induces a corresponding necessary and sufficient suffix spec on the second parameter: φt1 = {"Yann" ⇝ ".L"} and φt2 = {"Yann" ⇝ "L"}, respectively. The disjuncts in φa will be recursively satisfied by different program sets: "Y." can only be produced via an atom path with a ConstStr program, whereas "Y" can also be extracted from the input using many Substring logics (their generalization capabilities vary).
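The two guiding principles translate into a short mutual recursion. The sketch below is schematic and heavily hedged: names like productions, parameters, witness_function, and apply are hypothetical attributes, and PROSE's real engine adds version-space algebras, caching, and disjunctive specs, all elided here.

import itertools

def learn_symbol(N, spec, h, k):
    # Principle 1: N := F1(...) | ... | Fn(...) is learned by uniting the
    # spec-satisfying program sets of all its productions.
    programs = []
    for F in N.productions:
        programs += learn_production(F, spec, h, k)
    return sorted(programs, key=lambda P: h(P, spec), reverse=True)[:k]

def learn_production(F, spec, h, k):
    # Principle 2: the (domain-specific) witness function of operator F
    # deduces necessary and sufficient specs for its parameters N1, ..., Nk.
    param_specs = F.witness_function(spec)
    if param_specs is None:          # F cannot satisfy this spec at all
        return []
    param_sets = [learn_symbol(Nj, spec_j, h, k)
                  for Nj, spec_j in zip(F.parameters, param_specs)]
    # Unite a cross-product of the recursively learned parameter programs.
    return [F.apply(args) for args in itertools.product(*param_sets)]

In Example 2, it is exactly this recursion that produces the disjunctive spec φa for the atom parameter and the suffix specs φt1, φt2 for the transform parameter.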
Figure 3 shows the resulting search DAG.

Figure 3: A portion of the search DAG from Example 2. Only the output parts of the respective specs are shown in each node; their common input state is a single string "Yann". Dashed arrows show recursive Learn calls on a corresponding DSL symbol.

Notice that the above mentioned principles create logical non-determinism, due to which we might need to explore multiple alternatives in a search tree. As such non-determinism arises at every level of the DSL with potentially any operator, the search tree (and the resulting search process) is exponential in size. While all the branches of the tree by construction produce programs that satisfy the given spec, most of the branches do not contribute to the overall top-ranked generalizable program. During deductive search, PROSE has limited information about the programs potentially produced from each branch, and cannot estimate their quality, thus exploring the entire tree unnecessarily. Our main contribution is a neural-guided search algorithm that predicts the best program scores from each branch, and allows PROSE to omit branches that are unlikely to produce the desired program a priori.

3 SYNTHESIS ALGORITHM

Consider an arbitrary branching moment in the top-down search strategy of PROSE. For example, let N be a nonterminal symbol in L, defined through a set of productions N := F1(...) | ... | Fn(...), and let φ be a spec on N, constructed earlier during the recursive descent over L. A conservative way to select the top k programs rooted at N (as defined by the ranking function h), i.e., to compute Learn(N, φ), is to learn the top k programs of kind Fi(...) for all i ∈ [n] and then select the top k programs overall from the union of program sets learned for each production. Naturally, exploring all the branches for each nonterminal in the search tree is computationally expensive.

In this work, we propose a data-driven method to select an appropriate production rule N := Fi(N1, ..., Nk) that would most likely lead to a top-ranked program. To this end, we use the current spec φ to determine the "optimal" rule. Now, it might seem unintuitive that, even without exploring a production rule and finding the best program in the corresponding program set, we can a priori determine optimality of that rule. However, we argue that by understanding φ and its relationship with the ranking function h, we can predict the intended branch in many real-life scenarios.

Example 3. Consider a spec φ = {"alice" ⇝ "alice@iclr.org", "bob" ⇝ "bob@iclr.org"}. While learning a program in the L given by Figure 2 that satisfies φ, it is clear right at the beginning of the search procedure that the rule transform := atom does not apply. This is because any program derived from transform := atom can either extract a substring from the input or return a constant string, both of which fail to produce the desired output. Hence, we should only consider transform := Concat(...), thus significantly reducing the search space.

Similarly, consider another spec φ = {"alice smith" ⇝ "alice", "bob jones" ⇝ "bob"}. In this case, the output appears to be a substring of the input, thus selecting transform := atom at the beginning of the search procedure is a better option than transform := Concat(...).

However, many such decisions are more subtle and depend on the ranking function h itself.
For example, consider a spec φ = {"alice liddell" ⇝ "al", "bob ong" ⇝ "bo"}. Now, both transform := atom and transform := Concat(...) may lead to viable programs because the output can be constructed using the first two letters of the input (i.e. a substring atom) or by concatenating the first letters of each word. Hence, the branch that produces the best program is ultimately determined by the ranking function h, since both branches generate valid programs.

Example 3 shows that to design a data-driven search strategy for branch selection, we need to learn the subtle relationship between φ, h, and the candidate branch. Below, we provide one such model.

3.1 PREDICTING THE GENERALIZATION SCORE

As mentioned above, our goal is to predict one or more production rules that, for a given spec φ, will lead to a top-ranked program (as ranked a posteriori by h). Formally, given black-box access to h, we want to learn a function f such that

f(Γ, φ) ≈ max_{P ∈ S(Γ, φ)} h(P, φ),

where Γ is a production rule in L, and S(Γ, φ) is a program set of all DSL programs derived from the rule Γ that satisfy φ. In other words, we want to predict the score of the top-ranked φ-satisfying program that is synthesized by unrolling the rule Γ. We assume that the symbolic search of PROSE handles the construction of S(Γ, φ) and ensures that programs in it satisfy φ by construction. The goal of f is to estimate the score of the best program derived from Γ, assuming such a program is valid. If no program derived from Γ can satisfy φ, f should return −∞. Note that, drawing upon observations mentioned in Section 1, we have cast the production selection problem as a supervised learning problem, thus simplifying the learning task as opposed to an end-to-end reinforcement learning solution.

We have evaluated two models for learning f. The loss function for the prediction is given by:

L(f; Γ, φ) = (f(Γ, φ) − max_{P ∈ S(Γ, φ)} h(P, φ))²

Figure 4 shows a common structure of both models we have evaluated. Both are based on a standard multi-layer LSTM architecture (Hochreiter & Schmidhuber, 1997) and involve (a) embedding the given spec φ, (b) encoding the given production rule Γ, and (c) a feed-forward network to output a score f(Γ, φ). One model attends over the input when it encodes the output, whereas the other does not. (An illustrative sketch of this model appears below.)

Figure 4: LSTM-based model for predicting the score of a candidate production for a given spec φ. Character embeddings of the input state and of the output example(s) are encoded by two LSTMs; their encodings are combined with an embedding of the production rule and passed through two fully-connected layers to produce the predicted score.

3.2 CONTROLLER FOR BRANCH SELECTION

A score model f alone is insufficient to perfectly predict the branches that should be explored at every level. Consider again a branching decision moment N := F1(...) | ... | Fn(...) in a search process for the top k programs satisfying a spec φ. One naïve approach to using the predictions of f is to always follow the highest-scored production rule argmax_i f(Fi, φ). However, this means that any single incorrect decision on the path from the DSL root to the desired program will eliminate that program from the learned program set. If our search algorithm fails to produce the desired program by committing to a suboptimal branch anytime during the search process, then the user may never discover that such a program exists unless they supply an additional input-output example.

Thus, a branch selection strategy based on the predictions of f must balance a trade-off of performance and generalization.
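Before detailing the controllers that manage this trade-off, a minimal sketch of the Figure 4 score model may help make the learning setup concrete. This is a hedged PyTorch rendering (the paper trains its models in CNTK; all layer sizes here are illustrative assumptions, and the attention variant is omitted):

import torch
import torch.nn as nn

class ScoreModel(nn.Module):
    def __init__(self, vocab_size, num_rules, emb=32, hidden=64):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, emb)
        self.in_lstm = nn.LSTM(emb, hidden, batch_first=True)   # encodes the input state
        self.out_lstm = nn.LSTM(emb, hidden, batch_first=True)  # encodes the output example(s)
        self.rule_emb = nn.Embedding(num_rules, emb)             # encodes the production rule
        self.ff = nn.Sequential(                                 # two FC layers -> score
            nn.Linear(2 * hidden + emb, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, in_chars, out_chars, rule_id):
        _, (h_in, _) = self.in_lstm(self.char_emb(in_chars))
        _, (h_out, _) = self.out_lstm(self.char_emb(out_chars))
        z = torch.cat([h_in[-1], h_out[-1], self.rule_emb(rule_id)], dim=-1)
        return self.ff(z).squeeze(-1)                            # predicted f(rule, spec)

# Training is plain supervised regression onto the a posteriori best branch
# score, matching the loss above:
# loss = (model(spec_in, spec_out, rule) - best_score).pow(2).mean()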
Selecting too few branches (a single best branch in the extreme case) risks committing to an incorrect path early in the search process and producing a suboptimal program or no program at all. Selecting too many branches (all n branches in the extreme case) is no different from baseline PROSE and fails to exploit the predictions of f to improve its performance.

Formally, a controller for branch selection at a symbol N := F1(...) | ... | Fn(...), targeting the k best programs, must (a) predict the expected score of the best program from each program set: si = f(Fi, φ) for all 1 ≤ i ≤ n, and (b) use the predicted scores si to narrow down the set of productions F1, ..., Fn to explore and to obtain the overall result by selecting a subset of the generated programs. In this work, we propose and evaluate two controllers. Their pseudocode is shown in Figure 5.

function ThresholdBased(φ, h, k, s1, ..., sn)
1: Result set S ← []
2: i* ← argmax_i si
3: for all 1 ≤ i ≤ n do
4:   if |si − si*| ≤ θ then      // recursive search
5:     S += Learn(Fi, φ, k)
6: return the top k programs of S w.r.t. h

function BnBBased(φ, h, k, s1, ..., sn)
1: Result set S ← []; program target k′ ← k
2: Reorder the Fi in descending order of si
3: for all 1 ≤ i ≤ n do
4:   Si ← Learn(Fi, φ, k′)       // recursive search
5:   j ← BinarySearch(si+1, Map(h, Si))
6:   S ← S ∪ Si[0..j]; k′ ← k′ − j
7:   if k′ ≤ 0 then break
8: return S

Figure 5: The controllers for guiding the search process to construct a most generalizable φ-satisfying program set S of size k, given the f-predicted best scores s1, ..., sn of the productions F1, ..., Fn.

Threshold-based: Fix a score threshold θ, and explore those branches whose predicted score differs by at most θ from the maximum predicted score. This is a simple extension of the naïve "argmax" controller discussed earlier that also explores any branches that are predicted "approximately as good as the best one". When θ = 0, it reduces to the "argmax" controller.

Branch & Bound: This controller is based on the "branch & bound" technique in combinatorial optimization (Clausen, 1999). Assume the branches Fi are ordered in descending order of their respective predicted scores si. After recursive learning produces its program set Si, the controller proceeds to the next branch only if si+1 exceeds the score of the worst program in Si. Moreover, it reduces the target number of programs to be learned, using si+1 as a lower bound on the scores of the programs in Si. That is, rather than relying blindly on the predicted scores, the controller guides the remaining search process by accounting for the actual synthesized programs as well. (A minimal Python rendering of this controller appears just below.)

3.3 NEURAL-GUIDED DEDUCTIVE SEARCH

We now combine the above components to present our unified algorithm for program synthesis. It builds upon the deductive search of the PROSE system, which uses symbolic PL insights in the form of witness functions to construct and narrow down the search space, and a ranking function h to pick the most generalizable program from the found set of spec-satisfying ones.
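As promised above, here is a minimal Python rendering of the Branch & Bound controller of Section 3.2. It mirrors Figure 5; learn stands for the recursive search over a branch, h is the black-box ranker, and branches/scores are the productions Fi with their f-predicted scores (all parameter names are illustrative, not PROSE's API):

def bnb_controller(spec, h, k, branches, scores, learn):
    # Visit branches in descending order of predicted score.
    order = sorted(range(len(branches)), key=lambda i: scores[i], reverse=True)
    result, k_left = [], k
    for rank, i in enumerate(order):
        programs = sorted(learn(branches[i], spec, k_left),
                          key=lambda P: h(P, spec), reverse=True)
        # The next branch's prediction bounds what it could still contribute.
        nxt = scores[order[rank + 1]] if rank + 1 < len(order) else float("-inf")
        kept = [P for P in programs if h(P, spec) >= nxt]   # Si[0..j] in Figure 5
        result += kept
        k_left -= len(kept)
        if k_left <= 0:
            break
    return sorted(result, key=lambda P: h(P, spec), reverse=True)[:k]

The kept filter realizes the "bound" step: only programs whose actual score already beats the next branch's prediction are counted against the target, so later branches are explored exactly when they might still displace the current results.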
NGDS, however, significantly speeds up this search by guiding it a priori at each branching decision, using the learned score model f and a branch selection controller, outlined in Sections 3.1 and 3.2. The resulting neural-guided deductive search (NGDS) keeps the symbolic insights that construct the search tree, ensuring correctness of the found programs, but explores only those branches of this tree that are likely to produce the user-intended generalizable program, thus eliminating unproductive search time.

A key idea in NGDS is that the score prediction model f does not have to be the same for all decisions in the search process. It is possible to train separate models for different DSL levels, symbols, or even productions. This allows the model to use different features of the input-output spec for evaluating the fitness of different productions, and also leads to much simpler supervised learning problems.

Figure 6 shows the pseudocode of NGDS. It builds upon the deductive search of PROSE, but augments every branching decision on a symbol with some branch selection controller from Section 3.2. We present a comprehensive evaluation of different strategies in Section 4.

Given: DSL L, ranking function h, controller C from Figure 5 (ThresholdBased or BnBBased), symbolic search algorithm Learn(production rule Γ, spec φ, target k) as in PROSE (Polozov & Gulwani, 2015, Figure 7) with all recursive calls to Learn replaced with LearnNGDS

function LearnNGDS(symbol N := F1(...) | ... | Fn(...), spec φ, target number of programs k)
1: if n = 1 then return Learn(F1, φ, k)
2: Pick a score model f based on depth(N, L)
3: s1, ..., sn ← f(F1, φ), ..., f(Fn, φ)
4: return C(φ, h, k, s1, ..., sn)

Figure 6: Neural-guided deductive search over L, parameterized with a branch selection controller C.

4 EVALUATION

In this section, we evaluate our NGDS algorithm over the string manipulation domain with a DSL given by Figure 2; see Figure 1 for an example task. We evaluate NGDS, its ablations, and baseline techniques on two key metrics: (a) generalization accuracy on unseen inputs, and (b) synthesis time.

Dataset. We use a dataset of 375 tasks collected from real-world customer string manipulation problems, split into 65% training, 15% validation, and 20% test data. Some of the common applications found in our dataset include date/time formatting, manipulating addresses, modifying names, automatically generating email IDs, etc. Each task contains about 10 inputs, of which only one is provided as the spec to the synthesis system, mimicking industrial applications. The remaining unseen examples are used to evaluate generalization performance of the synthesized programs. After running synthesis of top-1 programs with PROSE on all training tasks, we have collected a dataset of about 400,000 intermediate search decisions, i.e. triples ⟨production Γ, spec φ, a posteriori best score h(P, φ)⟩.

Baselines. We compare our method against two state-of-the-art neural synthesis algorithms: RobustFill (Devlin et al., 2017) and DeepCoder (Balog et al., 2017). For RobustFill, we use the best-performing Attention-C model and use their recommended DP-Beam Search with a beam size of 100 as it seems to perform the best; Table 3 in Appendix A presents results with different beam sizes. As in the original work, we select the top-1 program ranked according to the generated log-likelihood. DeepCoder is a generic framework that allows their neural predictions to be combined with any program synthesis method. So, for fair comparison, we combine DeepCoder's predictions with PROSE.
We train the DeepCoder model to predict a distribution over the operators of L and, as proposed, use it to guide PROSE synthesis. Since both RobustFill and DeepCoder are trained on randomly sampled programs and are not optimized for generalization in the real world, we include their variants trained with 2 or 3 examples (denoted RFm and DCm) for fairness, although m = 1 example is the most important scenario in real-life industrial usage.

Ablations. As mentioned in Section 3, our novel usage of score predictors to guide the search enables us to have multiple prediction models and controllers at various stages of the synthesis process. Here we investigate ablations of our approach with models that specialize in predictions for individual levels in the search process. The model T1 is trained for the symbol transform (Figure 2) when expanded at the first level. Similarly, PP and POS refer to models trained for the pp and pos symbols, respectively. Finally, we train all our LSTM-based models with CNTK (Seide & Agarwal, 2016) using Adam (Kingma & Ba, 2014) with a learning rate of 10⁻² and a batch size of 32, using early stopping on the validation loss to select the best performing model (thus, 100-600 epochs).

We also evaluate three controllers: the threshold-based (Thr) and branch-and-bound (BB) controllers given in Figure 5, and a combination of them – branch-and-bound with a 0.2-threshold predecessor (BB0.2). In Tables 1 and 2 we denote different model combinations as NGDS(f, C), where f is a symbol-based model and C is a controller. The final algorithm selection depends on its accuracy-performance trade-off. In Table 1, we use NGDS(T1 + POS, BB), the best performing algorithm on the test set, although NGDS(T1, BB) performs slightly better on the validation set.

Evaluation Metrics. Generalization accuracy is the percentage of test tasks for which the generated program satisfies all unseen inputs in the task. Synthesis time is measured as the wall-clock time taken by a synthesis method to find the correct program, median over 5 runs. We run all the methods on the same machine with a 2.3 GHz Intel Xeon processor, 64GB of RAM, and Windows Server 2016.

Results. Table 1 presents generalization accuracy as well as synthesis time speed-up of various methods w.r.t. PROSE. As we strive to provide real-time synthesis, we only compare the times for tasks on which PROSE requires more than 0.5 sec.

Metric             | PROSE | DC1   | DC2   | DC3   | RF1   | RF2   | RF3   | NGDS
Accuracy (% of 73) | 67.12 | 35.81 | 47.38 | 62.92 | 24.53 | 39.72 | 56.41 | 68.49
Speed-up (× PROSE) | 1.00  | 1.82  | 1.53  | 1.42  | 0.25  | 0.27  | 0.30  | 1.67

Table 1: Accuracy and average speed-up of NGDS vs. baseline methods. Accuracies are computed on a test set of 73 tasks. Speed-up of a method is the geometric mean of its per-task speed-up (ratio of the synthesis time of PROSE to that of the method) when restricted to the subset of tasks on which PROSE's synthesis time is ≥ 0.5 sec.

Note that, with one example, NGDS and PROSE are significantly more accurate than RobustFill and DeepCoder. This is natural as those methods are not trained to optimize generalization, but it also highlights the advantage of a close integration with a symbolic system (PROSE) that incorporates deep domain knowledge. Moreover, on average, our method saves more than 50% of synthesis time over PROSE.

Method              | Val. Acc. | Val. Speed-up | Test Acc. | Test Speed-up | % of branches
PROSE               | 70.21     | 1×            | 67.12     | 1×            | 100.00
NGDS(T1, Thr)       | 59.57     | 1.15×         | 67.12     | 1.27×         | 62.72
NGDS(T1, BB)        | 63.83     | 1.58×         | 68.49     | 1.22×         | 51.78
NGDS(T1, BB0.2)     | 61.70     | 1.03×         | 67.12     | 1.22×         | 63.16
NGDS(T1+PP, Thr)    | 59.57     | 0.76×         | 67.12     | 0.97×         | 56.41
NGDS(T1+PP, BB)     | 61.70     | 1.05×         | 72.60     | 0.89×         | 50.22
NGDS(T1+PP, BB0.2)  | 61.70     | 0.72×         | 67.12     | 0.86×         | 56.43
NGDS(T1+POS, Thr)   | 61.70     | 1.19×         | 67.12     | 1.93×         | 55.63
NGDS(T1+POS, BB)    | 63.83     | 1.13×         | 68.49     | 1.67×         | 50.44
NGDS(T1+POS, BB0.2) | 63.83     | 1.19×         | 67.12     | 1.73×         | 55.73

Table 2: Accuracies, mean speed-ups, and % of branches taken for different ablations of NGDS.
While DeepCoder with one example speeds up the synthesis even more, it does so at the expense of accuracy, eliminating branches with correct programs in 65% of tasks.

Table 2 presents the speed-up obtained by variations of our models and controllers. In addition to generalization accuracy and synthesis speed-up, we also show the fraction of branches that were selected for exploration by the controller. Our method obtains an impressive speed-up of over 1.5× in 22 cases. One such test case where we obtain a 12× speed-up is a simple extraction case which is fairly common in Web mining: {"alpha,beta,charlie,delta" ⇝ "alpha"}. For such cases, our model determines transform := atom to be the correct branch (that leads to the final Substring-based program) and hence saves the time required to explore the entire Concat operator, which is expensive. Another interesting test case where we observe a 2.7× speed-up is {"457 124th St S, Seattle, WA 98111" ⇝ "Seattle-WA"}. This test case involves learning a Concat operator initially, followed by Substring and RegexPosition operators. Appendix B includes a comprehensive table of NGDS performance on all the validation and test tasks.

All the models in Table 2 run without attention. As measured by score flip accuracies (i.e. the percentage of correct orderings of branch scores on the same level), attention-based models perform best, achieving 99.57/90.4/96.4% accuracy on train/validation/test, respectively (as compared to 96.09/91.24/91.12% for non-attention models). However, an attention-based model is significantly more computationally expensive at prediction time. Evaluating it dominates the synthesis time and eliminates any potential speed-ups. Thus, we decided to forgo attention in initial NGDS and investigate model compression/binarization in future work.

Error Analysis. As Appendix B shows, NGDS is slower than PROSE on some tasks. This occurs when the predictions do not satisfy the constraints of the controller, i.e. all the predicted scores are within the threshold, or they violate the actual scores during B&B exploration. This leads to NGDS evaluating the LSTM for branches that were previously pruned. This is especially harmful when branches pruned out at the very beginning of the search need to be reconsidered, as it could lead to evaluating the neural network many times. While a single evaluation of the network is quick, a search tree involves many evaluations, and when the performance of PROSE is already < 1 sec, this results in considerable relative slowdown. We provide two examples to illustrate both failure modes:

(a) {"41.7114830017,-91.41233825683,41.60762786865,-91.63739013671" ⇝ "41.7114830017"}. The intended program is a simple substring extraction. However, at depth 1, the predicted score of Concat is much higher than the predicted score of Atom, and thus NGDS explores only the Concat branch. The found Concat program is incorrect because it uses absolute position indexes and does not generalize to other similar extraction tasks. We found this scenario common with punctuation in the output string, which the model considers a strong signal for Concat.

(b) {"type size = 36: Bartok.Analysis.CallGraphNode type size = 32: Bartok.Analysis.CallGraphNode CallGraphNode" ⇝ "36->32"}. In this case, NGDS correctly explores only the Concat branch, but the slowdown happens at the pos symbol. There are many different logics to extract the "36" and "32" substrings.
NGDS explores the RelativePosition branch first, but the score of the resulting program is less than the prediction for RegexPositionRelative. Thus, the B&B controller explores both branches anyway, which leads to a relative slowdown caused by the network evaluation time.

5 RELATED WORK

Neural Program Induction systems synthesize a program by training a new neural network model to map the example inputs to example outputs (Graves et al., 2014; Reed & De Freitas, 2016; Zaremba et al., 2016). Examples include Neural Turing Machines (Graves et al., 2014) that can learn simple programs like copying/sorting, the work of Kaiser & Sutskever (2015) that can perform more complex computations like binary multiplications, and more recent work of Cai et al. (2017) that can incorporate recursion. While we are interested in ultimately producing the right output, all these models need to be re-trained for a given problem type, thus making them unsuitable for real-life synthesis of different programs with few examples.

Neural Program Synthesis systems synthesize a program in a given L with a pre-learned neural network. Seminal works of Bosnjak et al. (2017) and Gaunt et al. (2016) proposed first producing a high-level sketch of the program using procedural knowledge, and then synthesizing the program by combining the sketch with a neural or enumerative synthesis engine. In contrast, the R3NN (Parisotto et al., 2016) and RobustFill (Devlin et al., 2017) systems synthesize the program end-to-end using a neural network; Devlin et al. (2017) show that RobustFill in fact outperforms R3NN. However, RobustFill does not guarantee generation of spec-satisfying programs and often requires more than one example to find the intended program. In fact, our empirical evaluation (Section 4) shows that our hybrid synthesis approach significantly outperforms the purely statistical approach of RobustFill.

DeepCoder (Balog et al., 2017) is also a hybrid synthesis system that guides enumerative program synthesis by prioritizing DSL operators according to a spec-driven likelihood distribution on them. However, NGDS differs from DeepCoder in two important ways: (a) it guides the search process at each recursive level in a top-down, goal-oriented enumeration and thus reshapes the search tree, and (b) it is trained on real-world data instead of random programs, thus achieving better generalization.

Symbolic Program Synthesis has been studied extensively in the PL community (Gulwani et al., 2017; Alur et al., 2013), dating back as far as the 1960s (Waldinger & Lee, 1969). Most approaches employ either bottom-up enumerative search (Udupa et al., 2013), constraint solving (Torlak & Bodik, 2013), or inductive logic programming (Lin et al., 2014), and thus scale poorly to real-world industrial applications (e.g. data wrangling applications). In this work, we build upon deductive search, first studied for synthesis by Manna & Waldinger (1971), and primarily used for program synthesis from formal logical specifications (Puschel et al., 2005; Chaudhari & Damani, 2015). Gulwani (2011) and later Polozov & Gulwani (2015) used it to build PROSE, a commercially successful domain-agnostic system for PBE. While its deductive search guarantees program correctness and also good generalization via an accurate ranking function, it still takes several seconds on complex tasks. Thus, speeding up deductive search requires considerable engineering to develop manual heuristics. NGDS instead integrates neural-driven predictions at each level of deductive search to alleviate this drawback.
The work of Loos et al. (2017) is the closest in technique, but it is applied to an automated theorem prover, and hence need not care about generalization. In contrast, NGDS guides the search toward generalizable programs while relying on the underlying symbolic engine to generate correct programs.

6 CONCLUSION

We studied the problem of real-time program synthesis with a small number of input-output examples. For this problem, we proposed a neural-guided system that builds upon PROSE, a state-of-the-art symbolic logic based system. Our system avoids the exhaustive top-down grammar exploration required by PROSE, thus providing impressive synthesis performance while still retaining the key advantages of a deductive system. That is, compared to existing neural synthesis techniques, our system enjoys the following advantages: (a) correctness: programs generated by our system are guaranteed to satisfy the given input-output specification; (b) generalization: our system learns the user-intended program with just one input-output example in around 60% of test cases, while existing neural systems learn such a program in only 16% of test cases; (c) synthesis time: our system can solve most of the test cases in less than 0.1 sec and provides impressive performance gains over both neural as well as symbolic systems.

The key take-home message of this work is that a deep integration of a symbolic deductive inference based system with statistical techniques leads to the best of both worlds, where we can avoid the extensive engineering effort required by symbolic systems without compromising the quality of generated programs, and at the same time provide significant performance gains (when measured as synthesis time). For future work, exploring better learning models for production rule selection and applying our technique to diverse and more powerful grammars are important research directions.
SyFsGdSlM
Although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic.
6: Marginally above acceptance threshold
The paper presents a branch-and-bound approach to learn good programs (consistent with data, expected to generalise well), where an LSTM is used to predict which branches in the search tree should lead to good programs (at the leaves of the search tree). The LSTM learns from inputs of program spec + candidate branch (given by a grammar production rule) and outputs of quality scores for programs. The issue of how greedy to be in this search is addressed.

In the authors' set-up we simply assume we are given a 'ranking function' h as an input (which we treat as a black box). In practice this will simply be a guess (perhaps a good educated one) on which programs will perform correctly on future data. As the authors indicate, a more ambitious paper would consider learning h, rather than assuming it as a given.

The paper has a number of positive features. It is clearly written (without typo or grammatical problems). The empirical evaluation against PROSE is properly done and shows the presented method working as hoped. This was a competent approach to an interesting (real) problem.

However, the 'deep learning' aspect of the paper is not prominent: an LSTM is used as a plug-in and that is about it. Also, although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic. The authors do not explain what "without attention" means.

I think the authors should mention the existence of (logic) program synthesis using inductive logic programming. There are also (closely related) methods developed by the LOPSTR (logic-based program synthesis and transformation) community. Many of the ideas here are reminiscent of methods existing in those communities (e.g. top-down search with heuristics). The use of a grammar to define the space of programs is similar to the "DLAB" formalism developed by researchers at KU Leuven.

ADDED AFTER REVISIONS/DISCUSSIONS: The revised paper has a number of improvements which have led me to give it a slightly higher rating.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples ### Paper Abstract Synthesizing user-intended programs from a small number of input-output exam- ples is a challenging problem with several important applications like spreadsheet manipulation, data wrangling and code refactoring. Existing synthesis systems either completely rely on deductive logic techniques that are extensively hand- engineered or on purely statistical models that need massive amounts of data, and in general fail to provide real-time synthesis on challenging benchmarks. In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models. Thus, it produces programs that satisfy the provided specifications by construction and generalize well on unseen examples, similar to data-driven systems. Our technique effectively utilizes the deductive search framework to reduce the learning problem of the neural component to a simple supervised learning setup. Further, this allows us to both train on sparingly available real-world data and still leverage powerful recurrent neural network encoders. We demonstrate the effectiveness of our method by evaluating on real-world customer scenarios by synthesizing accurate programs with up to 12× speed-up compared to state-of-the-art systems. ### Paper Keywords ["Program synthesis", "deductive search", "deep learning", "program induction", "recurrent neural networks"] ### Paper Content ABSTRACTSynthesizing user-intended programs from a small number of input-output exam-ples is a challenging problem with several important applications like spreadsheetmanipulation, data wrangling and code refactoring. Existing synthesis systemseither completely rely on deductive logic techniques that are extensively hand-engineered or on purely statistical models that need massive amounts of data, and ingeneral fail to provide real-time synthesis on challenging benchmarks. In this work,we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis techniquethat combines the best of both symbolic logic techniques and statistical models.Thus, it produces programs that satisfy the provided specifications by constructionand generalize well on unseen examples, similar to data-driven systems. Ourtechnique effectively utilizes the deductive search framework to reduce the learningproblem of the neural component to a simple supervised learning setup. Further,this allows us to both train on sparingly available real-world data and still leveragepowerful recurrent neural network encoders. We demonstrate the effectivenessof our method by evaluating on real-world customer scenarios by synthesizingaccurate programs with up to 12 speed-up compared to state-of-the-art systems.1 I NTRODUCTIONAutomatic synthesis of programs that satisfy a given specification is a classical problem inAI (Waldinger & Lee, 1969), with extensive literature in both machine learning and programminglanguages communities. Recently, this area has gathered widespread interest, mainly spurred bythe emergence of a sub-area – Programming by Examples (PBE) (Gulwani, 2011). A PBE systemsynthesizes programs that map a given set of example inputs to their specified example outputs. 
Suchsystems make many tasks accessible to a wider audience as example-based specifications can beeasily provided even by end users without programming skills. See Figure 1 for an example. PBEsystems are usually evaluated on three key criteria: (a)correctness : whether the synthesized programInput OutputYann LeCunn Y LeCunnHugo Larochelle H LarochelleTara Sainath T SainathYoshua Bengio ?Figure 1: An example input-output spec; the goal is to learn aprogram that maps the given inputs to the corresponding outputsand generalizes well to new inputs. Both programs belowsatisfy the spec: (i)Concat (1stletter of 1stword, 2ndword), (ii)Concat (4th-last letter of 1stword, 2ndword). However, program(i)clearly generalizes better: for instance, its output on “YoshuaBengio” is “Y Bengio” while program (ii)produces “s Bengio”.Work done during an internship at Microsoft Research.yEqual contribution.1Published as a conference paper at ICLR 2018satisfies the spec i.e.the provided example input-output mapping, (b)generalization : whether theprogram produces the desired outputs on unseen inputs, and finally, (c)performance : synthesis time.State-of-the-art PBE systems are either symbolic , based on enumerative or deductive search (Gulwani,2011; Polozov & Gulwani, 2015) or statistical , based on data-driven learning to induce the most likelyprogram for the spec (Gaunt et al., 2016; Balog et al., 2017; Devlin et al., 2017). Symbolic systems aredesigned to produce a correct program by construction using logical reasoning and domain-specificknowledge. They also produce the intended program with few input-output examples (often just 1).However, they require significant engineering effort and their underlying search processes strugglewith real-time performance, which is critical for user-facing PBE scenarios.In contrast, statistical systems do not rely on specialized deductive algorithms, which makes theirimplementation and training easier. However, they lack in two critical aspects. First, they requirea lot of training data and so are often trained using randomly generated tasks. As a result, inducedprograms can be fairly unnatural and fail to generalize to real-world tasks with a small number ofexamples. Second, purely statistical systems like RobustFill (Devlin et al., 2017) do not guaranteethat the generated program satisfies the spec. Thus, solving the synthesis task requires generatingmultiple programs with a beam search and post-hoc filtering, which defeats real-time performance.Neural-Guided Deductive Search Motivated by shortcomings of both the above approaches,we propose Neural-Guided Deductive Search (NGDS), a hybrid synthesis technique that bringstogether the desirable aspects of both methods. The symbolic foundation of NGDS is deductivesearch (Polozov & Gulwani, 2015) and is parameterized by an underlying domain-specific language(DSL) of target programs. Synthesis proceeds by recursively applying production rules of the DSL todecompose the initial synthesis problem into smaller sub-problems and further applying the samesearch technique on them. Our key observation I is that most of the deduced sub-problems do notcontribute to the final best program and therefore a priori predicting the usefulness of pursuing aparticular sub-problem streamlines the search process resulting in considerable time savings. 
InNGDS, we use a statistical model trained on real-world data to predict a score that corresponds to thelikelihood of finding a generalizable program as a result of exploring a sub-problem branch.Our key observation II is that speeding up deductive search while retaining its correctness orgeneralization requires a close integration of symbolic and statistical approaches via an intelligentcontroller. It is based on the “branch & bound” technique from combinatorial optimization (Clausen,1999). The overall algorithm integrates (i) deductive search, (ii) a statistical model that predicts, apriori , the generalization score of the best program from a branch, and (iii) a controller that selectssub-problems for further exploration based on the model’s predictions.Since program synthesis is a sequential process wherein a sequence of decisions (here, selectionsof DSL rules) collectively construct the final program, a reinforcement learning setup seems morenatural. However, our key observation III is that deductive search is Markovian – it generatesindependent sub-problems at every level. In other words, we can reason about a satisfying programfor the sub-problem without factoring in the bigger problem from which it was deduced. This bringsthree benefits enabling a supervised learning formulation: (a)a dataset of search decisions at everylevel over a relatively small set of PBE tasks that contains an exponential amount of informationabout the DSL promoting generalization, (b)such search traces can be generated and used for offlinetraining, (c)we can learn separate models for different classes of sub-problems (e.g. DSL levels orrules), with relatively simpler supervised learning tasks.Evaluation We evaluate NGDS on the string transformation domain, building on top of PROSE,a commercially successful deductive synthesis framework for PBE (Polozov & Gulwani, 2015).It represents one of the most widespread and challenging applications of PBE and has shipped inmultiple mass-market tools including Microsoft Excel and Azure ML Workbench.1We train andvalidate our method on 375scenarios obtained from real-world customer tasks (Gulwani, 2011;Devlin et al., 2017). Thanks to the Markovian search properties described above, these scenariosgenerate a dataset of 400;000+ intermediate search decisions. NGDS produces intended programson68% of the scenarios despite using only oneinput-output example. In contrast, state-of-the-artneural synthesis techniques (Balog et al., 2017; Devlin et al., 2017) learn intended programs from a1https://microsoft.github.io/prose/impact/2Published as a conference paper at ICLR 2018single example in only 24-36% of scenarios taking 4more time. Moreover, NGDS matches theaccuracy of baseline PROSE while providing a speed-up of up to 12over challenging tasks.Contributions First, we present a branch-and-bound optimization based controller that exploitsdeep neural network based score predictions to select grammar rules efficiently (Section 3.2). Second,we propose a program synthesis algorithm that combines key traits of a symbolic and a statisticalapproach to retain desirable properties like correctness, robust generalization, and real-time perfor-mance (Section 3.3). 
Third, we evaluate NGDS against state-of-the-art baselines on real customertasks and show significant gains (speed-up of up to 12) on several critical cases (Section 4).2 B ACKGROUNDIn this section, we provide a brief background on PBE and the PROSE framework, using establishedformalism from the programming languages community.Domain-Specific Language A program synthesis problem is defined over a domain-specific lan-guage (DSL). A DSL is a restricted programming language that is suitable for expressing tasks in agiven domain, but small enough to restrict a search space for program synthesis. For instance, typicalreal-life DSLs with applications in textual data transformations (Gulwani, 2011) often include condi-tionals, limited forms of loops, and domain-specific operators such as string concatenation, regularexpressions, and date/time formatting. DSLs for tree transformations such as code refactoring (Rolimet al., 2017) and data extraction (Le & Gulwani, 2014) include list/data-type processing operatorssuch as Map andFilter , as well as domain-specific matching operators. Formally, a DSL Lis speci-fied as a context-free grammar, with each non-terminal symbol Ndefined by a set of productions.The right-hand side of each production is an application of some operator F(N1;:::;Nk)to somesymbols ofL. All symbols and operators are strongly typed. Figure 2 shows a subset of the Flash FillDSL that we use as a running example in this paper.Inductive Program Synthesis The task of inductive program synthesis is characterized by a spec.A spec'is a set ofminput-output constraintsfi igmi=1, where:•, aninput state is a mapping of free variables of the desired program Pto some correspondinglytyped values. At the top level of L, a program (and its expected input state) has only one freevariable – the input variable of the DSL (e.g., inputs in Figure 2). Additional local variables areintroduced insideLwith a let construct.• is an output constraint on the execution result of the desired program P(i). At the top level ofL, when provided by the user, is usually the output example – precisely the expected result ofP(i). However, other intermediate constraints arise during the synthesis process. For instance, may be a disjunction of multiple allowed outputs.The overall goal of program synthesis is thus: given a spec ', find a program Pin the underlyingDSLLthatsatisfies',i.e., its outputs P(i)satisfy all the corresponding constraints i.Example 1. Consider the task of formatting a phone number, characterized by the spec '=finputs : [“(612) 8729128 ”]g “612-872-9128 ”. It has a single input-output example,with an input state containing a single variable inputs and its value which is a list with a singleinput string. The output constraint is simply the desired program result.The program the user is most likely looking for is the one that extracts (a) the part of the inputenclosed in the first pair of parentheses, (b) the 7thto 4thcharacters from the end, and (c) the last 4characters, and then concatenates all three parts using hyphens. In our DSL, this corresponds to:ConcatSubStr 0(RegexPosition (x;h“(”;"i;0);RegexPosition (x;h";“)”i;0));ConstStr (“-”);SubStr 0(AbsolutePosition (x;8);AbsolutePosition (x;5));ConstStr (“-”);SubStr 0(AbsolutePosition (x;5);AbsolutePosition (x;1))where"is an empty regex, SubStr 0(pos 1;pos 2)is an abbreviation for “ letx=std:Kth(inputs; 0)inSubstring (x;hpos 1;pos 2i)”, andhiis an abbreviation for std:Pair.However, many other programs in the DSL also satisfy '. 
For instance, all occurrences of “8”inthe output can be produced via a subprogram that simply extracts the last character. Such a programoverfits to'and is bound to fail for other inputs where the last character and the 4thone differ.3Published as a conference paper at ICLR 2018// Nonterminals@start string transform :=atom | Concat( atom ,transform );stringatom :=ConstStr( s)| let string x= std.Kth( inputs ,k) in Substring( x,pp);Tuple<int, int> pp:=std.Pair( pos,pos) | RegexOccurrence( x,r,k);intpos:=AbsolutePosition( x,k) | RegexPosition( x, std.Pair( r,r),k);// Terminals@input string[] inputs ; string s; int k; Regex r;Figure 2: A subset of the FlashFill DSL (Gulwani, 2011), used as a running example in this paper.Every program takes as input a list of strings inputs , and returns an output string, a concatenationofatoms . Each atom is either a constant or a substring of one of the inputs ( x), extracted usingsome position logic. The RegexOccurrence position logic finds kthoccurrence of a regex rinxandreturns its boundaries. Alternatively, start and end positions can be selected independently either asabsolute indices in xfrom left or right ( AbsolutePosition ) or as thekthoccurrence of a pair of regexessurrounding the position ( RegexPosition ). See Gulwani (2011) for an in-depth DSL description.As Example 1 shows, typical real-life problems are severely underspecified. A DSL like FlashFillmay contain up to 1020programs that satisfy a given spec of 1-3 input-output examples (Polozov &Gulwani, 2015). Therefore, the main challenge lies in finding a program that not only satisfies theprovided input-output examples but also generalizes to unseen inputs . Thus, the synthesis processusually interleaves search andranking : the search phase finds a set of spec-satisfying programs in theDSL, from which the ranking phase selects top programs ordered using a domain-specific rankingfunctionh:L~!Rwhere is the set of all input states. The ranking function takes as input acandidate program P2L and a set of input states ~ 2~(usually~ =inputs in the given spec + anyavailable unlabeled inputs), and produces a score for P’sgeneralization .The implementation of hexpresses a subtle balance between program generality, complexity, andbehavior on available inputs. For instance, in FlashFill hpenalizes overly specific regexes, prefersprograms that produce fewer empty outputs, and prioritizes lower Kolmogorov complexity, amongother features. In modern PBE systems like PROSE, his usually learned in a data-driven mannerfrom customer tasks (Singh & Gulwani, 2015; Ellis & Gulwani, 2017). While designing and learningsuch a ranking is an interesting problem in itself, in this work we assume a black-box access to h.Finally, the problem of inductive program synthesis can be summarized as follows:Problem 1. Given a DSLL, a ranking function h, a spec'=fi igmi=1, optionally a setof unlabeled inputs ~ u, and a target number of programs K, let~ =~ u[figmi=1. The goal ofinductive program synthesis is to find a program set S=fP1;:::;PKgL such that (a)everyprogram inSsatisfies', and (b)the programs inSgeneralize best: h(Pi;~ )h(P;~ )for anyotherP2L that satisfies '.Search Strategy Deductive search strategy for program synthesis, employed by PROSE exploresthe grammar ofLtop-down – iteratively unrolling the productions into partial programs starting fromthe root symbol. 
At each step, following the divide-and-conquer paradigm, the search reduces its synthesis problem to smaller subproblems defined over the parameters of the current production. Formally, given a spec $\varphi$ and a symbol $N$, PROSE computes the set Learn($N, \varphi$) of top programs w.r.t. $h$ using two guiding principles:

1. If $N$ is defined through $n$ productions $N := F_1(\dots) \mid \dots \mid F_n(\dots)$, PROSE finds a $\varphi$-satisfying program set for every $F_i$, and unites the results, i.e., Learn($N, \varphi$) $= \cup_i$ Learn($F_i(\dots), \varphi$).

2. For a given production $N := F(N_1, \dots, N_k)$, PROSE spawns off $k$ smaller synthesis problems Learn($N_j, \varphi_j$), $1 \le j \le k$, wherein PROSE deduces necessary and sufficient specs $\varphi_j$ for each $N_j$ such that every program of type $F(P_1, \dots, P_k)$, where $P_j \in$ Learn($N_j, \varphi_j$), satisfies $\varphi$. The deduction logic (called a witness function) is domain-specific for each operator $F$. PROSE then again recursively solves each subproblem and unites a cross-product of the results.

Example 2. Consider a spec $\varphi = \{\text{“Yann”} \rightsquigarrow \text{“Y.L”}\}$ on a transform program. Via the first production transform := atom, the only $\varphi$-satisfying program is ConstStr(“Y.L”). The second production on the same level is Concat(atom, transform). A necessary and sufficient spec on the atom sub-program is that it should produce some prefix of the output string. Thus, the witness function for the Concat operator produces a disjunctive spec $\varphi_a = \{\text{“Yann”} \rightsquigarrow \text{“Y”} \lor \text{“Y.”}\}$. Each of these disjuncts, in turn, induces a corresponding necessary and sufficient suffix spec on the second parameter: $\varphi_{t1} = \{\text{“Yann”} \rightsquigarrow \text{“.L”}\}$ and $\varphi_{t2} = \{\text{“Yann”} \rightsquigarrow \text{“L”}\}$, respectively. The disjuncts in $\varphi_a$ will be recursively satisfied by different program sets: “Y.” can only be produced via an atom path with a ConstStr program, whereas “Y” can also be extracted from the input using many Substring logics (their generalization capabilities vary). Figure 3 shows the resulting search DAG.

[Figure 3: A portion of the search DAG from Example 2, with nodes for the transform, atom, Concat(...), ConstStr(s), and Substring(...) subproblems. Only the output parts of the respective specs are shown in each node; their common input state is the single string “Yann”. Dashed arrows show recursive Learn calls on a corresponding DSL symbol.]

Notice that the above-mentioned principles create logical non-determinism, due to which we might need to explore multiple alternatives in a search tree. As such non-determinism arises at every level of the DSL with potentially any operator, the search tree (and the resulting search process) is exponential in size. While all the branches of the tree by construction produce programs that satisfy the given spec, most of the branches do not contribute to the overall top-ranked generalizable program. During deductive search, PROSE has limited information about the programs potentially produced from each branch, and cannot estimate their quality, thus exploring the entire tree unnecessarily. Our main contribution is a neural-guided search algorithm that predicts the best program scores from each branch, and allows PROSE to omit branches that are unlikely to produce the desired program a priori.

3 SYNTHESIS ALGORITHM

Consider an arbitrary branching moment in the top-down search strategy of PROSE. For example, let $N$ be a nonterminal symbol in $\mathcal{L}$, defined through a set of productions $N := F_1(\dots) \mid \dots \mid F_n(\dots)$, and let $\varphi$ be a spec on $N$, constructed earlier during the recursive descent over $\mathcal{L}$.
A conservative way to select the top $k$ programs rooted at $N$ (as defined by the ranking function $h$), i.e., to compute Learn($N, \varphi$), is to learn the top $k$ programs of kind $F_i(\dots)$ for all $i \in [n]$ and then select the top $k$ programs overall from the union of program sets learned for each production. Naturally, exploring all the branches for each nonterminal in the search tree is computationally expensive.

In this work, we propose a data-driven method to select an appropriate production rule $N := F_i(N_1, \dots, N_k)$ that would most likely lead to a top-ranked program. To this end, we use the current spec $\varphi$ to determine the “optimal” rule. Now, it might seem unintuitive that, even without exploring a production rule and finding the best program in the corresponding program set, we can a priori determine the optimality of that rule. However, we argue that by understanding $\varphi$ and its relationship with the ranking function $h$, we can predict the intended branch in many real-life scenarios.

Example 3. Consider a spec $\varphi = \{\text{“alice”} \rightsquigarrow \text{“alice@iclr.org”}, \text{“bob”} \rightsquigarrow \text{“bob@iclr.org”}\}$. While learning a program in the $\mathcal{L}$ given by Figure 2 that satisfies $\varphi$, it is clear right at the beginning of the search procedure that the rule transform := atom does not apply. This is because any program derived from transform := atom can either extract a substring from the input or return a constant string, both of which fail to produce the desired output. Hence, we should only consider transform := Concat(...), thus significantly reducing the search space.

Similarly, consider another spec $\varphi = \{\text{“alice smith”} \rightsquigarrow \text{“alice”}, \text{“bob jones”} \rightsquigarrow \text{“bob”}\}$. In this case, the output appears to be a substring of the input, so selecting transform := atom at the beginning of the search procedure is a better option than transform := Concat(...).

However, many such decisions are more subtle and depend on the ranking function $h$ itself. For example, consider a spec $\varphi = \{\text{“alice liddell”} \rightsquigarrow \text{“al”}, \text{“bob ong”} \rightsquigarrow \text{“bo”}\}$. Now, both transform := atom and transform := Concat(...) may lead to viable programs, because the output can be constructed using the first two letters of the input (i.e., a substring atom) or by concatenating the first letters of each word. Hence, the branch that produces the best program is ultimately determined by the ranking function $h$, since both branches generate valid programs.

Example 3 shows that to design a data-driven search strategy for branch selection, we need to learn the subtle relationship between $\varphi$, $h$, and the candidate branch. Below, we provide one such model.

3.1 PREDICTING THE GENERALIZATION SCORE

As mentioned above, our goal is to predict one or more production rules that, for a given spec $\varphi$, will lead to a top-ranked program (as ranked a posteriori by $h$). Formally, given black-box access to $h$, we want to learn a function $f$ such that
$$f(\Gamma, \varphi) \approx \max_{P \in S(\Gamma, \varphi)} h(P, \varphi),$$
where $\Gamma$ is a production rule in $\mathcal{L}$, and $S(\Gamma, \varphi)$ is a program set of all DSL programs derived from the rule $\Gamma$ that satisfy $\varphi$. In other words, we want to predict the score of the top-ranked $\varphi$-satisfying program that is synthesized by unrolling the rule $\Gamma$. We assume that the symbolic search of PROSE handles the construction of $S(\Gamma, \varphi)$ and ensures that programs in it satisfy $\varphi$ by construction. The goal of $f$ is to estimate the score of the best program derived from $\Gamma$, assuming such a program is valid. If no program derived from $\Gamma$ can satisfy $\varphi$, $f$ should return $-\infty$. Note that, drawing upon observations mentioned in Section 1, we have cast the production selection problem as a supervised learning problem, thus simplifying the learning task as opposed to an end-to-end reinforcement learning solution.
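To make the learning problem concrete, here is a schematic sketch of one possible score model in PyTorch. The paper's models were implemented in CNTK and are described next; the class name, embedding size, and single-layer LSTMs here are illustrative assumptions, not the authors' exact architecture:

```python
import torch
import torch.nn as nn

class ScoreModel(nn.Module):
    """Predicts f(Gamma, phi): the h-score of the best phi-satisfying
    program derivable from production Gamma (cf. Figure 4, no attention)."""
    def __init__(self, vocab_size, n_productions, dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(vocab_size, dim)
        self.in_lstm = nn.LSTM(dim, dim, batch_first=True)   # encodes input state
        self.out_lstm = nn.LSTM(dim, dim, batch_first=True)  # encodes output example(s)
        self.prod_emb = nn.Embedding(n_productions, dim)     # encodes Gamma
        self.head = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))         # two FC layers

    def forward(self, inp_chars, out_chars, production_id):
        _, (h_in, _) = self.in_lstm(self.char_emb(inp_chars))
        _, (h_out, _) = self.out_lstm(self.char_emb(out_chars))
        g = self.prod_emb(production_id)
        feats = torch.cat([h_in[-1], h_out[-1], g], dim=-1)
        return self.head(feats).squeeze(-1)  # trained with the squared loss below
```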
We have evaluated two models for learning $f$. The loss function for the prediction is given by:
$$L(f; \Gamma, \varphi) = \Big(f(\Gamma, \varphi) - \max_{P \in S(\Gamma, \varphi)} h(P, \varphi)\Big)^2.$$
Figure 4 shows a common structure of both models we have evaluated. Both are based on a standard multi-layer LSTM architecture (Hochreiter & Schmidhuber, 1997) and involve (a) embedding the given spec $\varphi$, (b) encoding the given production rule $\Gamma$, and (c) a feed-forward network to output a score $f(\Gamma, \varphi)$. One model attends over the input when it encodes the output, whereas the other does not.

[Figure 4: LSTM-based model for predicting the score of a candidate production for a given spec $\varphi$. Character embeddings of the input state and the output example(s) feed two LSTM encoders; their encodings, together with an embedding of the production rule, pass through two fully-connected layers to produce the predicted score.]

3.2 CONTROLLER FOR BRANCH SELECTION

A score model $f$ alone is insufficient to perfectly predict the branches that should be explored at every level. Consider again a branching decision moment $N := F_1(\dots) \mid \dots \mid F_n(\dots)$ in a search process for the top $k$ programs satisfying a spec $\varphi$. One naïve approach to using the predictions of $f$ is to always follow the highest-scored production rule $\arg\max_i f(F_i, \varphi)$. However, this means that any single incorrect decision on the path from the DSL root to the desired program will eliminate that program from the learned program set. If our search algorithm fails to produce the desired program by committing to a suboptimal branch anytime during the search process, then the user may never discover that such a program exists unless they supply an additional input-output example.

Thus, a branch selection strategy based on the predictions of $f$ must balance a trade-off of performance and generalization. Selecting too few branches (a single best branch in the extreme case) risks committing to an incorrect path early in the search process and producing a suboptimal program or no program at all. Selecting too many branches (all $n$ branches in the extreme case) is no different from baseline PROSE and fails to exploit the predictions of $f$ to improve its performance.

Formally, a controller for branch selection at a symbol $N := F_1(\dots) \mid \dots \mid F_n(\dots)$ targeting the $k$ best programs must (a) predict the expected score of the best program from each program set: $s_i = f(F_i, \varphi)$ for all $1 \le i \le n$; and (b) use the predicted scores $s_i$ to narrow down the set of productions $F_1, \dots, F_n$ to explore, and to obtain the overall result by selecting a subset of generated programs. In this work, we propose and evaluate two controllers. Their pseudocode is shown in Figure 5.

function ThresholdBased(φ, h, k, s_1, ..., s_n)
  1: Result set S ← []
  2: i* ← argmax_i s_i
  3: for all 1 ≤ i ≤ n do
  4:   if |s_{i*} − s_i| ≤ θ then
  5:     S += Learn(F_i, φ, k)      // recursive search
  6: return the top k programs of S w.r.t. h

function BnBBased(φ, h, k, s_1, ..., s_n)
  1: Result set S ← []; program target k′ ← k
  2: reorder the F_i in descending order of s_i
  3: for all 1 ≤ i ≤ n do
  4:   S_i ← Learn(F_i, φ, k′)      // recursive search
  5:   j ← BinarySearch(s_{i+1}, Map(h, S_i))
  6:   S ← S ∪ S_i[0..j]; k′ ← k′ − j
  7:   if k′ ≤ 0 then break
  8: return S

Figure 5: The controllers for guiding the search process to construct a most generalizable $\varphi$-satisfying program set $S$ of size $k$, given the $f$-predicted best scores $s_1, \dots, s_n$ of the productions $F_1, \dots, F_n$.
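A Python transcription of the two controllers in Figure 5 follows. For readability, `learn(F, phi, k)` stands for the recursive synthesis call, `h` is simplified to take a program alone, and the default threshold value is illustrative only:

```python
def threshold_based(phi, h, k, scores, productions, learn, theta=0.1):
    """Explore every branch whose predicted score is within theta of the
    best one (theta = 0 recovers the naive argmax controller)."""
    best = max(scores)
    result = []
    for F, s in zip(productions, scores):
        if best - s <= theta:
            result += learn(F, phi, k)            # recursive search
    return sorted(result, key=h, reverse=True)[:k]

def branch_and_bound(phi, h, k, scores, productions, learn):
    """Visit branches in descending predicted-score order; keep only programs
    the next branch's prediction cannot beat, and shrink the target count."""
    order = sorted(zip(scores, productions), reverse=True, key=lambda t: t[0])
    result, k_left = [], k
    for i, (s, F) in enumerate(order):
        programs = sorted(learn(F, phi, k_left), key=h, reverse=True)
        next_best = order[i + 1][0] if i + 1 < len(order) else float("-inf")
        kept = [P for P in programs if h(P) >= next_best]
        result += kept
        k_left -= len(kept)
        if k_left <= 0:
            break
    return result
```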
Threshold-based: Fix a score threshold $\theta$, and explore those branches whose predicted score differs by at most $\theta$ from the maximum predicted score. This is a simple extension of the naïve “argmax” controller discussed earlier that also explores any branches predicted to be “approximately as good as the best one”. When $\theta = 0$, it reduces to the “argmax” one.

Branch & Bound: This controller is based on the “branch & bound” technique in combinatorial optimization (Clausen, 1999). Assume the branches $F_i$ are ordered in the descending order of their respective predicted scores $s_i$. After recursive learning produces its program set $S_i$, the controller proceeds to the next branch only if $s_{i+1}$ exceeds the score of the worst program in $S_i$. Moreover, it reduces the target number of programs to be learned, using $s_{i+1}$ as a lower bound on the scores of the programs in $S_i$. That is, rather than relying blindly on the predicted scores, the controller guides the remaining search process by accounting for the actual synthesized programs as well.

3.3 NEURAL-GUIDED DEDUCTIVE SEARCH

We now combine the above components to present our unified algorithm for program synthesis. It builds upon the deductive search of the PROSE system, which uses symbolic PL insights in the form of witness functions to construct and narrow down the search space, and a ranking function $h$ to pick the most generalizable program from the found set of spec-satisfying ones. However, it significantly speeds up the search process by guiding it a priori at each branching decision using the learned score model $f$ and a branch selection controller, outlined in Sections 3.1 and 3.2. The resulting neural-guided deductive search (NGDS) keeps the symbolic insights that construct the search tree, ensuring correctness of the found programs, but explores only those branches of this tree that are likely to produce the user-intended generalizable program, thus eliminating unproductive search time.

A key idea in NGDS is that the score prediction model $f$ does not have to be the same for all decisions in the search process. It is possible to train separate models for different DSL levels, symbols, or even productions. This allows the model to use different features of the input-output spec for evaluating the fitness of different productions, and also leads to much simpler supervised learning problems.

Figure 6 shows the pseudocode of NGDS. It builds upon the deductive search of PROSE, but augments every branching decision on a symbol with a branch selection controller from Section 3.2.

Given: DSL $\mathcal{L}$, ranking function $h$, controller $C$ from Figure 5 (ThresholdBased or BnBBased), and the symbolic search algorithm Learn(production rule $\Gamma$, spec $\varphi$, target $k$) as in PROSE (Polozov & Gulwani, 2015, Figure 7), with all recursive calls to Learn replaced with LearnNGDS.

function LearnNGDS(symbol N := F_1(...) | ... | F_n(...), spec φ, target number of programs k)
  1: if n = 1 then return Learn(F_1, φ, k)
  2: pick a score model f based on depth(N, L)
  3: s_1, ..., s_n ← f(F_1, φ), ..., f(F_n, φ)
  4: return C(φ, h, k, s_1, ..., s_n)

Figure 6: Neural-guided deductive search over $\mathcal{L}$, parameterized with a branch selection controller $C$.
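The overall recursion, i.e., the two Learn principles of Section 2 with the neural-guided branch selection of Figure 6 layered on top, might be organized as in the sketch below. This is our illustration under assumed interfaces: `grammar`, `witness`, and `Production.params`/`.apply` stand in for PROSE's actual (and more elaborate) machinery:

```python
import itertools

def learn(symbol, phi, k, grammar, witness, h, score_model=None, controller=None):
    """Learn(N, phi): top-k phi-satisfying programs for symbol N, optionally
    guided by a score model and controller as in Figure 6 (a sketch)."""
    productions = grammar[symbol]
    recurse = lambda F, p, t: learn_operator(F, p, t, grammar, witness, h,
                                             score_model, controller)
    if score_model is not None and len(productions) > 1:
        scores = [score_model(F, phi) for F in productions]   # f(F_i, phi)
        return controller(phi, h, k, scores, productions, recurse)
    programs = []                                 # principle 1: union over F_i
    for F in productions:
        programs += recurse(F, phi, k)
    return sorted(programs, key=h, reverse=True)[:k]

def learn_operator(F, phi, k, grammar, witness, h, score_model, controller):
    # Principle 2: the witness function deduces a sub-spec phi_j per parameter;
    # solve the subproblems recursively and unite a cross-product of results.
    sub_results = [learn(N_j, witness(F, j, phi), k, grammar, witness, h,
                         score_model, controller)
                   for j, N_j in enumerate(F.params)]
    return [F.apply(args) for args in itertools.product(*sub_results)]
```

The `controller` argument can be either of the two functions sketched earlier, which is how the algorithm reshapes the search tree at every branching decision.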
We present a comprehensive evaluation of different strategies in Section 4.

Metric              | PROSE | DC1   | DC2   | DC3   | RF1   | RF2   | RF3   | NGDS
Accuracy (% of 73)  | 67.12 | 35.81 | 47.38 | 62.92 | 24.53 | 39.72 | 56.41 | 68.49
Speed-up (vs. PROSE)| 1.00  | 1.82  | 1.53  | 1.42  | 0.25  | 0.27  | 0.30  | 1.67

Table 1: Accuracy and average speed-up of NGDS vs. baseline methods. Accuracies are computed on a test set of 73 tasks. Speed-up of a method is the geometric mean of its per-task speed-up (ratio of synthesis time of PROSE and of the method) when restricted to the subset of tasks with PROSE's synthesis time ≥ 0.5 sec.

4 EVALUATION

In this section, we evaluate our NGDS algorithm over the string manipulation domain with the DSL given by Figure 2; see Figure 1 for an example task. We evaluate NGDS, its ablations, and baseline techniques on two key metrics: (a) generalization accuracy on unseen inputs, and (b) synthesis time.

Dataset. We use a dataset of 375 tasks collected from real-world customer string manipulation problems, split into 65% training, 15% validation, and 20% test data. Some of the common applications found in our dataset include date/time formatting, manipulating addresses, modifying names, automatically generating email IDs, etc. Each task contains about 10 inputs, of which only one is provided as the spec to the synthesis system, mimicking industrial applications. The remaining unseen examples are used to evaluate generalization performance of the synthesized programs. After running synthesis of top-1 programs with PROSE on all training tasks, we have collected a dataset of ≈400,000 intermediate search decisions, i.e., triples ⟨production $\Gamma$, spec $\varphi$, a posteriori best score $h(P, \varphi)$⟩.

Baselines. We compare our method against two state-of-the-art neural synthesis algorithms: RobustFill (Devlin et al., 2017) and DeepCoder (Balog et al., 2017). For RobustFill, we use the best-performing Attention-C model with their recommended DP-Beam Search with a beam size of 100, as it seems to perform the best; Table 3 in Appendix A presents results with different beam sizes. As in the original work, we select the top-1 program ranked according to the generated log-likelihood. DeepCoder is a generic framework that allows their neural predictions to be combined with any program synthesis method. So, for fair comparison, we combine DeepCoder's predictions with PROSE. We train a DeepCoder model to predict a distribution over $\mathcal{L}$'s operators and, as proposed, use it to guide PROSE synthesis. Since both RobustFill and DeepCoder are trained on randomly sampled programs and are not optimized for generalization in the real world, we include their variants trained with 2 or 3 examples (denoted RF_m and DC_m) for fairness, although m = 1 example is the most important scenario in real-life industrial usage.

Ablations. As mentioned in Section 3, our novel usage of score predictors to guide the search enables us to have multiple prediction models and controllers at various stages of the synthesis process. Here we investigate ablations of our approach with models that specialize in predictions for individual levels of the search process. The model T1 is trained for the symbol transform (Figure 2) when expanded at the first level. Similarly, PP and POS refer to models trained for the pp and pos symbols, respectively.
Finally, we train all our LSTM-based models with CNTK (Seide & Agarwal, 2016) using Adam (Kingma & Ba, 2014) with a learning rate of $10^{-2}$ and a batch size of 32, using early stopping on the validation loss to select the best performing model (thus, 100-600 epochs).

We also evaluate three controllers: the threshold-based (Thr) and branch-and-bound (BB) controllers given in Figure 5, and a combination of them, branch-and-bound with a 0.2-threshold predecessor (BB_0.2). In Tables 1 and 2 we denote different model combinations as NGDS(f, C), where f is a symbol-based model and C is a controller. The final algorithm selection depends on its accuracy-performance trade-off. In Table 1, we use NGDS(T1 + POS, BB), the best performing algorithm on the test set, although NGDS(T1, BB) performs slightly better on the validation set.

Evaluation Metrics. Generalization accuracy is the percentage of test tasks for which the generated program satisfies all unseen inputs in the task. Synthesis time is measured as the wall-clock time taken by a synthesis method to find the correct program, median over 5 runs. We run all the methods on the same machine with a 2.3 GHz Intel Xeon processor, 64 GB of RAM, and Windows Server 2016.

Results. Table 1 presents generalization accuracy as well as synthesis time speed-up of various methods w.r.t. PROSE. As we strive to provide real-time synthesis, we only compare the times for tasks which require PROSE more than 0.5 sec. Note that, with one example, NGDS and PROSE are significantly more accurate than RobustFill and DeepCoder. This is natural, as those methods are not trained to optimize generalization, but it also highlights the advantage of a close integration with a symbolic system (PROSE) that incorporates deep domain knowledge. Moreover, on average, our method saves more than 50% of synthesis time over PROSE. While DeepCoder with one example speeds up the synthesis even more, it does so at the expense of accuracy, eliminating branches with correct programs in 65% of tasks.

Method              | Valid. Accuracy | Valid. Speed-up | Test Accuracy | Test Speed-up | % of branches
PROSE               | 70.21 | 1.00 | 67.12 | 1.00 | 100.00
NGDS(T1, Thr)       | 59.57 | 1.15 | 67.12 | 1.27 | 62.72
NGDS(T1, BB)        | 63.83 | 1.58 | 68.49 | 1.22 | 51.78
NGDS(T1, BB_0.2)    | 61.70 | 1.03 | 67.12 | 1.22 | 63.16
NGDS(T1+PP, Thr)    | 59.57 | 0.76 | 67.12 | 0.97 | 56.41
NGDS(T1+PP, BB)     | 61.70 | 1.05 | 72.60 | 0.89 | 50.22
NGDS(T1+PP, BB_0.2) | 61.70 | 0.72 | 67.12 | 0.86 | 56.43
NGDS(T1+POS, Thr)   | 61.70 | 1.19 | 67.12 | 1.93 | 55.63
NGDS(T1+POS, BB)    | 63.83 | 1.13 | 68.49 | 1.67 | 50.44
NGDS(T1+POS, BB_0.2)| 63.83 | 1.19 | 67.12 | 1.73 | 55.73

Table 2: Accuracies, mean speed-ups, and % of branches taken for different ablations of NGDS.

Table 2 presents the speed-up obtained by variations of our models and controllers. In addition to generalization accuracy and synthesis speed-up, we also show the fraction of branches that were selected for exploration by the controller. Our method obtains an impressive speed-up of > 1.5× in 22 cases. One such test case where we obtain a 12× speed-up is a simple extraction case which is fairly common in Web mining: {“alpha,beta,charlie,delta” ⇝ “alpha”}. For such cases, our model determines transform := atom to be the correct branch (which leads to the final Substring-based program) and hence saves the time required to explore the entire Concat operator, which is expensive. Another interesting test case where we observe a 2.7× speed-up is {“457 124th St S, Seattle, WA 98111” ⇝ “Seattle-WA”}. This test case involves learning a Concat operator initially, followed by Substring and RegexPosition operators.
Appendix B includes a comprehensive table of NGDS performance on all the validation and test tasks.

All the models in Table 2 run without attention. As measured by score flip accuracies (i.e., the percentage of correct orderings of branch scores on the same level), attention-based models perform best, achieving 99.57/90.4/96.4% accuracy on train/validation/test, respectively (as compared to 96.09/91.24/91.12% for non-attention models). However, an attention-based model is significantly more computationally expensive at prediction time. Evaluating it dominates the synthesis time and eliminates any potential speed-ups. Thus, we decided to forgo attention in initial NGDS and to investigate model compression/binarization in future work.

Error Analysis. As Appendix B shows, NGDS is slower than PROSE on some tasks. This occurs when the predictions do not satisfy the constraints of the controller, i.e., all the predicted scores are within the threshold, or they violate the actual scores during B&B exploration. This leads to NGDS evaluating the LSTM for branches that were previously pruned. This is especially harmful when branches pruned out at the very beginning of the search need to be reconsidered, as it could lead to evaluating the neural network many times. While a single evaluation of the network is quick, a search tree involves many evaluations, and when the performance of PROSE is already < 1 s, this results in a considerable relative slowdown. We provide two examples to illustrate both failure modes:

(a) {“41.7114830017,-91.41233825683,41.60762786865,-91.63739013671” ⇝ “41.7114830017”}. The intended program is a simple substring extraction. However, at depth 1, the predicted score of Concat is much higher than the predicted score of Atom, and thus NGDS explores only the Concat branch. The found Concat program is incorrect, because it uses absolute position indexes and does not generalize to other similar extraction tasks. We found this scenario common with punctuation in the output string, which the model considers a strong signal for Concat.

(b) {“type size = 36: Bartok.Analysis.CallGraphNode type size = 32: Bartok.Analysis.CallGraphNode CallGraphNode” ⇝ “36->32”}. In this case, NGDS correctly explores only the Concat branch, but the slowdown happens at the pos symbol. There are many different logics to extract the “36” and “32” substrings. NGDS explores the RelativePosition branch first, but the score of the resulting program is less than the prediction for RegexPositionRelative. Thus, the B&B controller explores both branches anyway, which leads to a relative slowdown caused by the network evaluation time.

5 RELATED WORK

Neural Program Induction systems synthesize a program by training a new neural network model to map the example inputs to example outputs (Graves et al., 2014; Reed & De Freitas, 2016; Zaremba et al., 2016). Examples include Neural Turing Machines (Graves et al., 2014) that can learn simple programs like copying/sorting, the work of Kaiser & Sutskever (2015) that can perform more complex computations like binary multiplications, and the more recent work of Cai et al. (2017) that can incorporate recursion. While we are interested in ultimately producing the right output, all these models need to be re-trained for a given problem type, thus making them unsuitable for real-life synthesis of different programs with few examples.

Neural Program Synthesis systems synthesize a program in a given $\mathcal{L}$ with a pre-learned neural network. Seminal works of Bosnjak et al. (2017) and Gaunt et al.
(2016) proposed first producing a high-level sketch of the program using procedural knowledge, and then synthesizing the program by combining the sketch with a neural or enumerative synthesis engine. In contrast, the R3NN (Parisotto et al., 2016) and RobustFill (Devlin et al., 2017) systems synthesize the program end-to-end using a neural network; Devlin et al. (2017) show that RobustFill in fact outperforms R3NN. However, RobustFill does not guarantee generation of spec-satisfying programs and often requires more than one example to find the intended program. In fact, our empirical evaluation (Section 4) shows that our hybrid synthesis approach significantly outperforms the purely statistical approach of RobustFill. DeepCoder (Balog et al., 2017) is also a hybrid synthesis system that guides enumerative program synthesis by prioritizing DSL operators according to a spec-driven likelihood distribution over them. However, NGDS differs from DeepCoder in two important ways: (a) it guides the search process at each recursive level in a top-down, goal-oriented enumeration and thus reshapes the search tree, and (b) it is trained on real-world data instead of random programs, thus achieving better generalization.

Symbolic Program Synthesis has been studied extensively in the PL community (Gulwani et al., 2017; Alur et al., 2013), dating back as far as the 1960s (Waldinger & Lee, 1969). Most approaches employ either bottom-up enumerative search (Udupa et al., 2013), constraint solving (Torlak & Bodik, 2013), or inductive logic programming (Lin et al., 2014), and thus scale poorly to real-world industrial applications (e.g., data wrangling applications). In this work, we build upon deductive search, first studied for synthesis by Manna & Waldinger (1971), and primarily used for program synthesis from formal logical specifications (Puschel et al., 2005; Chaudhari & Damani, 2015). Gulwani (2011) and later Polozov & Gulwani (2015) used it to build PROSE, a commercially successful domain-agnostic system for PBE. While its deductive search guarantees program correctness and also good generalization via an accurate ranking function, it still takes several seconds on complex tasks. Thus, speeding up deductive search requires considerable engineering to develop manual heuristics. NGDS instead integrates neural-driven predictions at each level of deductive search to alleviate this drawback. The work of Loos et al. (2017) is the closest with a similar technique, but it is applied to an automated theorem prover, and hence need not care about generalization. In contrast, NGDS guides the search toward generalizable programs while relying on the underlying symbolic engine to generate correct programs.

6 CONCLUSION

We studied the problem of real-time program synthesis with a small number of input-output examples. For this problem, we proposed a neural-guided system that builds upon PROSE, a state-of-the-art symbolic-logic-based system. Our system avoids the unproductive branches of the top-down grammar exploration performed by PROSE, thus providing impressive synthesis performance while still retaining the key advantages of a deductive system.
That is, compared to existing neural synthesis techniques, our system enjoys the following advantages: (a) correctness: programs generated by our system are guaranteed to satisfy the given input-output specification; (b) generalization: our system learns the user-intended program with just one input-output example in around 60% of test cases, while existing neural systems learn such a program in only 16% of test cases; (c) synthesis time: our system can solve most of the test cases in less than 0.1 sec and provides impressive performance gains over both neural and symbolic systems. The key take-home message of this work is that a deep integration of a symbolic deductive-inference-based system with statistical techniques leads to the best of both worlds: we can avoid the extensive engineering effort required by symbolic systems without compromising the quality of generated programs, and at the same time provide significant performance gains (when measured as synthesis time). For future work, exploring better learning models for production rule selection and applying our technique to diverse and more powerful grammars should be important research directions.<|im_end|> <|im_start|>assistant
### Review Title
Although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic.
### Review Text
The paper presents a branch-and-bound approach to learn good programs (consistent with data, expected to generalise well), where an LSTM is used to predict which branches in the search tree should lead to good programs (at the leaves of the search tree). The LSTM learns from inputs of program spec + candidate branch (given by a grammar production rule) and outputs quality scores for programs. The issue of how greedy to be in this search is addressed. In the authors' setup we simply assume we are given a 'ranking function' h as an input (which we treat as a black box). In practice this will simply be a guess (perhaps a good educated one) on which programs will perform correctly on future data. As the authors indicate, a more ambitious paper would consider learning h, rather than assuming it as a given. The paper has a number of positive features. It is clearly written (without typo or grammatical problems). The empirical evaluation against PROSE is properly done and shows the presented method working as hoped. This was a competent approach to an interesting (real) problem. However, the 'deep learning' aspect of the paper is not prominent: an LSTM is used as a plug-in and that is about it. Also, although the search method chosen was reasonable, the only real innovation here is to use the LSTM to learn a search heuristic. The authors do not explain what "without attention" means. I think the authors should mention the existence of (logic) program synthesis using inductive logic programming. There are also (closely related) methods developed by the LOPSTR (logic-based program synthesis and transformation) community. Many of the ideas here are reminiscent of methods existing in those communities (e.g. top-down search with heuristics). The use of a grammar to define the space of programs is similar to the "DLAB" formalism developed by researchers at KU Leuven.
ADDED AFTER REVISIONS/DISCUSSIONS: The revised paper has a number of improvements, which have led me to give it a slightly higher rating.
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
r1lnigSFDr
ICLR.cc/2020/Conference
2020
Improving the Gating Mechanism of Recurrent Neural Networks
["Albert Gua", "Caglar Gulcehre", "Tom le Paine", "Razvan Pascanu", "Matt Hoffman"]
In this work, we revisit the gating mechanisms widely used in various recurrent and feedforward networks such as LSTMs, GRUs, or highway networks. These gates are meant to control information flow, allowing gradients to better propagate back in time for recurrent models. However, to propagate gradients over very long temporal windows, they need to operate close to their saturation regime. We propose two independent and synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyper-parameters, and are aimed at improving learnability of the gates when they are close to saturation. Our proposals are theoretically justified, and we show a generic framework that encompasses other recently proposed gating mechanisms such as chrono-initialization and master gates. We perform systematic analyses and ablation studies on the proposed improvements and evaluate our method on a wide range of applications including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning. Empirically, our proposed gating mechanisms robustly increase the performance of recurrent models such as LSTMs, especially on tasks requiring long temporal dependencies.
["recurrent neural networks", "LSTM", "GRUs", "gating mechanisms", "deep learning", "reinforcement learning"]
ABSTRACT

Gating mechanisms are widely used in neural network models, where they allow gradients to backpropagate more easily through depth or time. However, their saturation property introduces problems of its own. For example, in recurrent models these gates need to have outputs near 1 to propagate information over long time-delays, which requires them to operate in their saturation regime and hinders gradient-based learning of the gate mechanism. We address this problem by deriving two synergistic modifications to the standard gating mechanism that are easy to implement, introduce no additional hyperparameters, and improve learnability of the gates when they are close to saturation. We show how these changes are related to and improve on alternative recently proposed gating mechanisms such as chrono-initialization and Ordered Neurons. Empirically, our simple gating mechanisms robustly improve the performance of recurrent models on a range of applications, including synthetic memorization tasks, sequential image classification, language modeling, and reinforcement learning, particularly when long-term dependencies are involved.

1 INTRODUCTION

Recurrent neural networks (RNNs) have become a standard machine learning tool for learning from sequential data. However, RNNs are prone to the vanishing gradient problem, which occurs when the gradients of the recurrent weights become vanishingly small as they get backpropagated through time (Hochreiter et al., 2001). A common approach to alleviate the vanishing gradient problem is to use gating mechanisms, leading to models such as the long short term memory (Hochreiter & Schmidhuber, 1997, LSTM) and gated recurrent units (Chung et al., 2014, GRUs). These gated RNNs have been very successful in several different application areas, such as reinforcement learning (Kapturowski et al., 2018; Espeholt et al., 2018) and natural language processing (Bahdanau et al., 2014; Kočiský et al., 2018).

At every time step, gated recurrent models form a weighted combination of the history summarized by the previous state, and (a function of) the incoming inputs, to create the next state. The values of the gates, which are the coefficients of the combination, control the length of temporal dependencies that can be addressed. This weighted update can be seen as an additive or residual connection on the recurrent state, which helps signals propagate through time without vanishing. However, the gates themselves are prone to a saturating property which can also hamper gradient-based learning. This is particularly troublesome for RNNs, where carrying information for very long time delays requires gates to be very close to their saturated states.

We formulate and address two particular problems that arise with the standard gating mechanism of recurrent models. First, typical initialization of the gates is relatively concentrated. This restricts the range of timescales the model can address, as the timescale of a particular unit is dictated by its gates. Our first proposal, which we call uniform gate initialization (Section 2.2), addresses this by directly initializing the activations of these gates from a distribution that captures a wider spread of dependency lengths. Second, learning when gates are in their saturation regime is difficult because of vanishing gradients through the gates.
We derive a modification that uses an auxiliary refine gate to modulate a main gate, which allows it to have a wider range of activations without gradients vanishing as quickly.

Combining these two independent modifications yields our main proposal, which we call the UR-gating mechanism. These changes can be applied to any gate (i.e., bounded parametrized function) and have minimal to no overhead in terms of speed, memory, code complexity, and (hyper-)parameters. We apply them to the forget gate of recurrent models, and evaluate on many benchmarks including synthetic long-term dependency tasks, sequential pixel-level image classification, language modeling, program execution, and reinforcement learning. Finally, we connect our methods to other proposed gating modifications, introduce a framework that allows each component to be replaced with similar ones, and perform theoretical analysis and extensive ablations of our method. Empirically, the UR-gating mechanism robustly improves on the standard forget and input gates of gated recurrent models. When applied to the LSTM, these simple modifications solve synthetic memory tasks that are pathologically difficult for the standard LSTM, achieve state-of-the-art results on sequential MNIST and CIFAR-10, and show consistent improvements in language modeling on the WikiText-103 dataset (Merity et al., 2016) and reinforcement learning tasks (Hung et al., 2018).

2 GATED RECURRENT NEURAL NETWORKS

Broadly speaking, RNNs are used to sweep over a sequence of input data $x_t$ to produce a sequence of recurrent states $h_t \in \mathbb{R}^d$ summarizing information seen so far. At a high level, an RNN is just a parametrized function in which each sequential application of the network computes a state update $u : (x_t, h_{t-1}) \mapsto h_t$. Gating mechanisms were introduced to address the vanishing gradient problem (Bengio et al., 1994; Hochreiter et al., 2001), and have proven crucial to the success of RNNs. This mechanism essentially smooths out the update using the following equation,
$$h_t = f_t(x_t, h_{t-1}) \circ h_{t-1} + i_t(x_t, h_{t-1}) \circ u(x_t, h_{t-1}), \quad (1)$$
where the forget gate $f_t$ and input gate $i_t$ are $[0,1]^d$-valued functions that control how fast information is forgotten or allowed into the memory state. When the gates are tied, i.e. $f_t + i_t = 1$ as in GRUs, they behave as a low-pass filter, deciding the time-scale on which the unit will respond (Tallec & Ollivier, 2018). For example, large forget gate activations close to $f_t = 1$ are necessary for recurrent models to address long-term dependencies. (In this work, we use “gate” to alternatively refer to a $[0,1]$-valued function or the value, the “activation”, of that function.)

We will introduce our improvements to the gating mechanism primarily in the context of the LSTM, which is the most popular recurrent model. However, these techniques can be used in any model that makes similar use of gates. A typical LSTM (equations (2)-(7)) is an RNN whose state is represented by a tuple $(h_t, c_t)$ consisting of a “hidden” state and a “cell” state:
$$f_t = \sigma(\mathcal{L}_f(x_t, h_{t-1})) \quad (2)$$
$$i_t = \sigma(\mathcal{L}_i(x_t, h_{t-1})) \quad (3)$$
$$u_t = \tanh(\mathcal{L}_u(x_t, h_{t-1})) \quad (4)$$
$$c_t = f_t \circ c_{t-1} + i_t \circ u_t \quad (5)$$
$$o_t = \sigma(\mathcal{L}_o(x_t, h_{t-1})) \quad (6)$$
$$h_t = o_t \circ \tanh(c_t) \quad (7)$$
The basic gate equation (1) is used to create the next cell state $c_t$ (5). Note that the gate and update activations are a function of the previous hidden state $h_{t-1}$ instead of $c_{t-1}$. Here, $\mathcal{L}_\star$ stands for a parameterized linear function of its inputs with bias $b_\star$, e.g.
$$\mathcal{L}_f(x_t, h_{t-1}) = W_{fx} x_t + W_{fh} h_{t-1} + b_f, \quad (8)$$
and $\sigma(\cdot)$ refers to the standard sigmoid activation function, which we will assume is used for defining $[0,1]$-valued activations in the rest of this paper.
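As a concrete reference point for the modifications that follow, one step of this cell can be written out directly. A minimal NumPy sketch, where the dictionaries W and b holding the weights of each $\mathcal{L}_\star$ are our assumed parameter layout:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One step of equations (2)-(7); each L acts on [x; h] as in eq. (8)."""
    z = np.concatenate([x, h])
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate, eq. (2)
    i = sigmoid(W["i"] @ z + b["i"])   # input gate, eq. (3)
    u = np.tanh(W["u"] @ z + b["u"])   # candidate update, eq. (4)
    c_new = f * c + i * u              # gated cell update, eqs. (1)/(5)
    o = sigmoid(W["o"] @ z + b["o"])   # output gate, eq. (6)
    h_new = o * np.tanh(c_new)         # hidden state, eq. (7)
    return h_new, c_new
```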
The gates of the LSTM were initially motivated as a binary mechanism, switching on or off to allow information and gradients to pass through. However, in reality this fails to happen due to a combination of initialization and saturation. This can be problematic, such as when very long dependencies are present.

2.1 THE UR-LSTM

We present two solutions which work in tandem to address the previously described issues. The first ensures a diverse range of gate values at the start of training by sampling the gate's biases so that the activations will be approximately uniformly distributed at initialization. We call this Uniform Gate Initialization (UGI). The second allows better gradient flow by reparameterizing the gate using an auxiliary “refine” gate. As our main application is for recurrent models, we present the full UR-LSTM model in equations (9)-(15):
$$b_f \sim \sigma^{-1}(\mathcal{U}[0,1]) \quad (9)$$
$$f_t = \sigma(\mathcal{L}_f(x_t, h_{t-1})) \quad (10)$$
$$r_t = \sigma(\mathcal{L}_r(x_t, h_{t-1})) \quad (11)$$
$$g_t = r_t \circ (1 - (1 - f_t)^2) + (1 - r_t) \circ f_t^2 \quad (12)$$
$$c_t = g_t \circ c_{t-1} + (1 - g_t) \circ u_t \quad (13)$$
$$o_t = \sigma(\mathcal{L}_o(x_t, h_{t-1})) \quad (14)$$
$$h_t = o_t \circ \tanh(c_t) \quad (15)$$
However, we note that these methods can be used to modify any gate (or more generally, bounded function) in any model. In this context the UR-LSTM is simply defined by applying UGI and a refine gate $r$ on the original forget gate $f$ to create an effective forget gate $g$ (equation (12)). This effective gate is then used in the cell state update (13). Empirically, these small modifications to an LSTM are enough to allow it to achieve nearly binary activations and solve difficult memory problems (Figure 4). In the rest of Section 2, we provide theoretical justifications for UGI and refine gates.

2.2 UNIFORM GATE INITIALIZATION

Standard initialization schemes for the gates can prevent the learning of long-term temporal correlations (Tallec & Ollivier, 2018). For example, supposing that a unit in the cell state has constant forget gate value $f_t$, then the contribution of an input $x_t$ in $k$ time steps will decay by $(f_t)^k$. This gives the unit an effective decay period or characteristic timescale of $O(\frac{1}{1-f_t})$ (the number of timesteps it takes to decay by $1/e$). Standard initialization of linear layers $\mathcal{L}$ sets the bias term to 0, which causes the forget gate values (2) to concentrate around $1/2$. A common trick of setting the forget gate bias to $b_f = 1.0$ (Jozefowicz et al., 2015) does increase the value of the decay period to $\frac{1}{1-\sigma(1.0)} \approx 3.7$. However, this is still relatively small, and moreover fixed, hindering the model from easily learning dependencies at varying timescales.

We instead propose to directly control the distribution of forget gates, and hence the corresponding distribution of decay periods. In particular, we propose to simply initialize the value of the forget gate activations $f_t$ according to a uniform distribution $\mathcal{U}(0,1)$, as described in Section 2.1. An important difference between UGI and standard or other (e.g. Tallec & Ollivier, 2018) initializations is that negative forget biases are allowed. The effect of UGI is that all timescales are covered, from units with very high forget activations remembering information (nearly) indefinitely, to those with low activations focusing solely on the incoming input. Additionally, it introduces no additional parameters; it can even have fewer hyperparameters than the standard gate initialization, which sometimes tunes the forget bias $b_f$.
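Continuing the sketch above, the UR modification touches only the bias initialization and the forget-gate path. The following is a hedged NumPy transcription of equations (9)-(15); the clipping of the uniform sample away from 0 and 1 is our own numerical-safety choice, not part of the paper's definition:

```python
import numpy as np

def ugi_forget_bias(d, rng=np.random):
    """Eq. (9): sample b_f = sigma^{-1}(u) with u ~ U(0,1), so initial forget
    activations are approximately uniform; negative biases are allowed."""
    u = rng.uniform(1e-3, 1.0 - 1e-3, size=d)   # clip for numerical safety
    return np.log(u / (1.0 - u))                # inverse sigmoid

def ur_lstm_step(x, h, c, W, b,
                 sigmoid=lambda v: 1.0 / (1.0 + np.exp(-v))):
    """One UR-LSTM step, eqs. (10)-(15); b["f"] comes from ugi_forget_bias."""
    z = np.concatenate([x, h])
    f = sigmoid(W["f"] @ z + b["f"])               # forget gate, eq. (10)
    r = sigmoid(W["r"] @ z + b["r"])               # refine gate, eq. (11)
    g = r * (1 - (1 - f) ** 2) + (1 - r) * f ** 2  # effective gate, eq. (12)
    u = np.tanh(W["u"] @ z + b["u"])
    c_new = g * c + (1 - g) * u                    # tied input gate, eq. (13)
    o = sigmoid(W["o"] @ z + b["o"])               # output gate, eq. (14)
    return o * np.tanh(c_new), c_new               # hidden state, eq. (15)
```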
Appendix B.2 and B.3 further discuss the theoretical effects of UGI on timescales.

2.3 THE REFINE GATE

Given a gate $f = \sigma(\mathcal{L}_f(x)) \in [0,1]$, the refine gate is an independent gate $r = \sigma(\mathcal{L}_r(x))$, and modulates $f$ to produce a value $g \in [0,1]$ that will be used in place of $f$ downstream. It is motivated by considering how to modify the output of a gate $f$ in a way that promotes gradient-based learning, derived below.

An additive modification. The root of the saturation problem is that the gradient $\nabla f$ of a gate, which can be written solely as a function of the activation value as $f(1-f)$, decays rapidly as $f$ approaches 0 or 1. Thus when the activation $f$ is past a certain upper or lower threshold, learning effectively stops. This problem cannot be fully addressed only by modifying the input to the sigmoid, as in UGI and other techniques, as the gradient will still vanish by backpropagating through the activation function. Therefore, to better control activations near the saturating regime, instead of changing the input to the sigmoid in $f = \sigma(\mathcal{L}(x))$, we consider modifying the output. In particular, we consider adjusting $f$ with an input-dependent update $\phi(f, x)$ for some function $\phi$, to create an effective gate $g = f + \phi(f, x)$ that will be used in place of $f$ downstream, such as in the main state update (1). This sort of additive (“residual”) connection is a common technique to increase gradient flow, and indeed was the motivation of the LSTM additive gated update (1) itself (Hochreiter & Schmidhuber, 1997).

Choosing the adjustment function. Although many choices seem plausible for selecting the additive update, we reason backwards from necessary properties of the effective activation $g$ to deduce a principled function $\phi$. The refine gate will appear as a result. First, note that $f_t$ might need to be increased or decreased, regardless of what its value is. For example, given a large activation $f_t$ near saturation, it may need to be even higher to address long-term dependencies in recurrent models; alternatively, if it is too high by initialization or needs to unlearn previous behavior, it may need to decrease. Therefore, the additive update to $f$ should create an effective activation $g_t$ in the range $f_t \pm \psi$ for some $\psi$. Note that the allowed adjustment range $\psi = \psi(f_t)$ needs to be a function of $f$ in order to keep $g \in [0,1]$.

In particular, the additive adjustment range $\psi(f)$ should satisfy the following natural properties:
Validity: $\psi(f) \leq \min(f, 1-f)$, to ensure $g \in f \pm \psi(f) \subseteq [0,1]$.
Symmetry: Since 0 and 1 are completely symmetrical in the gating framework, $\psi(f) = \psi(1-f)$.
Differentiability: $\psi(f)$ will be used in backpropagation, requiring $\psi \in C^1(\mathbb{R})$.

Figure 2a illustrates the general appearance of $\psi(f)$ based on these properties. In particular, Validity implies that its derivative satisfies $\psi'(0) \leq 1$ and $\psi'(1) \geq -1$, Symmetry implies $\psi'(f) = -\psi'(1-f)$, and Differentiability implies $\psi'$ is continuous. The simplest such function satisfying these is the linear $\psi'(f) = 1 - 2f$, yielding $\psi(f) = f - f^2 = f(1-f)$.

Given such a $\psi(f)$, recall that the goal is to produce an effective activation $g = f + \phi(f, x)$ such that $g \in f \pm \psi(f)$ (Figure 2b). Our final observation is that the simplest such function satisfying this is $\phi(f, x) = \psi(f)\,\alpha(f, x)$ for some $\alpha(\cdot) \in [-1, 1]$.
Using the standard method for defining $[-1,1]$-valued functions via a tanh non-linearity leads to $\phi(f, x) = \psi(f)(2r - 1)$ for another gate $r = \sigma(\mathcal{L}_r(x))$. The full update is given in Equation (16):
$$g = f + \psi(f)(2r-1) = f + f(1-f)(2r-1) = (1-r) \cdot f^2 + r \cdot (1 - (1-f)^2) \quad (16)$$
Equation (16) has the elegant interpretation that the gate $r$ linearly interpolates between the lower band $f - \psi(f) = f^2$ and the symmetric upper band $f + \psi(f) = 1 - (1-f)^2$ (Figure 2b). In other words, the original gate $f$ is the coarse-grained determinant of the effective gate $g$, while the gate $r$ “refines” it. This allows the effective gate $g$ to reach much higher and lower activations than the constituent gates $f$ and $r$ (Figure 2c), bypassing the saturating gradient problem. For example, this allows the effective forget gate to reach $g = 0.99$ when the forget gate is only $f = 0.9$.

2.4 REFINING RECURRENT MODELS

Formally, the full mechanism of the refine gate as applied to gated recurrent models is defined in equations (11)-(13). Note that it is an isolated change where the forget gate (10) is modified before applying the standard update (1). Figure 1 illustrates the refine gate in an LSTM cell. Figure 2 illustrates how the refine gate $r_t$ is defined and how it changes the forget gate $f_t$ to produce an effective gate $g_t$.

[Figure 1: LSTM with refine gate. The refine gate $r_t$ modifies another gate, such as the forget gate $f_t$ for recurrent models. It interpolates between upper-bound $U_{f_t}$ and lower-bound $L_{f_t}$ functions of the forget gate. The resulting effective forget gate $g_t$ is then used in place of $f_t$ in the state update (5).]

Finally, to simplify comparisons and ensure that we always use the same number of parameters as the standard gates, when using the refine gate we tie the input gate to the effective forget gate, $i_t = 1 - g_t$. However, we emphasize that these techniques are extremely simple and broad, and can be applied to any gate (or more broadly, any bounded function) to improve the initialization distribution and help optimization. For example, our methods can be combined in different ways in recurrent models, e.g., an independent input gate can be modified with its own refine gate. Alternatively, the refine gate can also be initialized uniformly, which we do in our experiments whenever both UGI and refine gates are used.

[Figure 2: Refine gate in action. (a) [Solid] A function $\psi(f_t)$ satisfying natural properties is chosen to define a band within which the forget gate is refined. (b) The forget gate $f_t(x)$ is conventionally defined with the sigmoid function (black). The refine gate interpolates around the original gate $f_t$ to yield an effective gate $g_t$ within the upper and lower curves, $g_t \in f_t \pm \psi(f_t)$. (c) Contours of the effective gate $g_t$ as a function of the forget and refine gates $f_t, r_t$. High effective activations can be achieved with more modest $f_t, r_t$ values. (d) The gradient $\nabla g_t$ as a function of effective gate activation $g_t$. [Black, blue]: Lower and upper bounds on the ratio of the gradient when using a refine gate vs. without.]
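The band algebra in equation (16) is easy to check numerically; a small illustrative snippet:

```python
import numpy as np

f = np.linspace(0.01, 0.99, 99)
psi = f * (1 - f)                              # adjustment band psi(f)
assert np.allclose(f - psi, f ** 2)            # lower band
assert np.allclose(f + psi, 1 - (1 - f) ** 2)  # upper band

# A saturated refine gate pushes a modest forget activation to an extreme one:
f0, r0 = 0.9, 1.0
g0 = r0 * (1 - (1 - f0) ** 2) + (1 - r0) * f0 ** 2
print(g0)  # 0.99, matching the example in the text
```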
2.5 RELATED GATING MECHANISMS

We highlight a few recent works that also propose small gate changes to address problems of long-term or variable-length dependencies. Like ours, they can be applied to any gated update equation.

Tallec & Ollivier (2018) suggest an initialization strategy to capture long-term dependencies on the order of $T_{max}$, by sampling the gate biases from $b_f \sim \log \mathcal{U}(1, T_{max} - 1)$. Although similar to UGI in definition, chrono initialization (CI) has key differences in the timescales captured, for example by using an explicit timescale parameter and having no negative biases. Due to its relation to UGI, we provide a more detailed comparison in Appendix B.3. As mentioned in Section 2.3, techniques such as these that only modify the input to a sigmoid gate do not fully address the saturation problem.

The Ordered Neuron LSTM introduced by Shen et al. (2018) aims to induce an ordering over the units in the hidden states, such that “higher-level” neurons retain information for longer and capture higher-level information. We highlight this work due to its recent success in NLP, and also because its novelties can be factored into two mechanisms which only affect the forget and input gates, namely (i) the cumax := cumsum ∘ softmax activation function, which creates a monotonically increasing vector in $[0,1]$, and (ii) a pair of “master gates” which are ordered by cumax and fine-tuned with another pair of gates.

In fact, we observe that these are related to our techniques in that one controls the distribution of a gate activation, and the other is an auxiliary gate with modulating behavior. Despite its important novelties, we find that the ON-LSTM has drawbacks, including speed/stability issues and theoretical flaws in the scaling of its gates. We provide the formal definition and a detailed analysis of the ON-LSTM in Appendix B.4. In particular, we flesh out a deeper relation between the master and refine gates and show how they can be interchanged for each other.

We include a more thorough overview of other related works on RNNs in Appendix B.1. These methods are largely orthogonal to the isolated gate changes considered here and are not analyzed. We note that an important drawback common to all other approaches is the introduction of substantial hyperparameters in the form of constants, training protocol, and significant architectural changes. For example, even for chrono initialization, one of the less intrusive proposals, we experimentally find it to be particularly sensitive to the hyperparameter $T_{max}$ (Section 3).

2.6 GATE ABLATIONS

Our insights about previous work with related gate components allow us to perform extensive ablations of our contributions. We observe two independent axes of variation, namely, activation function/initialization (cumax, constant-bias sigmoid, CI, UGI) and auxiliary modulating gates (master, refine), where different components can be replaced with each other. Therefore we propose several other gate combinations to isolate the effects of different gating mechanisms. We summarize a few ablations here; precise details are given in Appendix B.5. O-: Ordered gates. A natural simplification of the main idea of the ON-LSTM, while keeping the hierarchical bias on the forget activations, is to simply drop the auxiliary master gates and define $f_t, i_t$ (2)-(3) using the cumax activation function. UM-: UGI master gates. This variant of the ON-LSTM's gates ablates the cumax operation on the master gates, replacing it with a sigmoid activation and UGI, which maintains the same initial distribution on the activation values. OR-: Refine instead of master. A final variant in between the UR- gates and the ON-LSTM's gates combines cumax with refine gates.
In this formulation, as in UR- gates, the refine gate modifies the forget gate, and the input gate is tied to the effective forget gate. The forget gate is ordered using cumax.

Table 1 summarizes the gating mechanisms considered in this work and their naming conventions. Note that we also denote the ON-LSTM method as “OM-LSTM” (M for master) for mnemonic ease. Finally, we remark that all methods here are controlled with the same number of parameters as the standard LSTM, aside from the OM-LSTM and UM-LSTM, which use an additional $\frac{1}{2C}$-fraction of parameters, where $C$ is the downsize factor on the master gates (Appendix B.4).

Name | Gate Mechanism
-    | Standard gate initialization (1)
C-   | Chrono initialization, no auxiliary gate
OM-  | Ordered main gates, auxiliary master gates
UR-  | UGI main gate, auxiliary refine gate
U-   | Uniform gate initialization, no auxiliary gate
R-   | Refine gate with standard gate initialization
O-   | cumax activation on forget/input gates
UM-  | UGI main gates, auxiliary master gates
OR-  | Ordered main gate, auxiliary refine gate

Table 1: Summary of gating mechanisms considered in this work, as applied to the main forget/input gates of recurrent models. To preserve parameters, refine gate methods use a tied input gate, and master gate methods use a downsize factor C > 1. (First four rows) Existing approaches and our main method. (Remaining rows) Ablations of our gates with different components.

3 EXPERIMENTS

We first perform full ablations of the gating variants (Section 2.6) on benchmark synthetic memorization and pixel-by-pixel image classification tasks. We then evaluate our main method on important applications for recurrent models, including language modeling and reinforcement learning, comparing against baseline methods where appropriate.

The vanilla LSTM uses forget bias 1.0 (Section 2.2). When chrono-initialization is used and not explicitly tuned, we set $T_{max}$ to be proportional to the hidden size. This heuristic uses the intuition that if dependencies of length $T$ exist, then so should dependencies of all lengths $\leq T$. Moreover, the amount of information that can be remembered is proportional to the number of hidden units.

All of our benchmarks have prior work with recurrent baselines, from which we used the same models, protocol, and hyperparameters whenever possible, changing only the gating mechanism. Since our simple gate changes are compatible with other recurrent cores, we evaluate them in tandem with recurrent models such as the GRU, Reconstructive Memory Agent (RMA; Hung et al., 2018), and Relational Memory Core (RMC; Santoro et al., 2018) whenever they were used on these tasks. Full protocols and details for all experiments are given in Appendix D.

3.1 SYNTHETIC TASKS

Our first set of experiments is on synthetic memory tasks (Hochreiter & Schmidhuber, 1997; Arjovsky et al., 2016) that are known to be hard for standard LSTMs to solve; a data-generation sketch for both tasks follows the definitions below.

Copy task. The input is a sequence of $N + 20$ digits where the first 10 tokens $(a_0, a_1, \dots, a_9)$ are randomly chosen from $\{1, \dots, 8\}$, the middle $N$ tokens are set to 0, and the last ten tokens are 9. The goal of the recurrent model is to output $(a_0, \dots, a_9)$ in order on the last 10 time steps, whenever the cue token 9 is presented. We trained our models using cross-entropy with baseline loss $\log(8)$ (Appendix D.1).

Adding task. The input consists of two sequences: (1) $N$ numbers $(a_0, \dots, a_{N-1})$ sampled independently from $\mathcal{U}[0,1]$; (2) an index $i_0 \in [0, N/2)$ and $i_1 \in [N/2, N)$, together encoded as a two-hot sequence. The target output is $a_{i_0} + a_{i_1}$, and models are evaluated by the mean squared error with baseline loss $1/6$.
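The following NumPy sketch generates batches consistent with the two task definitions above; the batching layout is our choice, not necessarily the one used in the paper's experiments:

```python
import numpy as np

def copy_batch(N, batch_size, rng=np.random):
    """Copy task: 10 digits from {1..8}, N zeros, then ten 9s as the cue.
    Targets are the ten digits, to be emitted on the last ten steps."""
    x = np.zeros((batch_size, N + 20), dtype=np.int64)
    digits = rng.randint(1, 9, size=(batch_size, 10))
    x[:, :10] = digits
    x[:, -10:] = 9
    return x, digits

def adding_batch(N, batch_size, rng=np.random):
    """Adding task: values a_0..a_{N-1} ~ U[0,1] plus a two-hot marker channel
    selecting i0 in [0, N/2) and i1 in [N/2, N); target is a_{i0} + a_{i1}."""
    values = rng.uniform(0.0, 1.0, size=(batch_size, N))
    markers = np.zeros((batch_size, N))
    rows = np.arange(batch_size)
    i0 = rng.randint(0, N // 2, size=batch_size)
    i1 = rng.randint(N // 2, N, size=batch_size)
    markers[rows, i0] = markers[rows, i1] = 1.0
    targets = values[rows, i0] + values[rows, i1]
    return np.stack([values, markers], axis=-1), targets
```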
Figure 3 shows the loss of various methods on the Copy and Adding tasks. The only gate combinations capable of solving Copy completely are OR-, UR-, O-, and C-LSTM. This confirms the mechanism of their gates: these are the only methods capable of producing high enough forget gate values, either through the cumax non-linearity, the refine gate, or extremely high forget biases. The U-LSTM is the only other method able to make progress, but converges slower, as it suffers from gate saturation without the refine gate. The vanilla LSTM makes no progress. The OM-LSTM and UM-LSTM also get stuck at the baseline loss, despite the OM-LSTM's cumax activation, which we hypothesize is due to the suboptimal magnitudes of the gates at initialization (Appendix B.4). On the Adding task, every method besides the basic LSTM is able to eventually solve it, with all refine gate variants fastest.

[Figure 3: (Left) Copy task, length 500. (Right) Adding task, length 2000. Every method besides the LSTM solves the Adding task. The only methods capable of solving Copy are the OR-, UR-, O-, and C-LSTM models, with all other models aside from U-LSTM stuck at baseline. Refine gates are fastest.]

Figure 4 shows the distributions of forget gate activations of sigmoid-activation methods, before and after training on the Copy task. It shows that activations near 1.0 are important for a model's ability to make progress on or solve this task, and that adding the refine gate makes this significantly easier.

[Figure 4: Histograms of forget gate $f_t$ activations (averaged over time and batch) for (a) UR-LSTM, (b) U-LSTM, (c) LSTM, and (d) C-LSTM, before (top) and after (bottom) training on Copy (y-axis independently scaled). C-LSTM initializes with extremal activations which barely change during training. Standard LSTM initialization cannot learn large enough $f_t$ and makes no progress on the task. U-LSTM makes progress by encouraging a range of forget gate values, but this distribution does not change significantly during training due to saturation. UR-LSTM starts with the same distribution, but is able to learn extremal gate values. Complementary to here, where learning large activations is necessary, Appendix E.1 shows a reverse task where the UR-LSTM is able to un-learn from a saturated regime.]

3.2 PIXEL-BY-PIXEL IMAGE CLASSIFICATION

These tasks involve feeding a recurrent model the pixels of an image in scanline order before producing a classification label. We test on the sequential MNIST (sMNIST), permuted MNIST (pMNIST) (Le et al., 2015), and sequential CIFAR-10 (sCIFAR) tasks. Each LSTM method was run with a learning rate sweep, with 3 seeds each. The best validation score found over any run is reported in the first two rows of Table 2 (sMNIST is not included here, as it is too easy, making it difficult to draw conclusions). We find in general that all methods are able to improve over the vanilla LSTM. However, the differences become even more pronounced when stability is considered. Although Table 2 reports the best validation accuracies found on any run, we found that many methods were quite unstable.
3.2 PIXEL-BY-PIXEL IMAGE CLASSIFICATION

These tasks involve feeding a recurrent model the pixels of an image in scanline order before producing a classification label. We test on the sequential MNIST (sMNIST), permuted MNIST (pMNIST) (Le et al., 2015), and sequential CIFAR-10 (sCIFAR) tasks. Each LSTM method was run with a learning rate sweep, with 3 seeds each. The best validation score found over any run is reported in the first two rows of Table 2.³ We find in general that all methods are able to improve over the vanilla LSTM. However, the differences become even more pronounced when stability is considered. Although Table 2 reports the best validation accuracies found on any run, we found that many methods were quite unstable. Asterisks are marked next to a score, denoting how many of the 3 seeds diverged at the learning rate that score was found at.

³ sMNIST is not included here as it is too easy, making it difficult to draw conclusions.

Figure 5: Learning curves with deviations on pixel image classification, at the best stable learning rate. (a) Permuted MNIST. (b) Sequential CIFAR-10.

Conversely, Figure 5 shows the accuracy curves of each method at their best stable learning rate. The basic LSTM is noticeably worse than all of the others. This suggests that any of the gate modifications, whether a better initialization, the cumax non-linearity, or master or refine gates, is better than standard gates, especially when long-term dependencies are present. Additionally, the uniform gate initialization methods are generally better than the ordered and chrono initializations, and the refine gate performs better than the master gate. We additionally consider applying other techniques developed for recurrent models that are independent of the gating mechanism. Table 2 also reports scores when the same gating mechanisms are applied to the GRU model instead of the LSTM, where similar trends hold across the gating variants. In particular, the UR-GRU is the only method that is able to stably attain good performance. As another example, adding a generic regularization technique, Zoneout (Krueger et al., 2016) with default hyperparameters (z_c = 0.5, z_h = 0.05), continued improving the UR-LSTM/GRU, outperforming even non-recurrent models on sequential MNIST and CIFAR-10. Table 3 compares the test accuracy of our main model against other models.

Table 2: Validation accuracies on pixel image classification. Asterisks denote divergent runs at the learning rate the best validation score was found at.

  Gating Method   -      C-     O-     U-     R-     OM-    OR-    UM-    UR-
  pMNIST          94.77  94.69  96.17  96.05  95.84  95.98  96.40  95.50  96.43
  sCIFAR          63.24  65.60  67.78  67.63  71.85  67.73  70.41  67.29  71.05
  sCIFAR (GRU)    71.30  64.61  69.81  70.10  70.74  70.20  71.40  69.17  71.04

Table 3: Test accuracy on pixel-by-pixel image classification benchmarks. Top: recurrent baselines and variants. Middle: non-recurrent sequence models with global receptive field. Bottom: our methods.

  Model                                                       sMNIST  pMNIST  sCIFAR
  LSTM (ours)                                                 98.9    95.11   63.01
  Dilated GRU (Chang et al., 2017)                            99.0    94.6    -
  IndRNN (Li et al., 2018a)                                   99.0    96.0    -
  r-LSTM (2-Layer with Auxiliary Loss) (Trinh et al., 2018)   98.4    95.2    72.2
  Transformer (Trinh et al., 2018)                            98.9    97.9    62.2
  Temporal convolution network (Bai et al., 2018a)            99.0    97.2    -
  TrellisNet (Bai et al., 2018b)                              99.20   98.13   73.42
  UR-LSTM                                                     99.28   96.96   71.00
  UR-LSTM + Zoneout (Krueger et al., 2016)                    99.21   97.58   74.34
  UR-GRU + Zoneout                                            99.27   96.51   74.4

From Sections 3.1 and 3.2, we draw a few conclusions about the comparative performance of the different gate modifications. First, the refine gate is consistently better than comparable master gates. CI solves the synthetic memory tasks but is worse than any other variant outside of those. We find ordered (cumax) gates to be effective, but speed issues prevent us from using them in more complicated tasks. UR- gates are consistently among the best performing and most stable.
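As a companion to these conclusions, the bias initializations behind the -, U- and C- variants can be sketched as follows. This is a minimal PyTorch illustration of our own, following Equation (9) and the chrono initialization of Tallec & Ollivier (2018); the hidden size and variable names are assumptions.

```python
import torch

d = 256  # hidden size (assumed for illustration)

# Standard ("-"): constant forget bias 1.0, so every unit starts at
# sigmoid(1.0) ~ 0.73, a characteristic timescale 1/(1 - f) of only ~3.7 steps.
b_standard = torch.full((d,), 1.0)

# UGI ("U-"): biases chosen so initial activations are ~ Uniform(0, 1),
# i.e. b = sigmoid^{-1}(u) (Eq. 9); note that negative biases are allowed.
u = torch.rand(d).clamp(1e-6, 1 - 1e-6)
b_ugi = torch.log(u) - torch.log(1 - u)

# Chrono ("C-"): b ~ log(Uniform(1, T_max - 1)), targeting dependencies up to
# T_max; unlike UGI it produces no negative biases. When not tuned, the paper's
# heuristic sets T_max proportional to the hidden size.
T_max = d
b_chrono = torch.log(torch.empty(d).uniform_(1, T_max - 1))
```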
3.3 LANGUAGE MODELING

We consider word-level language modeling on the WikiText-103 dataset, where (i) the dependency lengths are much shorter than in the synthetic tasks, and (ii) language has an implicit hierarchical structure and timescales of varying lengths. We evaluate our gate modifications against the exact hyperparameters of a SOTA LSTM-based baseline (Rae et al., 2018) without additional tuning (Appendix D). Additionally, we compare against the ON-LSTM, which was designed for this domain (Shen et al., 2018), and chrono initialization, which addresses dependencies of a particular timescale, as opposed to the timescale-agnostic UGI methods. In addition to our default hyperparameter-free initialization, we tested models with the chrono hyperparameter T_max manually set to 8 and 11, values previously used for language modeling to mimic fixed biases of about 1.0 and 2.0, respectively (Tallec & Ollivier, 2018).

Table 6a: Perplexities on the WikiText-103 dataset.

  Model              Valid.  Test
  LSTM               34.3    35.8
  C-LSTM             35.0    36.4
  C-LSTM (T_max=8)   34.3    36.1
  C-LSTM (T_max=11)  34.6    35.8
  OM-LSTM            34.0    34.7
  U-LSTM             33.8    34.9
  UR-LSTM            33.6    34.6

Figure 6b: Validation learning curves, illustrating training speed and generalization (i.e., overfitting) behavior.

Table 6a shows validation and test set perplexities for the various models. We find that the OM-LSTM, U-LSTM, and UR-LSTM improve over the standard LSTM with no additional tuning. However, although the OM-LSTM was designed to capture the hierarchical nature of language with the cumax activation, it does not perform better than the U-LSTM and UR-LSTM. The chrono initialization with our default initialization strategy is far too large. While manually tweaking the T_max hyperparameter helps, it is still far from any UGI-based method. We attribute these observations to the nature of language having dependencies on multiple widely-varying timescales, and to UGI being enough to capture these without resorting to strictly enforced hierarchies such as in the OM-LSTM.

3.4 REINFORCEMENT LEARNING

In many partially observable reinforcement learning (RL) tasks, the agent can observe only part of the environment at a time and thus requires a memory model to summarize what it has seen previously. However, designing memory architectures for reinforcement learning problems has been a challenging task (Oh et al., 2016; Wayne et al., 2018). These architectures are usually based on an LSTM core that summarizes what an agent has seen into a state.

We investigated whether changing the gates of these recurrent cores can improve the performance of RL agents, especially on difficult tasks involving memory and long-term credit assignment. We chose the Passive and Active Image Match tasks from Hung et al. (2018), using A3C agents (Mnih et al., 2016). In these tasks, agents are either initially shown a colored indicator (Passive) or must search for it (Active), before being teleported to a room in which they must press a switch with matching color to receive a reward. In between these two phases is an intermediate phase where they can acquire distractor rewards, but the true objective reported is the final reward in the last phase. Episodes last 450-600 steps, so these tasks require memorization and credit assignment across long sequences.

Hung et al. (2018) evaluated agents with different recurrent cores: the basic LSTM, the DNC (an LSTM with memory), and the RMA (which also uses an LSTM core). We modified each of these with our gates.
Figure 7 shows the results of the different models on the Passive Matching and Active Matching tasks without distractors. These are the most similar to the synthetic tasks (Sec. 3.1), and we found that those trends largely transferred to the RL setting, even with several additional confounders present, such as the agents learning via RL algorithms, being required to learn relevant features from pixels rather than being given the relevant tokens, and being required to explore in the Active Match case.

Figure 7: We evaluated the image matching tasks from Hung et al. (2018), which test memorization and credit assignment, using an A3C agent (Mnih et al., 2016) with an LSTM policy core. (a) Passive Match without distractor rewards. (b) Active Match without distractor rewards. We observe that general trends from the synthetic tasks (Section 3.1) transfer to this reinforcement learning setting.

Figure 8: The addition of distractor rewards changes the task and the relative performance of the different gating mechanisms. (a) Active Match with distractor rewards, LSTM cores. (b) Active Match with distractor rewards, RMA cores. For both LSTM and RMA recurrent cores, the UR- gates still perform best.

We found that the UR- gates substantially improved the performance of the basic LSTM core on both the Passive Match and Active Match tasks, with or without distractor rewards. On the difficult Active Match task, it was the only method to achieve better than random behavior. Figure 8 shows the performance of LSTM and RMA cores on the harder Active Match task with distractors. Here the UR- gates again learn the fastest and reach the highest reward. In particular, although the RMA is a memory architecture with an explicit memory bank designed for long-term credit assignment, its performance was also improved.

3.5 ADDITIONAL RESULTS AND EXPERIMENTAL CONCLUSIONS

Appendix E.1 shows an additional synthetic experiment investigating the effect of refine gates on saturation. Appendix E.3 has results on a program execution task, which is interesting for having explicit long and variable-length dependencies and hierarchical structure. It additionally shows another, very different gated recurrent model for which the UR- gates provide consistent improvement.

Finally, we would like to comment on the longevity of the LSTM, which, for example, was frequently found to outperform newer competitors when tuned better (Melis et al., 2017). Although many improvements have been suggested over the years, none have proven to be as robust as the LSTM across an enormously diverse range of sequence modeling tasks. By experimentally starting from well-tuned LSTM baselines, we believe our simple, isolated gate modifications to be genuinely robust improvements. In Appendix B.3 and B.4, we offer a few conclusions for the practitioner about the other gate components considered, based on our experimental experience.

4 DISCUSSION

In this work, we introduce, analyze, and evaluate several modifications to the ubiquitous gating mechanism that appears in recurrent neural networks.
We describe theoretically justified methods that improve on the standard gating method by alleviating problems with initialization and optimization. The mechanisms considered include changes along independent axes, namely the initialization method and auxiliary gates, and we perform extensive ablations of our improvements together with previously considered modifications. Our main gate model robustly improves on standard gates across many different tasks and recurrent cores, while requiring less tuning. Finally, we emphasize that these improvements are completely independent of the large body of research on neural network architectures that use gates, and we hope that these insights can be applied to improve machine learning models at large.
S1llO2HCKH
Official Blind Review #1
6: Weak Accept
This paper introduces and studies modifications to gating mechanisms in RNNs. The first modification is uniform gate initialization. Here the biases of the forget and input gates are sampled such that, after the application of the sigmoid, the values are in the range (1/d, 1-1/d), where d is the dimensionality of the hidden space. The second modification is the introduction of a refine gating mechanism, with a view to allowing gradients to flow when the forget gates f_t in the LSTM updates are near saturation. The idea is to create an effective gate g = f_t +/- phi(f_t, x_t). The paper proposes using phi(f_t, x_t) = f_t(1-f_t) * (2*r_t-1), where r_t is between 0 and 1. The effect of such a change is that g_t can reach values of 0.99 when the value of f_t is 0.9, allowing gradients to flow more freely through the parameters that constitute the forget gate. Overall, the change corresponds to improving gradient flow for the forget gate by interpolating between f_t^2 and 1-(1-f_t)^2. The authors note that the initialization corresponds to sampling biases from a heavier-tailed distribution, while the refine gate, by allowing the forget gate to reach values close to 0 and 1, allows for capturing information on a much longer time scale.

The paper studies various combinations of the two proposed changes to gating architectures. Other baselines include a vanilla LSTM, a chrono-initialized LSTM, and an Ordered Neuron LSTM. The models are trained on several synthetic and real-world tasks. On the copy and add tasks, the LSTMs that contain the refine gate converge the fastest. A similar story is observed on the task of pixel-by-pixel image classification. The refine gate was also adapted to memory architectures such as the DNC and RMA, where it was found to improve performance on two different tasks.

Overall, the paper is well written, I like the (second) idea of the refine gate, and the contributions are explained in an accessible manner. While I'm not entirely convinced about the proposed initialization scheme, across the many different tasks tried the use of the refine gate does appear to give performance improvements, which leads me to conclude that this aspect of the work is a solid contribution to the literature.

Questions and comments:

* This manuscript is already quite long and has several formatting issues. Several of the figures are unreadable when printed. For example, every piece of text on Figure 2(d) is unreadable on paper. Figures 3 and 5 are difficult to read; they contain too many alternatives, with a colour scheme that makes it difficult to distinguish between them -- consider displaying a subset of the options via a plot and using a table to display (# steps to convergence) as a metric instead. It also appears as if the caption for Table 6 is deleted?

* I think that for this approach to work, two conditions need to be satisfied: (a) there must be foreseeable improvements in the use of a forget gate that can reach values close to 0/1 for the task at hand, and (b) r_t needs to function well despite not being too close to 0 or 1 (lest its parameters suffer from gradient flow issues).

* Were any visualizations done on whether (a) happened? i.e., for the UR-LSTMs that performed well, were the values of the forget gate closer to 0/1 than the baselines?

* What were typical values of r_t? Did the models need the refine gate to reach values close to 0 or 1 for the overall approach to work?
IXvfIex0mX6f
logconference.io/LOG/2022/Conference
2022
DiffWire: Inductive Graph Rewiring via the Lovász Bound
["Adri\u00e1n Arnaiz-Rodr\u00edguez", "Ahmed Begga", "Francisco Escolano", "Nuria M Oliver"]
Graph Neural Networks (GNNs) have been shown to achieve competitive results to tackle graph-related tasks, such as node and graph classification, link prediction and node and graph clustering in a variety of domains. Most GNNs use a message passing framework and hence are called MPNNs. Despite their promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to address these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are neither differentiable nor inductive, and require the tuning of hyper-parameters. In this paper, we propose DiffWire, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free by leveraging the Lovász bound. The proposed approach provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: CT-Layer, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and GAP-Layer, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of each of these layers separately with benchmark datasets for graph classification. We also perform preliminary studies on the use of CT-Layer for homophilic and heterophilic node classification tasks. DiffWire brings together the learnability of commute times to related definitions of curvature, opening the door to creating more expressive MPNNs.
["GNN", "graph neural networks", "Geometric deep learning", "MPNNs", "graph rewiring", "over-smoothing", "over-squashing", "Lov\u00e1sz bound", "spectral gap", "graph diffusion", "commute times"]
DiffWire: Inductive Graph Rewiring via the Lovász BoundAdrian Arnaiz-RodriguezELLIS Alicanteadrian@ellisalicante.orgAhmed BeggaUniversity of AlicanteFrancisco EscolanoELLIS Alicantesco@ellisalicante.orgNuria OliverELLIS Alicantenuria@ellisalicante.orgAbstractGraph Neural Networks (GNNs) have been shown to achieve competitive resultsto tackle graph-related tasks, such as node and graph classification, link predictionand node and graph clustering in a variety of domains. Most GNNs use a messagepassing framework and hence are called MPNNs. Despite their promising results,MPNNs have been reported to suffer from over-smoothing, over-squashing andunder-reaching. Graph rewiring and graph pooling have been proposed in theliterature as solutions to address these limitations. However, most state-of-the-artgraph rewiring methods fail to preserve the global topology of the graph, areneither differentiable nor inductive, and require the tuning of hyper-parameters.In this paper, we propose DIFFWIRE, a novel framework for graph rewiring inMPNNs that is principled, fully differentiable and parameter-free by leveragingthe Lovász bound. The proposed approach provides a unified theory for graphrewiring by proposing two new, complementary layers in MPNNs: CT-L AYER , alayer that learns the commute times and uses them as a relevance function for edgere-weighting; and GAP-L AYER , a layer to optimize the spectral gap, depending onthe nature of the network and the task at hand. We empirically validate the value ofeach of these layers separately with benchmark datasets for graph classification.We also perform preliminary studies on the use of CT-L AYER for homophilic andheterophilic node classification tasks. DIFFWIREbrings together the learnabilityof commute times to related definitions of curvature, opening the door to creatingmore expressive MPNNs.1 IntroductionGraph Neural Networks (GNNs) [ 1,2] are a class of deep learning models applied to graph structureddata. They have been shown to achieve state-of-the-art results in many graph-related tasks, such asnode and graph classification [ 3,4], link prediction [ 5] and node and graph clustering [ 6,7], and in avariety of domains, including image or molecular structure classification, recommender systems andsocial influence prediction [8].Most GNNs use a message passing framework and thus are referred to as Message Passing NeuralNetworks (MPNNs) [ 4] . In these networks, every node in each layer receives a message from itsadjacent neighbors. All the incoming messages at each node are then aggregated and used to updatethe node’s representation via a learnable non-linear function –which is typically implemented bymeans of a neural network. The final node representations (called node embeddings) are used toperform the graph-related task at hand (e.g. graph classification). MPNNs are extensible, simple andhave proven to yield competitive empirical results. Examples of MPNNs include GCN [ 3], GAT [ 9],GATv2 [ 10], GIN [ 11] and GraphSAGE [ 12]. However, they typically use transductive learning, i.e.the model observes both the training and testing data during the training phase, which might limittheir applicability to graph classification tasks.A. Arnaiz-Rodriguez et al., DiffWire: Inductive Graph Rewiring via the Lovász Bound. 
Proceedings of the FirstLearning on Graphs Conference (LoG 2022) , PMLR 198, Virtual Event, December 9–12, 2022.DiffWire: Inductive Graph Rewiring via the Lovász BoundHowever, MPNNs also have important limitations due to the inherent complexity of graphs. Despitesuch complexity, the literature has reported best results when MPNNs have a small number of layers,because networks with many layers tend to suffer from over-smoothing [13] and over-squashing [14].However, this models fail to capture information that depends on the entire structure of the graph [ 15]and prevent the information flow to reach distant nodes. This phenomenon is called under-reaching[16] and occurs when the MPNN’s depth is smaller than the graph’s diameter.Over-smoothing [8,17–19] takes place when the embeddings of nodes that belong to different classesbecome indistinguishable. It tends to occur in MPNNs with many layers that are used to tackle short-range tasks, i.e. tasks where a node’s correct prediction mostly depends on its local neighborhood.Given this local dependency, it makes intuitive sense that adding layers to the network would nothelp the network’s performance.Conversely, long-range tasks require as many layers in the network as the range of the interactionbetween the nodes. However, as the number of layers in the network increases, the number ofnodes feeding into each of the node’s receptive field also increases exponentially, leading to over-squashing [14,20]: the information flowing from the receptive field composed of many nodes iscompressed in fixed-length node vectors, and hence the graph fails to correctly propagate the messagescoming from distant nodes. Thus, over-squashing emerges due to the distortion of information flowingfrom distant nodes due to graph bottlenecks that emerge when the number of k-hop neighbors growsexponentially with k.Graph pooling and graph rewiring have been proposed in the literature as solutions to address theselimitations [ 14]. Given that the main infrastructure for message passing in MPNNs are the edgesin the graph, and given that many of these edges might be noisy or inadequate for the downstreamtask [21], graph rewiring aims to identify such edges and edit them.Many graph rewiring methods rely on edge sampling strategies: first, the edges are assigned newweights according to a relevance function and then they are re-sampled according to the new weightsto retain the most relevant edges (i.e. those with larger weights). Edge relevance might be computedin different ways, including randomly [22], based on similarity [23] or on the edge’s curvature [20].Due to the diversity of possible graphs and tasks to be performed with those graphs, optimal graphrewiring should include a variety of strategies that are suited not only to the task at hand but also tothe nature and structure of the graph.Motivation. State-of-the-art edge sampling strategies have three significant limitations . First,most of the proposed methods fail to preserve the global topology of the graph . Second, mostgraph rewiring methods are neither differentiable norinductive [20]. Third, relevance functions thatdepend on a diffusion measure (typically in the spectral domain) are not parameter-free , which addsa layer of complexity in the models. In this paper, we address these three limitations.Contributions and Outline. 
The main contribution of this work is to propose a theoretical frame-work called DIFFWIREfor graph rewiring in GNNs that is principled, differentiable, inductive, andparameter-free by leveraging the Lovász bound [ 15] given by Eq. 1. This bound is a mathematicalexpression of the relationship between the commute times (effective resistance distance ) and thenetwork’s spectral gap . Given an unseen test graph, DIFFWIREpredicts the optimal graph structurefor the task at hand without any parameter tuning. Given the recently reported connection betweencommute times and curvature [ 24], and between curvature and the spectral gap [ 20], the proposedframework provides a unified theory linking these concepts. Our aim is to leverage diffusion andcurvature theories to propose a new approach for graph rewiring that preserves the graph’s structure.We first propose using the CT as a relevance function for edge re-weighting. Moreover, we develop adifferentiable, parameter-free layer in the GNN ( CT-L AYER ) to learn the CT. Second, we propose analternative graph rewiring approach by adding a layer in the network ( GAP-L AYER ) that optimizesthe spectral gap according to the nature of the network and the task at hand. Finally, we empiricallyvalidate the proposed layers with state-of-the-art benchmark datasets in a graph classification task.We test our approach on a graph classification task to emphasize the inductive nature of DIFFWIRE:the layers in the GNN ( CT-L AYER orGAP-L AYER ) are trained to predict the CTs embedding orminimize the spectral gap for unseen graphs, respectively. This approach gives a great advantagewhen compared to SoTA methods that require optimizing the parameters of the models for each graph.CT-L AYER andGAP-L AYER learn the weights during training to predict the optimal changes in the2DiffWire: Inductive Graph Rewiring via the Lovász Boundtopology of any unseen graph in test time. Finally, we also perform preliminary node classificationexperiments in heterophilic and homophilic graphs using CT-L AYER .The paper is organized as follows: Section 2 provides a summary of the most relevant related literature.Our core technical contribution is described in Section 3, followed by our experimental evaluationand discussion in Section 4. Finally, Section 5 is devoted to conclusions and an outline of our futurelines of research.2 Related WorkIn this section we provide an overview of the most relevant works that have been proposed in theliterature to tackle the challenges of over-smoothing, over-squashing and under-reaching in MPNNsby means of graph rewiring and pooling.Graph Rewiring in MPNNs. Rewiring is a process of changing the graph’s structure to control theinformation flow and hence improve the ability of the network to perform the task at hand (e.g. nodeor graph classification, link prediction...). Several approaches have been proposed in the literature forgraph rewiring, such as connectivity diffusion [ 25] or evolution [ 20], adding new bridge-nodes [ 26]and multi-hop filters [27], and neighborhood [12], node [28] and edge [22] sampling.Edge sampling methods sample the graph’s edges based on their weights or relevance, whichmight be computed in different ways. Rong et al. [22] show that randomly dropping edges duringtraining improves the performance of GNNs. Klicpera et al. [25], define edge relevance accordingto the coefficients of a parameterized diffusion process over the graph and then edges are selectedusing a threshold rule. For Kazi et al. 
[23], edge relevance is given by the similarity between thenodes’ attributes. In addition, a reinforcement learning process rewards edges leading to a correctclassification and penalizes the rest.Edge sampling-based rewiring has been proposed to tackle over-smoothing and over-squashing inMPNNs. Over-smoothing may be relieved by removing inter-class edges [ 29]. However, this strategyis only valid when the graph is homophilic, i.e. connected nodes tend to share similar attributes.Otherwise, removing these edges could lead to over-squashing [ 20] if their removal obstructs themessage passing between distant nodes belonging to the same class (heterophily). Increasing thesize of the bottlenecks of the graph via rewiring has been shown to improve node classificationperformance in heterophilic graphs, but not in homophilic graphs [ 20]. Recently, Topping et al. [20]propose an edge relevance function given by the edge curvature to mitigate over-squashing. Theyidentify the bottleneck of the graph by computing the Ricci curvature of the edges. Next, they removeedges with high curvature and add edges around minimal curvature edges.Graph Structure Learning (GSL). GSL methods [ 30] aim to learn an optimized graph structure andits corresponding representations at the same time .DIFFWIREcould be seen from the perspective ofGSL: CT-L AYER , as a metric-based, neural approach, and GAP-L AYER , as a direct-neural approachto optimize the structure of the graph to the task at hand.Graph Pooling. Pooling layers simplify the original graph by compressing it into a smaller graphor a vector via pooling operators, which range from simple [ 31] to more sophisticated approaches,such as DiffPool [ 32] and MinCut pool [ 33]. Although graph pooling methods do not consider theedge representations, there is a clear relationship between pooling methods and rewiring since bothof them try to quantify the flow of information through the graph’s bottleneck.Positional Encodings (PEs) A Positional Encoding is a feature that describes the global or localposition of the nodes in the graph. These features are related to random walk measures and theLaplacian’s eigenvectors [ 34]. Commute Times embeddings (CTEs) may be considered an expressiveform of PEs due to their spectral properties, i.e. their relation with the shortest path, the spectral gapor Cheeger constant. Velingker et al. [35] recently proposed use the CTEs as PE or commute times(CT) as edge feature. They pre-compute the CTEs and CT and add it as node or edge features toimprove the structural expressiveness of the GNN. PEs are typically pre-computed and then used tobuild more expressive graph architectures, either by concatenating them to the node features or bybuilding transformer models [ 36,37]. Our work is related to PEs as CT-L AYER learns the originalPEs from the input Xand the adjacency matrix Ainstead of pre-computing and potentially modifyingthem, as previous works do [ 35–38]. Thus, CT-L AYER may be seen as a method to automaticallylearn the PEs for graph rewiring.3DiffWire: Inductive Graph Rewiring via the Lovász BoundFigure 1: DIFFWIRE. Left: Original graph from COLLAB (test set). Center: Rewired graph afterCT-L AYER . Right: Rewired graph after GAP-L AYER . 
Colors indicate the strength of the edges.

3 Proposed Approach: DIFFWIRE for Inductive Graph Rewiring

DIFFWIRE provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: first, CT-LAYER, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and second, GAP-LAYER, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand. In this section, we present the theoretical foundations for the definitions of CT-LAYER and GAP-LAYER. First, we introduce the bound that our approach is based on: the Lovász bound. Table 3 in A.1 summarizes the notation used in the paper.

3.1 The Lovász Bound

The Lovász bound, given by Eq. 1, was derived by Lovász in [15] as a means of linking the spectrum governing a random walk in an undirected graph G = (V, E) with the hitting time H_uv between any two nodes u and v of the graph. H_uv is the expected number of steps needed to reach (or hit) v from u; H_vu is defined analogously. The sum of both hitting times between the two nodes, v and u, is the commute time CT_uv = H_uv + H_vu. Thus, CT_uv is the expected number of steps needed to hit v from u and go back to u. According to the Lovász bound:

$$\left|\frac{1}{\mathrm{vol}(G)}CT_{uv}-\left(\frac{1}{d_u}+\frac{1}{d_v}\right)\right|\le\frac{1}{\lambda'_2}\,\frac{2}{d_{min}} \quad (1)$$

where λ'_2 ≥ 0 is the spectral gap, i.e. the first non-zero eigenvalue of L = I − D^{−1/2} A D^{−1/2} (the normalized Laplacian [39], where D is the degree matrix and A the adjacency matrix); vol(G) is the volume of the graph (sum of degrees); d_u and d_v are the degrees of nodes u and v, respectively; and d_min is the minimum degree of the graph.

The term CT_uv / vol(G) in Eq. 1 is referred to as the effective resistance, R_uv, between nodes u and v. The bound states that the effective resistance between two nodes in the graph converges to or diverges from (1/d_u + 1/d_v), depending on whether the graph's spectral gap diverges from or tends to zero. The larger the spectral gap, the closer CT_uv / vol(G) will be to 1/d_u + 1/d_v, and hence the less informative the commute times will be.

We propose two novel GNN layers based on each side of the inequality in Eq. 1: CT-LAYER, focused on the left-hand side, and GAP-LAYER, on the right-hand side. The use of each layer depends on the nature of the network and the task at hand. In a graph classification task (our focus), CT-LAYER is expected to yield good results when the graph's spectral gap is small; conversely, GAP-LAYER would be the layer of choice in graphs with a large spectral gap.

The Lovász bound was later refined by von Luxburg et al. [40]. App. A.2.2 presents this bound along with its relationship with R_uv as a global measure of node similarity. Once we have defined both sides of the Lovász bound, we proceed to describe their implications for graph rewiring.

3.2 CT-LAYER: Commute Times for Graph Rewiring

We focus first on the left-hand side of the Lovász bound, which concerns the effective resistances CT_uv / vol(G) = R_uv (or commute times)¹ between any two nodes in the graph.

Spectral Sparsification leads to Commute Times. Graph sparsification in undirected graphs may be formulated as finding a graph H = (V, E′) that is spectrally similar to the original graph G = (V, E) with E′ ⊂ E. Thus, the spectra of their Laplacians, L_G and L_H, should be similar.

Theorem 1 (Spielman and Srivastava [41]). Let Sparsify(G, q) → G′ be a sampling algorithm of graph G = (V, E), where edges e ∈ E are sampled with probability q ∝ R_e (proportional to the effective resistance).
For n = |V| sufficiently large and 1/√n < ε ≤ 1, O(n log n / ε²) samples are needed to satisfy

$$\forall x\in\mathbb{R}^n:\ (1-\varepsilon)\,x^T L_G x \le x^T L_{G'} x \le (1+\varepsilon)\,x^T L_G x,$$

with probability ≥ 1/2.

The above theorem has a simple explanation in terms of Dirichlet energies, E(x). The Laplacian L = D − A ⪰ 0, i.e. it is positive semi-definite (all its eigenvalues are non-negative). Then, if we consider x: V → R as a real-valued function of the n nodes of G = (V, E), we have that

$$E(x) := x^T L_G x = \sum_{e=(u,v)\in E} (x_u - x_v)^2 \ge 0$$

for any x. In particular, the eigenvectors f := {f_i : L f_i = λ_i f_i} are the set of special functions that minimize the energies E(f_i), i.e. they are the mutually orthogonal and normalized functions with the minimal variabilities achievable by the topology of G. Therefore, any minimal variability of G′ is bounded by (1 ± ε) times that of G if we sample enough edges with probability q ∝ R_e. In addition, λ_i = E(f_i)/(f_i^T f_i).

This first result implies that edge sampling based on commute times is a principled way to rewire a graph while preserving its original structure, and it is bounded by the Dirichlet energies. Next, we present what a commute times embedding is and how it can be spectrally computed.

Commute Times Embedding (CTE). The choice of effective resistances in Theorem 1 is explained by the fact that R_uv can be computed from R_uv = (e_u − e_v)^T L⁺ (e_u − e_v), where e_u is the unit vector with a unit value at u and zero elsewhere. L⁺ = Σ_{i≥2} λ_i^{−1} f_i f_i^T, where f_i, λ_i are the eigenvectors and eigenvalues of L, is the pseudo-inverse or Green's function of G = (V, E) if it is connected. The Green's function leads to envisioning R_uv (and therefore CT_uv) as metrics relating pairs of nodes of G. As a result, the CTE will preserve the commute times distance in a Euclidean space. Note that this latent space of the nodes can be described not only spectrally but also in a parameter-free manner, which is not the case for other spectral embeddings, such as heat kernels or diffusion maps, as they rely on a time parameter t. More precisely, the embedding matrix Z whose columns contain the nodes' commute times embeddings is spectrally given by:

$$Z := \sqrt{\mathrm{vol}(G)}\,\Lambda^{-1/2}F^T = \sqrt{\mathrm{vol}(G)}\,\Lambda'^{-1/2}G^T D^{-1/2} \quad (2)$$

where Λ is the diagonal matrix of the unnormalized Laplacian L eigenvalues and F is the matrix of their associated eigenvectors. Similarly, Λ′ contains the eigenvalues of the normalized Laplacian L and G its eigenvectors. We have F = G D^{−1/2}, or f_i = g_i D^{−1/2}, where D is the degree matrix. Finally, the commute times are given by the squared Euclidean distances between the embeddings, CT_uv = ∥z_u − z_v∥². The spectral calculation of the commute times distances is given by:

$$R_{uv} = \frac{CT_{uv}}{\mathrm{vol}(G)} = \frac{\|z_u - z_v\|^2}{\mathrm{vol}(G)} = \sum_{i=2}^{n}\frac{1}{\lambda_i}\bigl(f_i(u)-f_i(v)\bigr)^2 = \sum_{i=2}^{n}\frac{1}{\lambda'_i}\left(\frac{g_i(u)}{\sqrt{d_u}}-\frac{g_i(v)}{\sqrt{d_v}}\right)^2 \quad (3)$$

Commute Times as an Optimization Problem. In this section, we demonstrate how the CTs may be computed as an optimization problem by means of a differentiable layer in a GNN. Constraining neighboring nodes to have a similar embedding leads to

$$Z = \arg\min_{Z^TZ=I}\frac{\sum_{u,v}\|z_u-z_v\|^2 A_{uv}}{\sum_{u,v}Z_{uv}^2 d_u} = \frac{\sum_{(u,v)\in E}\|z_u-z_v\|^2}{\sum_{u,v}Z_{uv}^2 d_u} = \frac{Tr[Z^TLZ]}{Tr[Z^TDZ]}, \quad (4)$$

which reveals that CT embeddings result from a Laplacian regularization down-weighted by the degree. As a result, frontier nodes or hubs –i.e. nodes with inter-community edges– which tend to have larger degrees than those lying inside their respective communities, will be embedded far away from their neighbors, increasing the distance between communities.

¹We use commute times and effective resistances interchangeably, as per their use in the literature.
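Before turning to the learnable formulation, Eqs. 2–3 can be checked directly with dense linear algebra. Below is a minimal NumPy sketch (the function name and the toy graph are our own illustrative choices, and it assumes a small connected graph):

```python
import numpy as np

def commute_time_embedding(A: np.ndarray):
    """Spectral CT embedding (Eq. 2) and effective resistances (Eq. 3)
    for a small connected undirected graph with adjacency matrix A."""
    d = A.sum(axis=1)
    vol = d.sum()
    L = np.diag(d) - A                       # unnormalized Laplacian
    lam, F = np.linalg.eigh(L)               # lam[0] ~ 0 for a connected graph
    # Drop the constant eigenvector; row u of Z is the embedding z_u.
    Z = np.sqrt(vol) * (lam[1:] ** -0.5) * F[:, 1:]     # shape (n, n-1)
    # R_uv = ||z_u - z_v||^2 / vol(G)
    sq = (Z ** 2).sum(axis=1)
    R = (sq[:, None] + sq[None, :] - 2 * Z @ Z.T) / vol
    return Z, R

# Toy barbell-like graph: two triangles joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
Z, R = commute_time_embedding(A)
print(R[2, 3] > R[0, 1])  # True: the bridge edge carries the largest resistance
```

On this toy graph the bridge edge has effective resistance 1 while the triangle edges have 2/3, which previews the bottleneck behavior discussed next.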
Note that the quotient-of-traces formulation in Eq. 4 is easily differentiable and different from the trace-of-quotient form Tr[(Z^T D Z)^{-1} Z^T L Z] proposed in [42].

With the above elements we define CT-LAYER, the first rewiring layer proposed in this paper. See Figure 2 for a graphical representation of the layer.

Definition 1 (CT-Layer). Given the matrix X_{n×F} encoding the features of the nodes after any message passing (MP) layer, Z_{n×O(n)} = tanh(MLP(X)) learns the association X → Z while Z is optimized according to the loss

$$\mathcal{L}_{CT} = \frac{Tr[Z^TLZ]}{Tr[Z^TDZ]} + \left\|\frac{Z^TZ}{\|Z^TZ\|_F} - I_n\right\|_F.$$

This results in the following resistance diffusion T^{CT} = R(Z) ⊙ A, i.e. the Hadamard product between the resistance distance and the adjacency matrix, providing as input to the subsequent MP layer a learnt convolution matrix. We set R(Z) to the pairwise Euclidean distances of the node embeddings in Z divided by vol(G). Thus, CT-LAYER learns the CTs and rewires an input graph according to them: the edges with maximal resistance will tend to be the most important edges so as to preserve the topology of the graph.

Figure 2: Detailed depiction of CT-LAYER, where cdist refers to the matrix of pairwise Euclidean distances between the node embeddings in Z. [The figure depicts the pipeline X, A → MLP + tanh → Z ∈ R^{n×O(n)} → T^{CT} = (cdist(Z)/vol(G)) ⊙ A, together with the loss L_CT and the structure-preservation guarantee of Theorem 1: the Dirichlet energies of the new graph G′ are bounded within (1 ± ε) of those of G.]

Below, we present the relationship between the CTs and the graph's bottleneck and curvature.

T^{CT} and Graph Bottlenecks. Beyond the principled sparsification of T^{CT} (Theorem 1), this layer rewires the graph G = (V, E) in such a way that edges with maximal resistance will tend to be the most critical to preserve the topology of the graph. More precisely, although Σ_{e∈E} R_e = n − 1, the bulk of the resistance distribution will be located at graph bottlenecks, if they exist. Otherwise, their magnitude is upper-bounded and the distribution becomes more uniform.

Graph bottlenecks are controlled by the graph's conductance or Cheeger constant, h_G = min_{S⊆V} h_S, where

$$h_S = \frac{|\partial S|}{\min(\mathrm{vol}(S), \mathrm{vol}(\bar S))},\qquad \partial S = \{e=(u,v) : u\in S,\ v\in\bar S\},\qquad \mathrm{vol}(S) = \sum_{u\in S} d_u.$$

The interplay between the graph's conductance and effective resistances is given by:

Theorem 2 (Alev et al. [43]). Given a graph G = (V, E) and a subset S ⊆ V with vol(S) ≤ vol(G)/2,

$$h_S \ge \frac{c}{\mathrm{vol}(S)^{1/2-\varepsilon}} \iff |\partial S| \ge c\cdot \mathrm{vol}(S)^{1/2+\varepsilon}, \quad (5)$$

for some constant c and ε ∈ [0, 1/2]. Then,

$$R_{uv} \le \left(\frac{1}{d_u^{2\varepsilon}} + \frac{1}{d_v^{2\varepsilon}}\right)\cdot\frac{1}{\varepsilon\, c^2}$$

for any pair u, v.

According to this theorem, the larger the graph's bottleneck, the tighter the bound on R_uv is. Moreover, max(R_uv) ≤ 1/h_S², i.e., the resistance is bounded by the square of the bottleneck. This bound partially explains the rewiring of the graph in Figure 1-center: rewiring using CT-LAYER sparsifies the graph and assigns larger weights to the edges located in the graph's bottleneck. The interplay between Theorem 2 and Theorem 1 is described in App. A.1.

Recent work has proposed using curvature for graph rewiring. We outline below the relationship between CTs and curvature.

Effective Resistances and Curvature. Topping et al. [20] propose an approach for graph rewiring where the relevance function is given by the Ricci curvature. However, this measure is non-differentiable. More recent definitions of curvature [24] have been formulated based on resistance
The resistance curvature of an edgee= (u, v)isκuv:= 2(pu+pv)/Ruvwhere pu:= 1−12Pu∼wRuvis the node’s curvature.Relevant properties of the edge resistance curvature are discussed in App. A.1.3, along with a relatedTheorem proposed in Devriendt and Lambiotte [24].3.3 GAP-L AYER : Spectral Gap Optimization for Graph RewiringThe right-hand side of the Lovász bound in Eq. 1 relies on the graph’s spectral gap λ′2, such that thelarger the spectral gap, the closer the commute times would be to their non-informative regime. Notethat the spectral gap is typically large in commonly observed graphs –such as communities in socialnetworks which may be bridged by many edges [ 44]– and, hence, in these cases it would be desirableto rewire the adjacency matrix Aso that λ′2is minimized.In this section, we explain how to rewire the graph’s adjacency matrix A to minimize the spectral gap.We propose using the gradient of λ2wrt each component of ̃A. Then, we can compute these gradienteither using Laplacians ( L, with Fiedler λ2) or normalized Laplacians ( L, with Fiedler λ′2). We alsopresent an approximation of the Fiedler vectors needed to compute those gradients, and proposecomputing them as a GNN Layer called the GAP-L AYER . A detailed schematic of GAP-L AYER isshown in Figure 3.Rewiring using a Ratio-cut (Rcut) Approximation. We propose to rewire the adjacency matrix, A,so that λ2is minimized. We consider a matrix ̃Aclose to Athat satisfies ̃Lf2=λ2f2, where f2isthe solution to the ratio-cut relaxation [ 45]. Following [ 46], the gradient of λ2wrt each componentof ̃Ais given by∇ ̃Aλ2:=Trh(∇ ̃Lλ2)T· ∇ ̃A ̃Li=diag(f2fT2)11T−f2fT2 (6)where 1is the vector of nones; and [∇ ̃Aλ2]ijis the gradient of λ2wrt ̃Auv. The driving force ofthis gradient relies on the correlation f2fT2. Using this gradient to minimize λ2results in breakingthe graph’s bottleneck while preserving simultaneously the inter-cluster structure. We delve into thismatter in App. A.2.Rewiring using a Normalized-cut (Ncut) Approximation. Similarly, considering now λ′2forrewiring leads to∇ ̃Aλ′2:=Trh(∇ ̃Lλ2)T· ∇ ̃A ̃Li=d′ngT2 ̃AT ̃D−1/2g2o1T+d′ngT2 ̃A ̃D−1/2g2o1T+ ̃D−1/2g2gT2 ̃D−1/2(7)where d′is an×1vector including derivatives of degree wrt adjacency and related terms. Thisgradient relies on the Fiedler vector g2(the solution to the normalized-cut relaxation), and on theincoming and outgoing one-hop random walks. This approximation breaks the bottleneck whilepreserving the global topology of the graph (Figure 1-left). Proof and details are included in App. A.2.We present next an approximation of the Fiedler vector, followed by a proposed new layer in theGNN called the GAP-L AYER to learn how to minimize the spectral gap of the graph.Approximating the Fiedler vector. Given that g2= ̃D1/2f2, we can obtain the normalized-cutgradient in terms of f2. From [17] we have thatf2(u) =+1/√nifubelongs to the first cluster−1/√nifubelongs to the second cluster+Olognn(8)Definition 2 (GAP-Layer) .Given the matrix Xn×Fencoding the features of the nodes after anymessage passing (MP) layer, Sn×2=Softmax (MLP( X))learns the association X→Swhile Sisoptimized according to the loss LCut=−Tr[STAS]Tr[STDS]+STS∥STS∥F−In√2F. Then the Fiedler vectorf2is approximated by appyling a softmaxed version of Eq. 8 and considering the loss LFiedler =∥ ̃A−A∥F+α(λ∗2)2, where λ∗2=λ2if we use the ratio-cut approximation (and gradient) andλ∗2=λ′2if we use the normalized-cut approximation and gradient. 
This returns Ã, and the GAP diffusion T^{GAP} = Ã(S) ⊙ A results from minimizing L_GAP := L_Cut + L_Fiedler.

Figure 3: GAP-LAYER (Rcut). For GAP-LAYER (Ncut), substitute ∇_Ã λ_2 by Eq. 7. [The figure depicts the pipeline X, A → MLP + σ → S ∈ R^{n×2} → f_2(S) → λ_2 = E(f_2) → Ã = A − μ·∇_Ã L_Fiedler → T^{GAP} = Ã ⊙ A, together with the losses L_cut and L_Fiedler.]

4 Experiments and Discussion

4.1 Graph Classification

In this section, we study the properties and performance of CT-LAYER and GAP-LAYER in a graph classification task with several benchmark datasets. To illustrate the merits of our approach, we compare CT-LAYER and GAP-LAYER with 3 state-of-the-art diffusion- and curvature-based graph rewiring methods. Note that the aim of the evaluation is to shed light on the properties of both layers and illustrate their inductive performance, not to perform a benchmark comparison with all previously proposed graph rewiring methods.

Figure 4: GNN models used in the experiments. Left: MinCut baseline model (Linear → Conv → MinCut pool → Conv → Readout → MLP). Right: CT-LAYER or GAP-LAYER models, depending on what method is used for rewiring (the same pipeline with the rewiring layer producing T before the first convolution).

Baselines. The first baseline architecture is based on MINCUTPool [33] and is shown in Figure 4a. It is the base GNN that we use for graph classification without rewiring. The MINCUTPool layer learns (A_{n×n}, X_{n×F}) → (A′_{k×k}, X′_{k×F}), with k < n the new number of node clusters. The first baseline strategy using graph rewiring is k-NN graphs [47], where the weights of the edges are computed based on feature similarity. The next two baselines are graph rewiring methods that belong to the same family of methods as DIFFWIRE, i.e. methods based on diffusion and curvature, namely DIGL (PPR) [25] and SDRF [20]. DIGL is a diffusion-based preprocessing method within the family of metric-based GSL approaches. We set the teleporting probability α = 0.001, and ε is set to keep the same average degree for each graph. Once preprocessed with DIGL, the graphs are provided as input to the MinCut Pool (Baseline 1) architecture. The third baseline model is SDRF, which performs curvature-based rewiring. SDRF is also a preprocessing method, with 3 parameters that are highly graph-dependent. We set these parameters to τ = 20 and C+ = 0 for all experiments, as per [20]. The number of iterations is estimated dynamically as 0.7·|V| for each graph. Both DIGL and SDRF aim to preserve the global topology of the graph, but they require optimizing their parameters for each input graph via hyper-parameter search. In a graph classification task, this search is O(n³) per graph. Details about the parameter tuning in these methods can be found in App. A.3.3.

To shed light on the performance and properties of CT-LAYER and GAP-LAYER, we add the corresponding layer in between Linear(X) → Conv1(A, X). We build 3 different models: CT-LAYER, GAP-LAYER (Rcut) and GAP-LAYER (Ncut), depending on the layer used. For CT-LAYER, we learn T^{CT}, which is used as a convolution matrix afterwards. For GAP-LAYER, we learn T^{GAP}, either using the Rcut or the Ncut approximation. A schematic of the architectures is shown in Figure 4b and in App. A.3.2.
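To make Definition 1 concrete in this setting, the following is a minimal dense-tensor sketch of the CT-LAYER computation (our own illustrative code, not the authors' implementation, which is in the linked repository; the identity size in the orthogonality term and the use of non-squared distances follow Definition 1 and Figure 2, whereas Eq. 3 would square them):

```python
import torch

def ct_layer(X, A, mlp):
    """Sketch of CT-LAYER (Definition 1) for one graph with dense A (n x n).
    `mlp` is any torch module mapping node features to a k-dim embedding."""
    Z = torch.tanh(mlp(X))                        # (n, k): learned CT embeddings
    d = A.sum(dim=1)
    L = torch.diag(d) - A                         # unnormalized Laplacian
    vol = d.sum()
    # L_CT = Tr[Z^T L Z]/Tr[Z^T D Z] + || Z^T Z/||Z^T Z||_F - I ||_F
    quotient = torch.trace(Z.T @ L @ Z) / torch.trace(Z.T @ torch.diag(d) @ Z)
    gram = Z.T @ Z
    ortho = torch.norm(gram / torch.norm(gram) - torch.eye(Z.shape[1]))
    loss_ct = quotient + ortho
    # T_CT = R(Z) ⊙ A: pairwise distances of the embeddings, divided by
    # vol(G) and masked by the existing edges (Hadamard product with A)
    t_ct = (torch.cdist(Z, Z) / vol) * A
    return t_ct, loss_ct
```

The returned t_ct plays the role of the convolution matrix fed to the subsequent message-passing layer, while loss_ct is added to the task loss during training.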
As shown in Table 1, we use in our experiments common benchmark datasets for graph classification. We select datasets both with node features and featureless; in the latter case, we use the degree as the node feature. These datasets are diverse regarding the topology of their networks: REDDIT-B, IMDB-B and COLLAB contain truncated scale-free graphs (social networks), whereas MUTAG and PROTEINS contain graphs from biology or chemistry. In addition, we use two synthetic datasets with 2 classes: Erdös-Rényi with p1 ∈ [0.3, 0.5] and p2 ∈ [0.4, 0.8], and a Stochastic Block Model (SBM) with parameters p1 = 0.8, p2 = 0.5, q1 ∈ [0.1, 0.15] and q2 ∈ [0.01, 0.1]. More details about the datasets are given in App. A.3.1. Table 1 reports average accuracies and standard deviations on 10 random data splits, using an 85/15 stratified train-test split, training during 60 epochs and reporting the results of the last epoch for each random run. We use PyTorch Geometric [48], and the code is available in a public repository².

Table 1: Experimental results on common graph classification benchmarks. Red denotes the best model row-wise and Blue marks the runner-up. '*' means degree as node feature.

Dataset | MinCutPool | k-NN | DIGL | SDRF | CT-LAYER | GAP-LAYER (R) | GAP-LAYER (N)
REDDIT-B* | 66.53±4.4 | 64.40±3.8 | 76.02±4.3 | 65.3±7.7 | 78.45±4.5 | 77.63±4.9 | 76.00±5.3
IMDB-B* | 60.75±7.0 | 55.20±4.3 | 59.35±7.7 | 59.2±6.9 | 69.84±4.6 | 69.93±3.3 | 68.80±3.1
COLLAB* | 58.00±6.2 | 58.33±11 | 57.51±5.9 | 56.60±10 | 69.87±2.4 | 64.47±4.0 | 65.89±4.9
MUTAG | 84.21±6.3 | 87.58±4.1 | 85.00±5.6 | 82.4±6.8 | 87.58±4.4 | 86.90±4.0 | 86.90±4.0
PROTEINS | 74.84±2.3 | 76.76±2.5 | 74.49±2.8 | 74.4±2.7 | 75.38±2.9 | 75.03±3.0 | 75.34±2.1
SBM* | 53.00±9.9 | 50.00±0.0 | 56.93±12 | 54.1±7.1 | 81.40±11 | 90.80±7.0 | 92.26±2.9
Erdös-Rényi* | 81.86±6.2 | 63.40±3.9 | 81.93±6.3 | 73.6±9.1 | 79.06±9.8 | 79.26±10 | 82.26±3.2

The experiments support our hypothesis that rewiring based on CT-LAYER and GAP-LAYER improves the performance of the baselines on graph classification. Since both layers are differentiable, they learn how to inductively rewire unseen graphs. The improvements are significant in graphs where social components arise (REDDIT-B, IMDB-B, COLLAB), i.e. graphs with small-world properties and power-law degree distributions whose topology is based on hubs and authorities. These are graphs where bottlenecks arise easily and our approach is able to properly rewire the graphs. However, the improvements observed in planar or grid networks (MUTAG and PROTEINS) are more limited: the bottleneck does not seem to be critical for the graph classification task.

Moreover, CT-LAYER and GAP-LAYER perform better in graphs with featureless nodes than in graphs with node features, because they are able to leverage the information encoded in the topology of the graphs. Note that in attribute-based graphs, the weights of the attributes typically overwrite the graph's structure in the classification task, whereas in graphs without node features, the information is encoded in the graph's structure. Thus, k-NN rewiring outperforms every other rewiring method in graph classification when graphs have node features.

App. A.3.4 contains an in-depth analysis of the comparison between the spectral node CT embeddings (CTEs), given by Equation 2, and the learned node CTEs as predicted by CT-LAYER. We find that the CTEs that are learned in CT-LAYER are able to better preserve the original topology of the graph while shifting the distribution of the effective resistances of the edges towards an asymmetric distribution where few edges have very large weights and the majority of edges have low weights. In addition, App. A.3.4 also includes an analysis of the graphs' latent space in the readout layer produced by each model.
Finally, we analyze the performance of the proposed layers in graphs with different structural properties in App. A.3.6, where we study the correlation between accuracy, the graph's assortativity, and the graph's bottleneck (λ2).

CT-LAYER vs GAP-LAYER. The datasets explored in this paper are characterized by mild bottlenecks from the perspective of the Lovász bound. For completeness, we have included two synthetic datasets (Stochastic Block Model and Erdös-Rényi) where the Lovász bound is very restrictive. As a result, CT-LAYER is outperformed by GAP-LAYER in SBM. Note that the results on the synthetic datasets suffer from large variability. As a general rule of thumb, the smaller the graph's bottleneck (defined as the ratio between the number of inter-community edges and the number of intra-community edges), the more useful CT-LAYER is, because the rewired graph will be sparsified inside the communities but will preserve the edges in the gap. Conversely, the larger the bottleneck, the more useful GAP-LAYER is.

²https://github.com/AdrianArnaiz/DiffWire

4.2 Node Classification using CT-LAYER

CT-LAYER and GAP-LAYER are mainly designed to perform graph classification tasks. However, we identify two potential areas to apply CT-LAYER for node classification.

First, the new T^{CT} diffusion matrix learned by CT-LAYER gives more importance to edges that connect different communities, i.e., edges that connect distant nodes in the graph. This behaviour makes CT-LAYER well suited to long-range and heterophilic node classification tasks using fewer layers, thus avoiding under-reaching, over-smoothing and over-squashing.

Second, there is an increasing interest in the community in using PEs in the nodes to develop more expressive GNNs. PEs tend to help in node classification in homophilic graphs, as nearby nodes will be assigned similar PEs. However, the main limitation is that PEs are usually pre-computed before the GNN training due to their high computational cost. CT-LAYER provides a solution to this problem, as it learns to predict the commute times embedding (Z) of a given graph (see Figure 2 and Definition 1). Hence, CT-LAYER is able to learn and predict PEs from X and A inside a GNN without needing to pre-compute them.

We empirically validate CT-LAYER in a node classification task on benchmark homophilic (Cora, Pubmed and Citeseer) and heterophilic (Cornell, Actor and Wisconsin) graphs. The results are depicted in Table 2, comparing three models: (1) the baseline model, a 1-layer GCN; (2) model 1, a 1-layer GCN where the CTEs are concatenated to the node features as PEs (X∥Z); and (3) model 2, a 1-layer GCN where T^{CT} is used as a diffusion matrix (A = T^{CT}). More details can be found in App. A.3.5.

As seen in the Table, the proposed models outperform the baseline GCN model: using CTEs as features (model 1) yields competitive results in homophilic graphs, whereas using T^{CT} as a matrix for message passing (model 2) performs well in heterophilic graphs. Note that in our experiments the CTEs are learned by CT-LAYER instead of being pre-computed.
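As an illustration of how these two modes could be wired up, here is a hedged PyTorch Geometric-style sketch (the class and argument names are our own; the paper's exact architecture is described in App. A.3.5):

```python
import torch
from torch_geometric.nn import GCNConv

class NodeClassifier(torch.nn.Module):
    """Sketch of the two usage modes compared in Table 2 (names are ours).
    mode='pe'  : model 1, concatenate learned CT embeddings Z to X (X || Z).
    mode='diff': model 2, use T_CT values as message-passing weights (A = T_CT)."""
    def __init__(self, in_dim, pe_dim, n_classes, mode="pe"):
        super().__init__()
        self.mode = mode
        width = in_dim + pe_dim if mode == "pe" else in_dim
        self.conv = GCNConv(width, n_classes)

    def forward(self, x, edge_index, Z=None, t_ct_weight=None):
        if self.mode == "pe":                       # model 1: X || Z
            x = torch.cat([x, Z], dim=-1)
            return self.conv(x, edge_index)
        # model 2: resistance diffusion as per-edge weights
        return self.conv(x, edge_index, edge_weight=t_ct_weight)
```

Here Z and t_ct_weight would come from a CT-LAYER placed earlier in the network, so the PEs are learned end-to-end rather than pre-computed.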
A promising direction of future work would be to explore how to combine these two approaches (model 1 and model 2) to leverage the best of each of the methods on a wide range of graphs for node classification tasks.

Table 2: Results in node classification

Dataset | GCN (baseline) | model 1: X∥Z | model 2: A = T^{CT} | Homophily
Cora | 82.01±0.8 | 83.66±0.6 | 67.96±0.8 | 81.0%
Pubmed | 81.61±0.3 | 86.07±0.1 | 68.19±0.7 | 80.0%
Citeseer | 70.81±0.5 | 72.26±0.5 | 66.71±0.6 | 73.6%
Cornell | 59.19±3.5 | 58.02±3.7 | 69.04±2.2 | 30.5%
Actor | 29.59±0.4 | 29.35±0.4 | 31.98±0.3 | 21.9%
Wisconsin | 68.05±6.2 | 69.25±5.1 | 79.05±2.1 | 19.6%

5 Conclusion and Future Work

In this paper, we have proposed DIFFWIRE, a unified framework for graph rewiring that links the two components of the Lovász bound: CTs and the spectral gap. We have presented two novel, fully differentiable and inductive rewiring layers: CT-LAYER and GAP-LAYER. We have empirically evaluated these layers on benchmark datasets for graph classification, with competitive results when compared to SoTA baselines, especially in graphs where the nodes have no attributes and have small-world properties. We have also performed preliminary experiments in a node classification task, showing that the CT embeddings and the CT distances benefit GNN architectures in homophilic and heterophilic graphs, respectively.

In future work, we plan to test the proposed approach in other graph-related tasks and intend to apply DIFFWIRE to large-scale graphs and real-world applications, particularly in social networks, which have unique topology and statistics and direct implications in society.

6 Acknowledgments

A. Arnaiz-Rodriguez and N. Oliver are supported by a nominal grant received at the ELLIS Unit Alicante Foundation from the Regional Government of Valencia in Spain (Convenio Singular signed with Generalitat Valenciana, Conselleria d'Innovació, Universitats, Ciència i Societat Digital, Dirección General para el Avance de la Sociedad Digital). A. Arnaiz-Rodriguez is also funded by a grant by the Banc Sabadell Foundation. F. Escolano is funded by the project RTI2018-096223-B-I00 of the Spanish Government.

References

[1] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, volume 2, pages 729–734, 2005. URL https://ieeexplore.ieee.org/document/1555942. 1
[2] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008. URL https://ieeexplore.ieee.org/document/4700287. 1
[3] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. URL https://openreview.net/forum?id=SJU4ayYgl. 1
[4] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML, pages 1263–1272, 2017. 1
[5] Thomas N. Kipf and Max Welling. Variational graph auto-encoders. In NeurIPS Workshop on Bayesian Deep Learning, 2016. URL http://bayesiandeeplearning.org/2016/papers/BDL_16.pdf. 1
[6] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. URL https://ojs.aaai.org/index.php/AAAI/article/view/10179.
1[7]Fei Tian, Bin Gao, Qing Cui, Enhong Chen, and Tie-Yan Liu. Learning deep representations for graphclustering. In Proceedings of the AAAI Conference on Artificial Intelligence , 2014. URL https://ojs.aaai.org/index.php/AAAI/article/view/8916 . 1[8]Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehen-sive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems , 32(1):4–24, 2021. URL https://ieeexplore.ieee.org/document/9046288 . 1, 2[9]Petar Veli ˇckovi ́c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio.Graph Attention Networks. International Conference on Learning Representations , 2018. URL https://openreview.net/forum?id=rJXMpikCZ . 1[10] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In InternationalConference on Learning Representations , 2022. URL https://openreview.net/forum?id=F72ximsx7C1 . 1[11] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks?InInternational Conference on Learning Representations , 2019. URL https://openreview.net/forum?id=ryGs6iA5Km . 1[12] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. InAdvances in Neural Information Processing Systems , 2017. URL https://proceedings.neurips.cc/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf . 1, 3[13] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks forsemi-supervised learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence ,2018. URL https://ojs.aaai.org/index.php/AAAI/article/view/11604 . 2[14] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications.InInternational Conference on Learning Representations , 2021. URL https://openreview.net/forum?id=i80OPhOCVH2 . 2[15] László Lovász. Random walks on graphs. Combinatorics, Paul erdos is eighty , 2(1-46):4, 1993. URLhttps://web.cs.elte.hu/~lovasz/erdos.pdf . 2, 4[16] Pablo Barceló, Egor V . Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. Thelogical expressiveness of graph neural networks. In International Conference on Learning Representations ,2020. URL https://openreview.net/forum?id=r1lZ7AEKvB . 211DiffWire: Inductive Graph Rewiring via the Lovász Bound[17] NT Hoang, Takanori Maehara, and Tsuyoshi Murata. Revisiting graph neural networks: Graph filteringperspective. In 25th International Conference on Pattern Recognition (ICPR) , pages 8376–8383, 2021.URL https://ieeexplore.ieee.org/document/9412278 . 2, 7, 19[18] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for nodeclassification. In International Conference on Learning Representations , 2020. URL https://openreview.net/forum?id=S1ldO2EFPr .[19] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neuralnetworks: A review of methods and applications. CoRR , abs/1812.08434, 2018. URL http://arxiv.org/abs/1812.08434 . 2[20] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M.Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In InternationalConference on Learning Representations , 2022. URL https://openreview.net/forum?id=7UmjRGzp-A . 2, 3, 6,8, 18, 23[21] Petar Veli ˇckovi ́c. Message passing all the way up. 
In ICLR 2022 Workshop on Geometrical and TopologicalRepresentation Learning , 2022. URL https://openreview.net/forum?id=Bc8GiEZkTe5 . 2[22] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolu-tional networks on node classification. In International Conference on Learning Representations , 2020.URL https://openreview.net/forum?id=Hkx1qkrKPr . 2, 3[23] Anees Kazi, Luca Cosmo, Seyed-Ahmad Ahmadi, Nassir Navab, and Michael Bronstein. Differentiablegraph module (dgm) for graph convolutional networks. IEEE Transactions on Pattern Analysis andMachine Intelligence , pages 1–1, 2022. URL https://ieeexplore.ieee.org/document/9763421 . 2, 3[24] Karel Devriendt and Renaud Lambiotte. Discrete curvature on graphs from the effective resistance. arXivpreprint arXiv:2201.06385 , 2022. doi: 10.48550/ARXIV .2201.06385. URL https://arxiv.org/abs/2201.06385 . 2, 6, 7, 18[25] Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. InAdvances in Neural Information Processing Systems , 2019. URL https://proceedings.neurips.cc/paper/2019/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf . 3, 8, 23[26] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi,Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relationalinductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261 , 2018. URLhttps://arxiv.org/abs/1806.01261 . 3[27] Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Benjamin Chamberlain, Michael Bronstein, and FedericoMonti. Sign: Scalable inception graph neural networks. In ICML 2020 Workshop on Graph RepresentationLearning and Beyond , 2020. URL https://grlplus.github.io/papers/77.pdf . 3[28] Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. DropGNN: Random dropoutsincrease the expressiveness of graph neural networks. In Advances in Neural Information ProcessingSystems , 2021. URL https://openreview.net/forum?id=fpQojkIV5q8 . 3[29] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. Proceedings of the AAAIConference on Artificial Intelligence , 34(04):3438–3445, Apr. 2020. doi: 10.1609/aaai.v34i04.5747. URLhttps://ojs.aaai.org/index.php/AAAI/article/view/5747 . 3[30] Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, and ShuWu. A survey on graph structure learning: Progress and opportunities. arXiv PrePrint , 2021. URLhttps://arxiv.org/abs/2103.03036 . 3[31] Diego Mesquita, Amauri Souza, and Samuel Kaski. Rethinking pooling in graph neural networks. InAdvances in Neural Information Processing Systems , 2020. URL https://proceedings.neurips.cc/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Paper.pdf . 3[32] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec.Hierarchical graph representation learning with differentiable pooling. In Advances in Neu-ral Information Processing Systems , 2018. URL https://proceedings.neurips.cc/paper/2018/file/e77dbaf6759253c7c6d0efc5690369c7-Paper.pdf . 3[33] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neuralnetworks for graph pooling. In Proceedings of the 37th International Conference on Machine Learning ,2020. URL https://proceedings.mlr.press/v119/bianchi20a.html . 
3, 8[34] Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and DominiqueBeaini. Recipe for a General, Powerful, Scalable Graph Transformer. arXiv:2205.12454 , 2022. URLhttps://arxiv.org/pdf/2205.12454.pdf . 312DiffWire: Inductive Graph Rewiring via the Lovász Bound[35] Ameya Velingker, Ali Kemal Sinop, Ira Ktena, Petar Veli ˇckovi ́c, and Sreenivas Gollapudi. Affinity-awaregraph networks. arXiv preprint arXiv:2206.11941 , 2022. URL https://arxiv.org/pdf/2206.11941.pdf . 3,23, 25[36] Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. AAAIWorkshop on Deep Learning on Graphs: Methods and Applications , 2021. URL https://arxiv.org/pdf/2012.09699.pdf . 3, 23[37] Derek Lim, Joshua David Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, and StefanieJegelka. Sign and basis invariant networks for spectral graph representation learning. In ICLR 2022Workshop on Geometrical and Topological Representation Learning , 2022. URL https://openreview.net/forum?id=BlM64by6gc . 3[38] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design prov-ably more powerful neural networks for graph representation learning. Advances in NeuralInformation Processing Systems , 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/2f73168bf3656f697507752ec592c437-Paper.pdf . 3[39] Fan RK Chung. Spectral Graph Theory . American Mathematical Society, 1997. URL https://www.bibsonomy.org/bibtex/295ef10b5a69a03d8507240b6cf410f8a/folke . 4[40] Ulrike von Luxburg, Agnes Radl, and Matthias Hein. Hitting and commute times in large randomneighborhood graphs. Journal of Machine Learning Research , 15(52):1751–1798, 2014. URL http://jmlr.org/papers/v15/vonluxburg14a.html . 4, 20[41] Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM Journal onComputing , 40(6):1913–1926, 2011. doi: 10.1137/080734029. URL https://doi.org/10.1137/080734029 . 5[42] Huaijun Qiu and Edwin R. Hancock. Clustering and embedding using commute times. IEEE Transactionson Pattern Analysis and Machine Intelligence , 29(11):1873–1890, 2007. doi: 10.1109/TPAMI.2007.1103.URL https://ieeexplore.ieee.org/document/4302755 . 6[43] Vedat Levi Alev, Nima Anari, Lap Chi Lau, and Shayan Oveis Gharan. Graph Clustering using EffectiveResistance. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018) , volume 94, pages1–16, 2018. doi: 10.4230/LIPIcs.ITCS.2018.41. URL http://drops.dagstuhl.de/opus/volltexte/2018/8369 .6, 17, 24[44] Emmanuel Abbe. Community detection and stochastic block models: Recent developments. Journal ofMachine Learning Research , 18(177):1–86, 2018. URL http://jmlr.org/papers/v18/16-480.html . 7, 20[45] Thomas Bühler and Matthias Hein. Spectral clustering based on the graph p-laplacian. In Proceedings of the26th Annual International Conference on Machine Learning , ICML ’09, page 81–88, New York, NY , USA,2009. Association for Computing Machinery. ISBN 9781605585161. doi: 10.1145/1553374.1553385.URL https://doi.org/10.1145/1553374.1553385 . 7, 20[46] Jian Kang and Hanghang Tong. N2n: Network derivative mining. In Proceedings of the 28th ACMInternational Conference on Information and Knowledge Management , CIKM ’19, page 861–870, NewYork, NY , USA, 2019. Association for Computing Machinery. ISBN 9781450369763. doi: 10.1145/3357384.3357910. URL https://doi.org/10.1145/3357384.3357910 . 7, 19[47] Franco P Preparata and Michael I Shamos. Computational geometry: an introduction . 
Springer Science &Business Media, 2012. URL http://www.cs.kent.edu/~dragan/CG/CG-Book.pdf . 8[48] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLRWorkshop on Representation Learning on Graphs and Manifolds , 2019. 9[49] Joshua Batson, Daniel A. Spielman, Nikhil Srivastava, and Shang-Hua Teng. Spectral sparsificationof graphs: Theory and algorithms. Commun. ACM , 56(8):87–94, aug 2013. ISSN 0001-0782. doi:10.1145/2492007.2492029. URL https://doi.org/10.1145/2492007.2492029 . 16[50] Morteza Alamgir and Ulrike Luxburg. Phase transition in the family of p-resistances. In Advancesin Neural Information Processing Systems , 2011. URL https://proceedings.neurips.cc/paper/2011/file/07cdfd23373b17c6b337251c22b7ea57-Paper.pdf . 20[51] Morteza Alamgir and Ulrike Luxburg. Phase transition in the family of p-resistances. In Advances inNeural Information Processing Systems , volume 24, 2011. URL https://proceedings.neurips.cc/paper/2011/file/07cdfd23373b17c6b337251c22b7ea57-Paper.pdf . 20[52] Gregory Berkolaiko, James B Kennedy, Pavel Kurasov, and Delio Mugnolo. Edge connectivity and thespectral gap of combinatorial and quantum graphs. Journal of Physics A: Mathematical and Theoretical ,50(36):365201, 2017. URL https://doi.org/10.1088/1751-8121/aa8125 . 20[53] Zoran Stani ́c. Graphs with small spectral gap. Electronic Journal of Linear Algebra , 26:28, 2013. URLhttps://journals.uwyo.edu/index.php/ela/article/view/1259 . 20[54] Douglas J Klein and Milan Randi ́c. Resistance distance. Journal of Mathematical Chemistry , 12(1):81–95,1993. URL https://doi.org/10.1007/BF01164627 . 2413DiffWire: Inductive Graph Rewiring via the Lovász BoundA AppendixIn Appendix A we include a Table with the notation used in the paper and we provide an analysis ofthe diffusion and its relationship with curvature. In Appendix B, we study in detail GAP-L AYER andthe implications of the proposed spectral gradients. 
Appendix C reports statistics and characteristics ofthe datasets used in the experimental section, provides more information about the experiments results,describes additional experimental results, and includes a summary of the computing infrastructureused in our experiments.Table 3: Notation.Symbol DescriptionG= (V, E)Graph = (Nodes, Edges)A Adjacency matrix: A∈Rn×nX Feature matrix: X∈Rn×Fv Node v∈Voru∈Ve Edge e∈Ex Features of node v:x∈Xn Number of nodes: n=|V|F Number of featuresD Degree diagonal matrix where dvinDvvdv Degree of node vvol(G) Sum of the degrees of the graph vol(G) =Tr[D]L Laplacian: L=D−AB Signed edge-vertex incidence matrixbe Incidence vector: Row vector of B, with be=(u,v)= (eu−ev)ve Projected incidence vector: ve=L+/2beΓ Ratio Γ =1+ε1−εE Dirichlet Energy wrt L:E(x):=xTLxL Normalized Laplacian: L=I−D−1/2AD−1/2Λ Eigenvalue matrix of LΛ′Eigenvalue matrix of Lλi i-th eigenvalue of Lλ2 Second eigenvalue of L: Spectral gapλ′i i-th eigenvalue of Lλ′2 Second eigenvalue of L: Spectral gapF Matrix of eigenvectors of LG Matrix of eigenvectors of Lfi ieigenvector of Lf2 Second eigenvector of L: Fiedler vectorgi ieigenvector of Lg2 Second eigenvector of L: Fiedler vector ̃A New Adjacency matrixE′New edgesHuv Hitting time between uandvCTuv Commute time: CTuv=Huv+HvuRuv Effective resistance: Ruv=CTuv/vol(G)Z Matrix of commute times embeddings for all nodes in Gzu Commute times embedding of node uTCTResistance diffusion or commute times diffusionR(Z) Pairwise Euclidean distance of embedding Zdivided by vol(G)S Cluster assignment matrix: S∈Rn×2TGAPGAP diffusioneu Unit vector with unit value at uand 0 elsewhere∇ ̃Aλ2 Gradient of λ2wrt ̃A[∇ ̃Aλ2]ij Gradient of λ2wrt ̃Auvpu Node curvature: pu:= 1−12Pu∼wRuvκuv Edge curvature: κuv:= 2(pu+pv)/Ruv∥ Concatenation14DiffWire: Inductive Graph Rewiring via the Lovász BoundA.1 Appendix A: CT-L AYERA.1.1 NotationThe Table 3 summarizes the notation used in the paper.A.1.2 Analysis of Commute Times rewiringFirst, we provide an answer to the following question:Is resistance diffusion via TCTa principled way of preserving the Cheeger constant?We answer the question above by linking Theorems 1 and 2 in the paper with the Lovász bound.The outline of our explanation follows three steps.•Proposition 1: Theorem 1 ( Sparsification ) provides a principled way to bias the adjacencymatrix so that the edges with the largest weights in the rewired graph correspond to the edges ingraph’s bottleneck.•Proposition 2: Theorem 2 ( Cheeger vs Resistance ) can be used to demonstrate that increasingthe effective resistance leads to a mild reduction of the Cheeger constant.•Proposition 3: (Conclusion) The effectiveness of the above theorems to contain the Cheegerconstant is constrained by the Lovász bound.Next, we provide a thorough explanation of each of the propositions above.Proposition 1 (Biasing) .Let G’ = Sparsify (G,q) be a sampling algorithm of graph G= (V, E),where edges e∈Eare sampled with probability q∝Re(proportional to the effective resistance).This choice is necessary to retain the global structure of G, i.e., to satisfy∀x∈Rn: (1−ε)xTLGx≤xTLG′x≤(1 +ε)xTLGx, (9)with probability at least 1/2by sampling O(nlogn/ε2)edges , with 1/√n < ε ≤1, instead ofO(m), where m=|E|. In addition, this choice biases the uniform distribution in favor of criticaledges in the graph.Proof. 
We start by expressing the Laplacian L in terms of the edge-vertex incidence matrix $B_{m\times n}$:
$$B_{eu}=\begin{cases}+1 & \text{if }u\text{ is the head of }e\\ -1 & \text{if }u\text{ is the tail of }e\\ 0 & \text{otherwise,}\end{cases} \qquad (10)$$
where edges in undirected graphs are counted once, i.e., e = (u, v) = (v, u). Then we have $L=B^TB=\sum_e b_eb_e^T$, where $b_e$ is a row vector (incidence vector) of B, with $b_{e=(u,v)}=(e_u-e_v)$. In addition, the Dirichlet energies can be expressed as norms:
$$\mathcal{E}(x)=x^TLx=x^TB^TBx=\|Bx\|_2^2=\sum_{e=(u,v)\in E}(x_u-x_v)^2. \qquad (11)$$
As a result, the effective resistance $R_e$ between the two nodes of an edge e = (u, v) can be defined as
$$R_e=(e_u-e_v)^TL^+(e_u-e_v)=b_e^TL^+b_e. \qquad (12)$$
Next, we reformulate the spectral constraints in Eq. 9, i.e., $(1-\varepsilon)L_G\preceq L_{G'}\preceq(1+\varepsilon)L_G$, as
$$L_G\preceq L_{G'}\preceq\Gamma L_G,\qquad \Gamma=\frac{1+\varepsilon}{1-\varepsilon}. \qquad (13)$$
This simplifies the analysis, since the above expression can be interpreted as follows: the Dirichlet energies of $L_{G'}$ are lower-bounded by those of $L_G$ and upper-bounded by Γ times the energies of $L_G$. Considering that the energies define hyper-ellipsoids, the hyper-ellipsoid associated with $L_{G'}$ lies between the hyper-ellipsoid of $L_G$ and Γ times that of $L_G$.

The hyper-ellipsoid analogy provides a framework to prove that the inclusion relationships are preserved under scaling: $ML_GM\preceq ML_{G'}M\preceq M\Gamma L_GM$, where M can be a matrix. In this case, if we set $M:=(L_G^+)^{1/2}=L_G^{+/2}$, we have
$$L_G^{+/2}L_GL_G^{+/2}\preceq L_G^{+/2}L_{G'}L_G^{+/2}\preceq\Gamma\,L_G^{+/2}L_GL_G^{+/2}, \qquad (14)$$
which leads to
$$I_n\preceq L_G^{+/2}L_{G'}L_G^{+/2}\preceq\Gamma I_n. \qquad (15)$$
We seek a Laplacian $L_{G'}$ satisfying the similarity constraints in Eq. 13. Since E′ ⊂ E, i.e., we want to remove structurally irrelevant edges, we can design $L_{G'}$ by considering all the edges in E:
$$L_{G'}:=\sum_{e}s_e\,b_eb_e^T \qquad (16)$$
and let the similarity constraint define the sampling weights and the choice of e (setting $s_e\ge0$ properly). More precisely:
$$I_n\preceq L_G^{+/2}\Big(\sum_e s_e\,b_eb_e^T\Big)L_G^{+/2}\preceq\Gamma I_n. \qquad (17)$$
Then, if we define $v_e:=L_G^{+/2}b_e$ as the projected incidence vector, we have
$$I_n\preceq\sum_e s_e\,v_ev_e^T\preceq\Gamma I_n. \qquad (18)$$
Consequently, a spectral sparsifier must find $s_e\ge0$ so that the above similarity constraint is satisfied. Since there are m edges in E, $s_e$ must be zero for most of the edges. But what are the best candidates to retain? Interestingly, the similarity constraint provides the answer. From Eq. 12 we have
$$v_e^Tv_e=\|v_e\|^2=\|L_G^{+/2}b_e\|_2^2=b_e^TL_G^+b_e=R_e. \qquad (19)$$
This result explains why sampling the edges with probability $q\propto R_e$ leads to a ranking of the m edges of G = (V, E) such that edges with large $R_e=\|v_e\|^2$ are preferred. (Footnote: although some of the elements of this section are derived from [49], we note that Nikhil Srivastava's lectures at The Simons Institute (2014) are considerably more clarifying.)

Algorithm 1 implements a deterministic greedy version of Sparsify(G, q), where we incrementally build E′ ⊂ E by creating a budget of decreasing resistances $R_{e_1}\ge R_{e_2}\ge\ldots\ge R_{e_{O(n\log n/\varepsilon^2)}}$. Note that this rewiring strategy preserves the spectral similarity of the graphs, i.e., the global structure of G = (V, E) is captured by G′ = (V, E′). An executable sketch is given after the pseudo-code.

Algorithm 1: GREEDY Sparsify
Input: G = (V, E), ε ∈ (1/√n, 1], n = |V|.
Output: G′ = (V, E′) with E′ ⊂ E such that |E′| = O(n log n/ε²).
  L ← List({v_e : e ∈ E})
  Q ← Sort(L, descending, criterion = ‖v_e‖²)   ▷ Sort candidate edges by descending resistance
  E′ ← ∅
  I ← 0_{n×n}
  repeat
    v_e ← pop(Q)                                 ▷ Remove the head of the queue
    I ← I + v_e v_eᵀ
    if I ≼ Γ I_n then
      E′ ← E′ ∪ {e}                              ▷ Update the current budget of edges
    else
      return G′ = (V, E′)
  until Q = ∅
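A minimal executable sketch of Algorithm 1, assuming a dense NumPy adjacency matrix and a connected graph; the eigendecomposition used to form $L^{+/2}$ and the PSD check are O(n³) operations chosen for clarity, not efficiency, and this is not the paper's implementation:

```python
import numpy as np

def greedy_sparsify(A, eps):
    """Sketch of Algorithm 1 (GREEDY Sparsify) for a dense adjacency matrix A.

    Edges are ranked by effective resistance R_e = ||v_e||^2 (Eq. 19) and kept
    while the running sum of v_e v_e^T stays below Gamma * I_n (Eq. 18)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A                       # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)
    inv_sqrt = np.where(lam > 1e-9, 1.0 / np.sqrt(np.maximum(lam, 1e-9)), 0.0)
    L_half_pinv = (U * inv_sqrt) @ U.T                   # L^{+/2}
    gamma = (1 + eps) / (1 - eps)

    edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v] > 0]
    vecs = {e: L_half_pinv[:, e[0]] - L_half_pinv[:, e[1]] for e in edges}
    queue = sorted(edges, key=lambda e: -(vecs[e] @ vecs[e]))  # descending R_e

    kept, M = [], np.zeros((n, n))
    for e in queue:
        M_next = M + np.outer(vecs[e], vecs[e])
        if np.linalg.eigvalsh(M_next)[-1] <= gamma:      # M ⪯ Γ I_n ⇔ λ_max(M) ≤ Γ
            kept, M = kept + [e], M_next
        else:
            break                                        # budget exhausted: return E'
    return kept
```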
Moreover, the maximum $R_e$ in each graph determines an upper bound on the Cheeger constant and hence an upper bound on the size of the graph's bottleneck, as per the following proposition.

Proposition 2 (Resistance Diameter). Let G′ = Sparsify(G, q) be a sampling algorithm of graph G = (V, E), where edges e ∈ E are sampled with probability q ∝ R_e (proportional to the effective resistance). Consider the resistance diameter $R_{\mathrm{diam}}:=\max_{u,v}R_{uv}$. Then, for the pair (u, v) with maximal resistance, there exists an edge e = (u, v) ∈ E′ in G′ = (V, E′) such that $R_e=R_{\mathrm{diam}}$. As a result, the Cheeger constant $h_G$ of G is upper-bounded as follows:
$$h_G\le\frac{\alpha^{\varepsilon}}{\sqrt{R_{\mathrm{diam}}\cdot\varepsilon}}\cdot\mathrm{vol}(S)^{\varepsilon-1/2}, \qquad (20)$$
with 0 < ε < 1/2 and $d_u\ge 1/\alpha$ for all u ∈ V.

Proof. The fact that the maximum resistance $R_{\mathrm{diam}}$ is located on an edge derives from two observations: (a) the resistance is upper-bounded by the shortest-path distance; and (b) edges with maximal resistance are prioritized (Proposition 1).

Theorem 2 states that any attempt to increase the graph's bottleneck in a multiplicative way (i.e., multiplying it by a constant c ≥ 0) results in decreasing the effective resistances as follows:
$$R_{uv}\le\left(\frac{1}{d_u^{2\varepsilon}}+\frac{1}{d_v^{2\varepsilon}}\right)\cdot\frac{1}{\varepsilon\cdot c^2} \qquad (21)$$
with ε ∈ [0, 1/2]. This equation is called the resistance bound. Therefore, a multiplicative increase of the bottleneck leads to a quadratic decrease of the resistances.

Following Corollary 2 of [43], we obtain an upper bound on any $h_S$ — i.e., the Cheeger constant for S ⊆ V with vol(S) ≤ vol(G)/2 — by defining c properly. In particular, we seek a value of c that leads to a contradiction, which is obtained by setting
$$c=\sqrt{\frac{\frac{1}{d_{u^*}^{2\varepsilon}}+\frac{1}{d_{v^*}^{2\varepsilon}}}{R_{\mathrm{diam}}\cdot\varepsilon}}, \qquad (22)$$
where (u*, v*) is a pair of nodes with maximal resistance, i.e., $R_{u^*v^*}=R_{\mathrm{diam}}$.

Consider now any other pair of nodes (s, t) with $R_{st}<R_{\mathrm{diam}}$. Following Theorem 2, if the bottleneck of $h_S$ is multiplied by c, we should have
$$R_{st}\le\left(\frac{1}{d_s^{2\varepsilon}}+\frac{1}{d_t^{2\varepsilon}}\right)\cdot\frac{1}{\varepsilon\cdot c^2}=\left(\frac{1}{d_s^{2\varepsilon}}+\frac{1}{d_t^{2\varepsilon}}\right)\cdot\frac{R_{\mathrm{diam}}}{\frac{1}{d_{u^*}^{2\varepsilon}}+\frac{1}{d_{v^*}^{2\varepsilon}}}. \qquad (23)$$
However, since $R_{\mathrm{diam}}\le\frac{1}{d_{u^*}^{2\varepsilon}}+\frac{1}{d_{v^*}^{2\varepsilon}}$, we have that $R_{st}$ can satisfy
$$R_{st}>\left(\frac{1}{d_s^{2\varepsilon}}+\frac{1}{d_t^{2\varepsilon}}\right)\cdot\frac{1}{\varepsilon\cdot c^2}, \qquad (24)$$
which is a contradiction and enables
$$h_S\le\frac{c}{\mathrm{vol}(S)^{1/2-\varepsilon}}\iff|\partial S|\le c\cdot\mathrm{vol}(S)^{1/2+\varepsilon}. \qquad (25)$$
Using c as defined in Eq. 22 and $d_u\ge1/\alpha$, we obtain
$$c=\sqrt{\frac{\frac{1}{d_{u^*}^{2\varepsilon}}+\frac{1}{d_{v^*}^{2\varepsilon}}}{R_{\mathrm{diam}}\cdot\varepsilon}}\le\sqrt{\frac{\alpha^{\varepsilon}}{R_{\mathrm{diam}}\cdot\varepsilon}}\le\frac{\alpha^{\varepsilon}}{\sqrt{R_{\mathrm{diam}}\cdot\varepsilon}}. \qquad (26)$$
Therefore,
$$h_S\le\frac{c}{\mathrm{vol}(S)^{1/2-\varepsilon}}\le\frac{\alpha^{\varepsilon}}{\sqrt{R_{\mathrm{diam}}\cdot\varepsilon}}\cdot\mathrm{vol}(S)^{\varepsilon-1/2}. \qquad (27)$$
As a result, the Cheeger constant of G = (V, E) is mildly reduced (by the square root of the maximal resistance).

Proposition 3 (Conclusion). Let (u*, v*) be a pair of nodes (possibly not unique) in G = (V, E) with maximal resistance, i.e., $R_{u^*v^*}=R_{\mathrm{diam}}$. Then, the Cheeger constant $h_G$ relies on the ratio between the maximal resistance $R_{\mathrm{diam}}$ and its uninformative approximation $\frac{1}{d_{u^*}}+\frac{1}{d_{v^*}}$. The closer this ratio is to one, the easier it is to contain the Cheeger constant.

Figure 5: Left: original graph with nodes colored by Louvain communities. Middle: T^CT learnt by CT-L AYER, with edge colors denoting edge importance in [0, 1]. Right: node and edge curvatures of T^CT, using $p_u:=1-\frac{1}{2}\sum_{w\sim u}T^{CT}_{uw}$ and $\kappa_{uv}:=2(p_u+p_v)/T^{CT}_{uv}$, with edge and node curvatures as colors. Graph from the Reddit-B dataset.

Proof.
The referred ratio above is the ratio leading to a proper c in Proposition 2. This is consistent with a Lovász regime where the spectral gap λ′₂ has a moderate value. However, for regimes with very small spectral gaps, i.e., λ′₂ → 0, according to the Lovász bound, $R_{\mathrm{diam}}\gg\frac{1}{d_{u^*}}+\frac{1}{d_{v^*}}$, and hence the Cheeger constant provided by Proposition 2 will tend to zero.

We conclude that we can always find a moderate upper bound for the Cheeger constant of G = (V, E), provided that the regime of the Lovász bound is also moderate. Therefore, as the global properties of G = (V, E) are captured by G′ = (V, E′), a moderate Cheeger constant, when achievable, also controls the bottlenecks in G′ = (V, E′).

Our methodology has focused on first exploring the properties of the commute times / effective resistances in G = (V, E). Next, we have leveraged the spectral similarity to reason about the properties — particularly the Cheeger constant — of G′ = (V, E′). In sum, we conclude that resistance diffusion via T^CT is a principled way of preserving the Cheeger constant of G = (V, E).

A.1.3 Resistance-based Curvatures
We refer to recent work by Devriendt and Lambiotte [24] to complement the contributions of Topping et al. [20] regarding the use of curvature to rewire the edges of a graph.

Theorem 3 (Devriendt and Lambiotte [24]). The edge resistance curvature has the following properties: (1) it is bounded by $(4-d_u-d_v)\le\kappa_{uv}\le 2/R_{uv}$, with equality in the lower bound iff all edges incident to u and v are cut links; (2) it is upper-bounded by the Ollivier-Ricci curvature, $\kappa^{OR}_{uv}\ge\kappa_{uv}$, with equality if (u, v) is a cut link; and (3) the Forman-Ricci curvature is bounded as follows: $\kappa^{FR}_{uv}/R_{uv}\le\kappa_{uv}$, with equality in the bound if the edge is a cut link.

The new definition of curvature given in [20] is related to the resistance distance and thus is learnable with the proposed framework (CT-L AYER). In fact, the Balanced-Forman curvature (Definition 1 in [20]) relies on the uninformative approximation of the resistance distance.

Figure 5 illustrates the relationship between effective resistances / commute times and curvature on an exemplary graph from the COLLAB dataset. As seen in the figure, effective resistances prioritize the edges connecting outer nodes with hubs or central nodes, while the intra-community connections are de-prioritized. This observation is consistent with the aforementioned theoretical explanations about preserving the bottleneck while breaking the intra-cluster structure. In addition, we also observe that the original edges between hubs have been deleted or extremely down-weighted.

Regarding curvature, hubs or central nodes have the lowest node curvature (this curvature increases with the number of nodes in a cluster/community). Edge curvatures, which rely on node curvatures, depend on the long-term neighborhoods of the connecting nodes. In general, edge curvatures can be seen as a smoothed version — since they integrate node curvatures — of the inverse of the resistance distances.

We observe that edges linking nodes of a given community with hubs tend to have similar edge-curvature values. However, edges linking nodes of different communities with hubs have different edge curvatures (Figure 5-right). This is due to the different number of nodes belonging to each community, and to their different average degrees inside their respective communities (property 1 of Theorem 3). A sketch of these curvature computations is given below.
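The node and edge resistance curvatures used above can be computed directly from the Green's function; a minimal NumPy sketch, assuming a dense adjacency matrix of a connected graph:

```python
import numpy as np

def resistance_curvatures(A):
    """p_u = 1 - 0.5 * sum_{w ~ u} R_uw and k_uv = 2 (p_u + p_v) / R_uv [24]."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)                               # Green's function L^+
    diag = np.diag(Lp)
    R = diag[:, None] + diag[None, :] - 2 * Lp           # R_uv = b_e^T L^+ b_e (Eq. 12)
    p = 1 - 0.5 * (R * (A > 0)).sum(axis=1)              # sum over neighbours only
    kappa = {(u, v): 2 * (p[u] + p[v]) / R[u, v]
             for u, v in zip(*np.nonzero(np.triu(A, k=1)))}
    return p, kappa
```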
Finally, note that the range of edge curvatures is larger than that of resistance distances. The sparsifier transforms a uniform distribution of the edge weights into a less entropic one: in the example of Figure 5 we observe a power-law distribution of edge resistances. As a result, $\kappa_{uv}:=2(p_u+p_v)/T^{CT}_{uv}$ becomes very large on average (edges with infinite curvature are not shown in the plot), and a log scale is needed to appreciate the differences between edge resistances and edge curvatures.

A.2 Appendix B: GAP-L AYER

A.2.1 Spectral Gradients
The proposed GAP-L AYER relies on gradients wrt the Laplacian eigenvalues, and particularly the spectral gap (λ₂ for L and λ′₂ for 𝓛). Although the GAP-L AYER inductively rewires the adjacency matrix A so that λ₂ is minimized, the gradients derived in this section may also be applied to gap maximization.

Note that while our cost function $\mathcal{L}_{Fiedler}=\|\tilde A-A\|_F+\alpha(\lambda^*_2)^2$, with $\lambda^*_2\in\{\lambda_2,\lambda'_2\}$, relies on an eigenvalue, we do not compute it explicitly, as its computation has a complexity of O(n³) and would be needed at every learning iteration. Instead, we learn an approximation of λ₂'s eigenvector f₂ and use its Dirichlet energy E(f₂) to approximate the eigenvalue. In addition, since $g_2=D^{1/2}f_2$, we first approximate g₂ and then approximate λ′₂ from E(g₂).

Gradients of the Ratio-cut Approximation. Let A be the adjacency matrix of G = (V, E), and Ã a matrix similar to the original adjacency but with minimal λ₂. Then, the gradient of λ₂ wrt each component of Ã is given by
$$\nabla_{\tilde A}\lambda_2:=\mathrm{Tr}\big[(\nabla_{\tilde L}\lambda_2)^T\cdot\nabla_{\tilde A}\tilde L\big]=\mathrm{diag}(f_2f_2^T)\mathbf{1}\mathbf{1}^T-f_2f_2^T, \qquad (28)$$
where 1 is the vector of n ones, and $[\nabla_{\tilde A}\lambda_2]_{ij}$ is the gradient of λ₂ wrt $\tilde A_{ij}$. The above formula is an instance of the network derivative mining approach [46]. In this framework, λ₂ is seen as a function of Ã, and $\nabla_{\tilde A}\lambda_2$, the gradient of λ₂ wrt Ã, comes from the chain rule of the matrix derivative $\mathrm{Tr}\big[(\nabla_{\tilde L}\lambda_2)^T\cdot\nabla_{\tilde A}\tilde L\big]$. More precisely,
$$\nabla_{\tilde L}\lambda_2:=\frac{\partial\lambda_2}{\partial\tilde L}=f_2f_2^T \qquad (29)$$
is a matrix relying on an outer product (correlation). In the proposed GAP-L AYER, f₂ is approximated by
$$f_2(u)=\begin{cases}+1/\sqrt n & \text{if }u\text{ belongs to the first cluster}\\ -1/\sqrt n & \text{if }u\text{ belongs to the second cluster,}\end{cases} \qquad (30)$$
i.e., we discard the $O\!\left(\frac{\log n}{n}\right)$ term (the non-linearities conjectured in [17]) in order to simplify the analysis. After reordering the entries of f₂ for the sake of clarity, $f_2f_2^T$ is the following block matrix:
$$f_2f_2^T=\begin{pmatrix}1/n & -1/n\\ -1/n & 1/n\end{pmatrix}\quad\text{whose diagonal matrix is}\quad \mathrm{diag}(f_2f_2^T)=\begin{pmatrix}1/n & 0\\ 0 & 1/n\end{pmatrix}. \qquad (31)$$
Then, we have
$$\nabla_{\tilde A}\lambda_2=\begin{pmatrix}1/n & 1/n\\ 1/n & 1/n\end{pmatrix}-\begin{pmatrix}1/n & -1/n\\ -1/n & 1/n\end{pmatrix}=\begin{pmatrix}0 & 2/n\\ 2/n & 0\end{pmatrix}, \qquad (32)$$
which explains the results in Figure 1-left: edges linking nodes belonging to the same cluster remain unchanged, whereas inter-cluster edges have a gradient of 2/n. This provides a simple explanation for $T^{GAP}=\tilde A(S)\odot A$: the additional masking by the adjacency matrix ensures that we do not create new links. A code sketch of this gradient follows below.
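A minimal sketch of the ratio-cut gradient of Eq. 28, reproducing the two-block example of Eqs. 30–32:

```python
import numpy as np

def rcut_gap_gradient(f2):
    """Eq. 28: gradient of lambda_2 wrt A~, given the (approximate) Fiedler vector f2."""
    outer = np.outer(f2, f2)                             # f2 f2^T (Eq. 29)
    ones = np.ones((f2.size, f2.size))
    return np.diag(np.diag(outer)) @ ones - outer        # diag(f2 f2^T) 11^T - f2 f2^T

n = 4
f2 = np.array([1.0, 1.0, -1.0, -1.0]) / np.sqrt(n)      # Eq. 30: two clusters of size 2
grad = rcut_gap_gradient(f2)   # 0 within clusters, 2/n across clusters (Eq. 32)
```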
Gradients of the Normalized-cut Approximation. Similarly, using λ′₂ for graph rewiring leads to the following more complex expression:
$$\nabla_{\tilde A}\lambda'_2:=\mathrm{Tr}\big[(\nabla_{\tilde{\mathcal L}}\lambda'_2)^T\cdot\nabla_{\tilde A}\tilde{\mathcal L}\big]=d'\{g_2^T\tilde A^T\tilde D^{-1/2}g_2\}\mathbf{1}^T+d'\{g_2^T\tilde A\tilde D^{-1/2}g_2\}\mathbf{1}^T+\tilde D^{-1/2}g_2g_2^T\tilde D^{-1/2}. \qquad (33)$$
However, since $g_2=D^{1/2}f_2$ and $f_2=D^{-1/2}g_2$, the gradient may be simplified as follows:
$$\nabla_{\tilde A}\lambda'_2=d'\{f_2^T\tilde D^{1/2}\tilde A^Tf_2\}\mathbf{1}^T+d'\{f_2^T\tilde D^{1/2}\tilde Af_2\}\mathbf{1}^T+\tilde D^{-1/2}f_2f_2^T\tilde D^{-1/2}. \qquad (34)$$
In addition, considering symmetry for the undirected graph case, we obtain:
$$\nabla_{\tilde A}\lambda'_2=2d'\{f_2^T\tilde D^{1/2}\tilde Af_2\}\mathbf{1}^T+\tilde D^{-1/2}f_2f_2^T\tilde D^{-1/2}, \qquad (35)$$
where d′ is an n×1 negative vector including derivatives of the degree wrt the adjacency and related terms. The obtained gradient is composed of two terms.

The first term contains the matrix $\tilde D^{1/2}\tilde A$, which is the adjacency matrix weighted by the square root of the degree; $f_2^T\tilde D^{1/2}\tilde Af_2$ is a quadratic form (similar to a Dirichlet energy for the Laplacian) which approximates an eigenvalue of $\tilde D^{1/2}\tilde A$. We plan to further analyze the properties of this term in future work.

The second term, $\tilde D^{-1/2}f_2f_2^T\tilde D^{-1/2}$, down-weights the correlation term of the ratio-cut case, $f_2f_2^T$, by the degrees, as in the normalized Laplacian. This results in a normalization of the Fiedler vector: −1/n becomes $-\sqrt{d_ud_v}/n$ at the (u, v) entry, and similarly for 1/n; i.e., each entry contains the average degree assortativity.

A.2.2 Beyond the Lovász Bound: the von Luxburg et al. Bound
The Lovász bound was later refined by von Luxburg et al. [40] via a new, tighter bound which replaces $d_{\min}$ by $d^2_{\min}$ in Eq. 1. Given that λ′₂ ∈ (0, 2], as the number of nodes in the graph (n = |V|) and the average degree increase, $R_{uv}\approx 1/d_u+1/d_v$. This is likely to happen in certain types of graphs, such as Gaussian similarity graphs (graphs where two nodes are linked if the negative exponential of the distance between the respective node features is large enough), ε-graphs (graphs where the Euclidean distances between the features in the nodes are ≤ ε), and k-NN graphs with large k wrt n. The authors report a linear collapse of $R_{uv}$ with the density of the graph in scale-free networks, such as social network graphs, whereas a faster collapse of $R_{uv}$ has been reported in community graphs, i.e., graphs congruent with Stochastic Block Models (SBMs) [44].

Given the importance of the effective resistance $R_{uv}$ as a global measure of node similarity, the von Luxburg et al. refinement motivated the development of robust effective resistances, mostly in the form of p-resistances, given by $R^p_{uv}=\arg\min_f\{\sum_{e\in E}r_e|f_e|^p\}$, where f is a unit flow injected in u and recovered in v, and $r_e=1/w_e$ with $w_e$ being the edge's weight [50]. For p = 1, $R^p_{uv}$ corresponds to the shortest path; p = 2 results in the effective resistance; and p → ∞ leads to the inverse of the unweighted u-v-mincut (the link between CTs and mincuts is leveraged in the paper as an essential element of our approach). Note that the optimal p value depends on the type of graph [50], and p-resistances may be studied from the perspective of p-Laplacians [45, 51].

While $R_{uv}$ could be unbounded by minimizing the spectral gap λ′₂, this approach has received little attention in the literature on the mathematical characterization of graphs with small spectral gaps [52, 53]; i.e., instead of tackling the daunting problem of explicitly minimizing the gap, researchers in this field have preferred to find graphs with small spectral gaps.

A.3 Appendix C: Experiments
In this section, we provide details about the graphs contained in each of the datasets used in our experiments, a detailed clarification about architectures and experiments, and, finally, additional experimental results.

A.3.1 Dataset Statistics
Table 4 depicts the number of nodes, edges, average degree, assortativity, number of triangles, transitivity and clustering coefficients (mean and standard deviation) of all the graphs contained in each of the benchmark datasets used in our experiments. As seen in the table, the datasets are very diverse in their characteristics. In addition, we use two synthetic datasets with 2 classes: Erdös-Rényi with p1 ∈ [0.3, 0.5] and p2 ∈ [0.4, 0.8], and Stochastic Block Model (SBM) with parameters p1 = 0.8, p2 = 0.5, q1 ∈ [0.1, 0.15] and q2 ∈ [0.01, 0.1] (a generation sketch is given below).
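A generation sketch for the two synthetic datasets, assuming — as one interpretation of the parameters above — that the two ER classes differ in their edge-probability range and the two SBM classes differ in their inter-block probability range; graph sizes are arbitrary assumptions:

```python
import random
import networkx as nx

def sample_synthetic(n=40):
    """One graph per class for the Erdos-Renyi and SBM synthetic datasets."""
    er0 = nx.erdos_renyi_graph(n, random.uniform(0.3, 0.5))      # ER, class 0
    er1 = nx.erdos_renyi_graph(n, random.uniform(0.4, 0.8))      # ER, class 1
    q0, q1 = random.uniform(0.1, 0.15), random.uniform(0.01, 0.1)
    sbm0 = nx.stochastic_block_model([n // 2] * 2, [[0.8, q0], [q0, 0.5]])
    sbm1 = nx.stochastic_block_model([n // 2] * 2, [[0.8, q1], [q1, 0.5]])
    return [(er0, 0), (er1, 1)], [(sbm0, 0), (sbm1, 1)]
```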
Table 4: Dataset statistics. Parentheses in the Assortativity column denote the number of complete graphs (for which assortativity is undefined).

| Dataset | Nodes | Edges | Avg. degree | Triangles | Transitivity | Clustering | Assortativity |
| REDDIT-B | 429.6±554 | 497.7±622 | 2.33±0.3 | 24±41 | 0.01±0.02 | 0.04±0.06 | −0.364±0.17 (0) |
| IMDB-B | 19.7±10 | 96.5±105 | 8.88±5.0 | 391±868 | 0.77±0.15 | 0.94±0.03 | −0.135±0.16 (139) |
| COLLAB | 74.5±62 | 2457±6438 | 37.36±44 | 12×10⁴±48×10⁴ | 0.76±0.21 | 0.89±0.08 | −0.033±0.24 (680) |
| MUTAG | 2.2±0.1 | 19.8±5.6 | 2.18±0.1 | 0.00±0.0 | 0.00±0.00 | 0.00±0.00 | −0.279±0.17 (0) |
| PROTEINS | 39.1±45.8 | 72.8±84.6 | 3.73±0.4 | 27.4±30 | 0.48±0.20 | 0.51±0.23 | −0.065±0.2 (13) |

In addition, Figure 6 depicts the histograms of the assortativity for all the graphs in each of the eight datasets used in our experiments. As shown in Table 4, assortativity is undefined in complete graphs (constant degree: all degrees are the same). Assortativity is defined as the normalized degree correlation; if the graph is complete, then both the correlation and its variance are 0, so assortativity is 0/0.

Figure 6: Histograms of the assortativity of all the graphs in each of the datasets. Panels: (a) REDDIT, (b) IMDB-BINARY, (c) COLLAB, (d) MUTAG, (e) PROTEINS.

In addition, Figure 7 depicts the histograms of the average node degrees for all the graphs in each of the eight datasets used in our experiments. The datasets are also very diverse in terms of topology, corresponding to social networks, biochemical networks and meshes.

Figure 7: Histograms of the average degree of all the graphs in each of the datasets. Panels: (a) REDDIT, (b) IMDB-BINARY, (c) COLLAB, (d) MUTAG, (e) PROTEINS.

A.3.2 Graph Classification GNN Architectures
Figure 8 shows the specific GNN architectures used in the experiments described in Section 4 of the manuscript. Although the specific calculations of T^GAP and T^CT are given in Definitions 2 and 1, we also provide diagrams for better intuition (a code sketch of the baseline follows below).

Figure 8: Diagrams of the GNNs used in the experiments. Panels: (a) MINCUT baseline (Linear → Conv → MinCut → Conv → Readout → MLP); (b) GAP-L AYER model (the same pipeline, with a GAP-L AYER producing T^GAP before the first convolution); (c) CT-L AYER model (the same pipeline, with a CT-L AYER producing T^CT before the first convolution).
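A minimal PyTorch Geometric sketch of the Figure 8a baseline, using the library's dense MinCut pooling; hidden sizes and the mean readout are assumptions, not the repository's exact values:

```python
import torch
from torch_geometric.nn import DenseGraphConv, dense_mincut_pool

class MinCutBaseline(torch.nn.Module):
    """Linear -> Conv -> MinCut pool -> Conv -> Readout -> MLP (Figure 8a)."""
    def __init__(self, in_dim, hidden, n_clusters, n_classes):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hidden)
        self.conv1 = DenseGraphConv(hidden, hidden)
        self.assign = torch.nn.Linear(hidden, n_clusters)   # learns S
        self.conv2 = DenseGraphConv(hidden, hidden)
        self.mlp = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, adj):                # x: (B, N, F), adj: (B, N, N)
        x = self.conv1(self.lin(x).relu(), adj)
        x, adj, mincut_loss, ortho_loss = dense_mincut_pool(x, adj, self.assign(x))
        x = self.conv2(x, adj)
        return self.mlp(x.mean(dim=1)), mincut_loss + ortho_loss
```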
A.3.3 Training Parameters
The values of the hyperparameters used in the experiments are the defaults in the code repository (https://github.com/AdrianArnaiz/DiffWire). We report average accuracies and standard deviations over 10 random iterations, using a different 85/15 train-test stratified split each time (we do not perform hyperparameter search), training for 60 epochs and reporting the results of the last epoch for each random run. We use an Adam optimizer with a learning rate of 5e−4 and a weight decay of 1e−4. The batch sizes used in the experiments are shown in Table 5. Regarding the synthetic datasets, the parameters are: Erdös-Rényi with p1 ∈ [0.3, 0.5] and p2 ∈ [0.4, 0.8], and Stochastic Block Model (SBM) with p1 = 0.8, p2 = 0.5, q1 ∈ [0.1, 0.15] and q2 ∈ [0.01, 0.1].

Table 5: Batch size and number of graphs per dataset.
| Dataset | Batch | Size |
| REDDIT-BINARY | 64 | 1000 |
| IMDB-BINARY | 64 | 2000 |
| COLLAB | 64 | 5000 |
| MUTAG | 32 | 188 |
| PROTEINS | 64 | 1113 |
| SBM | 32 | 1000 |
| Erdös-Rényi | 32 | 1000 |

For the k-NN graph baseline, we choose k such that the mean degree of the original graph is maintained, i.e., k equal to the average degree. Our experiments also use two preprocessing methods, DIGL and SDRF. Unlike our proposed methods, both SDRF [20] and DIGL [25] use a set of hyperparameters optimized for each specific graph, because both are also not inductive. This approach could be manageable for the task of node classification, where there is only one graph. However, when it comes to graph classification, the number of graphs is huge (see Table 5) and it is not computationally feasible to optimize parameters for each specific graph. For DIGL, we use a fixed α = 0.001 and an ε based on keeping the same average degree for each graph, i.e., we use a different, dynamically chosen ε for each graph in each dataset which maintains the same number of edges as the original graph. In the case of SDRF, the parameters define how stochastic the edge addition is (τ), the graph edit distance upper bound (number of iterations) and an optional Ricci upper bound above which an edge will be removed at each iteration (C⁺). We set the parameters τ = 20 (the edge added is always near the edge of lowest curvature), C⁺ = 0 (to force the removal of one edge at every iteration), and a number of iterations dynamically set to 0.7·|V|. Thus, we maintain the same number of edges in the new graph (τ = 20 and C⁺ = 0), i.e., the same average degree, and we keep the graph edit distance to the original bounded by 0.7·|V|.

A.3.4 Latent Space Analysis
In this section, we analyze the two latent spaces produced by the models.
• First, we compare the CT embedding computed spectrally (Z in Eq. 2) with the CT embedding predicted by our CT-L AYER (Z in Definition 1) for a given graph, where each point is a node in the graph.
• Second, we compare the graph readout output for every model defined in the experiments (Figure 4), where each point is a graph in the dataset.

Spectral CT Embedding vs CT Embeddings Learned by CT-L AYER. The well-known embeddings based on the Laplacian positional encodings (PE) are typically computed beforehand and appended to the input vector X as additional features [35, 36]. This task requires an expensive computation, O(n³) (see Eq. 2). Conversely, we propose a GNN layer that learns how to predict the CT embeddings (CTEs) for unseen graphs (Definition 1 and Figure 2) with a loss function that optimizes such CTEs. Note that we do not explicitly use the CTE features (PE) for the nodes; instead, we use the CTs as a new diffusion matrix for message passing (given by T^CT in Definition 1). Note that we could also use Z as positional encodings in the node features, such that CT-L AYER may be seen as a novel approach to learn positional encodings.

In this section, we perform a comparative analysis between the spectral commute-times embeddings (spectral CTEs, Z in Eq. 2) and the CTEs that are predicted by our CT-L AYER (Z in Definition 1). As seen in Figure 9 (top), both embeddings respect the original topology of the graph, but they differ due to (1) orthogonality restrictions and, more interestingly, (2) the simplification of the original spectral loss function in Alev et al. [43]: the spectral CTEs minimize the trace of a quotient, which involves computing an inverse, whereas the CTEs learned in CT-L AYER minimize the quotient of two traces, which is computationally simpler (see the L_CT loss in Definition 1). Two important properties of the first term in Definition 1 are: (1) the learned embedding Z has minimal Dirichlet energy (numerator), and (2) large-degree nodes will be separated (denominator). A sketch of the spectral computation used as the reference in this comparison is given below.
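A NumPy sketch of the spectral CTEs of Eq. 2; the full eigendecomposition makes it O(n³), which is precisely the cost that CT-L AYER avoids at inference time:

```python
import numpy as np

def spectral_cte(A):
    """Z = sqrt(vol(G)) * Lambda^{-1/2} F^T (Eq. 2), unnormalized Laplacian.

    Columns of Z are the node embeddings z_u; CT_uv = ||z_u - z_v||^2."""
    d = A.sum(axis=1)
    L = np.diag(d) - A
    lam, F = np.linalg.eigh(L)
    lam, F = lam[1:], F[:, 1:]          # drop lambda_1 = 0 (constant eigenvector)
    return np.sqrt(d.sum()) * (F / np.sqrt(lam)).T
```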
Figure 9 (top) illustrates how the CTEs that are learned in CT-L AYER are able to better preserve the original topology of the graph (note how the nodes are more compactly embedded when compared to the spectral CTEs).

Figure 9 (bottom) depicts a histogram of the effective resistances or commute times (CTs) (see Section 3.2 in the paper) of the edges according to CT-L AYER or the spectral CTEs. The histogram is computed from the upper triangle of the T^CT matrix defined in Definition 1. Note that the larger the effective resistance of an edge, the more important that edge will be considered (and hence the lower the probability of it being removed [54]). We observe how in the histogram of CTEs that are learned in CT-L AYER there is a "small club" of edges with very large values and a large number of edges with low values, yielding a power-law-like profile. However, the histogram of the effective resistances computed by the spectral CTEs exhibits a profile similar to a Gaussian distribution. From this result, we conclude that the use of L_CT in the learning process of CT-L AYER shifts the distribution of the effective resistances of the edges towards an asymmetric distribution where few edges have very large weights and a majority of edges have low weights. (A sketch of how these histograms are obtained from Z follows below.)

Figure 9: Top: CT embeddings predicted by CT-L AYER (left) and spectral CT embeddings (right). Bottom: histograms of normalized effective resistances (i.e., CT distances, the upper triangle of T^CT) computed from the above CT embeddings. Middle: original graph from the COLLAB dataset. Colors correspond to node degree. CT-L AYER CTEs are reduced from 75 to 32 dimensions using Johnson-Lindenstrauss; finally, both CTEs are reduced from 32 to 2 dimensions using t-SNE.
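The resistance diffusion and the histograms above can be reproduced from any embedding Z with a few lines of PyTorch; a minimal sketch following Definition 1:

```python
import torch

def resistance_diffusion(Z, A):
    """T_CT = R(Z) ⊙ A: pairwise Euclidean distances of Z over vol(G), masked by A."""
    R = torch.cdist(Z, Z) / A.sum()      # R(Z) from Definition 1
    return R * A                         # Hadamard product with the adjacency matrix

# Histogram values as in Figure 9 (bottom): upper triangle over existing edges.
# T_ct = resistance_diffusion(Z, A)
# vals = T_ct[torch.triu(A, diagonal=1) > 0]
```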
Graph Readout Latent Space Analysis. To delve into the analysis of the latent spaces produced by our layers and model, we also inspect the latent space produced by the models (Figure 4) that use MINCUTPool (Figure 8a), GAP-L AYER (Figure 8b) and CT-L AYER (Figure 8c). Each point is a graph in the dataset, corresponding to the graph embedding of the readout layer. We plot the output of the readout layer for each model, and then perform dimensionality reduction with t-SNE.

Observing the latent space of the REDDIT-BINARY dataset (Figure 10), CT-L AYER creates a disperse yet structured latent space for the embeddings of the graphs. This topology in the latent space shows that this method is able to capture different topological details. The main reason is the expressiveness of the commute times as a distance metric when performing rewiring, which has been shown to be an optimal metric to measure node structural similarity. In addition, GAP-L AYER creates a latent space where, although the two classes are also separable, the embeddings are more compressed, due to a more aggressive — yet still informative — change in topology. This change in topology is due to the change in bottleneck size that GAP-L AYER applies to the graph. Finally, MINCUT creates a more squeezed and compressed embedding, where both classes lie in the same space and most of the graphs have collapsed representations, due to the limited expressiveness of this architecture.

Figure 10: REDDIT embeddings produced by (a) CT-L AYER, (b) MinCut and (c) GAP-L AYER (Ncut).

A.3.5 Architectures and Details of Node Classification Experiments
The application of our framework to a node classification task entails several considerations. First, this first implementation of our method works with dense A and X matrices, whereas node classification typically uses sparse representations of the edges. Thus, the implementation of our proposed layers is not straightforward for sparse graph representations. We plan to work on a sparse version of this method in future work.

Note that we have chosen benchmark datasets that are manageable with our dense implementation. In addition, we have chosen a basic baseline with one GCN layer to show the ability of the approaches to avoid under-reaching, over-smoothing and over-squashing.

The baseline GCN is a 1-layer GCN, and the two compared models are:
• One CT-L AYER for calculating Z, followed by one GCN layer using A for message passing and X∥Z as features. This approach is a combination of Velingker et al. [35] and our method. See Figure 11c.
• One CT-L AYER for calculating T^CT, followed by one GCN layer using T^CT for message passing and X as features. See Figure 11b.

Figure 11: Diagrams of the GNNs used in the experiments for node classification: (a) GCN baseline; (b) A = T^CT; (c) X∥Z.

A promising direction of future work would be to explore how to combine both approaches to leverage the best of each of the methods on a wide range of graphs for node classification tasks. In addition, using this learnable CT distance for modulating message passing in more sophisticated ways is planned for future work. A minimal sketch of the X∥Z variant is shown below.
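In this sketch, `CTLayer` is a placeholder for the layer of Definition 1 and is assumed to return the embedding Z (and the diffusion T^CT, unused here); it is an illustration, not the repository's code:

```python
import torch
from torch_geometric.nn import GCNConv

class CTFeatureGCN(torch.nn.Module):
    """X ∥ Z node classification (Figure 11c): concat learned CT embeddings to X."""
    def __init__(self, ct_layer, in_dim, z_dim, n_classes):
        super().__init__()
        self.ct_layer = ct_layer                  # hypothetical: (X, A) -> (Z, T_ct)
        self.conv = GCNConv(in_dim + z_dim, n_classes)

    def forward(self, x, edge_index, adj):
        z, _ = self.ct_layer(x, adj)              # learned CT embedding
        return self.conv(torch.cat([x, z], dim=-1), edge_index)  # MP over original A
```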
A.3.6 Analysis of Correlation between Structural Properties and CT-L AYER Performance
To analyze the performance of our model in graphs with different structural properties, we analyze the correlation between accuracy, the graph's assortativity, and the graph's bottleneck (λ₂) in the COLLAB and REDDIT datasets. If the error is consistent across all levels of assortativity and bottleneck size, the layer can generalize across different graph topologies.

As seen in Figure 14, Figure 12 (middle), and Figure 13 (middle), we do not identify any correlation or systematic pattern between graph classification accuracy, assortativity, and bottleneck with CT-L AYER-based rewiring, since the proportions of wrong and correct predictions are regular for all levels of assortativity and bottleneck size.

In addition, note that while there is a systematic error of the model over-predicting class 0 in the COLLAB dataset (see Figure 12), this behavior is not explained by assortativity or bottleneck size, but by the unbalanced number of graphs in each class.

Figure 12: Analysis of assortativity, bottleneck and accuracy for the COLLAB dataset. Top: histograms of assortativity. Bottom: histograms of bottleneck size (λ₂). Both are grouped by the actual label of the graph (left), by correct or wrong predictions (middle) and by predicted label (right). The COLLAB confusion matrix (label × predicted) is: (0.74, 0.11, 0.15), (0.23, 0.77, 0), (0.29, 0.046, 0.67).

Figure 13: Analysis of assortativity, bottleneck and accuracy for the REDDIT-B dataset, with the same grouping as Figure 12. The REDDIT-BINARY confusion matrix (label × predicted) is: (0.91, 0.095), (0.2, 0.81).

Figure 14: Correlation between assortativity, λ₂ and accuracy for CT-L AYER. The histograms show that the proportions of correct and wrong predictions are regular for all levels of assortativity (x axis) and bottleneck size (y axis). For the sake of clarity, these visualizations, (a) COLLAB and (b) REDDIT-B, combine the two histograms in the middle column of Figure 12 and Figure 13, respectively.

A.3.7 Computing Infrastructure
Table 6 summarizes the computing infrastructure used in our experiments.

Table 6: Computing infrastructure.
| Component | Details |
| GPU | 2x A100-SXM4-40GB |
| RAM | 1 TiB |
| CPU | 255x AMD 7742 64-Core @ 2.25 GHz |
| OS | Ubuntu 20.04.4 LTS |
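For completeness, the per-graph statistics underlying the analysis in A.3.6 can be computed with NetworkX; a minimal sketch:

```python
import networkx as nx
import numpy as np

def structural_stats(G):
    """Degree assortativity and the bottleneck proxy lambda'_2 (normalized Laplacian)."""
    try:
        assort = nx.degree_assortativity_coefficient(G)
    except Exception:                                 # undefined on complete graphs (0/0)
        assort = float("nan")
    gap = np.sort(nx.normalized_laplacian_spectrum(G))[1]   # lambda'_2
    return assort, gap
```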
R3wNU5H7MX
This paper proposes DiffWire, a fully differentiable, inductive, and parameter-free graph rewiring algorithm based on the Lovász bound. DiffWire uses the commute times as a relevance function for edge re-weighting, and proposes two new types of layers: one that learns the commute times (CT-Layer) and one that optimizes the spectral gap for the network and the task (GAP-Layer). Experimental results show the proposed approach performs well on graph classification tasks.

Strengths:
- The paper is well-written with rigorous theoretical grounding. The connection of CT-Layer and GAP-Layer to the two sides of the Lovász bound and the graph's spectral gap is interesting.
- Experimental results demonstrate the good performance of DiffWire on various graph classification datasets.
- The performance differences between CT-Layer and GAP-Layer on SBM verify the assumption that GAP-Layer is more suitable when the Lovász bound is restrictive.

Weaknesses:
- Only two datasets have node features. As stated in the paper, DiffWire performs well on graphs with no node features, as it can leverage the topology of the graphs. However, I am not sure whether DiffWire can also utilize the node features well when they are informative, and I ask for related justifications or additional experiments to be provided.

Questions:
- Why are experiments only conducted on graph classification tasks? Can the proposed framework also work for node classification tasks (like [1] mentioned in the paper)? It would be great if the authors could discuss this or provide experiments on node classification datasets.
- I am interested in whether graph homophily affects the effectiveness of the proposed method. A.3.1 claims to provide the number of nodes, edges, average degree, assortativity, number of triangles, transitivity and clustering coefficients in table 3, but I cannot find the assortativity (assuming this refers to homophily). It would be desirable to include related data and a discussion on graph homophily.
- I am unclear about how the statement "the smaller the graph's bottleneck, the more useful the CT-Layer is" explains the better performance of CT-Layer on COLLAB. Can this be further elaborated?

Overall this paper introduces a differentiable framework for graph rewiring with good theoretical and empirical support. Despite some remaining confusions, I tend to accept this paper.

[1] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In International Conference on Learning Representations, 2022.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title DiffWire: Inductive Graph Rewiring via the Lovász Bound ### Paper Abstract Graph Neural Networks (GNNs) have been shown to achieve competitive results to tackle graph-related tasks, such as node and graph classification, link prediction and node and graph clustering in a variety of domains. Most GNNs use a message passing framework and hence are called MPNNs. Despite their promising results, MPNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching. Graph rewiring and graph pooling have been proposed in the literature as solutions to address these limitations. However, most state-of-the-art graph rewiring methods fail to preserve the global topology of the graph, are neither differentiable nor inductive, and require the tuning of hyper-parameters. In this paper, we propose DiffWire, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free by leveraging the Lovász bound. The proposed approach provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: CT-Layer, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and GAP-Layer, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand. We empirically validate the value of each of these layers separately with benchmark datasets for graph classification. We also perform preliminary studies on the use of CT-Layer for homophilic and heterophilic node classification tasks. DiffWire brings together the learnability of commute times to related definitions of curvature, opening the door to creating more expressive MPNNs. ### Paper Keywords ["GNN", "graph neural networks", "Geometric deep learning", "MPNNs", "graph rewiring", "over-smoothing", "over-squashing", "Lov\u00e1sz bound", "spectral gap", "graph diffusion", "commute times"] ### Paper Content DiffWire: Inductive Graph Rewiring via the Lovász BoundAdrian Arnaiz-RodriguezELLIS Alicanteadrian@ellisalicante.orgAhmed BeggaUniversity of AlicanteFrancisco EscolanoELLIS Alicantesco@ellisalicante.orgNuria OliverELLIS Alicantenuria@ellisalicante.orgAbstractGraph Neural Networks (GNNs) have been shown to achieve competitive resultsto tackle graph-related tasks, such as node and graph classification, link predictionand node and graph clustering in a variety of domains. Most GNNs use a messagepassing framework and hence are called MPNNs. Despite their promising results,MPNNs have been reported to suffer from over-smoothing, over-squashing andunder-reaching. Graph rewiring and graph pooling have been proposed in theliterature as solutions to address these limitations. However, most state-of-the-artgraph rewiring methods fail to preserve the global topology of the graph, areneither differentiable nor inductive, and require the tuning of hyper-parameters.In this paper, we propose DIFFWIRE, a novel framework for graph rewiring inMPNNs that is principled, fully differentiable and parameter-free by leveragingthe Lovász bound. The proposed approach provides a unified theory for graphrewiring by proposing two new, complementary layers in MPNNs: CT-L AYER , alayer that learns the commute times and uses them as a relevance function for edgere-weighting; and GAP-L AYER , a layer to optimize the spectral gap, depending onthe nature of the network and the task at hand. 
We empirically validate the value ofeach of these layers separately with benchmark datasets for graph classification.We also perform preliminary studies on the use of CT-L AYER for homophilic andheterophilic node classification tasks. DIFFWIREbrings together the learnabilityof commute times to related definitions of curvature, opening the door to creatingmore expressive MPNNs.1 IntroductionGraph Neural Networks (GNNs) [ 1,2] are a class of deep learning models applied to graph structureddata. They have been shown to achieve state-of-the-art results in many graph-related tasks, such asnode and graph classification [ 3,4], link prediction [ 5] and node and graph clustering [ 6,7], and in avariety of domains, including image or molecular structure classification, recommender systems andsocial influence prediction [8].Most GNNs use a message passing framework and thus are referred to as Message Passing NeuralNetworks (MPNNs) [ 4] . In these networks, every node in each layer receives a message from itsadjacent neighbors. All the incoming messages at each node are then aggregated and used to updatethe node’s representation via a learnable non-linear function –which is typically implemented bymeans of a neural network. The final node representations (called node embeddings) are used toperform the graph-related task at hand (e.g. graph classification). MPNNs are extensible, simple andhave proven to yield competitive empirical results. Examples of MPNNs include GCN [ 3], GAT [ 9],GATv2 [ 10], GIN [ 11] and GraphSAGE [ 12]. However, they typically use transductive learning, i.e.the model observes both the training and testing data during the training phase, which might limittheir applicability to graph classification tasks.A. Arnaiz-Rodriguez et al., DiffWire: Inductive Graph Rewiring via the Lovász Bound. Proceedings of the FirstLearning on Graphs Conference (LoG 2022) , PMLR 198, Virtual Event, December 9–12, 2022.DiffWire: Inductive Graph Rewiring via the Lovász BoundHowever, MPNNs also have important limitations due to the inherent complexity of graphs. Despitesuch complexity, the literature has reported best results when MPNNs have a small number of layers,because networks with many layers tend to suffer from over-smoothing [13] and over-squashing [14].However, this models fail to capture information that depends on the entire structure of the graph [ 15]and prevent the information flow to reach distant nodes. This phenomenon is called under-reaching[16] and occurs when the MPNN’s depth is smaller than the graph’s diameter.Over-smoothing [8,17–19] takes place when the embeddings of nodes that belong to different classesbecome indistinguishable. It tends to occur in MPNNs with many layers that are used to tackle short-range tasks, i.e. tasks where a node’s correct prediction mostly depends on its local neighborhood.Given this local dependency, it makes intuitive sense that adding layers to the network would nothelp the network’s performance.Conversely, long-range tasks require as many layers in the network as the range of the interactionbetween the nodes. However, as the number of layers in the network increases, the number ofnodes feeding into each of the node’s receptive field also increases exponentially, leading to over-squashing [14,20]: the information flowing from the receptive field composed of many nodes iscompressed in fixed-length node vectors, and hence the graph fails to correctly propagate the messagescoming from distant nodes. 
Thus, over-squashing emerges due to the distortion of information flowingfrom distant nodes due to graph bottlenecks that emerge when the number of k-hop neighbors growsexponentially with k.Graph pooling and graph rewiring have been proposed in the literature as solutions to address theselimitations [ 14]. Given that the main infrastructure for message passing in MPNNs are the edgesin the graph, and given that many of these edges might be noisy or inadequate for the downstreamtask [21], graph rewiring aims to identify such edges and edit them.Many graph rewiring methods rely on edge sampling strategies: first, the edges are assigned newweights according to a relevance function and then they are re-sampled according to the new weightsto retain the most relevant edges (i.e. those with larger weights). Edge relevance might be computedin different ways, including randomly [22], based on similarity [23] or on the edge’s curvature [20].Due to the diversity of possible graphs and tasks to be performed with those graphs, optimal graphrewiring should include a variety of strategies that are suited not only to the task at hand but also tothe nature and structure of the graph.Motivation. State-of-the-art edge sampling strategies have three significant limitations . First,most of the proposed methods fail to preserve the global topology of the graph . Second, mostgraph rewiring methods are neither differentiable norinductive [20]. Third, relevance functions thatdepend on a diffusion measure (typically in the spectral domain) are not parameter-free , which addsa layer of complexity in the models. In this paper, we address these three limitations.Contributions and Outline. The main contribution of this work is to propose a theoretical frame-work called DIFFWIREfor graph rewiring in GNNs that is principled, differentiable, inductive, andparameter-free by leveraging the Lovász bound [ 15] given by Eq. 1. This bound is a mathematicalexpression of the relationship between the commute times (effective resistance distance ) and thenetwork’s spectral gap . Given an unseen test graph, DIFFWIREpredicts the optimal graph structurefor the task at hand without any parameter tuning. Given the recently reported connection betweencommute times and curvature [ 24], and between curvature and the spectral gap [ 20], the proposedframework provides a unified theory linking these concepts. Our aim is to leverage diffusion andcurvature theories to propose a new approach for graph rewiring that preserves the graph’s structure.We first propose using the CT as a relevance function for edge re-weighting. Moreover, we develop adifferentiable, parameter-free layer in the GNN ( CT-L AYER ) to learn the CT. Second, we propose analternative graph rewiring approach by adding a layer in the network ( GAP-L AYER ) that optimizesthe spectral gap according to the nature of the network and the task at hand. Finally, we empiricallyvalidate the proposed layers with state-of-the-art benchmark datasets in a graph classification task.We test our approach on a graph classification task to emphasize the inductive nature of DIFFWIRE:the layers in the GNN ( CT-L AYER orGAP-L AYER ) are trained to predict the CTs embedding orminimize the spectral gap for unseen graphs, respectively. 
This approach gives a great advantagewhen compared to SoTA methods that require optimizing the parameters of the models for each graph.CT-L AYER andGAP-L AYER learn the weights during training to predict the optimal changes in the2DiffWire: Inductive Graph Rewiring via the Lovász Boundtopology of any unseen graph in test time. Finally, we also perform preliminary node classificationexperiments in heterophilic and homophilic graphs using CT-L AYER .The paper is organized as follows: Section 2 provides a summary of the most relevant related literature.Our core technical contribution is described in Section 3, followed by our experimental evaluationand discussion in Section 4. Finally, Section 5 is devoted to conclusions and an outline of our futurelines of research.2 Related WorkIn this section we provide an overview of the most relevant works that have been proposed in theliterature to tackle the challenges of over-smoothing, over-squashing and under-reaching in MPNNsby means of graph rewiring and pooling.Graph Rewiring in MPNNs. Rewiring is a process of changing the graph’s structure to control theinformation flow and hence improve the ability of the network to perform the task at hand (e.g. nodeor graph classification, link prediction...). Several approaches have been proposed in the literature forgraph rewiring, such as connectivity diffusion [ 25] or evolution [ 20], adding new bridge-nodes [ 26]and multi-hop filters [27], and neighborhood [12], node [28] and edge [22] sampling.Edge sampling methods sample the graph’s edges based on their weights or relevance, whichmight be computed in different ways. Rong et al. [22] show that randomly dropping edges duringtraining improves the performance of GNNs. Klicpera et al. [25], define edge relevance accordingto the coefficients of a parameterized diffusion process over the graph and then edges are selectedusing a threshold rule. For Kazi et al. [23], edge relevance is given by the similarity between thenodes’ attributes. In addition, a reinforcement learning process rewards edges leading to a correctclassification and penalizes the rest.Edge sampling-based rewiring has been proposed to tackle over-smoothing and over-squashing inMPNNs. Over-smoothing may be relieved by removing inter-class edges [ 29]. However, this strategyis only valid when the graph is homophilic, i.e. connected nodes tend to share similar attributes.Otherwise, removing these edges could lead to over-squashing [ 20] if their removal obstructs themessage passing between distant nodes belonging to the same class (heterophily). Increasing thesize of the bottlenecks of the graph via rewiring has been shown to improve node classificationperformance in heterophilic graphs, but not in homophilic graphs [ 20]. Recently, Topping et al. [20]propose an edge relevance function given by the edge curvature to mitigate over-squashing. Theyidentify the bottleneck of the graph by computing the Ricci curvature of the edges. Next, they removeedges with high curvature and add edges around minimal curvature edges.Graph Structure Learning (GSL). GSL methods [ 30] aim to learn an optimized graph structure andits corresponding representations at the same time .DIFFWIREcould be seen from the perspective ofGSL: CT-L AYER , as a metric-based, neural approach, and GAP-L AYER , as a direct-neural approachto optimize the structure of the graph to the task at hand.Graph Pooling. 
Pooling layers simplify the original graph by compressing it into a smaller graph or a vector via pooling operators, which range from simple [31] to more sophisticated approaches, such as DiffPool [32] and MinCut pool [33]. Although graph pooling methods do not consider the edge representations, there is a clear relationship between pooling methods and rewiring, since both of them try to quantify the flow of information through the graph's bottleneck.

Positional Encodings (PEs). A Positional Encoding is a feature that describes the global or local position of the nodes in the graph. These features are related to random walk measures and the Laplacian's eigenvectors [34]. Commute Times embeddings (CTEs) may be considered an expressive form of PEs due to their spectral properties, i.e., their relation with the shortest path, the spectral gap or the Cheeger constant. Velingker et al. [35] recently proposed using the CTEs as PEs or the commute times (CT) as edge features. They pre-compute the CTEs and CTs and add them as node or edge features to improve the structural expressiveness of the GNN. PEs are typically pre-computed and then used to build more expressive graph architectures, either by concatenating them to the node features or by building transformer models [36, 37]. Our work is related to PEs in that CT-L AYER learns the original PEs from the input X and the adjacency matrix A instead of pre-computing and potentially modifying them, as previous works do [35–38]. Thus, CT-L AYER may be seen as a method to automatically learn the PEs for graph rewiring.

Figure 1: DIFFWIRE. Left: original graph from COLLAB (test set). Center: rewired graph after CT-L AYER. Right: rewired graph after GAP-L AYER. Colors indicate the strength of the edges.

3 Proposed Approach: DIFFWIRE for Inductive Graph Rewiring
DIFFWIRE provides a unified theory for graph rewiring by proposing two new, complementary layers in MPNNs: first, CT-L AYER, a layer that learns the commute times and uses them as a relevance function for edge re-weighting; and second, GAP-L AYER, a layer to optimize the spectral gap, depending on the nature of the network and the task at hand.

In this section, we present the theoretical foundations for the definitions of CT-L AYER and GAP-L AYER. First, we introduce the bound that our approach is based on: the Lovász bound. Table 3 in A.1 summarizes the notation used in the paper.

3.1 The Lovász Bound
The Lovász bound, given by Eq. 1, was derived by Lovász in [15] as a means of linking the spectrum governing a random walk in an undirected graph G = (V, E) with the hitting time H_uv between any two nodes u and v of the graph. H_uv is the expected number of steps needed to reach (or hit) v from u; H_vu is defined analogously. The sum of both hitting times between the two nodes, v and u, is the commute time CT_uv = H_uv + H_vu. Thus, CT_uv is the expected number of steps needed to hit v from u and go back to u. According to the Lovász bound:
$$\left|\frac{1}{\mathrm{vol}(G)}CT_{uv}-\left(\frac{1}{d_u}+\frac{1}{d_v}\right)\right|\le\frac{1}{\lambda'_2}\frac{2}{d_{\min}} \qquad (1)$$
where λ′₂ ≥ 0 is the spectral gap, i.e., the first non-zero eigenvalue of 𝓛 = I − D^{−1/2}AD^{−1/2} (normalized Laplacian [39], where D is the degree matrix and A the adjacency matrix); vol(G) is the volume of the graph (sum of degrees); d_u and d_v are the degrees of nodes u and v, respectively; and d_min is the minimum degree of the graph. The term CT_uv/vol(G) in Eq. 1 is referred to as the effective resistance, R_uv, between nodes u and v (the bound is verified numerically in the sketch below).
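Eq. 1 can be checked numerically on a toy graph (not from the paper); a sketch using NetworkX, where `resistance_distance` returns R_uv = CT_uv / vol(G):

```python
import networkx as nx
import numpy as np

G = nx.karate_club_graph()                                # arbitrary small test graph
gap = np.sort(nx.normalized_laplacian_spectrum(G))[1]     # spectral gap lambda'_2
deg = dict(G.degree())
rhs = (1.0 / gap) * (2.0 / min(deg.values()))             # right-hand side of Eq. 1
for u, v in [(0, 33), (5, 28)]:                           # arbitrary node pairs
    lhs = abs(nx.resistance_distance(G, u, v) - (1 / deg[u] + 1 / deg[v]))
    print(u, v, lhs <= rhs)                               # True: Eq. 1 holds
```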
The bound states that the effective resistance between two nodes in the graph converges to or diverges from (1/d_u + 1/d_v), depending on whether the graph's spectral gap diverges from or tends to zero. The larger the spectral gap, the closer CT_uv/vol(G) will be to 1/d_u + 1/d_v, and hence the less informative the commute times will be.

We propose two novel GNN layers based on each side of the inequality in Eq. 1: CT-L AYER, which focuses on the left-hand side, and GAP-L AYER, on the right-hand side. The use of each layer depends on the nature of the network and the task at hand. In a graph classification task (our focus), CT-L AYER is expected to yield good results when the graph's spectral gap is small; conversely, GAP-L AYER would be the layer of choice in graphs with a large spectral gap.

The Lovász bound was later refined by von Luxburg et al. [40]. App. A.2.2 presents this bound along with its relationship with R_uv as a global measure of node similarity. Once we have defined both sides of the Lovász bound, we proceed to describe their implications for graph rewiring.

3.2 CT-L AYER: Commute Times for Graph Rewiring
We focus first on the left-hand side of the Lovász bound, which concerns the effective resistances CT_uv/vol(G) = R_uv (or commute times; we use commute times and effective resistances interchangeably, as per their use in the literature) between any two nodes in the graph.

Spectral Sparsification leads to Commute Times. Graph sparsification in undirected graphs may be formulated as finding a graph H = (V, E′) that is spectrally similar to the original graph G = (V, E), with E′ ⊂ E. Thus, the spectra of their Laplacians, L_G and L_H, should be similar.

Theorem 1 (Spielman and Srivastava [41]). Let Sparsify(G, q) → G′ be a sampling algorithm of graph G = (V, E), where edges e ∈ E are sampled with probability q ∝ R_e (proportional to the effective resistance). For n = |V| sufficiently large and 1/√n < ε ≤ 1, O(n log n/ε²) samples are needed to satisfy $\forall x\in\mathbb{R}^n:\ (1-\varepsilon)\,x^TL_Gx\le x^TL_{G'}x\le(1+\varepsilon)\,x^TL_Gx$, with probability ≥ 1/2.

The above theorem has a simple explanation in terms of Dirichlet energies, E(x). The Laplacian L = D − A ≽ 0, i.e., it is positive semi-definite (all its eigenvalues are non-negative). Then, if we consider x: V → ℝ as a real-valued function of the n nodes of G = (V, E), we have that $\mathcal{E}(x):=x^TL_Gx=\sum_{e=(u,v)\in E}(x_u-x_v)^2\ge0$ for any x. In particular, the eigenvectors $f:=\{f_i: Lf_i=\lambda_if_i\}$ are the set of special functions that minimize the energies E(f_i), i.e., they are the mutually orthogonal and normalized functions with the minimal variabilities achievable by the topology of G. Therefore, any minimal variability of G′ is bounded by (1 ± ε) times that of G if we sample enough edges with probability q ∝ R_e. In addition, $\lambda_i=\frac{\mathcal{E}(f_i)}{f_i^Tf_i}$.

This first result implies that edge sampling based on commute times is a principled way to rewire a graph while preserving its original structure, and it is bounded by the Dirichlet energies. Next, we present what a commute-times embedding is and how it can be spectrally computed.

Commute Times Embedding (CTE). The choice of effective resistances in Theorem 1 is explained by the fact that R_uv can be computed from $R_{uv}=(e_u-e_v)^TL^+(e_u-e_v)$, where e_u is the unit vector with a unit value at u and zero elsewhere. $L^+=\sum_{i\ge2}\lambda_i^{-1}f_if_i^T$, where $f_i,\lambda_i$ are the eigenvectors and eigenvalues of L, is the pseudo-inverse or Green's function of G = (V, E) if it is connected. The Green's function leads to envisioning R_uv (and therefore CT_uv) as metrics relating pairs of nodes of G. As a result, the CTE will preserve the commute-times distance in a Euclidean space.
Note that this latent space of the nodes can not only be described spectrally but also in a parameter-free manner, which is not the case for other spectral embeddings, such as heat kernels or diffusion maps, as they rely on a time parameter t. More precisely, the embedding matrix Z, whose columns contain the nodes' commute-times embeddings, is spectrally given by:
$$Z:=\sqrt{\mathrm{vol}(G)}\,\Lambda^{-1/2}F^T=\sqrt{\mathrm{vol}(G)}\,\Lambda'^{-1/2}G^TD^{-1/2} \qquad (2)$$
where Λ is the diagonal matrix of the unnormalized Laplacian L eigenvalues and F is the matrix of their associated eigenvectors. Similarly, Λ′ contains the eigenvalues of the normalized Laplacian 𝓛 and G its eigenvectors. We have $F=GD^{-1/2}$, or $f_i=g_iD^{-1/2}$, where D is the degree matrix.

Finally, the commute times are given by the Euclidean distances between the embeddings, $CT_{uv}=\|z_u-z_v\|^2$. The spectral calculation of commute-times distances is given by:
$$R_{uv}=\frac{CT_{uv}}{\mathrm{vol}(G)}=\frac{\|z_u-z_v\|^2}{\mathrm{vol}(G)}=\sum_{i=2}^{n}\frac{1}{\lambda_i}\big(f_i(u)-f_i(v)\big)^2=\sum_{i=2}^{n}\frac{1}{\lambda'_i}\left(\frac{g_i(u)}{\sqrt{d_u}}-\frac{g_i(v)}{\sqrt{d_v}}\right)^2 \qquad (3)$$

Commute Times as an Optimization Problem. In this section, we demonstrate how the CTs may be computed as an optimization problem by means of a differentiable layer in a GNN. Constraining neighboring nodes to have a similar embedding leads to
$$Z=\arg\min_{Z^TZ=I}\frac{\sum_{u,v}\|z_u-z_v\|^2A_{uv}}{\sum_{u,v}Z_{uv}^2d_u}=\frac{\sum_{(u,v)\in E}\|z_u-z_v\|^2}{\sum_{u,v}Z_{uv}^2d_u}=\frac{\mathrm{Tr}[Z^TLZ]}{\mathrm{Tr}[Z^TDZ]}, \qquad (4)$$
which reveals that CT embeddings result from a Laplacian regularization down-weighted by the degree. As a result, frontier nodes or hubs — i.e., nodes with inter-community edges — which tend to have larger degrees than those lying inside their respective communities, will be embedded far away from their neighbors, increasing the distance between communities. Note that the above quotient-of-traces formulation is easily differentiable and different from $\mathrm{Tr}[Z^TLZ(Z^TDZ)^{-1}]$ proposed in [42].

With the above elements we define CT-L AYER, the first rewiring layer proposed in this paper. See Figure 2 for a graphical representation of the layer.

Definition 1 (CT-Layer). Given the matrix $X_{n\times F}$ encoding the features of the nodes after any message passing (MP) layer, $Z_{n\times O(n)}=\tanh(\mathrm{MLP}(X))$ learns the association X → Z while Z is optimized according to the loss
$$\mathcal{L}_{CT}=\frac{\mathrm{Tr}[Z^TLZ]}{\mathrm{Tr}[Z^TDZ]}+\left\|\frac{Z^TZ}{\|Z^TZ\|_F}-I_n\right\|_F.$$
This results in the following resistance diffusion $T^{CT}=R(Z)\odot A$, i.e., the Hadamard product between the resistance distance and the adjacency matrix, providing as input to the subsequent MP layer a learnt convolution matrix. We set R(Z) to the pairwise Euclidean distances of the node embeddings in Z divided by vol(G). Thus, CT-L AYER learns the CTs and rewires an input graph according to them: the edges with maximal resistance will tend to be the most important edges so as to preserve the topology of the graph. A minimal sketch of L_CT follows below.

Figure 2: Detailed depiction of CT-L AYER (X, A → pooled tanh MLP → Z ∈ ℝ^{n×O(n)} → T^CT = cdist(Z)/vol(G) ⊙ A, annotated with the structure-preservation bound and the L_CT loss), where cdist refers to the matrix of pairwise Euclidean distances between the node embeddings in Z.
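A minimal PyTorch sketch of the L_CT loss of Definition 1 (dense matrices; the MLP producing Z is omitted):

```python
import torch

def ct_loss(Z, A):
    """L_CT = Tr[Z^T L Z] / Tr[Z^T D Z] + || Z^T Z / ||Z^T Z||_F - I ||_F."""
    D = torch.diag(A.sum(dim=1))
    L = D - A
    dirichlet = torch.trace(Z.T @ L @ Z) / torch.trace(Z.T @ D @ Z)
    ztz = Z.T @ Z
    ortho = torch.norm(ztz / torch.norm(ztz) - torch.eye(ztz.shape[0]))
    return dirichlet + ortho
```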
Below, we present the relationship between the CTs and the graph's bottleneck and curvature.

$T^{CT}$ and Graph Bottlenecks. Beyond the principled sparsification behind $T^{CT}$ (Theorem 1), this layer rewires the graph $G = (V, E)$ in such a way that edges with maximal resistance will tend to be the most critical to preserve the topology of the graph. More precisely, although $\sum_{e \in E} R_e = n - 1$, the bulk of the resistance distribution will be located at graph bottlenecks, if they exist. Otherwise, their magnitude is upper-bounded and the distribution becomes more uniform.

Graph bottlenecks are controlled by the graph's conductance or Cheeger constant, $h_G = \min_{S \subseteq V} h_S$, where $h_S = \frac{|\partial S|}{\min(vol(S),\, vol(\bar{S}))}$, $\partial S = \{e = (u, v) : u \in S,\, v \in \bar{S}\}$ and $vol(S) = \sum_{u \in S} d_u$. The interplay between the graph's conductance and effective resistances is given by:

Theorem 2 (Alev et al. [43]). Given a graph $G = (V, E)$ and a subset $S \subseteq V$ with $vol(S) \le vol(G)/2$,
$$h_S \ge \frac{c}{vol(S)^{1/2 - \varepsilon}} \iff |\partial S| \ge c \cdot vol(S)^{1/2 + \varepsilon}, \qquad (5)$$
for some constant $c$ and $\varepsilon \in [0, 1/2]$. Then, $R_{uv} \le \left(\frac{1}{d_u^{2\varepsilon}} + \frac{1}{d_v^{2\varepsilon}}\right) \cdot \frac{1}{\varepsilon \cdot c^2}$ for any pair $u, v$.

According to this theorem, the larger the graph's bottleneck, the tighter the bound on $R_{uv}$ is. Moreover, $\max(R_{uv}) \le 1/h_S^2$, i.e., the resistance is bounded by the inverse square of the bottleneck. This bound partially explains the rewiring of the graph in Figure 1-center. As seen in Figure 1-center, rewiring using CT-Layer sparsifies the graph and assigns larger weights to the edges located in the graph's bottleneck. The interplay between Theorem 2 and Theorem 1 is described in App. A.1.

Recent work has proposed using curvature for graph rewiring. We outline below the relationship between CTs and curvature.

Effective Resistances and Curvature. Topping et al. [20] propose an approach for graph rewiring where the relevance function is given by the Ricci curvature. However, this measure is non-differentiable. More recent definitions of curvature [24] have been formulated based on resistance distances, which would be differentiable using our approach. The resistance curvature of an edge $e = (u, v)$ is $\kappa_{uv} := 2(p_u + p_v)/R_{uv}$, where $p_u := 1 - \frac{1}{2}\sum_{w \sim u} R_{uw}$ is the node's curvature. Relevant properties of the edge resistance curvature are discussed in App. A.1.3, along with a related theorem proposed in Devriendt and Lambiotte [24].
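To make the curvature definitions concrete, here is a small sketch (ours, purely illustrative; the input `R` is assumed to be the effective-resistance matrix computed earlier) of the node and edge resistance curvatures:

```python
import numpy as np

def resistance_curvatures(A, R):
    """Node curvature p_u = 1 - 0.5 * sum over neighbors w of R_uw, and
    edge curvature k_uv = 2 (p_u + p_v) / R_uv, following [24]."""
    p = 1.0 - 0.5 * (A * R).sum(axis=1)     # neighbor sum via the adjacency mask
    kappa = np.zeros_like(R)
    for u, v in zip(*np.nonzero(np.triu(A))):
        kappa[u, v] = kappa[v, u] = 2.0 * (p[u] + p[v]) / R[u, v]
    return p, kappa
```

Unlike the Ricci-based relevance of [20], every quantity here is a differentiable function of $R$, which is what makes a curvature-style signal compatible with end-to-end learning in our framework.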
3.3 GAP-Layer: Spectral Gap Optimization for Graph Rewiring

The right-hand side of the Lovász bound in Eq. 1 relies on the graph's spectral gap $\lambda'_2$: the larger the spectral gap, the closer the commute times are to their non-informative regime. Note that the spectral gap is typically large in commonly observed graphs –such as communities in social networks, which may be bridged by many edges [44]– and, hence, in these cases it would be desirable to rewire the adjacency matrix $A$ so that $\lambda'_2$ is minimized.

In this section, we explain how to rewire the graph's adjacency matrix $A$ to minimize the spectral gap. We propose using the gradient of $\lambda_2$ wrt each component of $\tilde{A}$. These gradients can be computed either using Laplacians ($L$, with Fiedler value $\lambda_2$) or normalized Laplacians ($\mathcal{L}$, with Fiedler value $\lambda'_2$). We also present an approximation of the Fiedler vectors needed to compute those gradients, and propose computing them in a GNN layer called the GAP-Layer. A detailed schematic of GAP-Layer is shown in Figure 3.

Rewiring using a Ratio-cut (Rcut) Approximation. We propose to rewire the adjacency matrix $A$ so that $\lambda_2$ is minimized. We consider a matrix $\tilde{A}$ close to $A$ that satisfies $\tilde{L} f_2 = \lambda_2 f_2$, where $f_2$ is the solution to the ratio-cut relaxation [45]. Following [46], the gradient of $\lambda_2$ wrt each component of $\tilde{A}$ is given by
$$\nabla_{\tilde{A}} \lambda_2 := Tr\left[(\nabla_{\tilde{L}} \lambda_2)^T \cdot \nabla_{\tilde{A}} \tilde{L}\right] = \mathrm{diag}(f_2 f_2^T)\, \mathbf{1}\mathbf{1}^T - f_2 f_2^T \qquad (6)$$
where $\mathbf{1}$ is the vector of $n$ ones, and $[\nabla_{\tilde{A}} \lambda_2]_{uv}$ is the gradient of $\lambda_2$ wrt $\tilde{A}_{uv}$. The driving force of this gradient relies on the correlation $f_2 f_2^T$. Using this gradient to minimize $\lambda_2$ results in breaking the graph's bottleneck while simultaneously preserving the inter-cluster structure. We delve into this matter in App. A.2.

Rewiring using a Normalized-cut (Ncut) Approximation. Similarly, considering $\lambda'_2$ for rewiring leads to
$$\nabla_{\tilde{A}} \lambda'_2 := Tr\left[(\nabla_{\tilde{\mathcal{L}}} \lambda'_2)^T \cdot \nabla_{\tilde{A}} \tilde{\mathcal{L}}\right] = d'\left\{g_2^T \tilde{A}^T \tilde{D}^{-1/2} g_2\right\}\mathbf{1}^T + d'\left\{g_2^T \tilde{A} \tilde{D}^{-1/2} g_2\right\}\mathbf{1}^T + \tilde{D}^{-1/2} g_2 g_2^T \tilde{D}^{-1/2} \qquad (7)$$
where $d'$ is an $n \times 1$ vector including derivatives of the degree wrt the adjacency and related terms. This gradient relies on the Fiedler vector $g_2$ (the solution to the normalized-cut relaxation), and on the incoming and outgoing one-hop random walks. This approximation breaks the bottleneck while preserving the global topology of the graph (Figure 1-left). Proof and details are included in App. A.2.

We present next an approximation of the Fiedler vector, followed by a proposed new GNN layer, the GAP-Layer, that learns how to minimize the spectral gap of the graph.

Approximating the Fiedler vector. Given that $g_2 = \tilde{D}^{1/2} f_2$, we can obtain the normalized-cut gradient in terms of $f_2$. From [17] we have that
$$f_2(u) = \begin{cases} +1/\sqrt{n} & \text{if } u \text{ belongs to the first cluster} \\ -1/\sqrt{n} & \text{if } u \text{ belongs to the second cluster} \end{cases} \;+\; O\!\left(\frac{\log n}{n}\right) \qquad (8)$$

Definition 2 (GAP-Layer). Given the matrix $X_{n \times F}$ encoding the features of the nodes after any message passing (MP) layer, $S_{n \times 2} = \mathrm{Softmax}(\mathrm{MLP}(X))$ learns the association $X \to S$ while $S$ is optimized according to the loss
$$L_{Cut} = -\frac{Tr[S^T A S]}{Tr[S^T D S]} + \left\|\frac{S^T S}{\|S^T S\|_F} - \frac{I_2}{\sqrt{2}}\right\|_F.$$
Then the Fiedler vector $f_2$ is approximated by applying a softmaxed version of Eq. 8, and we consider the loss $L_{Fiedler} = \|\tilde{A} - A\|_F + \alpha (\lambda_2^*)^2$, where $\lambda_2^* = \lambda_2$ if we use the ratio-cut approximation (and gradient) and $\lambda_2^* = \lambda'_2$ if we use the normalized-cut approximation and gradient. This returns $\tilde{A}$, and the GAP diffusion $T^{GAP} = \tilde{A}(S) \odot A$ results from minimizing $L_{GAP} := L_{Cut} + L_{Fiedler}$.

Figure 3: GAP-Layer (Rcut). For GAP-Layer (Ncut), substitute $\nabla_{\tilde{A}} \lambda_2$ by Eq. 7.
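As a concrete illustration of the ratio-cut branch, the sketch below (ours; the soft assignment `S`, the step size `mu`, and the function name are illustrative assumptions) builds the approximate Fiedler vector from a two-way soft assignment, evaluates the Rcut gradient of Eq. 6, and takes one step $\tilde{A} = A - \mu\, \nabla_{\tilde{A}} \lambda_2$ masked by $A$:

```python
import torch

def gap_rcut_step(A, S, mu=0.1):
    """One Rcut rewiring step of GAP-Layer (Eq. 6).
    A: dense adjacency (n x n); S: soft 2-way assignment (n x 2)."""
    n = A.shape[0]
    # Softmaxed version of Eq. 8: signed cluster indicator scaled by 1/sqrt(n)
    f2 = (S[:, 0] - S[:, 1]) / n ** 0.5                   # approx. Fiedler vector
    outer = torch.outer(f2, f2)                           # f2 f2^T
    ones = torch.ones(n, n, device=A.device)
    grad = torch.diag(outer).unsqueeze(1) * ones - outer  # diag(f2 f2^T) 11^T - f2 f2^T
    A_tilde = (A - mu * grad) * (A > 0).float()           # masked so no new links appear
    return A_tilde
```

In the actual layer, $\tilde{A}$ is not produced by a single explicit step but optimized through $L_{GAP}$, per Definition 2.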
4 Experiments and Discussion

4.1 Graph Classification

In this section, we study the properties and performance of CT-Layer and GAP-Layer in a graph classification task with several benchmark datasets. To illustrate the merits of our approach, we compare CT-Layer and GAP-Layer with 3 state-of-the-art diffusion- and curvature-based graph rewiring methods. Note that the aim of the evaluation is to shed light on the properties of both layers and illustrate their inductive performance, not to perform a benchmark comparison with all previously proposed graph rewiring methods.

Figure 4: GNN models used in the experiments. Left: MinCut baseline model. Right: CT-Layer or GAP-Layer models, depending on which method is used for rewiring.

Baselines. The first baseline architecture is based on MinCutPool [33] and is shown in Figure 4a. It is the base GNN that we use for graph classification without rewiring. The MinCutPool layer learns $(A_{n \times n}, X_{n \times F}) \to (A'_{k \times k}, X'_{k \times F})$, with $k < n$ the new number of node clusters. The first baseline strategy using graph rewiring is $k$-NN graphs [47], where the weights of the edges are computed based on feature similarity. The next two baselines are graph rewiring methods that belong to the same family of methods as DiffWire, i.e. methods based on diffusion and curvature, namely DIGL (PPR) [25] and SDRF [20]. DIGL is a diffusion-based preprocessing method within the family of metric-based GSL approaches. We set the teleporting probability $\alpha = 0.001$, and $\varepsilon$ is set so as to keep the same average degree for each graph. Once preprocessed with DIGL, the graphs are provided as input to the MinCutPool (Baseline 1) architecture. The third baseline model is SDRF, which performs curvature-based rewiring. SDRF is also a preprocessing method and has 3 parameters that are highly graph-dependent. We set these parameters to $\tau = 20$ and $C^+ = 0$ for all experiments, as per [20]. The number of iterations is estimated dynamically as $0.7 \cdot |V|$ for each graph. Both DIGL and SDRF aim to preserve the global topology of the graph but require optimizing their parameters for each input graph via hyper-parameter search. In a graph classification task, this search is $O(n^3)$ per graph. Details about the parameter tuning in these methods can be found in App. A.3.3.

To shed light on the performance and properties of CT-Layer and GAP-Layer, we add the corresponding layer in between the initial linear layer, Linear(X), and the first convolution, Conv1(A, X). We build 3 different models: CT-Layer, GAP-Layer (Rcut) and GAP-Layer (Ncut), depending on the layer used. For CT-Layer, we learn $T^{CT}$, which is used as a convolution matrix afterwards. For GAP-Layer, we learn $T^{GAP}$ either using the Rcut or the Ncut approximation. A schematic of the architectures is shown in Figure 4b and in App. A.3.2.

As shown in Table 1, we use in our experiments common benchmark datasets for graph classification. We select datasets both with features and featureless, in which case we use the degree as the node feature. These datasets are diverse regarding the topology of their networks: REDDIT-B, IMDB-B and COLLAB contain truncated scale-free graphs (social networks), whereas MUTAG and PROTEINS contain graphs from biology or chemistry. In addition, we use two synthetic datasets with 2 classes: Erdős-Rényi with $p_1 \in [0.3, 0.5]$ and $p_2 \in [0.4, 0.8]$, and Stochastic Block Model (SBM) with parameters $p_1 = 0.8$, $p_2 = 0.5$, $q_1 \in [0.1, 0.15]$ and $q_2 \in [0.01, 0.1]$. More details about the datasets are given in App. A.3.1. Table 1 reports average accuracies and standard deviations on 10 random data splits, using an 85/15 stratified train-test split, training during 60 epochs and reporting the results of the last epoch for each random run. We use PyTorch Geometric [48] and the code is available in a public repository².

² https://github.com/AdrianArnaiz/DiffWire

Table 1: Experimental results on common graph classification benchmarks. Red denotes the best model row-wise and blue marks the runner-up. '*' means degree used as node feature.

Dataset      | MinCutPool | k-NN      | DIGL      | SDRF     | CT-Layer  | GAP-Layer (R) | GAP-Layer (N)
REDDIT-B*    | 66.53±4.4  | 64.40±3.8 | 76.02±4.3 | 65.3±7.7 | 78.45±4.5 | 77.63±4.9     | 76.00±5.3
IMDB-B*      | 60.75±7.0  | 55.20±4.3 | 59.35±7.7 | 59.2±6.9 | 69.84±4.6 | 69.93±3.3     | 68.80±3.1
COLLAB*      | 58.00±6.2  | 58.33±11  | 57.51±5.9 | 56.60±10 | 69.87±2.4 | 64.47±4.0     | 65.89±4.9
MUTAG        | 84.21±6.3  | 87.58±4.1 | 85.00±5.6 | 82.4±6.8 | 87.58±4.4 | 86.90±4.0     | 86.90±4.0
PROTEINS     | 74.84±2.3  | 76.76±2.5 | 74.49±2.8 | 74.4±2.7 | 75.38±2.9 | 75.03±3.0     | 75.34±2.1
SBM*         | 53.00±9.9  | 50.00±0.0 | 56.93±12  | 54.1±7.1 | 81.40±11  | 90.80±7.0     | 92.26±2.9
Erdős-Rényi* | 81.86±6.2  | 63.40±3.9 | 81.93±6.3 | 73.6±9.1 | 79.06±9.8 | 79.26±10      | 82.26±3.2

The experiments support our hypothesis that rewiring based on CT-Layer and GAP-Layer improves the performance of the baselines on graph classification. Since both layers are differentiable, they learn how to inductively rewire unseen graphs. The improvements are significant in graphs where social components arise (REDDIT-B, IMDB-B, COLLAB), i.e. graphs with small-world properties and power-law degree distributions with a topology based on hubs and authorities.
These are graphs where bottlenecks arise easily and our approach is able to properly rewire the graphs. However, the improvements observed in planar or grid networks (MUTAG and PROTEINS) are more limited: the bottleneck does not seem to be critical for the graph classification task.

Moreover, CT-Layer and GAP-Layer perform better in graphs with featureless nodes than in graphs with node features, because they are able to leverage the information encoded in the topology of the graphs. Note that in attribute-based graphs, the weights of the attributes typically overwrite the graph's structure in the classification task, whereas in graphs without node features, the information is encoded in the graph's structure. Thus, $k$-NN rewiring outperforms every other rewiring method in graph classification when graphs have node features.

App. A.3.4 contains an in-depth analysis of the comparison between the spectral node CT embeddings (CTEs) given by Equation 2 and the learned node CTEs as predicted by CT-Layer. We find that the CTEs learned in CT-Layer are able to better preserve the original topology of the graph while shifting the distribution of the effective resistances of the edges towards an asymmetric distribution, where few edges have very large weights and a majority of edges have low weights. In addition, App. A.3.4 also includes the analysis of the latent space of the readout layer produced by each model. Finally, we analyze the performance of the proposed layers in graphs with different structural properties in App. A.3.6, where we study the correlation between accuracy, the graph's assortativity, and the graph's bottleneck ($\lambda_2$).

CT-Layer vs GAP-Layer. The datasets explored in this paper are characterized by mild bottlenecks from the perspective of the Lovász bound. For completeness, we have included two synthetic datasets (Stochastic Block Model and Erdős-Rényi) where the Lovász bound is very restrictive. As a result, CT-Layer is outperformed by GAP-Layer in SBM. Note that the results on the synthetic datasets suffer from large variability. As a general rule of thumb, the smaller the graph's bottleneck (defined as the ratio between the number of inter-community edges and the number of intra-community edges), the more useful CT-Layer is, because the rewired graph will be sparsified inside the communities but will preserve the edges in the gap. Conversely, the larger the bottleneck, the more useful GAP-Layer is.

4.2 Node Classification using CT-Layer

CT-Layer and GAP-Layer are mainly designed to perform graph classification tasks. However, we identify two potential uses of CT-Layer for node classification.

First, the new $T^{CT}$ diffusion matrix learned by CT-Layer gives more importance to edges that connect different communities, i.e., edges that connect distant nodes in the graph. This behaviour of CT-Layer is well suited to solving long-range and heterophilic node classification tasks with a smaller number of layers, thus avoiding under-reaching, over-smoothing and over-squashing.

Second, there is increasing interest in the community in using positional encodings (PEs) for the nodes to develop more expressive GNNs. PEs tend to help in node classification in homophilic graphs, as nearby nodes will be assigned similar PEs. However, the main limitation is that PEs are usually pre-computed before the GNN training due to their high computational cost. CT-Layer provides a solution to this problem, as it learns to predict the commute times embedding ($Z$) of a given graph (see Figure 2 and Definition 1). Hence, CT-Layer is able to learn and predict PEs from $X$ and $A$ inside a GNN, without needing to pre-compute them.

We empirically validate CT-Layer in a node classification task on benchmark homophilic (Cora, Pubmed and Citeseer) and heterophilic (Cornell, Actor and Wisconsin) graphs. The results are depicted in Table 2, comparing three models: (1) the baseline model consists of a 1-layer GCN; (2) model 1 is a 1-layer GCN where the CTEs are concatenated to the node features as PEs ($X \| Z$); (3) finally, model 2 is a 1-layer GCN where $T^{CT}$ is used as a diffusion matrix ($A = T^{CT}$). More details can be found in App. A.3.5.
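In code, the two variants can be sketched as follows (our illustration, reusing the `CTLayerSketch` module sketched after Definition 1; `GCNConv` and `dense_to_sparse` are from PyTorch Geometric, which the experiments use [48], while the class names and layer sizes are assumptions):

```python
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.utils import dense_to_sparse

class NodeClfXZ(torch.nn.Module):
    """Model 1: 1-layer GCN over A, with the learned CTEs concatenated (X || Z)."""
    def __init__(self, ct_layer, in_dim, emb_dim, num_classes):
        super().__init__()
        self.ct_layer = ct_layer
        self.gcn = GCNConv(in_dim + emb_dim, num_classes)

    def forward(self, X, A):
        _, Z, _ = self.ct_layer(X, A)                 # learned positional encodings
        edge_index, _ = dense_to_sparse(A)
        return self.gcn(torch.cat([X, Z], dim=1), edge_index)

class NodeClfTCT(torch.nn.Module):
    """Model 2: 1-layer GCN that message-passes over T^CT instead of A."""
    def __init__(self, ct_layer, in_dim, num_classes):
        super().__init__()
        self.ct_layer = ct_layer
        self.gcn = GCNConv(in_dim, num_classes)

    def forward(self, X, A):
        T_ct, _, _ = self.ct_layer(X, A)
        edge_index, edge_weight = dense_to_sparse(T_ct)
        return self.gcn(X, edge_index, edge_weight)
```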
As seen in the Table, the proposed models outperform the baseline GCN model: using CTEs as features (model 1) yields competitive results in homophilic graphs, whereas using $T^{CT}$ as the matrix for message passing (model 2) performs well in heterophilic graphs. Note that in our experiments the CTEs are learned by CT-Layer instead of being pre-computed. A promising direction of future work is to explore how to combine these two approaches (model 1 and model 2) to leverage the best of each method on a wide range of graphs for node classification tasks.

Table 2: Results in node classification.

Dataset   | GCN (baseline) | Model 1: X∥Z | Model 2: A = T^CT | Homophily
Cora      | 82.01±0.8      | 83.66±0.6    | 67.96±0.8         | 81.0%
Pubmed    | 81.61±0.3      | 86.07±0.1    | 68.19±0.7         | 80.0%
Citeseer  | 70.81±0.5      | 72.26±0.5    | 66.71±0.6         | 73.6%
Cornell   | 59.19±3.5      | 58.02±3.7    | 69.04±2.2         | 30.5%
Actor     | 29.59±0.4      | 29.35±0.4    | 31.98±0.3         | 21.9%
Wisconsin | 68.05±6.2      | 69.25±5.1    | 79.05±2.1         | 19.6%

5 Conclusion and Future Work

In this paper, we have proposed DiffWire, a unified framework for graph rewiring that links the two components of the Lovász bound: CTs and the spectral gap. We have presented two novel, fully differentiable and inductive rewiring layers: CT-Layer and GAP-Layer. We have empirically evaluated these layers on benchmark datasets for graph classification, with competitive results when compared to SoTA baselines, especially in graphs where the nodes have no attributes and have small-world properties. We have also performed preliminary experiments on a node classification task, showing that using the CT embeddings and the CT distances benefits GNN architectures in homophilic and heterophilic graphs, respectively.

In future work, we plan to test the proposed approach in other graph-related tasks and intend to apply DiffWire to large-scale graphs and real-world applications, particularly in social networks, which have unique topologies and statistics, and direct implications for society.

6 Acknowledgments

A. Arnaiz-Rodriguez and N. Oliver are supported by a nominal grant received at the ELLIS Unit Alicante Foundation from the Regional Government of Valencia in Spain (Convenio Singular signed with Generalitat Valenciana, Conselleria d'Innovació, Universitats, Ciència i Societat Digital, Dirección General para el Avance de la Sociedad Digital). A. Arnaiz-Rodriguez is also funded by a grant from the Banc Sabadell Foundation. F. Escolano is funded by the project RTI2018-096223-B-I00 of the Spanish Government.

References

[1] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, volume 2, pages 729–734, 2005. URL https://ieeexplore.ieee.org/document/1555942.
[2] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008. URL https://ieeexplore.ieee.org/document/4700287.
[3] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017. URL https://openreview.net/forum?id=SJU4ayYgl.
[4] Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, ICML, pages 1263–1272, 2017.
[5] Thomas N. Kipf and Max Welling. Variational graph auto-encoders. In NeurIPS Workshop on Bayesian Deep Learning, 2016. URL http://bayesiandeeplearning.org/2016/papers/BDL_16.pdf.
[6] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 30, 2016. URL https://ojs.aaai.org/index.php/AAAI/article/view/10179.
[7] Fei Tian, Bin Gao, Qing Cui, Enhong Chen, and Tie-Yan Liu. Learning deep representations for graph clustering. In Proceedings of the AAAI Conference on Artificial Intelligence, 2014. URL https://ojs.aaai.org/index.php/AAAI/article/view/8916.
[8] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2021. URL https://ieeexplore.ieee.org/document/9046288.
[9] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJXMpikCZ.
[10] Shaked Brody, Uri Alon, and Eran Yahav. How attentive are graph attention networks? In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=F72ximsx7C1.
[11] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryGs6iA5Km.
[12] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, 2017. URL https://proceedings.neurips.cc/paper/2017/file/5dd9db5e033da9c6fb5ba83c7a7ebea9-Paper.pdf.
[13] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018. URL https://ojs.aaai.org/index.php/AAAI/article/view/11604.
[14] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical implications. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=i80OPhOCVH2.
[15] László Lovász. Random walks on graphs. Combinatorics, Paul Erdős is Eighty, 2(1-46):4, 1993. URL https://web.cs.elte.hu/~lovasz/erdos.pdf.
[16] Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1lZ7AEKvB.
[17] NT Hoang, Takanori Maehara, and Tsuyoshi Murata. Revisiting graph neural networks: Graph filtering perspective. In 25th International Conference on Pattern Recognition (ICPR), pages 8376–8383, 2021. URL https://ieeexplore.ieee.org/document/9412278.
[18] Kenta Oono and Taiji Suzuki. Graph neural networks exponentially lose expressive power for node classification. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=S1ldO2EFPr.
[19] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. CoRR, abs/1812.08434, 2018. URL http://arxiv.org/abs/1812.08434.
[20] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=7UmjRGzp-A.
[21] Petar Veličković. Message passing all the way up. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning, 2022. URL https://openreview.net/forum?id=Bc8GiEZkTe5.
[22] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. DropEdge: Towards deep graph convolutional networks on node classification. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=Hkx1qkrKPr.
[23] Anees Kazi, Luca Cosmo, Seyed-Ahmad Ahmadi, Nassir Navab, and Michael Bronstein. Differentiable graph module (DGM) for graph convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1–1, 2022. URL https://ieeexplore.ieee.org/document/9763421.
[24] Karel Devriendt and Renaud Lambiotte. Discrete curvature on graphs from the effective resistance. arXiv preprint arXiv:2201.06385, 2022. doi: 10.48550/ARXIV.2201.06385. URL https://arxiv.org/abs/2201.06385.
[25] Johannes Klicpera, Stefan Weißenberger, and Stephan Günnemann. Diffusion improves graph learning. In Advances in Neural Information Processing Systems, 2019. URL https://proceedings.neurips.cc/paper/2019/file/23c894276a2c5a16470e6a31f4618d73-Paper.pdf.
[26] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018. URL https://arxiv.org/abs/1806.01261.
[27] Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Benjamin Chamberlain, Michael Bronstein, and Federico Monti. SIGN: Scalable inception graph neural networks. In ICML 2020 Workshop on Graph Representation Learning and Beyond, 2020. URL https://grlplus.github.io/papers/77.pdf.
[28] Pál András Papp, Karolis Martinkus, Lukas Faber, and Roger Wattenhofer. DropGNN: Random dropouts increase the expressiveness of graph neural networks. In Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=fpQojkIV5q8.
[29] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the over-smoothing problem for graph neural networks from the topological view. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04):3438–3445, April 2020. doi: 10.1609/aaai.v34i04.5747. URL https://ojs.aaai.org/index.php/AAAI/article/view/5747.
[30] Yanqiao Zhu, Weizhi Xu, Jinghao Zhang, Yuanqi Du, Jieyu Zhang, Qiang Liu, Carl Yang, and Shu Wu. A survey on graph structure learning: Progress and opportunities. arXiv preprint, 2021. URL https://arxiv.org/abs/2103.03036.
[31] Diego Mesquita, Amauri Souza, and Samuel Kaski. Rethinking pooling in graph neural networks. In Advances in Neural Information Processing Systems, 2020. URL https://proceedings.neurips.cc/paper/2020/file/1764183ef03fc7324eb58c3842bd9a57-Paper.pdf.
[32] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems, 2018. URL https://proceedings.neurips.cc/paper/2018/file/e77dbaf6759253c7c6d0efc5690369c7-Paper.pdf.
[33] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neural networks for graph pooling. In Proceedings of the 37th International Conference on Machine Learning, 2020. URL https://proceedings.mlr.press/v119/bianchi20a.html.
[34] Ladislav Rampášek, Mikhail Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, and Dominique Beaini. Recipe for a general, powerful, scalable graph transformer. arXiv:2205.12454, 2022. URL https://arxiv.org/pdf/2205.12454.pdf.
[35] Ameya Velingker, Ali Kemal Sinop, Ira Ktena, Petar Veličković, and Sreenivas Gollapudi. Affinity-aware graph networks. arXiv preprint arXiv:2206.11941, 2022. URL https://arxiv.org/pdf/2206.11941.pdf.
[36] Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. In AAAI Workshop on Deep Learning on Graphs: Methods and Applications, 2021. URL https://arxiv.org/pdf/2012.09699.pdf.
[37] Derek Lim, Joshua David Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, and Stefanie Jegelka. Sign and basis invariant networks for spectral graph representation learning. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning, 2022. URL https://openreview.net/forum?id=BlM64by6gc.
[38] Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. Distance encoding: Design provably more powerful neural networks for graph representation learning. Advances in Neural Information Processing Systems, 33, 2020. URL https://proceedings.neurips.cc/paper/2020/file/2f73168bf3656f697507752ec592c437-Paper.pdf.
[39] Fan R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997. URL https://www.bibsonomy.org/bibtex/295ef10b5a69a03d8507240b6cf410f8a/folke.
[40] Ulrike von Luxburg, Agnes Radl, and Matthias Hein. Hitting and commute times in large random neighborhood graphs. Journal of Machine Learning Research, 15(52):1751–1798, 2014. URL http://jmlr.org/papers/v15/vonluxburg14a.html.
[41] Daniel A. Spielman and Nikhil Srivastava. Graph sparsification by effective resistances. SIAM Journal on Computing, 40(6):1913–1926, 2011. doi: 10.1137/080734029. URL https://doi.org/10.1137/080734029.
[42] Huaijun Qiu and Edwin R. Hancock. Clustering and embedding using commute times. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1873–1890, 2007. doi: 10.1109/TPAMI.2007.1103. URL https://ieeexplore.ieee.org/document/4302755.
[43] Vedat Levi Alev, Nima Anari, Lap Chi Lau, and Shayan Oveis Gharan. Graph clustering using effective resistance. In 9th Innovations in Theoretical Computer Science Conference (ITCS 2018), volume 94, pages 1–16, 2018. doi: 10.4230/LIPIcs.ITCS.2018.41. URL http://drops.dagstuhl.de/opus/volltexte/2018/8369.
[44] Emmanuel Abbe. Community detection and stochastic block models: Recent developments. Journal of Machine Learning Research, 18(177):1–86, 2018. URL http://jmlr.org/papers/v18/16-480.html.
[45] Thomas Bühler and Matthias Hein. Spectral clustering based on the graph p-Laplacian. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 81–88, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585161. doi: 10.1145/1553374.1553385. URL https://doi.org/10.1145/1553374.1553385.
[46] Jian Kang and Hanghang Tong. N2N: Network derivative mining. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM '19, pages 861–870, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450369763. doi: 10.1145/3357384.3357910. URL https://doi.org/10.1145/3357384.3357910.
[47] Franco P. Preparata and Michael I. Shamos. Computational Geometry: An Introduction. Springer Science & Business Media, 2012. URL http://www.cs.kent.edu/~dragan/CG/CG-Book.pdf.
[48] Matthias Fey and Jan E. Lenssen. Fast graph representation learning with PyTorch Geometric. In ICLR Workshop on Representation Learning on Graphs and Manifolds, 2019.
[49] Joshua Batson, Daniel A. Spielman, Nikhil Srivastava, and Shang-Hua Teng. Spectral sparsification of graphs: Theory and algorithms. Communications of the ACM, 56(8):87–94, August 2013. ISSN 0001-0782. doi: 10.1145/2492007.2492029. URL https://doi.org/10.1145/2492007.2492029.
[50] Morteza Alamgir and Ulrike von Luxburg. Phase transition in the family of p-resistances. In Advances in Neural Information Processing Systems, 2011. URL https://proceedings.neurips.cc/paper/2011/file/07cdfd23373b17c6b337251c22b7ea57-Paper.pdf.
[51] Morteza Alamgir and Ulrike von Luxburg. Phase transition in the family of p-resistances. In Advances in Neural Information Processing Systems, volume 24, 2011. URL https://proceedings.neurips.cc/paper/2011/file/07cdfd23373b17c6b337251c22b7ea57-Paper.pdf.
[52] Gregory Berkolaiko, James B. Kennedy, Pavel Kurasov, and Delio Mugnolo. Edge connectivity and the spectral gap of combinatorial and quantum graphs. Journal of Physics A: Mathematical and Theoretical, 50(36):365201, 2017. URL https://doi.org/10.1088/1751-8121/aa8125.
[53] Zoran Stanić. Graphs with small spectral gap. Electronic Journal of Linear Algebra, 26:28, 2013. URL https://journals.uwyo.edu/index.php/ela/article/view/1259.
[54] Douglas J. Klein and Milan Randić. Resistance distance. Journal of Mathematical Chemistry, 12(1):81–95, 1993. URL https://doi.org/10.1007/BF01164627.

A Appendix

In Appendix A we include a table with the notation used in the paper and provide an analysis of the diffusion and its relationship with curvature. In Appendix B, we study in detail GAP-Layer and the implications of the proposed spectral gradients.
Appendix C reports statistics and characteristics of the datasets used in the experimental section, provides more information about the experimental results, describes additional experimental results, and includes a summary of the computing infrastructure used in our experiments.

Table 3: Notation.

Symbol | Description
G = (V, E) | Graph = (Nodes, Edges)
A | Adjacency matrix: A ∈ R^{n×n}
X | Feature matrix: X ∈ R^{n×F}
v | Node v ∈ V or u ∈ V
e | Edge e ∈ E
x | Features of node v: x ∈ X
n | Number of nodes: n = |V|
F | Number of features
D | Diagonal degree matrix with d_v in D_vv
d_v | Degree of node v
vol(G) | Sum of the degrees of the graph: vol(G) = Tr[D]
L | Laplacian: L = D − A
B | Signed edge-vertex incidence matrix
b_e | Incidence vector: row vector of B, with b_{e=(u,v)} = (e_u − e_v)
v_e | Projected incidence vector: v_e = L^{+/2} b_e
Γ | Ratio Γ = (1 + ε)/(1 − ε)
E(·) | Dirichlet energy wrt L: E(x) := x^T L x
𝓛 | Normalized Laplacian: 𝓛 = I − D^{−1/2} A D^{−1/2}
Λ | Eigenvalue matrix of L
Λ′ | Eigenvalue matrix of 𝓛
λ_i | i-th eigenvalue of L
λ_2 | Second eigenvalue of L: spectral gap
λ′_i | i-th eigenvalue of 𝓛
λ′_2 | Second eigenvalue of 𝓛: spectral gap
F | Matrix of eigenvectors of L
G | Matrix of eigenvectors of 𝓛
f_i | i-th eigenvector of L
f_2 | Second eigenvector of L: Fiedler vector
g_i | i-th eigenvector of 𝓛
g_2 | Second eigenvector of 𝓛: Fiedler vector
Ã | New adjacency matrix
E′ | New edges
H_uv | Hitting time between u and v
CT_uv | Commute time: CT_uv = H_uv + H_vu
R_uv | Effective resistance: R_uv = CT_uv / vol(G)
Z | Matrix of commute times embeddings for all nodes in G
z_u | Commute times embedding of node u
T^CT | Resistance diffusion or commute times diffusion
R(Z) | Pairwise Euclidean distances of embedding Z divided by vol(G)
S | Cluster assignment matrix: S ∈ R^{n×2}
T^GAP | GAP diffusion
e_u | Unit vector with unit value at u and 0 elsewhere
∇_Ã λ_2 | Gradient of λ_2 wrt Ã
[∇_Ã λ_2]_uv | Gradient of λ_2 wrt Ã_uv
p_u | Node curvature: p_u := 1 − (1/2) Σ_{w∼u} R_uw
κ_uv | Edge curvature: κ_uv := 2(p_u + p_v)/R_uv
∥ | Concatenation

A.1 Appendix A: CT-Layer

A.1.1 Notation

Table 3 summarizes the notation used in the paper.

A.1.2 Analysis of Commute Times Rewiring

First, we provide an answer to the following question: Is resistance diffusion via $T^{CT}$ a principled way of preserving the Cheeger constant?

We answer the question above by linking Theorems 1 and 2 in the paper with the Lovász bound. The outline of our explanation follows three steps.

• Proposition 1: Theorem 1 (Sparsification) provides a principled way to bias the adjacency matrix so that the edges with the largest weights in the rewired graph correspond to the edges in the graph's bottleneck.
• Proposition 2: Theorem 2 (Cheeger vs Resistance) can be used to demonstrate that increasing the effective resistance leads to a mild reduction of the Cheeger constant.
• Proposition 3: (Conclusion) The effectiveness of the above theorems to contain the Cheeger constant is constrained by the Lovász bound.

Next, we provide a thorough explanation of each of the propositions above.

Proposition 1 (Biasing). Let $G' = \text{Sparsify}(G, q)$ be a sampling algorithm of graph $G = (V, E)$, where edges $e \in E$ are sampled with probability $q \propto R_e$ (proportional to the effective resistance). This choice is necessary to retain the global structure of $G$, i.e., to satisfy
$$\forall x \in \mathbb{R}^n:\ (1-\varepsilon)\, x^T L_G x \le x^T L_{G'} x \le (1+\varepsilon)\, x^T L_G x, \qquad (9)$$
with probability at least $1/2$, by sampling $O(n \log n / \varepsilon^2)$ edges, with $1/\sqrt{n} < \varepsilon \le 1$, instead of $O(m)$, where $m = |E|$. In addition, this choice biases the uniform distribution in favor of critical edges in the graph.

Proof.
We start by expressing the Laplacian $L$ in terms of the signed edge-vertex incidence matrix $B_{m \times n}$:
$$B_{eu} = \begin{cases} 1 & \text{if } u \text{ is the head of } e \\ -1 & \text{if } u \text{ is the tail of } e \\ 0 & \text{otherwise,} \end{cases} \qquad (10)$$
where edges in undirected graphs are counted once, i.e. $e = (u, v) = (v, u)$. Then, we have $L = B^T B = \sum_e b_e b_e^T$, where $b_e$ is a row vector (the incidence vector) of $B$, with $b_{e=(u,v)} = (e_u - e_v)$. In addition, the Dirichlet energies can be expressed as norms:
$$\mathcal{E}(x) = x^T L x = x^T B^T B x = \|Bx\|_2^2 = \sum_{e=(u,v) \in E} (x_u - x_v)^2. \qquad (11)$$
As a result, the effective resistance $R_e$ between the two nodes of an edge $e = (u, v)$ can be defined as
$$R_e = (e_u - e_v)^T L^+ (e_u - e_v) = b_e^T L^+ b_e. \qquad (12)$$
Next, we reformulate the spectral constraints in Eq. 9, i.e. $(1-\varepsilon) L_G \preceq L_{G'} \preceq (1+\varepsilon) L_G$, as
$$L_G \preceq L_{G'} \preceq \Gamma L_G, \qquad \Gamma = \frac{1+\varepsilon}{1-\varepsilon}. \qquad (13)$$
This simplifies the analysis, since the above expression can be interpreted as follows: the Dirichlet energies of $L_{G'}$ are lower-bounded by those of $L_G$ and upper-bounded by $\Gamma$ times the energies of $L_G$. Considering that the energies define hyper-ellipsoids, the hyper-ellipsoid associated with $L_{G'}$ lies between the hyper-ellipsoid of $L_G$ and $\Gamma$ times that of $L_G$.

The hyper-ellipsoid analogy provides a framework to prove that the inclusion relationships are preserved under scaling: $M L_G M \preceq M L_{G'} M \preceq \Gamma\, M L_G M$, where $M$ can be a matrix. In this case, if we set $M := (L_G^+)^{1/2} = L_G^{+/2}$ we have:
$$L_G^{+/2} L_G L_G^{+/2} \preceq L_G^{+/2} L_{G'} L_G^{+/2} \preceq \Gamma\, L_G^{+/2} L_G L_G^{+/2}, \qquad (14)$$
which leads to
$$I_n \preceq L_G^{+/2} L_{G'} L_G^{+/2} \preceq \Gamma I_n. \qquad (15)$$
We seek a Laplacian $L_{G'}$ satisfying the similarity constraints in Eq. 13. Since $E' \subset E$, i.e. we want to remove structurally irrelevant edges, we can design $L_{G'}$ by considering all the edges of $E$:
$$L_{G'} := B^T \mathrm{diag}(s)\, B = \sum_e s_e\, b_e b_e^T \qquad (16)$$
and let the similarity constraint define the sampling weights and the choice of $e$ (setting $s_e \ge 0$ properly). More precisely:
$$I_n \preceq L_G^{+/2} \left(\sum_e s_e\, b_e b_e^T\right) L_G^{+/2} \preceq \Gamma I_n. \qquad (17)$$
Then, if we define $v_e := L_G^{+/2} b_e$ as the projected incidence vector, we have
$$I_n \preceq \sum_e s_e\, v_e v_e^T \preceq \Gamma I_n. \qquad (18)$$
Consequently, a spectral sparsifier must find $s_e \ge 0$ so that the above similarity constraint is satisfied. Since there are $m$ edges in $E$, $s_e$ must be zero for most of the edges. But what are the best candidates to retain? Interestingly, the similarity constraint provides the answer. From Eq. 12 we have
$$v_e^T v_e = \|v_e\|^2 = \|L_G^{+/2} b_e\|_2^2 = b_e^T L_G^+ b_e = R_e. \qquad (19)$$
This result explains why sampling the edges with probability $q \propto R_e$ leads to a ranking of the $m$ edges of $G = (V, E)$ such that edges with large $R_e = \|v_e\|^2$ are preferred³.

Algorithm 1 implements a deterministic greedy version of Sparsify(G, q), where we build $E' \subset E$ incrementally by creating a budget of decreasing resistances $R_{e_1} \ge R_{e_2} \ge \ldots \ge R_{e_{O(n \log n / \varepsilon^2)}}$. Note that this rewiring strategy preserves the spectral similarity of the graphs, i.e. the global structure of $G = (V, E)$ is captured by $G' = (V, E')$.

Algorithm 1: GREEDY Sparsify
Input: $G = (V, E)$, $\varepsilon \in (1/\sqrt{n}, 1]$, $n = |V|$.
Output: $G' = (V, E')$ with $E' \subset E$ such that $|E'| = O(n \log n / \varepsilon^2)$.
  L ← List({$v_e : e \in E$})
  Q ← Sort(L, descending, criterion = $\|v_e\|^2$)   ▷ Sort candidate edges by descending resistance
  E′ ← ∅
  I ← $0_{n \times n}$
  repeat
    $v_e$ ← pop(Q)   ▷ Remove the head of the queue
    I ← I + $v_e v_e^T$
    if I ≼ $\Gamma I_n$ then
      E′ ← E′ ∪ {e}   ▷ Update the current budget of edges
    else
      return $G' = (V, E')$
  until Q = ∅

³ Although some of the elements of this section are derived from [49], we note that Nikhil Srivastava's lectures at The Simons Institute (2014) are by far more clarifying.
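A direct transcription of Algorithm 1 into Python can be sketched as follows (ours; it tests the Loewner budget $I \preceq \Gamma I_n$ via the largest eigenvalue, which is one of several equivalent ways to check the condition):

```python
import numpy as np

def greedy_sparsify(A, eps):
    """Deterministic greedy version of Sparsify(G, q) (Algorithm 1)."""
    n = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A
    # Projected incidence vectors v_e = L^{+/2} b_e
    eigval, eigvec = np.linalg.eigh(np.linalg.pinv(L))
    L_half = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0, None))) @ eigvec.T
    edges = [tuple(e) for e in np.transpose(np.nonzero(np.triu(A)))]
    v = {e: L_half @ (np.eye(n)[e[0]] - np.eye(n)[e[1]]) for e in edges}
    # Sort by decreasing resistance R_e = ||v_e||^2
    queue = sorted(edges, key=lambda e: v[e] @ v[e], reverse=True)
    gamma = (1 + eps) / (1 - eps)
    I_acc, kept = np.zeros((n, n)), []
    for e in queue:
        I_acc = I_acc + np.outer(v[e], v[e])
        # Budget test: I_acc ⪯ Γ I_n  iff  λ_max(I_acc) ≤ Γ
        if np.linalg.eigvalsh(I_acc)[-1] <= gamma:
            kept.append(e)
        else:
            break                      # budget exhausted: return current E'
    A_out = np.zeros_like(A, dtype=float)
    for u, w in kept:
        A_out[u, w] = A_out[w, u] = 1.0
    return A_out
```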
Moreover, the maximum $R_e$ in each graph determines an upper bound on the Cheeger constant, and hence an upper bound on the size of the graph's bottleneck, as per the following proposition.

Proposition 2 (Resistance Diameter). Let $G' = \text{Sparsify}(G, q)$ be a sampling algorithm of graph $G = (V, E)$, where edges $e \in E$ are sampled with probability $q \propto R_e$ (proportional to the effective resistance). Consider the resistance diameter $R_{diam} := \max_{u,v} R_{uv}$. Then, for the pair $(u, v)$ there exists an edge $e = (u, v) \in E'$ in $G' = (V, E')$ such that $R_e = R_{diam}$. As a result, the Cheeger constant $h_G$ of $G$ is upper-bounded as follows:
$$h_G \le \frac{\alpha^{\varepsilon}}{\sqrt{R_{diam} \cdot \varepsilon}}\; vol(S)^{\varepsilon - 1/2}, \qquad (20)$$
with $0 < \varepsilon < 1/2$ and $d_u \ge 1/\alpha$ for all $u \in V$.

Proof. The fact that the maximum resistance $R_{diam}$ is located at an edge is derived from two observations: a) the resistance is upper-bounded by the shortest-path distance; and b) edges with maximal resistance are prioritized (Proposition 1).

Theorem 2 states that any attempt to increase the graph's bottleneck in a multiplicative way (i.e. multiplying it by a constant $c \ge 0$) results in decreasing the effective resistances as follows:
$$R_{uv} \le \left(\frac{1}{d_u^{2\varepsilon}} + \frac{1}{d_v^{2\varepsilon}}\right) \cdot \frac{1}{\varepsilon \cdot c^2} \qquad (21)$$
with $\varepsilon \in [0, 1/2]$. This equation is called the resistance bound. Therefore, a multiplicative increase of the bottleneck leads to a quadratic decrease of the resistances.

Following Corollary 2 of [43], we obtain an upper bound of any $h_S$ –i.e. the Cheeger constant for $S \subseteq V$ with $vol(S) \le vol(G)/2$– by defining $c$ properly. In particular, we are seeking a value of $c$ that would lead to a contradiction, which is obtained by setting
$$c = \sqrt{\frac{\frac{1}{d_{u^*}^{2\varepsilon}} + \frac{1}{d_{v^*}^{2\varepsilon}}}{R_{diam} \cdot \varepsilon}}, \qquad (22)$$
where $(u^*, v^*)$ is a pair of nodes with maximal resistance, i.e. $R_{u^* v^*} = R_{diam}$.

Consider now any other pair of nodes $(s, t)$ with $R_{st} < R_{diam}$. Following Theorem 2, if the bottleneck $h_S$ is multiplied by $c$, we should have
$$R_{st} \le \left(\frac{1}{d_s^{2\varepsilon}} + \frac{1}{d_t^{2\varepsilon}}\right) \cdot \frac{1}{\varepsilon \cdot c^2} = \left(\frac{1}{d_s^{2\varepsilon}} + \frac{1}{d_t^{2\varepsilon}}\right) \cdot \frac{R_{diam}}{\frac{1}{d_{u^*}^{2\varepsilon}} + \frac{1}{d_{v^*}^{2\varepsilon}}}. \qquad (23)$$
However, since $R_{diam} \le \frac{1}{d_{u^*}^{2\varepsilon}} + \frac{1}{d_{v^*}^{2\varepsilon}}$, we have that $R_{st}$ can satisfy
$$R_{st} > \left(\frac{1}{d_s^{2\varepsilon}} + \frac{1}{d_t^{2\varepsilon}}\right) \cdot \frac{1}{\varepsilon \cdot c^2}, \qquad (24)$$
which is a contradiction and enables
$$h_S \le \frac{c}{vol(S)^{1/2-\varepsilon}} \iff |\partial S| \le c \cdot vol(S)^{1/2+\varepsilon}. \qquad (25)$$
Using $c$ as defined in Eq. 22 and $d_u \ge 1/\alpha$, we obtain
$$c = \sqrt{\frac{\frac{1}{d_{u^*}^{2\varepsilon}} + \frac{1}{d_{v^*}^{2\varepsilon}}}{R_{diam} \cdot \varepsilon}} \le \sqrt{\frac{\alpha^{2\varepsilon}}{R_{diam} \cdot \varepsilon}} \le \frac{\alpha^{\varepsilon}}{\sqrt{R_{diam} \cdot \varepsilon}}. \qquad (26)$$
Therefore,
$$h_S \le \frac{c}{vol(S)^{1/2-\varepsilon}} \le \frac{\alpha^{\varepsilon}}{\sqrt{R_{diam} \cdot \varepsilon}} \cdot vol(S)^{\varepsilon - 1/2}. \qquad (27)$$
As a result, the Cheeger constant of $G = (V, E)$ is mildly reduced (by the square root of the maximal resistance).

Proposition 3 (Conclusion). Let $(u^*, v^*)$ be a pair of nodes (maybe not unique) in $G = (V, E)$ with maximal resistance, i.e. $R_{u^* v^*} = R_{diam}$. Then, the Cheeger constant $h_G$ relies on the ratio between the maximal resistance $R_{diam}$ and its uninformative approximation $\frac{1}{d_{u^*}} + \frac{1}{d_{v^*}}$. The closer this ratio is to one, the easier it is to contain the Cheeger constant.

Proof. The referred ratio above is the ratio leading to a proper $c$ in Proposition 2. This is consistent with a Lovász regime where the spectral gap $\lambda'_2$ has a moderate value. However, for regimes with very small spectral gaps, i.e. $\lambda'_2 \to 0$, according to the Lovász bound, $R_{diam} \gg \frac{1}{d_{u^*}} + \frac{1}{d_{v^*}}$, and hence the Cheeger constant provided by Proposition 2 will tend to zero.

Figure 5: Left: Original graph with nodes colored as Louvain communities. Middle: $T^{CT}$ learnt by CT-Layer with edge colors as node importance [0, 1]. Right: Node and edge curvature from $T^{CT}$, using $p_u := 1 - \frac{1}{2}\sum_{w \sim u} T^{CT}_{uw}$ and $\kappa_{uv} := 2(p_u + p_v)/T^{CT}_{uv}$, with edge and node curvatures as color. Graph from the Reddit-B dataset.

We conclude that we can always find a moderate upper bound for the Cheeger constant of $G = (V, E)$, provided that the regime of the Lovász bound is also moderate.
Therefore, as the global properties of $G = (V, E)$ are captured by $G' = (V, E')$, a moderate Cheeger constant, when achievable, also controls the bottlenecks in $G' = (V, E')$.

Our methodology has focused on first exploring the properties of the commute times / effective resistances in $G = (V, E)$. Next, we have leveraged the spectral similarity to reason about the properties –particularly the Cheeger constant– of $G' = (V, E')$. In sum, we conclude that resistance diffusion via $T^{CT}$ is a principled way of preserving the Cheeger constant of $G = (V, E)$.

A.1.3 Resistance-based Curvatures

We refer to recent work by Devriendt and Lambiotte [24] to complement the contributions of Topping et al. [20] regarding the use of curvature to rewire the edges of a graph.

Theorem 3 (Devriendt and Lambiotte [24]). The edge resistance curvature has the following properties: (1) It is bounded by $(4 - d_u - d_v) \le \kappa_{uv} \le 2/R_{uv}$, with equality in the lower bound iff all edges incident to $u$ and $v$ are cut links; (2) It is upper-bounded by the Ollivier-Ricci curvature, $\kappa^{OR}_{uv} \ge \kappa_{uv}$, with equality if $(u, v)$ is a cut link; and (3) the Forman-Ricci curvature is bounded as follows: $\kappa^{FR}_{uv}/R_{uv} \le \kappa_{uv}$, with equality in the bound if the edge is a cut link.

The new definition of curvature given in [20] is related to the resistance distance and thus is learnable with the proposed framework (CT-Layer). Actually, the Balanced-Forman curvature (Definition 1 in [20]) relies on the uninformative approximation of the resistance distance.

Figure 5 illustrates the relationship between effective resistances / commute times and curvature on an exemplary graph from the COLLAB dataset. As seen in the Figure, effective resistances prioritize the edges connecting outer nodes with hubs or central nodes, while the intra-community connections are de-prioritized. This observation is consistent with the aforementioned theoretical explanations about preserving the bottleneck while breaking the intra-cluster structure. In addition, we also observe that the original edges between hubs have been deleted or extremely down-weighted.

Regarding curvature, hubs or central nodes have the lowest node curvature (this curvature increases with the number of nodes in a cluster/community). Edge curvatures, which rely on node curvatures, depend on the long-term neighborhoods of the connecting nodes. In general, edge curvatures can be seen as a smoothed version –since they integrate node curvatures– of the inverse of the resistance distances.

We observe that edges linking nodes of a given community with hubs tend to have similar edge-curvature values. However, edges linking nodes of different communities with hubs have different edge curvatures (Figure 5-right). This is due to the different number of nodes belonging to each community, and to their different average degrees inside their respective communities (property 1 of Theorem 3).

Finally, note that the range of edge curvatures is larger than that of resistance distances. The sparsifier transforms a uniform distribution of the edge weights into a less entropic one: in the example of Figure 5 we observe a power-law distribution of edge resistances.
As a result, $\kappa_{uv} := 2(p_u + p_v)/T^{CT}_{uv}$ becomes very large on average (edges with infinite curvature are not shown in the plot) and a log scale is needed to appreciate the differences between edge resistances and edge curvatures.

A.2 Appendix B: GAP-Layer

A.2.1 Spectral Gradients

The proposed GAP-Layer relies on gradients wrt the Laplacian eigenvalues, and particularly the spectral gap ($\lambda_2$ for $L$ and $\lambda'_2$ for $\mathcal{L}$). Although the GAP-Layer inductively rewires the adjacency matrix $A$ so that $\lambda_2$ is minimized, the gradients derived in this section may also be applied to gap maximization.

Note that while our cost function $L_{Fiedler} = \|\tilde{A} - A\|_F + \alpha (\lambda_2^*)^2$, with $\lambda_2^* \in \{\lambda_2, \lambda'_2\}$, relies on an eigenvalue, we do not compute it explicitly, as its computation has a complexity of $O(n^3)$ and would have to be repeated in every learning iteration. Instead, we learn an approximation of $\lambda_2$'s eigenvector $f_2$ and use its Dirichlet energy $\mathcal{E}(f_2)$ to approximate the eigenvalue. In addition, since $g_2 = D^{1/2} f_2$, we first approximate $g_2$ and then approximate $\lambda'_2$ from $\mathcal{E}(g_2)$.

Gradients of the Ratio-cut Approximation. Let $A$ be the adjacency matrix of $G = (V, E)$, and $\tilde{A}$ a matrix similar to the original adjacency but with minimal $\lambda_2$. Then, the gradient of $\lambda_2$ wrt each component of $\tilde{A}$ is given by
$$\nabla_{\tilde{A}} \lambda_2 := Tr\left[(\nabla_{\tilde{L}} \lambda_2)^T \cdot \nabla_{\tilde{A}} \tilde{L}\right] = \mathrm{diag}(f_2 f_2^T)\, \mathbf{1}\mathbf{1}^T - f_2 f_2^T, \qquad (28)$$
where $\mathbf{1}$ is the vector of $n$ ones, and $[\nabla_{\tilde{A}} \lambda_2]_{uv}$ is the gradient of $\lambda_2$ wrt $\tilde{A}_{uv}$. The above formula is an instance of the network derivative mining approach [46]. In this framework, $\lambda_2$ is seen as a function of $\tilde{A}$, and $\nabla_{\tilde{A}} \lambda_2$, the gradient of $\lambda_2$ wrt $\tilde{A}$, comes from the chain rule of the matrix derivative $Tr[(\nabla_{\tilde{L}} \lambda_2)^T \cdot \nabla_{\tilde{A}} \tilde{L}]$. More precisely,
$$\nabla_{\tilde{L}} \lambda_2 := \frac{\partial \lambda_2}{\partial \tilde{L}} = f_2 f_2^T \qquad (29)$$
is a matrix relying on an outer product (correlation). In the proposed GAP-Layer, $f_2$ is approximated by
$$f_2(u) = \begin{cases} +1/\sqrt{n} & \text{if } u \text{ belongs to the first cluster} \\ -1/\sqrt{n} & \text{if } u \text{ belongs to the second cluster,} \end{cases} \qquad (30)$$
i.e. we discard the $O\!\left(\frac{\log n}{n}\right)$ term from Eq. 8 (the non-linearities conjectured in [17]) in order to simplify the analysis. After reordering the entries of $f_2$ for the sake of clarity, $f_2 f_2^T$ is the following block matrix:
$$f_2 f_2^T = \begin{pmatrix} 1/n & -1/n \\ -1/n & 1/n \end{pmatrix}, \quad \text{whose diagonal part is} \quad \mathrm{diag}(f_2 f_2^T) = \begin{pmatrix} 1/n & 0 \\ 0 & 1/n \end{pmatrix}. \qquad (31)$$
Then, we have
$$\nabla_{\tilde{A}} \lambda_2 = \begin{pmatrix} 1/n & 1/n \\ 1/n & 1/n \end{pmatrix} - \begin{pmatrix} 1/n & -1/n \\ -1/n & 1/n \end{pmatrix} = \begin{pmatrix} 0 & 2/n \\ 2/n & 0 \end{pmatrix}, \qquad (32)$$
which explains the results in Figure 1-left: edges linking nodes belonging to the same cluster remain unchanged, whereas inter-cluster edges have a gradient of $2/n$. This provides a simple explanation for $T^{GAP} = \tilde{A}(S) \odot A$. The additional masking by the adjacency matrix ensures that we do not create new links.
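The block structure in Eq. 32 can be checked numerically; the following small sketch (ours, purely illustrative) evaluates Eq. 28 for a two-cluster indicator on a toy graph and shows that only inter-cluster entries receive a non-zero gradient:

```python
import numpy as np

n = 6
# Two clusters of 3 nodes: f2 = +-1/sqrt(n), per Eq. 30
f2 = np.array([1, 1, 1, -1, -1, -1]) / np.sqrt(n)
outer = np.outer(f2, f2)                               # f2 f2^T
grad = np.outer(np.diag(outer), np.ones(n)) - outer    # Eq. 28
# Intra-cluster entries: 1/n - 1/n = 0; inter-cluster: 1/n + 1/n = 2/n
print(np.round(grad * n, 2))                           # 0 inside blocks, 2 across blocks
```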
Gradients of the Normalized-cut Approximation. Similarly, using $\lambda'_2$ for graph rewiring leads to the following, more complex expression:
$$\nabla_{\tilde{A}} \lambda'_2 := Tr\left[(\nabla_{\tilde{\mathcal{L}}} \lambda'_2)^T \cdot \nabla_{\tilde{A}} \tilde{\mathcal{L}}\right] = d'\left\{g_2^T \tilde{A}^T \tilde{D}^{-1/2} g_2\right\}\mathbf{1}^T + d'\left\{g_2^T \tilde{A} \tilde{D}^{-1/2} g_2\right\}\mathbf{1}^T + \tilde{D}^{-1/2} g_2 g_2^T \tilde{D}^{-1/2}. \qquad (33)$$
However, since $g_2 = D^{1/2} f_2$ and $f_2 = D^{-1/2} g_2$, the gradient may be simplified as follows:
$$\nabla_{\tilde{A}} \lambda'_2 = d'\left\{f_2^T \tilde{D}^{1/2} \tilde{A}^T f_2\right\}\mathbf{1}^T + d'\left\{f_2^T \tilde{D}^{1/2} \tilde{A} f_2\right\}\mathbf{1}^T + \tilde{D}^{-1/2} f_2 f_2^T \tilde{D}^{-1/2}. \qquad (34)$$
In addition, considering symmetry for the undirected-graph case, we obtain:
$$\nabla_{\tilde{A}} \lambda'_2 = 2 d'\left\{f_2^T \tilde{D}^{1/2} \tilde{A} f_2\right\}\mathbf{1}^T + \tilde{D}^{-1/2} f_2 f_2^T \tilde{D}^{-1/2}, \qquad (35)$$
where $d'$ is an $n \times 1$ negative vector including derivatives of the degree wrt the adjacency and related terms. The obtained gradient is composed of two terms.

The first term contains the matrix $\tilde{D}^{1/2} \tilde{A}$, which is the adjacency matrix weighted by the square root of the degree; $f_2^T \tilde{D}^{1/2} \tilde{A} f_2$ is a quadratic form (similar to a Dirichlet energy for the Laplacian) which approximates an eigenvalue of $\tilde{D}^{1/2} \tilde{A}$. We plan to further analyze the properties of this term in future work.

The second term, $\tilde{D}^{-1/2} f_2 f_2^T \tilde{D}^{-1/2}$, down-weights the correlation term of the ratio-cut case, $f_2 f_2^T$, by the degrees, as in the normalized Laplacian. This results in a normalization of the Fiedler vector: $-1/n$ becomes $-\sqrt{d_u d_v}/n$ at the $uv$ entry, and similarly for $1/n$, i.e. each entry contains the average degree assortativity.

A.2.2 Beyond the Lovász Bound: the von Luxburg et al. Bound

The Lovász bound was later refined by von Luxburg et al. [40] via a new, tighter bound which replaces $d_{min}$ by $d_{min}^2$ in Eq. 1. Given that $\lambda'_2 \in (0, 2]$, as the number of nodes in the graph ($n = |V|$) and the average degree increase, $R_{uv} \approx 1/d_u + 1/d_v$. This is likely to happen in certain types of graphs, such as Gaussian similarity graphs –graphs where two nodes are linked if the negative exponential of the distance between the respective features of the nodes is large enough; $\varepsilon$-graphs –graphs where the Euclidean distances between the features of the nodes are $\le \varepsilon$; and $k$-NN graphs with large $k$ wrt $n$. The authors report a linear collapse of $R_{uv}$ with the density of the graph in scale-free networks, such as social network graphs, whereas a faster collapse of $R_{uv}$ has been reported in community graphs –graphs congruent with Stochastic Block Models (SBMs) [44].

Given the importance of the effective resistance $R_{uv}$ as a global measure of node similarity, the von Luxburg et al. refinement motivated the development of robust effective resistances, mostly in the form of $p$-resistances, given by $R^p_{uv} = \arg\min_f \{\sum_{e \in E} r_e |f_e|^p\}$, where $f$ is a unit flow injected at $u$ and recovered at $v$, and $r_e = 1/w_e$ with $w_e$ being the edge's weight [50]. For $p = 1$, $R^p_{uv}$ corresponds to the shortest path; $p = 2$ results in the effective resistance; and $p \to \infty$ leads to the inverse of the unweighted $u$-$v$-mincut⁴. Note that the optimal $p$ value depends on the type of graph [50], and $p$-resistances may be studied from the perspective of $p$-Laplacians [45, 51].

While $R_{uv}$ could be unbounded by minimizing the spectral gap $\lambda'_2$, this approach has received little attention in the literature on the mathematical characterization of graphs with small spectral gaps [52][53]; i.e., instead of tackling the daunting problem of explicitly minimizing the gap, researchers in this field have preferred to find graphs with small spectral gaps.

⁴ The link between CTs and mincuts is leveraged in the paper as an essential element of our approach.

A.3 Appendix C: Experiments

In this section, we provide details about the graphs contained in each of the datasets used in our experiments, give a detailed clarification about the architectures and experiments and, finally, report additional experimental results.

A.3.1 Datasets Statistics

Table 4 depicts the number of nodes, edges, average degree, assortativity, number of triangles, transitivity and clustering coefficients (mean and standard deviation) of all the graphs contained in each of the benchmark datasets used in our experiments. As seen in the Table, the datasets are very diverse in their characteristics. In addition, we use two synthetic datasets with 2 classes: Erdős-Rényi with $p_1 \in [0.3, 0.5]$ and $p_2 \in [0.4, 0.8]$, and Stochastic Block Model (SBM) with parameters $p_1 = 0.8$, $p_2 = 0.5$, $q_1 \in [0.1, 0.15]$ and $q_2 \in [0.01, 0.1]$.
Table 4: Dataset statistics. Parentheses in the Assortativity column denote the number of complete graphs (for which assortativity is undefined).

Dataset  | Nodes     | Edges     | Avg Degree | Triangles     | Transitivity | Clustering | Assortativity
REDDIT-B | 429.6±554 | 497.7±622 | 2.33±0.3   | 24±41         | 0.01±0.02    | 0.04±0.06  | -0.364±0.17 (0)
IMDB-B   | 19.7±10   | 96.5±105  | 8.88±5.0   | 391±868       | 0.77±0.15    | 0.94±0.03  | -0.135±0.16 (139)
COLLAB   | 74.5±62   | 2457±6438 | 37.36±44   | 12×10⁴±48×10⁴ | 0.76±0.21    | 0.89±0.08  | -0.033±0.24 (680)
MUTAG    | 2.2±0.1   | 19.8±5.6  | 2.18±0.1   | 0.00±0.0      | 0.00±0.00    | 0.00±0.00  | -0.279±0.17 (0)
PROTEINS | 39.1±45.8 | 72.8±84.6 | 3.73±0.4   | 27.4±30       | 0.48±0.20    | 0.51±0.23  | -0.065±0.2 (13)

In addition, Figure 6 depicts the histograms of the assortativity of all the graphs in each of the datasets used in our experiments. As shown in Table 4, assortativity is undefined in complete graphs (constant degree: all degrees are the same). Assortativity is defined as the normalized degree correlation; if the graph is complete, then both the correlation and its variance are 0, so the assortativity is 0/0.

Figure 6: Histograms of the assortativity of all the graphs in each of the datasets: (a) REDDIT, (b) IMDB-BINARY, (c) COLLAB, (d) MUTAG, (e) PROTEINS.

In addition, Figure 7 depicts the histograms of the average node degree of all the graphs in each of the datasets used in our experiments. The datasets are also very diverse in terms of topology, corresponding to social networks, biochemical networks and meshes.

Figure 7: Histograms of the average degree of all the graphs in each of the datasets: (a) REDDIT, (b) IMDB-BINARY, (c) COLLAB, (d) MUTAG, (e) PROTEINS.

A.3.2 Graph Classification GNN Architectures

Figure 8 shows the specific GNN architectures used in the experiments explained in Section 4 of the manuscript. Although the specific calculation of $T^{GAP}$ and $T^{CT}$ is given in Theorems 2 and 1, we also provide a couple of diagrams for better intuition.

Figure 8: Diagrams of the GNNs used in the experiments: (a) MinCut baseline, (b) GAP-Layer, (c) CT-Layer.

A.3.3 Training Parameters

The values of the hyperparameters used in the experiments are the defaults in the code repository⁵. We report average accuracies and standard deviations over 10 random iterations, using a different 85/15 stratified train-test split each time (we do not perform hyperparameter search), training during 60 epochs and reporting the results of the last epoch for each random run. We use an Adam optimizer with a learning rate of 5e−4 and weight decay of 1e−4. The batch sizes used for the experiments are shown in Table 5. Regarding the synthetic datasets, the parameters are: Erdős-Rényi with $p_1 \in [0.3, 0.5]$ and $p_2 \in [0.4, 0.8]$, and Stochastic Block Model (SBM) with $p_1 = 0.8$, $p_2 = 0.5$, $q_1 \in [0.1, 0.15]$ and $q_2 \in [0.01, 0.1]$.

⁵ https://github.com/AdrianArnaiz/DiffWire

Table 5: Batch size and dataset size per dataset.

Dataset       | Batch | Dataset size
REDDIT-BINARY | 64    | 1000
IMDB-BINARY   | 64    | 2000
COLLAB        | 64    | 5000
MUTAG         | 32    | 188
PROTEINS      | 64    | 1113
SBM           | 32    | 1000
Erdős-Rényi   | 32    | 1000

For the k-NN graph baseline, we choose $k$ such that the mean degree of the original graph is maintained, i.e. $k$ equal to the average degree. Our experiments also use 2 preprocessing methods, DIGL and SDRF. Unlike our proposed methods, both SDRF [20] and DIGL [25] use a set of hyperparameters that must be optimized for each specific graph, because both are also not inductive. This approach could be manageable for the task of node classification, where there is only one graph.
However, when it comes to graph classification, the number of graphs is huge (see Table 5) and it is not computationally feasible to optimize parameters for each specific graph. For DIGL, we use a fixed $\alpha = 0.001$ and an $\varepsilon$ chosen so as to keep the same average degree for each graph, i.e., we use a different, dynamically chosen $\varepsilon$ for each graph in each dataset, which maintains the same number of edges as the original graph. In the case of SDRF, the parameters define how stochastic the edge addition is ($\tau$), the graph edit distance upper bound (number of iterations), and an optional Ricci upper bound above which an edge will be removed at each iteration ($C^+$). We set the parameters to $\tau = 20$ (the edge added is always near the edge of lowest curvature) and $C^+ = 0$ (to force one edge to be removed at every iteration), and the number of iterations is set dynamically to $0.7 \cdot |V|$. Thus, we maintain the same number of edges in the new graph ($\tau = 20$ and $C^+ = 0$), i.e., the same average degree, and we keep the graph edit distance to the original graph bounded by $0.7 \cdot |V|$.

A.3.4 Latent Space Analysis

In this section, we analyze the two latent spaces produced by the models.

• First, we compare the CT embedding computed spectrally ($Z$ in Equation 2) with the CT embedding predicted by our CT-Layer ($Z$ in Definition 1) for a given graph, where each point is a node in the graph.
• Second, we compare the graph readout output for every model defined in the experiments (Figure 4), where each point is a graph in the dataset.

Spectral CT Embedding vs CT Embeddings Learned by CT-Layer. The well-known embeddings based on the Laplacian positional encodings (PEs) are typically computed beforehand and appended to the input vector $X$ as additional features [35, 36]. This requires an expensive computation, $O(n^3)$ (see Equation 2). Conversely, we propose a GNN layer that learns how to predict the CT embeddings (CTEs) for unseen graphs (Definition 1 and Figure 2), with a loss function that optimizes such CTEs. Note that we do not explicitly use the CTE features (PEs) for the nodes; instead, we use the CTs as a new diffusion matrix for message passing (given by $T^{CT}$ in Definition 1). Note that we could also use $Z$ as positional encodings in the node features, such that CT-Layer may be seen as a novel approach to learn positional encodings.
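For reference in the comparison that follows, the direct spectral computation of the CTEs in Equation 2 can be sketched as follows (ours; it uses the unnormalized-Laplacian branch of Eq. 2 and assumes a connected graph, so the single constant eigenvector can be dropped):

```python
import numpy as np

def spectral_cte(A):
    """Spectral commute-times embedding, Z = sqrt(vol(G)) * Lambda^{-1/2} F^T (Eq. 2).
    Returns Z with one column per node; O(n^3) due to the eigendecomposition."""
    vol = A.sum()
    L = np.diag(A.sum(axis=1)) - A
    lam, F = np.linalg.eigh(L)              # eigenvalues in ascending order
    lam, F = lam[1:], F[:, 1:]              # drop the constant eigenvector (lambda_1 = 0)
    Z = np.sqrt(vol) * np.diag(lam ** -0.5) @ F.T
    return Z                                # CT_uv = ||z_u - z_v||^2, per Eq. 3
```

This is exactly the pre-computation cost that CT-Layer avoids by predicting $Z$ from $X$ and $A$.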
Figure 9 (top) illustrates how the CTEs that are learned in CT-LAYER are able to better preserve the original topology of the graph (note how the nodes are more compactly embedded when compared to the spectral CTEs). Figure 9 (bottom) depicts a histogram of the effective resistances or commute times (CTs) (see Section 3.2 in the paper) of the edges according to CT-LAYER or the spectral CTEs. The histogram is computed from the upper triangle of the TCT matrix defined in Definition 1 (a short code sketch of this computation is given at the end of this subsection). Note that the larger the effective resistance of an edge, the more important that edge is considered to be (and hence the lower its probability of being removed [54]). We observe that in the histogram of the CTEs learned in CT-LAYER there is a 'small club' of edges with very large values and a large number of edges with low values, yielding a power-law-like profile. In contrast, the histogram of the effective resistances computed from the spectral CTEs exhibits a profile similar to a Gaussian distribution. From this result, we conclude that using LCT in the learning process of CT-LAYER shifts the distribution of the effective resistances of the edges towards an asymmetric distribution in which a few edges have very large weights and the majority of edges have low weights.

Figure 9: Top: CT embeddings predicted by CT-LAYER (left) and spectral CT embeddings (right). Middle: original graph from the COLLAB dataset; colors correspond to node degree. Bottom: histograms of normalized effective resistances (i.e., CT distances, the upper triangle of TCT) computed from the above CT embeddings. The CT-LAYER CTEs are reduced from 75 to 32 dimensions using Johnson-Lindenstrauss; finally, both CTEs are reduced from 32 to 2 dimensions using t-SNE.

Graph Readout Latent Space Analysis. To delve into the analysis of the latent spaces produced by our layers and models, we also inspect the latent space produced by the models (Figure 4) that use MINCUTPOOL (Figure 8a), GAP-LAYER (Figure 8b) and CT-LAYER (Figure 8c). Each point is a graph in the dataset, corresponding to the graph embedding of the readout layer. We plot the output of the readout layer for each model and then perform dimensionality reduction with t-SNE.

Observing the latent space of the REDDIT-BINARY dataset (Figure 10), CT-LAYER creates a disperse yet structured latent space for the embeddings of the graphs. The topology of this latent space shows that the method is able to capture different topological details. The main reason is the expressiveness of commute times as a distance metric when performing rewiring, which has been shown to be an optimal metric for measuring node structural similarity. In addition, GAP-LAYER creates a latent space where, although the two classes are also separable, the embeddings are more compressed, due to a more aggressive (yet still informative) change in topology. This change in topology is due to the change in bottleneck size that GAP-LAYER applies to the graph. Finally, MINCUT creates a more squeezed and compressed embedding, where both classes lie in the same regions and most of the graphs have collapsed representations, due to the limited expressiveness of this architecture.

Figure 10: REDDIT embeddings produced by (a) CT-LAYER, (b) MINCUT and (c) GAP-LAYER (Ncut).
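As referenced above, a minimal sketch of how the effective-resistance histogram in Figure 9 (bottom) can be extracted from a dense TCT matrix (variable names are illustrative, not the repository's):

```python
import numpy as np
import matplotlib.pyplot as plt

def ct_distance_histogram(T_ct, A, bins=50):
    # Take the upper triangle of T_ct and keep entries corresponding to actual
    # edges: each such entry is the (normalized) effective resistance of an edge.
    iu, ju = np.triu_indices_from(T_ct, k=1)
    mask = A[iu, ju] > 0
    resistances = T_ct[iu, ju][mask]
    plt.hist(resistances, bins=bins)
    plt.xlabel("normalized effective resistance (CT distance)")
    plt.ylabel("edge count")
    return resistances
```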
A.3.5 Architectures and Details of Node Classification Experiments
Applying our framework to a node classification task entails several considerations. First, this first implementation of our method works with dense A and X matrices, whereas node classification typically uses sparse representations of the edges. Thus, the implementation of our proposed layers is not straightforward for sparse graph representations. We plan to work on a sparse version of this method in future work.

Note that we have chosen benchmark datasets that are manageable with our dense implementation. In addition, we have chosen a basic baseline with one GCN layer to show the ability of the approaches to avoid under-reaching, over-smoothing and over-squashing.

The baseline GCN is a 1-layer GCN, and the two compared models are (a code sketch of both variants is given at the end of this subsection):
• One CT-LAYER that computes Z, followed by one GCN layer that uses A for message passing and X∥Z as features. This approach is a combination of Velingker et al. [35] and our method. See Figure 11c.
• One CT-LAYER that computes TCT, followed by one GCN layer that uses TCT for message passing and X as features. See Figure 11b.

Figure 11: Diagrams of the GNNs used in the experiments for node classification: (a) the GCN baseline (X, A → GCN → Y), (b) a CT-LAYER computing TCT, used as the message-passing matrix (A = TCT), and (c) a CT-LAYER computing Z, concatenated to the node features (X∥Z).

A promising direction for future work is to explore how to combine both approaches to leverage the best of each method on a wide range of graphs for node classification tasks. In addition, using this learnable CT distance to modulate message passing in more sophisticated ways is planned for future work.
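A minimal dense sketch of the two variants above. This is a hypothetical illustration: a one-layer GCN whose propagation matrix is either the original A or the learned TCT, and whose features are either X or X∥Z; `ct_layer` stands in for CT-LAYER and is assumed to return the quantity each variant needs.

```python
import torch
import torch.nn as nn

class DenseGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, X, A):
        # Symmetrically normalized dense propagation: D^{-1/2} (A + I) D^{-1/2} X W
        A_hat = A + torch.eye(A.size(0))
        d = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(d.pow(-0.5))
        return D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.lin(X)

# Variant (b): rewire message passing with the learned commute-times matrix
def forward_tct(ct_layer, gcn, X, A):
    T_ct = ct_layer(X, A)      # learned CT matrix (n x n)
    return gcn(X, T_ct)        # the GCN propagates over T_ct instead of A

# Variant (c): keep A, concatenate the learned CT embedding to the features
def forward_concat(ct_layer, gcn, X, A):
    Z = ct_layer(X, A)         # learned CT embedding (n x d)
    return gcn(torch.cat([X, Z], dim=1), A)
```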
A.3.6 Analysis of Correlation between Structural Properties and CT-LAYER Performance
To analyze the performance of our model on graphs with different structural properties, we analyze the correlation between accuracy, the graph's assortativity, and the graph's bottleneck (λ2) on the COLLAB and REDDIT datasets. If the error is consistent across all levels of assortativity and bottleneck size, the layer can generalize across different graph topologies.

As seen in Figure 14, Figure 12 (middle), and Figure 13 (middle), we do not identify any correlation or systematic pattern between graph classification accuracy, assortativity, and bottleneck size with CT-LAYER-based rewiring, since the proportions of wrong and correct predictions are regular at all levels of assortativity and bottleneck size.

In addition, note that while there is a systematic error of the model over-predicting class 0 in the COLLAB dataset (see Figure 12), this behavior is not explained by assortativity or bottleneck size, but by the unbalanced number of graphs in each class.

Figure 12: Analysis of assortativity, bottleneck and accuracy for the COLLAB dataset. Top: histograms of assortativity. Bottom: histograms of bottleneck size (λ2). Both are grouped by the actual label of the graph (left), by correct or wrong prediction (middle), and by predicted label (right). Confusion matrix (rows are true labels 0/1/2): [0.74, 0.11, 0.15], [0.23, 0.77, 0], [0.29, 0.046, 0.67].

Figure 13: Analysis of assortativity, bottleneck and accuracy for the REDDIT-B dataset. Top: histograms of assortativity. Bottom: histograms of bottleneck size (λ2). Both are grouped by the actual label of the graph (left), by correct or wrong prediction (middle), and by predicted label (right). Confusion matrix (rows are true labels 0/1): [0.91, 0.095], [0.2, 0.81].

Figure 14: Correlation between assortativity, λ2 and accuracy for CT-LAYER on (a) COLLAB and (b) REDDIT-B. The plots show that the proportions of correct and wrong predictions are regular at all levels of assortativity (x axis) and bottleneck size (y axis). For clarity, these visualizations (a and b) combine the two histograms in the middle columns of Figure 12 and Figure 13, respectively.

A.3.7 Computing Infrastructure
Table 6 summarizes the computing infrastructure used in our experiments.

Table 6: Computing infrastructure.
| Component | Details |
| GPU | 2x A100-SXM4-40GB |
| RAM | 1 TiB |
| CPU | 255x AMD 7742 64-Core @ 2.25 GHz |
| OS | Ubuntu 20.04.4 LTS |
<|im_end|> <|im_start|>assistant ### Review Title ### Review Text This paper proposes DiffWire, a fully differentiable, inductive, and parameter-free graph rewiring algorithm based on the Lovász bound. DiffWire uses the commute times as a relevance function for edge re-weighting, and proposes two types of new layers that either learn the commute times (CT-Layer) or optimize the spectral gap for the network and the task (GAP-Layer).
Experimental results show the proposed approach performs well on graph classification tasks. Strengths: - The paper is well-written with rigorous theoretical grounding. The connection of CT-Layer and GAP-Layer to the two sides of the Lovász bound and the graph's spectral gap is interesting. - Experimental results demonstrate the good performance of DiffWire on various graph classification datasets. - The performance differences of CT-Layer and GAP-Layer on SBM verify the assumption that GAP-Layer is more suitable when the Lovász bound is restrictive. Weaknesses: - Only two datasets have node features. As stated in the paper, DiffWire performs well on graphs with no node features, as it can leverage the topology of the graphs. However, I am not sure whether DiffWire can also utilize node features well when they are informative, and I ask for related justification or additional experiments. Questions: - Why are experiments only conducted on graph classification tasks? Can the proposed framework also work for node classification tasks (like [1], mentioned in the paper)? It would be great if the authors could discuss this or provide experiments on node classification datasets. - I am interested in whether graph homophily affects the effectiveness of the proposed method. A.3.1 claims to provide nodes, edges, average degree, assortativity, number of triangles, transitivity and clustering coefficients in Table 3, but I cannot find the assortativity (assuming this refers to homophily). It would be desirable to include related data and discussion on graph homophily. - I am unclear about how the statement "the smaller the graph's bottleneck, the more useful the CT-Layer is" explains the better performance of CT-Layer on COLLAB. Can this be further elaborated? Overall this paper introduces a differentiable framework for graph rewiring with good theoretical and empirical support. Despite these points of confusion, I tend to accept this paper. [1] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature. In *International Conference on Learning Representations*, 2022. ### Review Rating ### Review Confidence <|im_end|>
_VoSnnpUDkC
graphicsinterface.org/Graphics_Interface/2021/Conference
2021
Generating Adversarial Examples for Robust Deception against Image Transfer and Reloading
["Chuan Zhou", "Duohe Ma", "Tianwei Zhang", "Liming Wang"]
Adversarial examples play an irreplaceable role in evaluating deep learning models' security and robustness. It is necessary and important to understand the effectiveness of adversarial examples to utilize them for model improvement. In this paper, we explore the impact of input transformation on adversarial examples. First, we discover a new phenomenon. Reloading an adversarial example from the disk or transferring it to another platform can deactivate its malicious functionality. The reason is that reloading or transferring images can reduce the pixel precision, which will counter the perturbation added by the adversary. We validate this finding on different mainstream adversarial attacks. Second, we propose a novel Confidence Iteration method, which can generate more robust adversarial examples. The key idea is to set the confidence threshold and add the pixel loss caused by image reloading or transferring into the calculation. We integrate our solution with different existing adversarial approaches. Experiments indicate that such integration can significantly increase the success rate of adversarial attacks.
["Adversarial Examples", "Robustness", "Reloading"]
ABSTRACT
Adversarial examples play an irreplaceable role in evaluating deep learning models' security and robustness. It is necessary and important to understand the effectiveness of adversarial examples in order to utilize them for model improvement. In this paper, we explore the impact of input transformation on adversarial examples. First, we discover a new phenomenon: reloading an adversarial example from the disk or transferring it to another platform can deactivate its malicious functionality. The reason is that reloading or transferring images can reduce the pixel precision, which counters the perturbation added by the adversary. We validate this finding on different mainstream adversarial attacks. Second, we propose a novel Confidence Iteration method, which can generate more robust adversarial examples. The key idea is to set a confidence threshold and add the pixel loss caused by image reloading or transferring into the calculation. We integrate our solution with different existing adversarial approaches. Experiments indicate that such integration can significantly increase the success rate of adversarial attacks.

Keywords: Adversarial Examples, Robustness, Reloading

Index Terms: Computing methodologies—Computer vision problems; Neural networks—Security and privacy—Software and application security

1 INTRODUCTION
DNNs are well known to be vulnerable to adversarial attacks [1]. An adversarial algorithm can add small but carefully crafted perturbations to an input, which can mislead the DNN into giving incorrect output with high confidence. Extensive work has been done towards attacking supervised DNN applications across various domains such as images [2-5], audio [6-8], and natural language processing [9, 10]. Since DNNs are widely adopted in different AI tasks, adversarial attacks can bring significant security threats and damage our everyday lives. Moreover, researchers have demonstrated the possibility of adversarial attacks in the physical world [11, 12], proving that the attacks are realistic and severe.

In addition to attacking DNN models, generating powerful and robust adversarial examples also has very positive uses. First, adversarial examples can be used to test and evaluate the robustness and security of DNN models. The more sophisticated and stealthy the adversarial examples are, the more convincing their evaluation results will be. Second, generating adversarial examples can also help defeat such adversarial attacks. One promising defense is adversarial training [2], where adversarial examples are included in the training dataset to train a model that is resistant to those adversarial examples. Obviously, if we inject more powerful adversarial examples into the training set, the model will be more robust.

In this paper, we explore and evaluate the effectiveness of adversarial examples under transformation. Guo et al. [13] studied image transformations (cropping-scaling, bit-depth reduction, compression) as a defense against adversarial attacks; Dziugaite et al. [14] conducted comprehensive evaluations of the effectiveness of adversarial examples under JPG compression. Unlike the above work, which actively transforms the images, we consider cases where images are passively transformed due to reloading or transferring. We discover that an image will lose certain precision when it is reloaded from the disk or transferred to a different platform. Such precision reduction in an adversarial example can counter the adversarial perturbation, making the attack ineffective.
We evaluate adversarial examples' effectiveness with different mainstream methods and find that most of the methods fail after the image is reloaded or transferred.

To generate robust adversarial examples against image reloading or transferring, we propose a novel approach, Confidence Iteration (CI). Generally, our CI approach dynamically checks a generated example's confidence score to evaluate its effectiveness after being reloaded or transferred. By doing so, it can filter out the less qualified adversarial examples.

Our approach has several advantages. First, it is generic and can be integrated with existing adversarial attacks for enhancement, because it can be called outside of the adversarial algorithm. Second, the adversarial examples generated by our approach have higher success rates before and after they are reloaded or transferred. Third, the adversarial examples generated by our approach have a lower detection rate by state-of-the-art defense solutions. We expect that our solution can help researchers better understand, evaluate and improve DNN models' resistance against various adversarial examples. In summary, we make the following contributions:
• We are the first to find that adversarial examples can become ineffective after being reloaded or transferred. We confirm our findings through comprehensive evaluations;
• We propose an effective method, Confidence Iteration, to generate more robust adversarial examples, which can maintain high attack performance under image transformation.

The rest of the paper is organized as follows: Section 2 gives the background and related work on adversarial attacks and defenses. Section 3 describes and evaluates adversarial examples' effectiveness after image reloading and transferring. We introduce our approach in Section 4 and evaluate it in Section 5. Section 6 concludes the paper.

2 RELATED WORKS
In this section, we give a brief background on attack and defense techniques for adversarial examples. We also introduce the resistance of adversarial examples against input transformation.

2.1 Adversarial Attack Techniques
An adversary carefully crafts adversarial examples by adding imperceptible and human-unnoticeable modifications to the original clean input. The target model will then predict this adversarial example as one attacker-specified label (targeted attack), or as an arbitrary incorrect label (untargeted attack). Most adversarial attacks require that the Lp norm of the added modifications not exceed a threshold parameter ε. Different adversarial attack techniques have been proposed. We describe six common attack methods below.

Fast Gradient Sign Method (FGSM) [2]. The intuition of FGSM is that the adversary can modify the input such that the direction of the change is completely consistent with the direction of the gradient, making the loss function increase at the fastest speed. Such changes can cause the greatest impact on the classification results, making the neural network misclassify the modified input.
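A minimal PyTorch sketch of this one-step update, x_adv = x + ε · sign(∇x J(x, y)), the standard formulation of FGSM (the model and images-in-[0,1] convention are assumptions of this sketch):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)       # J(x, y_true)
    loss.backward()
    # Step in the direction that increases the loss fastest, keep pixels valid
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```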
Basic Iterative Method (BIM) [15]. This is a simple extension of FGSM. The basic idea of BIM is to apply FGSM for several iterations, with a small step size for each iteration. The number of iterations is determined by min(ε + 4, 1.25ε).

DeepFool [16]. DeepFool is based on the assumption that models are fully linear, so that there is a polyhedron that can separate the individual classes. The DeepFool attack searches for adversarial examples with minimal perturbations within a specific region using the L2 distance. Therefore, one big advantage of DeepFool is that it can automatically determine the optimal perturbation threshold ε.

Decision-Based Attack [17]. The decision-based attack starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. It is a method that relies only on the model's final decision. At each step, a perturbation is sampled from a proposal distribution, which reduces the distance of the perturbed image towards the original input. The attack finds progressively smaller adversarial perturbations according to a given adversarial criterion, and finally generates an adversarial example with little disturbance near the classification boundary.

HopSkipJump Attack [18]. The HopSkipJump attack is an algorithm based on a novel estimate of the gradient direction using binary information at the decision boundary. Unlike decision-based attacks, which need a large number of model queries, the HopSkipJump attack requires significantly fewer model queries and less generation time. Moreover, in the HopSkipJump attack, the perturbations are used to estimate a gradient direction to handle the inefficiency of the Boundary Attack.

Projected Gradient Descent (PGD) [19]. The PGD attack consists of initializing the search for an adversarial example at a random point within the allowed norm ball, then running several iterations of the basic iterative method [15] to find an adversarial example.

2.2 Adversarial Example Defense Techniques
Existing approaches for defeating adversarial examples mainly fall into two categories, as described below.

Adversarial Training. Szegedy et al. [2] proposed that by training the neural network with a mixed dataset of adversarial examples and original clean samples, the new model will be resistant to adversarial examples. However, Moosavi-Dezfooli et al. [20] showed that an adversary can still generate new examples to fool the defended model.

Adversarial Example Detection. Instead of enhancing the models, these approaches aim to detect adversarial examples. One typical solution is de-noising. Mustafa et al. [21] proposed a wavelet reconstruction algorithm that maps adversarial examples lying outside the manifold region back to the manifold region of natural images through a deep image reconstruction network. It can effectively restore the normal discriminability of the classifier. Hinton et al. [22] adopted the reconstruction process of a capsule network to detect adversarial examples automatically.

2.3 Transformation and Distortion of Adversarial Examples
Most neural networks trained for image classification are trained on images that have undergone JPG compression, which constitute the original data subspace.

Dziugaite et al. [14] find that perturbations of natural images (adding scaled white noise or randomly corrupting a small number of pixels) are almost certain to move an image out of the JPG subspace and, therefore, out of the data subspace. Adversarial examples can, therefore, induce the classification network to give wrong classification results. However, when the degree of disturbance is small, the pixel perturbation values superimposed on the original image by the adversarial example are also small, which means that these perturbation values are not robust to image compression, storage, and transmission. This pixel loss is the reason why image transformation or distortion can defeat adversarial examples. Obviously, how to preserve the pixel perturbation is the key to this problem.
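As a quick illustration of this fragility (a hypothetical sketch, not the authors' code): saving a perturbed image as JPG and reading it back can erase most of a small perturbation.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
delta = rng.choice([-1, 0, 1], size=clean.shape)           # a tiny perturbation
adv = np.clip(clean.astype(np.int16) + delta, 0, 255).astype(np.uint8)

cv2.imwrite("adv.jpg", adv, [cv2.IMWRITE_JPEG_QUALITY, 80])  # lossy compression
reloaded = cv2.imread("adv.jpg")

mask = delta != 0
survived = np.mean((reloaded.astype(np.int16) - clean)[mask] == delta[mask])
print(f"fraction of perturbed pixels surviving the JPG round trip: {survived:.2%}")
```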
3 TRANSFERRING AND RELOADING OF ADVERSARIAL EXAMPLES
We study different popular image formats and approaches to adversarial attacks, and conclude that image transferring and reloading can significantly reduce adversarial attacks' success rates.

3.1 Root Cause
There are two reasons why image transferring and reloading can deactivate adversarial examples. First, in an adversarial image generated using existing approaches, each pixel is usually represented as a float value. When we store the image on the disk, the pixels are converted into an int type to save space. Such accuracy loss can make the adversarial example ineffective when we reload it from the disk. We find that the mainstream image formats (BMP, JPEG, and PNG) all perform such pixel conversion. Second, when we transfer an image to a different platform via networks, the image is usually compressed to save network traffic. For instance, when we use the WeChat application to send pictures from a smartphone to a laptop, we find that the application compresses the pictures with an 80% compression rate by default.

Although these types of conversion and compression have a human-unnoticeable impact on the images, they can significantly affect adversarial attacks' success rates. The adversary's goal is to find the smallest perturbation that causes the model to classify the image into an attack-specific category. Common techniques usually move the original clean samples towards the classification boundary and stop when the samples just cross the boundary, to make sure that the added perturbation is small. Consequently, the adversarial examples have very high precision requirements on their pixel values. The small changes caused by image reloading or transferring can move the adversarial images to classes different from the one the adversary desires, making the adversarial examples ineffective. Here, we use Figure 1 to directly illustrate the adverse effects of image reloading and image format transformation on the adversarial effect of an adversarial example. Below we conduct a set of experiments to validate those effects empirically.

Figure 1: Red dots represent data, and the gray line represents the hyperplane that can separate the individual classes. The gray dots represent the inner boundary of the adversarial examples. The green dot represents a specific adversarial example. The yellow dot illustrates that reloading can project this adversarial example back into the original sample space.
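A small numeric sketch of the first root cause (the float-to-int conversion), assuming pixel intensities in [0, 255]: a sub-integer adversarial perturbation is simply rounded or truncated away when the image is written to disk.

```python
import numpy as np

pixel = np.array([113.0])          # a clean pixel value
perturbed = pixel + 0.37           # perturbation smaller than one intensity level

# PNG/BMP round to the nearest integer; JPG truncates the decimals.
saved_png = np.round(perturbed).astype(np.uint8)   # -> 113, perturbation gone
saved_jpg = perturbed.astype(np.uint8)             # -> 113, perturbation gone

print(saved_png, saved_jpg)        # both equal the clean pixel after reloading
```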
3.2 Experiments
3.2.1 Impact of Image Reloading
We first empirically check the effectiveness of adversarial examples after being saved and reloaded.

Precision loss. We generate a 3×3 image and add to each pixel value a random perturbation q between 0 and 1. Then we save the image in three different formats (JPG, BMP, PNG) and reload it into memory. All operations are done under Windows 10.

Figure 2 shows the pixel values of the original image (2a) and the reloaded JPG (2b), PNG (2c) and BMP (2d) images, respectively. We observe that each image format has precision loss due to the type conversion from float to int: the JPG format directly discards the decimals, whereas the PNG and BMP formats round off the decimals. Although such approximation does not cause visually perceptible effects on the image, it can affect the results of adversarial attacks, as these attacks require precise pixel-level perturbations. We demonstrate the effects below.

Figure 2: Pixel values before and after saving/reloading: (a) original, (b) JPG, (c) PNG, (d) BMP.

Effectiveness of adversarial examples. We measure the performance of adversarial examples after being reloaded or transferred. We select six commonly used adversarial attack approaches: Decision-Based Attack [17], HopSkipJump Attack [18], DeepFool [16], BIM [15], FGSM [2], and PGD [19]. For each approach, we generate some adversarial examples. Decision-Based Attack, HopSkipJump Attack and PGD use a ResNet50 classifier; DeepFool uses a ResNet34 classifier; BIM and FGSM use a VGG11 classifier. Furthermore, all adversarial examples are tested with the classifier used at the time of generation.

We find that all six adversarial attack methods measure the classification number and confidence of adversarial examples at generation time to judge whether the adversarial attack is successful. In fact, the classification number and confidence at this point are not the true ones, because the model is not classifying a real image: all of the methods use models (for example, ResNet50) to classify the generated NumPy array instead of the actual picture. That is, at this point they have not yet produced the image form of the adversarial examples. To test the effectiveness of the adversarial examples, we use cv2.imwrite and plt.savefig to save the adversarial examples locally. Next, we use the same model (for example, ResNet50) to load and classify the adversarial examples saved locally. In this paper, we refer to this behavior as "Reloading"; a sketch of the procedure is given at the end of this subsection.

We also find that when images are transmitted through instant-messaging software, companies compress them to save bandwidth, which results in a loss of pixels in the image; this is detrimental to adversarial examples generated with subtle perturbations. For example, when we use WeChat to send an image to a friend, the friend sees a compressed image that costs only a small amount of traffic. Instead of clicking the "download the original" button, we save the compressed image locally and use the above model to classify it. This process is referred to as "Transferring" in this paper.

We use Figure 3 and Figure 4 to illustrate adversarial examples' confidence values after being reloaded and transferred. Different colors represent different classification numbers, the height of each column represents the confidence, and each block represents the six algorithms from left to right: Decision-Based Attack [17], HopSkipJump Attack [18], DeepFool [16], BIM [15], FGSM [2], and PGD [19]. We can see that the initial image is correctly classified with high confidence under all six algorithms. Besides, all the adversarial examples generated by the algorithms are classified into other categories, which means that the adversarial examples of all six algorithms have the adversarial ability to deceive the classification models into giving false results.

Figure 3: Classification number and confidence of adversarial examples after being reloaded and transferred.

Figure 4: Classification number and confidence of adversarial examples after being reloaded and transferred, for another picture.

Surprisingly, we find that regardless of whether the adversarial examples are saved as JPG, PNG, or BMP, most of them are classified as the original clean image once they are reloaded or transferred; some even with high confidence. As reflected in the figures, after Reloading or Transferring the image is classified as the original clean image (shown with the same color).
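As referenced above, a minimal sketch of this Reloading check. This is a hypothetical harness: the preprocessing pipeline is the standard torchvision one, not necessarily the authors' exact setup.

```python
import cv2
import torch
from torchvision import models, transforms

model = models.resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def reload_and_classify(adv_array, path="adv.png"):
    # adv_array: H x W x 3 BGR uint8 array produced by an attack
    cv2.imwrite(path, adv_array)        # save to disk (float precision is lost here)
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
    conf, label = probs.max(dim=1)
    return label.item(), conf.item()    # classification number and confidence
```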
We now show this phenomenon with more direct data. Table 1 and Table 2 report two further groups of Reloading and Transferring experiments. The entries in the tables are the classification number, with the confidence in parentheses. We find that many of the adversarial examples generated by the six kinds of adversarial attacks cannot maintain their attack ability after being reloaded or transferred: after Reloading or Transferring, the adversarial examples are classified by the classifier with the labels of the original clean samples (such as 90 and 129).

Table 1: Classification number and confidence of an adversarial example after being reloaded and transferred.
| Attack | Decision | HopSkipJump | Deepfool | BIM | FGSM | PGD |
| Original image | 90 (74.060%) | 90 (74.060%) | 90 (99.811%) | 90 (99.582%) | 90 (97.312%) | 90 (100.000%) |
| Adversarial image | 852 (15.062%) | 84 (48.441%) | 95 (49.315%) | 95 (61.163%) | 735 (44.672%) | 318 (100.000%) |
| JPG reloading | 90 (72.291%) | 90 (69.921%) | 90 (52.677%) | 90 (46.958%) | 84 (99.217%) | 90 (99.651%) |
| JPG transferring | 90 (63.671%) | 90 (93.686%) | 90 (52.985%) | 90 (47.276%) | 84 (99.402%) | 90 (96.650%) |
| PNG reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| PNG transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |
| BMP reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| BMP transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |

Table 2: Classification number and confidence of another adversarial example after being reloaded and transferred.
| Attack | Decision | HopSkipJump | Deepfool | BIM | FGSM | PGD |
| Original image | 129 (89.531%) | 129 (89.531%) | 129 (86.374%) | 129 (71.917%) | 129 (91.494%) | 129 (98.182%) |
| Adversarial image | 852 (12.363%) | 132 (36.282%) | 128 (48.604%) | 128 (98.746%) | 915 (5.642%) | 128 (97.858%) |
| JPG reloading | 132 (35.742%) | 129 (65.183%) | 129 (60.726%) | 129 (87.825%) | 132 (51.324%) | 129 (81.000%) |
| JPG transferring | 132 (34.461%) | 129 (58.947%) | 129 (88.792%) | 129 (85.496%) | 132 (30.130%) | 129 (98.601%) |
| PNG reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| PNG transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |
| BMP reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| BMP transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |

To verify that adversarial examples with high confidence also suffer from the Reloading and Transferring problems, we conduct further experiments, with results in Table 3. We find that the high-confidence adversarial examples generated from Picture1 to Picture4, shown in Figure 5, are to a large extent classified as the original clean samples after being reloaded or transferred (Table 3), proving that adversarial examples with high confidence also have the Reloading and Transferring problems.

Figure 5: Adversarial examples generated from Picture1 to Picture4.

Table 3: Classification number and confidence of adversarial examples generated from Picture1 to Picture4 after being reloaded and transferred.
| FGSM | Picture1 | Picture2 | Picture3 | Picture4 |
| Original image | 106 (94.478%) | 288 (90.196%) | 173 (92.451%) | 376 (99.613%) |
| Adversarial image | 343 (84.336%) | 293 (95.005%) | 104 (86.118%) | 371 (69.347%) |
| JPG reloading | 106 (99.904%) | 288 (49.574%) | 104 (28.730%) | 371 (34.062%) |
| JPG transferring | 106 (99.953%) | 288 (54.895%) | 104 (31.623%) | 371 (33.070%) |
| PNG reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| PNG transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |
| BMP reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| BMP transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |

All data in Tables 1 to 3 are derived from the ResNet50 model.

Cross validation. Instead of using the same model to verify the adversarial examples' effectiveness, we conduct two sets of cross-validation experiments: one set uses the Reloaded images, and the other uses the Transferred images. The classification number of the initial clean image is 129, and the classification numbers of its adversarial examples generated by the adversarial algorithms are no longer 129, which means that the adversarial attacks are successful (not shown in Table 4). We feed the two sets of adversarial examples generated by algorithm A into algorithm B after they are Reloaded or Transferred, to cross-verify the adversarial effectiveness of the examples after being Reloaded or Transferred. Table 4 shows their classification numbers and confidences under the other algorithms.

Table 4: Classification number and confidence of adversarial examples after being reloaded and transferred, using cross-validation.
| | Original clean image | Deepfool | BIM | FGSM |
| Deepfool, reloading | 129 (89.16%) | 129 (72.14%) | 129 (86.31%) | 128 (57.74%) |
| Deepfool, transferring | 129 (91.25%) | 128 (77.49%) | 129 (89.12%) | 129 (67.91%) |
| BIM, reloading | 128 (72.14%) | 128 (77.30%) | 129 (60.48%) | 129 (65.53%) |
| BIM, transferring | 129 (91.81%) | 128 (78.98%) | 141 (47.95%) | 129 (85.49%) |
| FGSM, reloading | 132 (82.26%) | 129 (40.96%) | 129 (14.19%) | 129 (58.89%) |
| FGSM, transferring | 129 (65.64%) | 129 (42.91%) | 129 (15.98%) | 129 (88.35%) |
| PGD, reloading | 129 (60.72%) | 129 (87.82%) | 129 (89.18%) | 129 (81.00%) |
| PGD, transferring | 129 (66.12%) | 129 (68.06%) | 129 (89.11%) | 129 (98.60%) |

Obviously, whether after Reloading or after Transferring, the adversarial examples lose their effectiveness both under their own algorithm and under the other adversarial algorithms. After WeChat transmission, due to the image compression applied during the transmission process, four additional items in the table are classified as recovered clean samples.

Multiple attacks. In this experiment, we use the existing adversarial examples as input and conduct further adversarial attacks on them. The new adversarial examples are then tested for effectiveness after being reloaded; the results are shown in Table 5. Even if we feed the generated adversarial examples into another generation algorithm, the reduction of adversarial effectiveness caused by Reloading and Transferring still exists and is very serious. In Table 5, apart from one item for which the adversarial example failed to be generated across models and one item misclassified as classification number 533, all other adversarial examples are classified with the initial clean sample's classification number 129.

Table 5: Classification number and confidence of adversarial examples after multiple attacks, for an image with classification number 129.
| | +0 | +Deepfool | +BIM | +FGSM | +PGD |
| Deepfool | 129 (60.72%) | 129 (71.88%) | 129 (95.91%) | Unsuccessful generation | 129 (92.27%) |
| BIM | 129 (89.82%) | 129 (92.32%) | 129 (99.37%) | 129 (98.65%) | 129 (90.22%) |
| FGSM | 129 (89.18%) | 129 (72.24%) | 533 (31.53%) | 129 (71.53%) | 129 (55.72%) |
| PGD | 129 (81.00%) | 129 (82.52%) | 129 (88.24%) | 129 (99.71%) | 129 (55.72%) |

The tables above consistently show that Reloading and Transferring significantly reduce the effectiveness of adversarial attacks. This is true for single attacks, cross attacks, and multiple attacks.
3.3 Spectrum Analysis
Next, spectrum analysis is performed on the adversarial examples used in Table 1 and Table 2.

The spectrum analysis results are shown in Figure 6. From left to right are the initial images and the adversarial examples generated by the BIM, FGSM and DeepFool algorithms. We find that the DeepFool algorithm retains the appearance of the original clean sample to the greatest extent. In contrast, the FGSM algorithm introduces more noise points, which is reflected in the spectrum map: FGSM generates a more uniform spectrum with more low-frequency components. This is why the adversarial examples generated by the FGSM algorithm show better resistance to the Reloading and Transferring losses in Table 1 and Table 2.

Figure 6: Spectrum analysis of the pictures in Table 1 and Table 2.

The wavelet transform spectrum diagrams of the original picture and of the adversarial examples of BIM, FGSM, and DeepFool are shown in Figure 7, from left to right. Clearly, in the wavelet domain the original clean image is closest to the adversarial example generated by DeepFool, in both the low- and high-frequency components. This means that DeepFool mounts its attack with minimal perturbation and is therefore the least likely to maintain its adversarial effect. The FGSM algorithm applies a large disturbance, so its high- and low-frequency components in the wavelet domain differ substantially from those of the original clean image, and it maintains its adversarial effect relatively well.

Figure 7: Wavelet transform spectrum diagrams of the original picture and of the adversarial examples of BIM, FGSM and DeepFool, from left to right.
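Spectrum maps like those in Figure 6 can be produced with a standard 2-D FFT; a minimal sketch (this is the common recipe for such plots, not necessarily the authors' exact code):

```python
import numpy as np

def log_magnitude_spectrum(gray_img):
    # 2-D FFT, shift the zero-frequency component to the center,
    # and take the log magnitude, as is standard for spectrum plots.
    F = np.fft.fftshift(np.fft.fft2(gray_img))
    return np.log1p(np.abs(F))
```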
4 A ROBUST APPROACH TO GENERATING ADVERSARIAL EXAMPLES
As discussed in Section 3, adversarial examples generated with existing techniques become ineffective after being reloaded or transferred. In this section, we propose an efficient and robust approach, Confidence Iteration (CI), to produce adversarial examples that are resistant to the processes of Reloading and Transferring. CI is generic: it can be integrated with all existing adversarial example techniques to improve their robustness while maintaining their advantages.

The intuition behind our CI approach is that an adversarial example's confidence score for the attacker-specific classification number reflects the example's resistance against input reloading or transferring. We use one existing technique (e.g., FGSM, BIM) to generate an adversarial example, save it to the local disk, and measure its confidence score for the target class. (This actually involves reloading the image.) If the confidence score is higher than a threshold, we accept this image. Otherwise, we continue to iterate, save the example locally (or transform it through WeChat transmission), and measure the target class's confidence score after reloading, until it meets the confidence requirement or exceeds the iteration limit. When the confidence value c meets the expected requirement p, the adversarial example image saved to the hard disk has some resistance to variations of its pixel values. Besides, the repeated gradient ascent caused by multiple iterations keeps the pixel values changing in a consistent direction; that is to say, after many iterations, the fractional parts of some pixel values are promoted into the integer part and can no longer be discarded.

To measure whether an adversarial example remains effective after image distortion, we adopt the wavelet reconstruction algorithm [21]. As the name implies, we first process the adversarial examples with a wavelet denoising algorithm; then we feed the denoised image into ESRGAN, a super-resolution reconstruction network. Some adversarial examples with weak attack ability are classified as the initial clean samples after being processed by this algorithm, which means that their attack ability has been lost. By checking the adversarial examples processed by the wavelet reconstruction algorithm, we can measure the generated adversarial examples' robustness and effectiveness.
Algorithm 1 summarizes our CI approach.

Algorithm 1: Confidence Iteration
Input: a classifier f with loss function J; a real example x with ground-truth label y_true; the size of the perturbation ε; the iteration limit T_max and the confidence threshold p.
Output: the number of iterations T; an adversarial example x* with ||x* − x||_∞ ≤ Tε.
1: T = 0
2: x_T = x
3: Save x_T as a picture x_T^real on the local hard drive (or transform it through WeChat transmission)
4: Input x_T^real to f and obtain the confidence c and the gradient ∇_x J(x_T^real, y_true)
5: while (T < T_max) and (c < p) do
6:   x* = x_T^real + ε ∇_x J(x_T^real, y_true)
7:   T = T + 1
8:   x_T = x*
9:   Save x_T as a picture x_T^real on the local hard drive (or transform it through WeChat transmission)
10:  Reload x_T^real into f and obtain the confidence c and the gradient ∇_x J(x_T^real, y_true)
11: end while

We first input the clean image, generate its adversarial example, and save the adversarial example locally. The local adversarial example is then reloaded into the classification model, which judges whether the adversarial attack succeeds. Given that the attack succeeds, we obtain the confidence value c of the adversarial attack by reloading the adversarial example from the hard disk into the classification model. We then compare the expected confidence threshold p with the current confidence c. If the current confidence c is less than the expected threshold p and the current iteration number T is less than the iteration limit T_max, we run another round of the Confidence Iteration algorithm, save the generated adversarial example locally, and compare c and p again. The whole process does not stop until c is greater than p or T equals T_max.
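A Python sketch of Algorithm 1; `save_and_reload` (write to disk or WeChat-transfer, read back) and `target_confidence` (confidence of the attacker-specific class after the round trip) are illustrative placeholders, not the authors' code:

```python
import torch

def confidence_iteration(f, loss_fn, x, y_true, eps, t_max, p,
                         save_and_reload, target_confidence):
    """Iterate until the reloaded example keeps confidence c >= p or T = t_max."""
    T = 0
    x_real = save_and_reload(x)           # write to disk, read back (precision loss)
    c = target_confidence(f, x_real)      # confidence after the round trip
    while T < t_max and c < p:
        x_real = x_real.clone().detach().requires_grad_(True)
        loss = loss_fn(f(x_real), y_true)
        loss.backward()
        # Gradient-ascent step of size eps, as in line 6 of Algorithm 1
        x = (x_real + eps * x_real.grad).clamp(0, 1).detach()
        T += 1
        x_real = save_and_reload(x)       # re-save, re-load, re-check
        c = target_confidence(f, x_real)
    return x_real, T, c
```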
It is precisely because the CI algorithm has this download, reload, and confidence-judgment process that we can apply it to the back end of any adversarial example generation algorithm to enhance the adversarial examples' robustness against reloading and transferring.

5 EVALUATION
In this section, we conduct experiments to validate the effectiveness of our proposed approach.

5.1 Configurations
Dataset. To better reflect the real-world setting, we implement a crawler to collect images from websites instead of using existing image datasets. We consider the Inception v3 model [23] and restrict the scope of the crawled images to the categories recognized by this model. We filter out the crawled images that the Inception v3 model cannot correctly recognize, and finally establish a dataset consisting of around 1300 clean images with correct labels.

Implementations. We consider two adversarial example techniques: FGSM and BIM. Our CI approach is generic and can be applied to other techniques as well. We select VGG11 [24] as the target model. We implement these techniques with the CI algorithm using the PyTorch library. We set the iteration upper limit T_max to 6 and the confidence threshold p to 70%.

Metrics. We adopt two metrics to evaluate the effectiveness of adversarial examples: (1) the success rate, defined in Equation 1a, where N_s is the number of adversarial examples that are misclassified by the target model f while their clean images are classified correctly, and N_f is the number of adversarial examples that are predicted with the corresponding clean image's label; and (2) the average confidence score, defined in Equation 1b, where p_i is the highest false-classification confidence from the target model. (Here we do not consider adversarial examples that the model classifies with their clean sample's label.)

P_adv = N_s / (N_s + N_f)   (1a)
C_ave = (1/N_s) Σ_{i=1}^{N_s} p_i   (1b)
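A tiny sketch of how these two metrics are computed in practice (a direct transcription of Equations 1a and 1b):

```python
def success_rate(n_success, n_fail):
    # P_adv = N_s / (N_s + N_f), Equation (1a)
    return n_success / (n_success + n_fail)

def average_confidence(confidences):
    # C_ave = (1 / N_s) * sum of p_i over successful examples, Equation (1b)
    return sum(confidences) / len(confidences)
```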
5.2 Results and Analysis
Adversarial example generation. We first show adversarial examples generated using FGSM, CI-FGSM, BIM, and CI-BIM in Figure 8. We can see that, similar to FGSM and BIM, our CI-FGSM and CI-BIM also produce adversarial examples with imperceptible perturbations that can fool the target deep learning model.

Figure 8: Adversarial examples generated by FGSM, CI-FGSM, BIM, and CI-BIM.

Attack effects after image reloading. Using the different approaches, we generate a large number of adversarial images and save them to the local disk. Then we reload them and feed them into the target model for prediction. We measure the success rates and average confidence scores of the four algorithms in Table 6. FGSM has a lower success rate and confidence score, as it applies its perturbation in a single stride. In contrast, BIM has higher attack performance. For our CI-BIM, although the confidence score is slightly lower than BIM's, the success rate is much higher. Our CI approach is more efficient when ε is smaller.

Table 6: Success rates and average confidence scores of adversarial examples.
| | Success rate (ε=0.1) | Success rate (ε=0.2) | Success rate (ε=0.3) | Confidence (ε=0.1) | Confidence (ε=0.2) | Confidence (ε=0.3) |
| FGSM | 81.4% | 95.8% | 99.2% | 23.3% | 20.3% | 27.0% |
| CI-FGSM | 87.0% | 96.5% | 98.8% | 22.9% | 21.5% | 27.7% |
| BIM | 87.5% | 94.4% | 94.0% | 74.7% | 73.2% | 68.3% |
| CI-BIM | 95.5% | 98.9% | 99.3% | 57.8% | 62.9% | 63.7% |

Different parameters lead to different effects of the CI approach. Figure 9 shows the adversarial success rate of adversarial examples from the CI algorithm with different thresholds p. We can see that by increasing p, the attack performance can be significantly improved. To boost adversarial attacks, conventional approaches require setting a large disturbance hyper-parameter ε (e.g., 16) and a large number of iterations T (e.g., 10). To achieve the same effects, our CI approach only needs to increase the threshold while keeping smaller values of ε (0.05-0.2) and T (e.g., 6).

Resistance against detection of adversarial examples. In addition to withstanding input transformation, our CI approach is better at evading adversarial example detection. We use the wavelet reconstruction algorithm [21] as the defense method to measure the performance of the different adversarial algorithms. After being processed by the wavelet reconstruction algorithm, adversarial examples with weak attack capabilities are identified with the initial clean image's label by the classification model. As described before, we first process the adversarial examples through a wavelet denoising algorithm, and then feed the denoised image into ESRGAN, a super-resolution reconstruction network. By checking the adversarial examples processed by the wavelet denoising algorithm, we can measure the robustness of the generated adversarial examples. We set the parameter ε to 0.1 and the wavelet denoising parameter σ from 0.01 to 0.1. Figure 10 shows the comparison results; a sketch of this detection check is given at the end of this section. We can clearly see that although the attack performance of BIM is better than that of FGSM, the adversarial examples generated by the BIM algorithm are easier to detect as adversarial under the same parameters. On the contrary, our CI method has high attack performance and is not easy to detect as adversarial, especially when the detection parameter σ is small.

Application to other adversarial example techniques. In addition to BIM, our CI approach can be applied to other adversarial attack algorithms to boost their adversarial examples. Figure 11 shows the attack performance of FGSM with CI and its comparison with plain FGSM. We can see that the CI approach improves the attack performance of FGSM, which is more obvious when the parameter ε is smaller. At the same time, the effect of CI-FGSM is much better than that of the ordinary BIM algorithm. The CI-BIM algorithm has the best adversarial success rate among the four algorithms, which is easy to understand: when the parameter ε is small, the FGSM algorithm uses a small step for its perturbation; the BIM algorithm iterates these small-step perturbations; and the CI-BIM algorithm iterates again over these iterations, on the premise that the confidence satisfies the requirement p. This is iteration at different scales. In a sense, our method implements an adaptive-step-size attack. When the parameter ε is relatively small, the adjustment range of the dynamic step length is larger, which means that our CI-BIM algorithm can find adversarial examples with high attack capability in a larger range. Essentially, the CI-BIM algorithm has higher adversarial attack performance because of its wider search domain for generating more robust adversarial examples.
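As referenced above, a rough sketch of the denoising half of this detection check (the super-resolution reconstruction with ESRGAN is omitted; `classify` is a placeholder, and the use of scikit-image here is an assumption, not the authors' implementation):

```python
import numpy as np
from skimage.restoration import denoise_wavelet

def survives_wavelet_defense(adv_img, clean_label, classify, sigma=0.05):
    # adv_img: H x W x 3 float array in [0, 1]
    denoised = denoise_wavelet(adv_img, sigma=sigma, channel_axis=-1)
    # An example whose prediction reverts to the clean label has lost its attack ability.
    return classify(np.clip(denoised, 0.0, 1.0)) != clean_label
```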
6 CONCLUSION
In this paper, we evaluate the effectiveness of adversarial examples after being reloaded or transferred. We discover that most mainstream adversarial attacks fail under such input transformations. We then propose a new solution, Confidence Iteration, to generate high-quality and robust adversarial examples. This solution can significantly strengthen existing attacks, increasing the attack success rate and reducing the detection rate. Future work includes further evaluation of the integration with other attack techniques, and leveraging Confidence Iteration to enhance DNN models via testing and adversarial training.
QN2XZZ3DIMv
Generating Adversarial Examples for Robust Deception against Image Transfer and Reloading
3: Clear rejection
The authors explore how storing and transmitting images affects the attack performance of adversarial examples. They propose a novel metric called "Confidence Iteration" to generate adversarial examples that are robust to said effects of storage and transmission. Comments: - Although the objective is clear, there is no discussion on why more basic techniques cannot be used or fail to address this problem. For example, one would expect that using data augmentation during the training will increase the network's robustness to arbitrary transformations, including photometric, geometric, noise, etc. - In contrast to the authors' statement, the adversarial examples shown are noticeably noisy. It seems that applying a simple median filter before passing the image to the network would get rid of most of the noise. - The idea of studying the effects of compression and quantization on adversarial examples' effectiveness is useful. However, I find it trivial and would not call this a discovery of a "new phenomenon". Any information loss (due to any factor, not just these two), which leads to perturbations, can affect the network's performance. - It is unclear whether the Inceptionv3 model was trained on the created dataset or any other dataset. - Why was VGG11 used instead of Inceptionv3? - I find the "Confidence Iteration" trivial. The range of the colour values is not specified. Are the RGB images in the range of [0,1]? If so, the perturbations are up to 30%, which seems quite extreme considering the claim that the adversarial examples are "stealthy". - The paper requires proofreading as there are a few sentences that do not parse.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Generating Adversarial Examples for Robust Deception against Image Transfer and Reloading ### Paper Abstract Adversarial examples play an irreplaceable role in evaluating deep learning models' security and robustness. It is necessary and important to understand the effectiveness of adversarial examples to utilize them for model improvement. In this paper, we explore the impact of input transformation on adversarial examples. First, we discover a new phenomenon. Reloading an adversarial example from the disk or transferring it to another platform can deactivate its malicious functionality. The reason is that reloading or transferring images can reduce the pixel precision, which will counter the perturbation added by the adversary. We validate this finding on different mainstream adversarial attacks. Second, we propose a novel Confidence Iteration method, which can generate more robust adversarial examples. The key idea is to set the confidence threshold and add the pixel loss caused by image reloading or transferring into the calculation. We integrate our solution with different existing adversarial approaches. Experiments indicate that such integration can significantly increase the success rate of adversarial attacks. ### Paper Keywords ["Adversarial Examples", "Robustness", "Reloading"] ### Paper Content ABSTRACT
Adversarial examples play an irreplaceable role in evaluating deep learning models' security and robustness. It is necessary and important to understand the effectiveness of adversarial examples in order to utilize them for model improvement. In this paper, we explore the impact of input transformation on adversarial examples. First, we discover a new phenomenon: reloading an adversarial example from the disk or transferring it to another platform can deactivate its malicious functionality. The reason is that reloading or transferring images can reduce the pixel precision, which counters the perturbation added by the adversary. We validate this finding on different mainstream adversarial attacks. Second, we propose a novel Confidence Iteration method, which can generate more robust adversarial examples. The key idea is to set a confidence threshold and add the pixel loss caused by image reloading or transferring into the calculation. We integrate our solution with different existing adversarial approaches. Experiments indicate that such integration can significantly increase the success rate of adversarial attacks.

Keywords: Adversarial Examples, Robustness, Reloading

Index Terms: Computing methodologies—Computer vision problems; Neural networks—Security and privacy—Software and application security

1 INTRODUCTION
DNNs are well known to be vulnerable to adversarial attacks [1]. An adversarial algorithm can add small but carefully crafted perturbations to an input, which can mislead the DNN into giving incorrect output with high confidence. Extensive work has been done towards attacking supervised DNN applications across various domains such as images [2-5], audio [6-8], and natural language processing [9, 10]. Since DNNs are widely adopted in different AI tasks, adversarial attacks can bring significant security threats and damage our everyday lives.
Moreover, researchers have demonstrated the possibility of adversarial attacks in the physical world [11, 12], proving that the attacks are realistic and severe.

In addition to attacking DNN models, generating powerful and robust adversarial examples also has very positive uses. First, adversarial examples can be used to test and evaluate the robustness and security of DNN models. The more sophisticated and stealthy the adversarial examples are, the more convincing their evaluation results will be. Second, generating adversarial examples can also help defeat such adversarial attacks. One promising defense is adversarial training [2], where adversarial examples are included in the training dataset to train a model that is resistant to those adversarial examples. Obviously, if we inject more powerful adversarial examples into the training set, the model will be more robust.

In this paper, we explore and evaluate the effectiveness of adversarial examples under transformation. Guo et al. [13] studied image transformations (cropping-scaling, bit-depth reduction, compression) as a defense against adversarial attacks; Dziugaite et al. [14] conducted comprehensive evaluations of the effectiveness of adversarial examples under JPG compression. Unlike the above work, which actively transforms the images, we consider cases where images are passively transformed due to reloading or transferring. We discover that an image will lose certain precision when it is reloaded from the disk or transferred to a different platform. Such precision reduction in an adversarial example can counter the adversarial perturbation, making the attack ineffective. We evaluate adversarial examples' effectiveness with different mainstream methods and find that most of the methods fail after the image is reloaded or transferred.

To generate robust adversarial examples against image reloading or transferring, we propose a novel approach, Confidence Iteration (CI). Generally, our CI approach dynamically checks a generated example's confidence score to evaluate its effectiveness after being reloaded or transferred. By doing so, it can filter out the less qualified adversarial examples.

Our approach has several advantages. First, it is generic and can be integrated with existing adversarial attacks for enhancement, because it can be called outside of the adversarial algorithm. Second, the adversarial examples generated by our approach have higher success rates before and after they are reloaded or transferred. Third, the adversarial examples generated by our approach have a lower detection rate by state-of-the-art defense solutions. We expect that our solution can help researchers better understand, evaluate and improve DNN models' resistance against various adversarial examples. In summary, we make the following contributions:
• We are the first to find that adversarial examples can become ineffective after being reloaded or transferred. We confirm our findings through comprehensive evaluations;
• We propose an effective method, Confidence Iteration, to generate more robust adversarial examples, which can maintain high attack performance under image transformation.

The rest of the paper is organized as follows: Section 2 gives the background and related work on adversarial attacks and defenses. Section 3 describes and evaluates adversarial examples' effectiveness after image reloading and transferring. We introduce our approach in Section 4 and evaluate it in Section 5.
Section 6 concludes the paper.

2 RELATED WORKS

In this section, we give a brief background on attack and defense techniques for adversarial examples. We also introduce the resistance of adversarial examples against input transformation.

2.1 Adversarial Attack Techniques

An adversary carefully crafts adversarial examples by adding imperceptible, human-unnoticeable modifications to the original clean input. The target model then predicts this adversarial example as one attacker-specific label (targeted attack), or as arbitrary incorrect labels (untargeted attack). Most adversarial attacks require that the Lp norm of the added modifications not exceed a threshold parameter ε. Different adversarial attack techniques have been proposed; we describe six common attack methods below.

Fast Gradient Sign Method (FGSM) [2]. The intuition of FGSM is that the adversary can modify the input such that the change direction is completely consistent with the direction of the gradient, making the loss function increase at the fastest rate. Such changes have the greatest impact on the classification result, making the neural network misclassify the modified input.

Basic Iterative Method (BIM) [15]. This is a simple extension of FGSM. The basic idea of BIM is to apply FGSM for several iterations, with a small step size in each iteration. The number of iterations is determined by min(ε + 4, 1.25ε).

DeepFool [16]. DeepFool is based on the assumption that models are fully linear, so that there is a polyhedron that can separate individual classes. The DeepFool attack searches for adversarial examples with minimal perturbations within a specific region using the L2 distance. One big advantage of DeepFool is therefore that it can automatically determine the optimal perturbation threshold ε.

Decision-Based Attack [17]. The decision-based attack starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. It relies only on the model's final decision. At each step, a perturbation is sampled from a proposal distribution, which reduces the distance of the perturbed image towards the original input. Progressively smaller adversarial perturbations are found according to a given adversarial criterion, so the attack finally generates an adversarial example with little disturbance near the classification boundary.

HopSkipJump Attack [18]. The HopSkipJump attack is an algorithm based on a novel estimate of the gradient direction using binary information at the decision boundary. Unlike decision-based attacks, which need a large number of model queries, the HopSkipJump attack requires significantly fewer model queries and less generation time. Moreover, in the HopSkipJump attack, the perturbations are used to estimate a gradient direction, which handles the inefficiency of the Boundary Attack.

Projected Gradient Descent (PGD) [19]. The PGD attack initializes the search for an adversarial example at a random point within the allowed norm ball, then runs several iterations of the basic iterative method [15] to find an adversarial example.
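As a concrete point of reference for the first two methods above, the sketch below shows a single untargeted FGSM step and its iterated BIM variant in PyTorch. The classifier model, the loss criterion, and the [0, 1] pixel range are illustrative assumptions rather than details fixed by the cited papers.

```python
import torch

def fgsm_step(model, criterion, x, y, eps):
    # One untargeted FGSM step: perturb x along the sign of the loss gradient.
    x = x.clone().detach().requires_grad_(True)
    loss = criterion(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

def bim_attack(model, criterion, x, y, eps, alpha, n_iter):
    # BIM: iterate small FGSM steps, projecting back into the eps-ball around x.
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv = fgsm_step(model, criterion, x_adv, y, alpha)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # L-inf projection
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```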
2.2 Adversarial Example Defense Techniques

Existing approaches for defeating adversarial examples mainly fall into two categories, as described below.

Adversarial Training. Szegedy et al. [2] proposed that by training the neural network on a mixed dataset of adversarial examples and original clean samples, the new model becomes resistant to those adversarial examples. However, Moosavi-Dezfooli et al. [20] showed that an adversary can still generate new examples to fool the defended model.

Adversarial Example Detection. Instead of enhancing the models, these approaches aim to detect adversarial examples. One typical solution is de-noising. Mustafa et al. [21] proposed a wavelet reconstruction algorithm that maps adversarial examples lying outside the manifold region back to the manifold region of natural images through a deep image reconstruction network. It can effectively restore the normal discriminability of the classifier. Hinton et al. [22] adopted the reconstruction process of the capsule network to detect adversarial examples automatically.

2.3 Transformation and Distortion of Adversarial Examples

Most neural networks trained for image classification are trained on images that have undergone JPG compression, which form the original data subspace. Dziugaite et al. [14] find that perturbations of natural images (adding scaled white noise or randomly corrupting a small number of pixels) are almost certain to move an image out of the JPG subspace and, therefore, out of the data subspace. Adversarial examples can, therefore, induce the classification network to give wrong classification results. However, when the degree of disturbance is small, the pixel disturbance values superimposed on the original image by the adversarial example are also small, which means that these disturbance values are not robust to image compression, storage, and transmission. This pixel loss is the reason why image transformation or distortion can defeat adversarial examples. Obviously, preserving the pixel perturbation is the key to this problem.

3 TRANSFERRING AND RELOADING OF ADVERSARIAL EXAMPLES

We study several popular image formats and adversarial attack approaches and conclude that image transferring and reloading can significantly reduce adversarial attacks' success rates.

3.1 Root Cause

There are two reasons that image transferring and reloading can deactivate adversarial examples. First, in an adversarial image generated using existing approaches, each pixel is usually represented as a float value. When we store the image on disk, the pixels are converted into int type to save space. This accuracy loss can make the adversarial example ineffective when we reload it from the disk. We find that the mainstream image formats (BMP, JPEG, and PNG) all perform such pixel conversion. Second, when we transfer an image to a different platform over the network, the image is usually compressed to save network traffic. For instance, when we use the WeChat application to send pictures from a smartphone to a laptop, we find that the application compresses the pictures at an 80% compression rate by default.

Although such conversion and compression have a human-unnoticeable impact on the images, they can significantly affect adversarial attacks' success rates. The adversary's goal is to find the smallest perturbation that causes the model to classify the image into an attack-specific category. Common techniques usually move the original clean samples towards the classification boundary and stop when the samples just cross the boundary, to make sure that the added perturbation is small. So adversarial examples have very high precision requirements for their pixel values. The small changes caused by image reloading or transferring can move the adversarial images to classes different from the one the adversary desires, making the adversarial examples ineffective.
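The int conversion described above is easy to reproduce. The following is a minimal sketch using NumPy and Pillow (the 3x3 size and the file name are arbitrary illustrative choices, mirroring the precision-loss experiment in Section 3.2): saving forces a float-to-uint8 conversion, which discards exactly the sub-integer part of the perturbation that many attacks rely on.

```python
import numpy as np
from PIL import Image

# A tiny "image" whose pixels carry fractional perturbations, as an
# adversarially perturbed float image would.
rng = np.random.default_rng(0)
x = np.full((3, 3), 100.0) + rng.random((3, 3))   # e.g. 100.37, 100.91, ...

# Saving to disk forces a float -> uint8 conversion (here an explicit round),
# which throws away exactly the sub-integer part of the perturbation.
Image.fromarray(np.rint(x).astype(np.uint8)).save("probe.png")
x_reloaded = np.asarray(Image.open("probe.png"), dtype=np.float64)

print(x - x_reloaded)   # residuals of up to 0.5 per pixel: the lost precision
```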
Here, we useFigure 1 to directly illustrate the adverse effects of image reloadingand image format transformation on the adversarial effect of theadversarial example. Below we conduct a set of experiments tovalidate those effects empirically.Figure 1: Red dots represent data, and the gray line represents thehyperplane that can separate individual classes. The gray dots repre-sent the inner boundary of the adversarial examples. The green dotrepresents a specific adversarial example. The yellow dot representsthat reloading can project this adversarial example back into theoriginal sample space.3.2 Experiments3.2.1 Impact of image reloading.We first empirically check the effectiveness of adversarial examplesafter being saved and reloaded.2Online Submission ID: 20(a) Original (b) JPG (c) PNG (d) BMPFigure 2: Pixel values before and after saving/reloadingPrecision loss. We generate a 33 image, and add each pixelvalue with a random perturbation qbetween 0 and 1. Then wesave the image into three different formats (JPG, BMP, PNG) andthen reload it into the memory. All the operations are done underwindows10.Figure 2 shows the pixel values of the original image (2a) andreloaded JPG (2b), PNG (2c) and BMP (2d) images, respectively.We observe that each image format has precision loss due to the typeconversion from float toint: JPG format directly discards thedecimals. In contrast, PNG and BMP formats round off the decimals.Although such estimation does not cause visual-perceptible effectsto the image, it can affect the results of adversarial attacks, as theseattacks require precise pixel-level perturbations. We demonstratethe effects below.Effectiveness of adversarial examples. We measure the per-formance of adversarial examples after being reloaded or trans-ferred. We select six commonly used approaches of adversarialattacks: Decision-Based Attack [17], HopSkipJump Attack [18],Deepfool [16], BIM [15], FGSM [2] ,and PGD [19]. For each ap-proach, we generate some adversarial examples. Decision-BasedAttack, HopSkipJump Attack and PGD use ResNet50 classifier.Deepfool uses ResNet34 classifier. BIM and FGSM use the VGG11classifier. Furthermore, all adversarial examples are tested with theclassifier used at the time of generation.We find that all the six adversarial attack methods measure theclassification number and confidence of adversarial examples atthe time of generation to judge whether the adversarial attack issuccessful. In fact, the classification number and confidence atthis time are not true, because the model does not classify the realimage at this time. They all use models(for example, ResNet50)to classify the generated Numpy array instead of the real pictureitself. It means, so far, they have not generated the image form ofthe adversarial examples. To test the effectiveness of the adversarialexamples, we use cv2.imwrite andplt.savefig to downloadthe adversarial examples locally. Next, we use the same model(forexample, ResNet50) to load the adversarial examples saved locally.In this paper, we refer to the above behavior as “Reloading.”We also find that when images are transmitted through instantmessaging software, companies compress them to save bandwidth,which results in a loss of pixels in the image, which is detrimentalto the adversarial examples generated by subtle perturbations. Forexample, when we use WeChat to send an image to a friend, ourfriend can see the compressed image with only a small amountof traffic. 
Instead of clicking the ”download the original” button,we save the compressed image locally and use the above model tocategorize it. The above process is referred to as ”Transferring” inthis paper.We use Figure 3 and Figure 4 to illustrate adversarial examples’confidence values after being reloaded and transferred. Different col-ors represent different classification Numbers, the height of the col-umn represents confidence, and each block represents six algorithmsfrom left to the right: Decision-Based Attack [17],HopSkipJumpAttack [18],DeepFool [16],BIM [15],FGSM [2], and PGD [19]. Wecan see that the initial image can be correctly classified with high con-fidence in all six algorithms. Besides, all the adversarial examplesgenerated by the algorithms can be classified into other categories,which means that the six algorithms’ adversarial examples have theFigure 3: Classification number and confidence of adversarial exam-ples after being reloaded and transferred.Figure 4: Classification number and confidence of adversarial exam-ples after being reloaded and transferred for another picture.adversarial ability to deceive classification models into giving falseresults.Surprisingly enough, we find that regardless of adversarial ex-amples are saved in JPG, PNG, or BMP, most of them could beclassified as the original clean image when they are reloaded ortransferred. Some even had high confidence. As reflected in theimage, the image after Reloading or Transferring is classified as theoriginal clean image with the same color.We hope to use more straightforward data to show you this phe-nomenon. As a result, Table 1 and Table 2 are two experimentalresults of another two groups of Reloading and Transferring. Thedata in the table represents the classification number and confidence(data in brackets). We can find that many of the adversarial examplesgenerated by the six kinds of adversarial attacks cannot maintaintheir attack ability after being reloaded or transferred. After beingreloaded or transferred, the adversarial examples will be classified asoriginal clean samples’ labels (such as 90 and 129) by the classifier.In order to verify that the adversarial examples with high confidencealso have Reloading and Transferring problems, we conduct thefollowing experiments with results in Table 3:We can find that the adversarial examples of Picture1 Picture4with high confidence as shown in Figure 5, after being reloadedor transferred, a large part of them are classified as original cleansamples in Table 3, proving that the adversarial examples with highconfidence also have Reloading and Transferring problems.All data in Tables 1 to 3 are all derived from the ResNet50 model.Cross Validation. Instead of using the same model to verifyadversarial examples’ effectiveness, we conduct two sets of cross-validation experiments. One set uses the Reloaded images, and theother uses the Transferred images. The classification number of theinitial clean image is 129. The classification numbers of their adver-sarial examples generated by the six adversarial algorithms are no3Online Submission ID: 20Picture1Picture2Picture3Picture4Figure 5: Adversarial Examples generated from Picture1 Picture4longer 129, which means that the adversarial attack is successful(notshown in Table 4). We feed the two sets of adversarial examplesgenerated by algorithm A into algorithm B after they are Reloadedor Transferred, to cross-verify the adversarial effectiveness of adver-sarial examples after being Reloaded or Transferred. 
Table 4 showstheir classification Numbers and Confidence in other algorithms.Obviously, no matter after Reloaded or Transferred, the adversar-ial examples lose their effectiveness in their own and other adversar-ial algorithms. After WeChat transmission, due to the existence ofimage compression during the transmission process, four new itemsin the table are classified to be recovered as clean samples.Multiple attacks In this section, we use the existing adversarialexamples as input and conduct other adversarial attacks. The newadversarial examples after reloaded are tested for effectiveness. Theresults are shown in Table 5.As shown in Table 5, even if we send the generated adversarialexamples into another generation algorithm again, the problem thatreloading and transferring results in the decrease of the adversarialeffectiveness also exists and is very serious. In Table 5, we see thatin addition to an item that failed to generate the adversarial exampleacross models and an item misclassified as classification number533, other adversarial examples are all classified as the initial cleansample’s classification number 129.The above chart synoptically shows that Reloading and Trans-ferring will significantly reduce the effectiveness of the adversarialattack. This is true for single attacks, cross attacks, and multipleattacks.3.3 Spectrum Analysis.Next, spectrum analysis is performed on the adversarial examplesused in Table 1 and Table 2.The spectrum analysis results are shown in Figure 6. From leftto right are the initial images, adversarial examples generated byBIM,FGSM and Deepfool algorithms. We can find that the Deepfoolalgorithm can retain the original clean sample’s original appearanceto the greatest extent. In contrast, FGSM algorithm introduces morenoise points, which is reflected in the spectrum map, that is, FGSMalgorithm generates a more uniform distribution of the spectrum mapwith more low-frequency components. This is why the adversarialexamples generated by the FGSM algorithm have better resistanceto Reloading and Transferring loss in Table 1 and Table 2.The results of the wavelet transform spectrum diagram of the orig-inal picture and adversarial examples of BIM, FGSM, and Deepfoolare shown in Figure 7 from left to right. Obviously, in the wavelet do-main, the original clean image is closest to the adversarial examplegenerated by Deepfool, both in low and high-frequency components,which means that Deepfool’s algorithm can counter the attack withminimal perturbation and is least likely to maintain its antagonismat the same time. FGSM algorithm exerts a large disturbance, so thehigh and low-frequency components in the wavelet domain are quitedifferent from the original clean image, maintaining the antagonismrelatively well.4 A R OBUST APPROACH TO GENERATING ADVERSARIALEXAMPLESAs discussed in Section 3, adversarial examples generated fromexisting techniques will become ineffective after being reloaded ortransferred. In this section, we propose an efficient and robust ap-proach, Confidence Iteration (CI), to produce adversarial examplesthat are resistant to the processes of Reloading or Transferring. CIis generic: it can be integrated with all existing adversarial exam-ple techniques to improve their robustness while maintaining theiradvantages.Our CI approach’s intuition is that an adversarial example’s confi-dence score of the attacker-specific classification number reflects thisexample’s resistance against input reloading or transferring. 
We use one existing technique (e.g., FGSM, BIM) to generate an adversarial example, save it to the local disk, and measure its confidence score for the target class. (This actually involves reloading the image.) If the confidence score is higher than a threshold, we accept this image. Otherwise, we continue to iterate, save it locally (or transform it through WeChat transmission), and measure the target class's reloading confidence score until it meets the confidence requirement or exceeds the iteration number threshold. When the confidence value c meets the expected requirement p, the adversarial image saved to the hard disk has some resistance to variation of its pixel values. Besides, the repeated gradient ascent over multiple iterations keeps the pixel values changing in a consistent direction. That is to say, after many iterations, the fractional parts of some pixel values are promoted to the integer part and can no longer be discarded.

Table 1: Classification number and confidence of an adversarial example after being reloaded and transferred

| Classification number (confidence) | Decision | HopSkipJump | DeepFool | BIM | FGSM | PGD |
|---|---|---|---|---|---|---|
| Original image | 90 (74.060%) | 90 (74.060%) | 90 (99.811%) | 90 (99.582%) | 90 (97.312%) | 90 (100.000%) |
| Adversarial image | 852 (15.062%) | 84 (48.441%) | 95 (49.315%) | 95 (61.163%) | 735 (44.672%) | 318 (100.000%) |
| JPG, reloading | 90 (72.291%) | 90 (69.921%) | 90 (52.677%) | 90 (46.958%) | 84 (99.217%) | 90 (99.651%) |
| JPG, transferring | 90 (63.671%) | 90 (93.686%) | 90 (52.985%) | 90 (47.276%) | 84 (99.402%) | 90 (96.650%) |
| PNG, reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| PNG, transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |
| BMP, reloading | 84 (52.540%) | 84 (83.981%) | 90 (43.454%) | 90 (45.934%) | 84 (99.421%) | 90 (94.402%) |
| BMP, transferring | 90 (82.835%) | 90 (50.656%) | 90 (80.671%) | 90 (36.895%) | 84 (89.627%) | 90 (99.985%) |

Table 2: Classification number and confidence of another adversarial example after being reloaded and transferred

| Classification number (confidence) | Decision | HopSkipJump | DeepFool | BIM | FGSM | PGD |
|---|---|---|---|---|---|---|
| Original image | 129 (89.531%) | 129 (89.531%) | 129 (86.374%) | 129 (71.917%) | 129 (91.494%) | 129 (98.182%) |
| Adversarial image | 852 (12.363%) | 132 (36.282%) | 128 (48.604%) | 128 (98.746%) | 915 (5.642%) | 128 (97.858%) |
| JPG, reloading | 132 (35.742%) | 129 (65.183%) | 129 (60.726%) | 129 (87.825%) | 132 (51.324%) | 129 (81.000%) |
| JPG, transferring | 132 (34.461%) | 129 (58.947%) | 129 (88.792%) | 129 (85.496%) | 132 (30.130%) | 129 (98.601%) |
| PNG, reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| PNG, transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |
| BMP, reloading | 132 (53.513%) | 129 (64.022%) | 129 (53.670%) | 129 (85.081%) | 132 (53.185%) | 128 (38.533%) |
| BMP, transferring | 129 (36.472%) | 129 (77.169%) | 129 (81.671%) | 129 (81.244%) | 129 (41.192%) | 129 (89.833%) |

Figure 6: Spectrum analysis of the pictures in Table 1 and Table 2.

To measure whether an adversarial example remains effective after image distortion, we adopt the wavelet reconstruction algorithm [21]. As the name implies, we first process adversarial examples through a wavelet denoising algorithm. Then, we send the denoised image into ESRGAN, a super-resolution reconstruction network. Some adversarial examples with weak attack ability will be classified as the initial clean samples after being processed by this algorithm, which means that their attack ability has been lost. A minimal sketch of the denoising front end is given below.
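This sketch covers the denoising front end only (the ESRGAN super-resolution step is omitted); it applies per-channel soft-threshold wavelet denoising with PyWavelets, where the 'db4' wavelet and the universal-threshold rule are assumptions that the paper does not fix.

```python
import numpy as np
import pywt

def wavelet_denoise(channel, wavelet="db4", level=2, sigma=0.05):
    # Decompose, soft-threshold the detail coefficients, and reconstruct.
    coeffs = pywt.wavedec2(channel, wavelet, level=level)
    thresh = sigma * np.sqrt(2.0 * np.log(channel.size))  # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

def denoise_image(img):
    # img: H x W x 3 float array in [0, 1]; each color channel is denoised
    # independently, then the result is cropped back to the input size.
    out = np.stack([wavelet_denoise(img[..., c]) for c in range(3)], axis=-1)
    return np.clip(out[: img.shape[0], : img.shape[1]], 0.0, 1.0)
```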
By detecting the adversarial examples processed by this wavelet reconstruction pipeline, we can measure the generated adversarial examples' robustness and effectiveness.

Figure 7: Wavelet transform spectrum diagrams of the original picture and the adversarial examples of BIM, FGSM and DeepFool, from left to right.

Table 3: Classification number and confidence of adversarial examples generated from Picture1-Picture4 after being reloaded and transferred

| FGSM | Picture1 | Picture2 | Picture3 | Picture4 |
|---|---|---|---|---|
| Original image | 106 (94.478%) | 288 (90.196%) | 173 (92.451%) | 376 (99.613%) |
| Adversarial image | 343 (84.336%) | 293 (95.005%) | 104 (86.118%) | 371 (69.347%) |
| JPG, reloading | 106 (99.904%) | 288 (49.574%) | 104 (28.730%) | 371 (34.062%) |
| JPG, transferring | 106 (99.953%) | 288 (54.895%) | 104 (31.623%) | 371 (33.070%) |
| PNG, reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| PNG, transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |
| BMP, reloading | 106 (99.685%) | 608 (26.309%) | 173 (49.878%) | 376 (36.097%) |
| BMP, transferring | 106 (99.807%) | 390 (47.548%) | 173 (47.880%) | 371 (66.135%) |

Table 4: Classification number and confidence of adversarial examples after being reloaded and transferred, using cross-validation

| Classification number (confidence) | Original clean image | Deepfool | BIM | FGSM |
|---|---|---|---|---|
| Deepfool, reloading | 129 (89.16%) | 129 (72.14%) | 129 (86.31%) | 128 (57.74%) |
| Deepfool, transferring | 129 (91.25%) | 128 (77.49%) | 129 (89.12%) | 129 (67.91%) |
| BIM, reloading | 128 (72.14%) | 128 (77.30%) | 129 (60.48%) | 129 (65.53%) |
| BIM, transferring | 129 (91.81%) | 128 (78.98%) | 141 (47.95%) | 129 (85.49%) |
| FGSM, reloading | 132 (82.26%) | 129 (40.96%) | 129 (14.19%) | 129 (58.89%) |
| FGSM, transferring | 129 (65.64%) | 129 (42.91%) | 129 (15.98%) | 129 (88.35%) |
| PGD, reloading | 129 (60.72%) | 129 (87.82%) | 129 (89.18%) | 129 (81.00%) |
| PGD, transferring | 129 (66.12%) | 129 (68.06%) | 129 (89.11%) | 129 (98.60%) |

Algorithm 1: Confidence Iteration
Input: a classifier f with loss function J; a real example x and ground-truth label y; the perturbation size ε; the iteration limit Tmax and the confidence threshold p.
Output: the iteration count T; an adversarial example x* with ||x* - x||∞ ≤ Tε.
1: T = 0
2: x_T = x
3: Save x_T as a picture x_T_real on the local hard drive (or transform it through WeChat transmission)
4: Input x_T_real to f and obtain the confidence c and the gradient ∇x J(x_T_real, y_true)
5: while (T ≤ Tmax) and (c ≤ p) do
6:   x* = x_T_real + ε · ∇x J(x_T_real, y_true)
7:   T = T + 1
8:   x_T = x*
9:   Save x_T as a picture x_T_real on the local hard drive (or transform it through WeChat transmission)
10:  Reload x_T_real into f and obtain the confidence c and the gradient ∇x J(x_T_real, y_true)
11: end while

Algorithm 1 summarizes our CI approach. We first input the clean image, generate its adversarial example, and then save the adversarial example locally. The local adversarial example is then reloaded into the classification model, which judges whether the adversarial attack still succeeds. On the premise of the attack's success, we obtain the confidence value c of the adversarial attack by reloading the adversarial example from the hard disk into the classification model. We then compare the expected confidence threshold p with the current confidence c. If the current confidence c is less than the expected threshold p and the current iteration number T is less than the iteration limit Tmax, we run another iteration of the Confidence Iteration algorithm, save the generated adversarial example locally, and compare c and p again. The whole process does not stop until c is greater than p or T reaches Tmax. A minimal sketch of this loop is given below.
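This sketch, in PyTorch, mirrors lines 5-11 of Algorithm 1. The helper save_and_reload, which performs the disk (or transfer) round trip, and the use of the top-class softmax probability as the confidence c are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_and_grad(model, criterion, x, y_true):
    # Confidence of the model's top prediction, plus the loss gradient w.r.t. x.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    criterion(logits, y_true).backward()
    conf = F.softmax(logits, dim=1).max(dim=1).values.item()
    return conf, x.grad.detach()

def confidence_iteration(model, criterion, x, y_true, eps, t_max, p,
                         save_and_reload):
    # save_and_reload writes a tensor to disk as an image and reads it back,
    # so the precision loss of Section 3 is applied on every iteration.
    t = 0
    x_real = save_and_reload(x)
    c, grad = confidence_and_grad(model, criterion, x_real, y_true)
    while t <= t_max and c <= p:
        x_new = x_real + eps * grad                 # line 6 of Algorithm 1
        t += 1
        x_real = save_and_reload(x_new.clamp(0.0, 1.0))
        c, grad = confidence_and_grad(model, criterion, x_real, y_true)
    return t, x_real
```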
Because the CI algorithm consists only of this download, reload, and confidence-judgment loop, we can apply it to the backend of any adversarial example generation algorithm to enhance the adversarial example's robustness against reloading and transferring.

Table 5: Classification number and confidence of adversarial examples after multiple attacks

| An image with classification number 129 | +0 | +Deepfool | +BIM | +FGSM | +PGD |
|---|---|---|---|---|---|
| Deepfool | 129 (60.72%) | 129 (71.88%) | 129 (95.91%) | Unsuccessful generation | 129 (92.27%) |
| BIM | 129 (89.82%) | 129 (92.32%) | 129 (99.37%) | 129 (98.65%) | 129 (90.22%) |
| FGSM | 129 (89.18%) | 129 (72.24%) | 533 (31.53%) | 129 (71.53%) | 129 (55.72%) |
| PGD | 129 (81.00%) | 129 (82.52%) | 129 (88.24%) | 129 (99.71%) | 129 (55.72%) |

5 EVALUATION

In this section, we conduct experiments to validate the effectiveness of our proposed approach.

5.1 Configurations

Dataset. To better reflect the real-world setting, we implement a crawler to collect images from websites instead of using existing image datasets. We consider the Inception v3 model [23] and restrict the scope of the crawled images to the categories recognized by this model. We filter out the crawled images that the Inception v3 model cannot correctly recognize and finally establish a dataset consisting of around 1300 clean images with correct labels.

Implementations. We consider two adversarial example techniques: FGSM and BIM. Our CI approach is generic and can be applied to other techniques as well. We select VGG11 [24] as the target model. We implement these techniques with the CI algorithm using the PyTorch library. We set the iteration upper limit Tmax to 6 and the confidence threshold p to 70%.

Metrics. We adopt two metrics to evaluate the effectiveness of adversarial examples: (1) the success rate, defined in Equation 1a, where Ns is the number of adversarial examples that are misclassified by the target model f while their clean images are classified correctly, and Nf is the number of adversarial examples that are still predicted as the corresponding clean image's label; (2) the average confidence score, defined in Equation 1b, where pi is the confidence score from the target model for the highest false classification. (Here we do not consider adversarial examples that the model classifies as their clean samples' labels.)

Padv = Ns / (Ns + Nf)    (1a)
Cave = (1/Ns) · Σ_{i=1..Ns} pi    (1b)

5.2 Results and Analysis

Adversarial example generation. We first show the adversarial examples generated using FGSM, CI-FGSM, BIM, and CI-BIM in Figure 8. We can see that, like FGSM and BIM, our CI-FGSM and CI-BIM can produce adversarial examples with imperceptible perturbations that fool the target deep learning model.

Figure 8: Adversarial examples generated by FGSM, CI-FGSM, BIM, and CI-BIM (one row per method).

Attack effects after image reloading. Using the different approaches, we generate a large number of adversarial images and save them to the local disk. Then we reload them and feed them into the target model for prediction. We measure the success rates and average confidence scores of the four algorithms in Table 6. FGSM has a lower success rate and confidence score as it adopts a striding perturbation. In contrast, BIM has higher attack performance. For our CI-BIM, although the confidence score is slightly lower than BIM's, the success rate is much higher. Our CI approach is more efficient when ε is smaller.

Table 6: The success rates and average confidence scores of adversarial examples

| | Success rate, ε=0.1 | ε=0.2 | ε=0.3 | Confidence score, ε=0.1 | ε=0.2 | ε=0.3 |
|---|---|---|---|---|---|---|
| FGSM | 81.4% | 95.8% | 99.2% | 23.3% | 20.3% | 27.0% |
| CI-FGSM | 87.0% | 96.5% | 98.8% | 22.9% | 21.5% | 27.7% |
| BIM | 87.5% | 94.4% | 94.0% | 74.7% | 73.2% | 68.3% |
| CI-BIM | 95.5% | 98.9% | 99.3% | 57.8% | 62.9% | 63.7% |

Different parameters lead to different effects of the CI approach. Figure 9 demonstrates the adversarial success rate of adversarial examples from the CI algorithm with different thresholds p. We can see that by increasing p, the attack performance can be significantly improved.
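For reference, the two metrics of Equations 1a and 1b can be computed as in the following sketch, where preds_clean and preds_adv are assumed to be the model's predicted labels before and after the attack and confs_adv the corresponding top false-class confidences.

```python
def attack_metrics(preds_clean, preds_adv, confs_adv, labels):
    # Eq. 1a / 1b: only images that the model classifies correctly when clean
    # are counted; confidences are averaged over successful attacks only.
    n_s, n_f, kept = 0, 0, []
    for pc, pa, ca, y in zip(preds_clean, preds_adv, confs_adv, labels):
        if pc != y:
            continue                      # clean image already misclassified
        if pa != y:
            n_s += 1
            kept.append(ca)               # top false-class confidence
        else:
            n_f += 1
    p_adv = n_s / (n_s + n_f)
    c_ave = sum(kept) / n_s if n_s else 0.0
    return p_adv, c_ave
```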
To boost the adversarial attacks, conventional approaches require setting a large disturbance hyper-parameter ε (e.g., 16) and a large number of iterations T (e.g., 10). To achieve the same effects, our CI approach only needs to increase the threshold p while keeping smaller values of ε (0.05-0.2) and T (e.g., 6).

Resistance against detection of adversarial examples. In addition to defeating input transformation, our CI approach is better at evading adversarial example detection. We use the wavelet reconstruction algorithm [21] as the defense method to measure the performance of the different adversarial algorithms. After being processed by the wavelet reconstruction algorithm, adversarial examples with weak attack capability will be identified as the initial clean image's label by the classification model. As the name implies, we first process adversarial examples through a wavelet denoising algorithm. Then, we send the denoised image into ESRGAN, a super-resolution reconstruction network. By detecting the adversarial examples processed by the wavelet denoising algorithm, we can measure the generated adversarial examples' robustness. We set the parameter ε to 0.1 and vary the detection parameter σ of the wavelet denoising algorithm from 0.01 to 0.1. Figure 10 shows the comparison results. We can clearly see that although the attack performance of BIM is better than that of FGSM, the adversarial examples generated by the BIM algorithm are easier to detect under the same parameters. On the contrary, our CI method has high attack performance and is not easy to detect as adversarial, especially when the detection parameter σ is small.

Application to other adversarial example techniques. In addition to BIM, our CI approach can be applied to other adversarial attack algorithms to boost their adversarial examples. Figure 11 shows the attack performance of FGSM with CI and its comparison with plain FGSM. We can see that the CI approach improves the attack performance of FGSM, and the improvement is more obvious when the parameter ε is smaller. At the same time, the effect of CI-FGSM is much better than that of the ordinary BIM algorithm. The CI-BIM algorithm has the best adversarial success rate among the four algorithms, which is also easy to understand: when the parameter ε is small, the FGSM algorithm applies a small perturbation step, the BIM algorithm iterates these small steps, and the CI-BIM algorithm iterates again over these iterations on the premise that the confidence satisfies the requirement p. This is iteration at different scales. In a sense, our method implements an adaptive step-size attack. When the parameter ε is relatively small, the adjustment range of the dynamic step length is larger, which means that our CI-BIM algorithm can find adversarial examples with high attack capability over a larger range. Essentially, the CI-BIM algorithm has higher adversarial attack performance because of its wider search domain for generating more robust adversarial examples.

6 CONCLUSION

In this paper, we evaluate the effectiveness of adversarial examples after being reloaded or transferred.
We discover that most main-stream adversarial attacks will fail with such input transformation.Then we propose a new solution, Confidence Iteration, to generatehigh-quality and robust adversarial examples. This solution cansignificantly facilitate other existing attacks, increasing the attacksuccess rate and reducing the detection rate. Future work includesmore evaluations on the integration with other attack techniques andleveraging Confidence Iteration to enhance DNN models via testingand adversarial training.<|im_end|> <|im_start|>assistant ### Review Title Generating Adversarial Examples for Robust Deception against Image Transfer and Reloading ### Review Text The authors explore how storing and transmitting images affects the attack performance of adversarial examples. They propose a novel metric called "Confidence Iteration" to generate adversarial examples that are robust to said effects of storage and transmission. Comments: - Although the objective is clear, there is no discussion on why more basic techniques cannot be used or fail to address this problem. For example, one would expect that using data augmentation during the training will increase the network's robustness to arbitrary transformations, including photometric, geometric, noise, etc. - In contrast to the authors' statement, the adversarial examples shown are noticeably noisy. It seems that applying a simple median filter before passing the image to the network would get rid of most of the noise. - The idea of studying the effects of compression and quantization on adversarial examples' effectiveness is useful. However, I find it trivial and would not call this a discovery of a "new phenomenon". Any information loss (due to any factor, not just these two), which leads to perturbations, can affect the network's performance. - It is unclear whether the Inceptionv3 model was trained on the created dataset or any other dataset. - Why was VGG11 used instead of Inceptionv3? - I find the "Confidence Iteration" trivial. The range of the colour values is not specified. Are the RGB images in the range of [0,1]? If so, the perturbations are up to 30%, which seems quite extreme considering the claim that the adversarial examples are "stealthy". - The paper requires proofreading as there are a few sentences that do not parse. ### Review Rating 3: Clear rejection ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
BylRVjC9K7
ICLR.cc/2019/Conference
2019
Explaining Adversarial Examples with Knowledge Representation
["Xingyu Zhou", "Tengyu Ma", "Huahong Zhang"]
Adversarial examples are modified samples that preserve original image structures but deviate classifiers. Researchers have put efforts into developing methods for generating adversarial examples and finding out origins. Past research put much attention on decision boundary changes caused by these methods. This paper, in contrast, discusses the origin of adversarial examples from a more underlying knowledge representation point of view. Human beings can learn and classify prototypes as well as transformations of objects. While neural networks store learned knowledge in a more hybrid way of combining all prototypes and transformations as a whole distribution. Hybrid storage may lead to lower distances between different classes so that small modifications can mislead the classifier. A one-step distribution imitation method is designed to imitate distribution of the nearest different class neighbor. Experiments show that simply by imitating distributions from a training set without any knowledge of the classifier can still lead to obvious impacts on classification results from deep networks. It also implies that adversarial examples can be in more forms than small perturbations. Potential ways of alleviating adversarial examples are discussed from the representation point of view. The first path is to change the encoding of data sent to the training step. Training data that are more prototypical can help seize more robust and accurate structural knowledge. The second path requires constructing learning frameworks with improved representations.
["adversarial example", "knowledge representation", "distribution imitation"]
ABSTRACTAdversarial examples are modified samples that preserve original image structuresbut deviate classifiers. Researchers have put efforts into developing methods forgenerating adversarial examples and finding out origins. Past research put muchattention on decision boundary changes caused by these methods. This paper,in contrast, discusses the origin of adversarial examples from a more underlyingknowledge representation point of view. Human beings can learn and classify pro-totypes as well as transformations of objects. While neural networks store learnedknowledge in a more hybrid way of combining all prototypes and transforma-tions as a whole distribution. Hybrid storage may lead to lower distances betweendifferent classes so that small modifications can mislead the classifier. A one-step distribution imitation method is designed to imitate distribution of the nearestdifferent class neighbor. Experiments show that simply by imitating distributionsfrom a training set without any knowledge of the classifier can still lead to obviousimpacts on classification results from deep networks. It also implies that adver-sarial examples can be in more forms than small perturbations. Potential waysof alleviating adversarial examples are discussed from the representation point ofview. The first path is to change the encoding of data sent to the training step.Training data that are more prototypical can help seize more robust and accuratestructural knowledge. The second path requires constructing learning frameworkswith improved representations.1 I NTRODUCTIONWith the more widespread use of deep neural networks, the robustness and security of these networkshave aroused the attention of both academic and industrial eyes. Among these adversarial examplesis one of the most interesting as well as intriguing.Since the discovery of adversarial examples in CNNs from 2013Szegedy et al. (2013), securityand robustness has become a hot topic. Researchers have put efforts into finding out sources foradversarial examples and also developing methods for automatically generating these adversarialexamplesGoodfellow et al. (2014).Most these research focus on how certain perturbations lead to changes in decision boundaries. Thispaper discusses the origin of adversarial examples from a more underlying knowledge representationpoint of view. It provides a possible reason why adversarial examples exist for current networks anduses some experiments to prove this idea. Experiments also in some way show that adversarialexamples can be derived from only the training data and totally network-independent. In addition,adversarial examples may be in more forms than the usual small perturbations. At last, possibleways to alleviate this issue are discussed.1.1 R ELATED WORKCurrent adversarial attacks have become a systematic procedure. Some algorithms have been devel-oped to deliberately generate these kinds of adversarial examples Goodfellow et al. (2014); Moosavi-Dezfooli et al. (2016). After these examples have been generated, they can be injected back into themodel to skew the classificationPapernot et al. (2017). This can even serve as a universal attackmodel for other machine learning techniques.1Under review as a conference paper at ICLR 2019Some adversarial example generation techniques arise from the properties of neural networks them-selves and are dependent on model architectures. 
Most of this kind of work has been done onimage classification tasks like handwriting recognition (MINST dataset) and object recognition(ImageNet). However, from related research work these years, it has been widely recognized thatthere are even more machine architectures vulnerable to adversarial attacking other than neural net-worksPapernot et al. (2016).More recently, some other research has shown that adversarial examples maybe more widespreadin our physical worldKurakin et al. (2016). And there even exist universal perturbationsMoosavi-Dezfooli et al. (2017b;a) for a certain neural network that can generate examples that are both uni-versal and adversarial. A recent paper further shows the existence of single-pixel attacks on imageclassification tasksSu et al. (2017).Opposite to the attack techniques are the defense techniquesPapernot et al. (2015); Lu et al. (2017a).Some research has also been on this area. The most straightforward idea for defense is includingsome adversarial examples as inputs of the training set and let the neural network also learn whatadversarial examplesTram `er et al. (2017a) are like.DeepDefenseYan et al. (2018) incorporates anadversarial perturbation-based regularization into classification objectives. A quite optimistic viewcomes from the research on multi-camera view indicating adversarial examples can be constrainedby taking inputs of an object from different angles of viewLu et al. (2017b). And this multi-cameradesign aims to prove that adversarial examples may not be obvious for autonomous driving whichmust involve multiple cameras and scaling. However, some research shows that adversarial exam-ples can directly work on image and scene segmentationsMetzen et al. (2017); Xie et al. (2017).Other researchers also put many efforts into exploring the underlying reasons for these adver-sarial examples. The most direct one would be the linear vibration caused by vector computa-tionGoodfellow et al. (2014); Tram `er et al. (2017b). This is the cause given by the fast gradientmethod paper. A more recent trend is that researchers try to use geometrical and topological meth-ods to explore the perturbation effects on high-dimensional decision spaceWang et al. (2016); Tanay& Griffin (2016); Tram `er et al. (2017b); Liu et al. (2016). All these research shows a trend thatpeople are more and more eager to undermine the principles of deep neural networks and extendtheir applications.1.2 G ENERAL IDEAEven though we still have no clear idea how eyes and vision of human beings actually work on theneural level, human eyes are more error-resistant. It is not easy to see such kind of adversarial ex-amples intuitively. Most of these kinds of examples are generated by carefully designed algorithmsand procedures. This complexity to some extent shows that adversarial examples may only occupya small percentage for total image space we can imagine with pixel representations.To the best of our knowledge, this paper should be the first one that discusses adversarial examplesfrom the internal knowledge representation point of view. The rest of this paper is organized asfollows:The representation section gives a formal description of the explanation and illustrates whyadversarial examples exist for current architectures.The experiment section proposes a one-step distribution imitation procedure to do somedistribution imitations. 
Experiments show that distribution imitation on a training set alone, without any knowledge of the classifier, can still have obvious impacts on the classification results of the network. The last discussion section concludes the paper and discusses some potential solutions from the knowledge representation perspective.

2 KNOWLEDGE REPRESENTATION OF NETWORK

When people think of an object, the object usually can be depicted in a quite abstract way. For example, when asked what a car is, we can think of a vehicle with several wheels and a driver steering in it. Even though for one task there could be a large number of neurons involved, at least on a high level the encoding of the information should still be sparse. Modern machines also set sparsity and abstraction as goals to seek. However, due to the lack of accurate knowledge of optimal representations, current deep networks actually choose an alternative way, with redundant parameters, to depict tasks and objects.

For human beings, both abstraction and sparsity can be achieved with hierarchical storage of knowledge. This can be similar to object-oriented programming: we can think of an object with its attributes. Going back to object recognition tasks, human beings can actually learn from one or a few prototypes of a certain object and know how each prototype can be transformed into different plausible forms. As a result, when recognizing an object, we are actually recognizing the object as well as how it is transformed from the prototype of this object into what we see.

Current multi-layer neural networks are partly enlightened by this hierarchical ideology. Even though the detailed architectures of deep neural networks have evolved a lot throughout these years, and so has the accuracy, the most fundamental one is still AlexNet (Krizhevsky et al., 2012).

There are many layers but two main phases in this network for image classification tasks. The first phase mainly uses convolutional layers to extract local features at different scales. The second mainly uses fully connected layers to add up local features from the first phase with weights to construct a higher-level image entity. At last, outputs from the second phase go through a softmax classifier, which gives out probabilities for each possible class.

As described above, the architecture of AlexNet is quite intuitive. However, there still exists a great gap between the abstract and sparse representation of human beings and that of neural networks. An idealized execution procedure can be simulated here to see how neural networks actually represent knowledge through the process.

Consider the two-step procedure of extraction and transformation. We can define the output of a network as output = T X, where X refers to the part extracted after the convolution and pooling stages and T refers to the transformations.

For a given network, different inputs can have different extracted patterns of X; here define the extracted X = [X1, X2] as consisting of two parts. Correspondingly, different inputs can activate different parts of the transformation, so we can denote the output as:

output = T X = [T1, T2] [X1, X2]^T    (1)

We can have inputs from the same class (same label), and some extreme cases can be considered here. Suppose the first input is extracted to be only [X1, 0]. For this input:

output_1 = [T1, T2] [X1, 0]^T = T1 X1 + 0 = T1 X1    (2)

In the same way, there could exist a second input that is extracted to be only [0, X2] (for example, symmetric data).
For this input:

output_2 = [T1, T2] [0, X2]^T = 0 + T2 X2 = T2 X2    (3)

After the first input, the parameters of the transformation, T = [T1, 0], can be determined in the form of the extracted input.

Under the ideal condition, the classifier should give the largest value for the correct class and very small but equal values for the other classes before the softmax gives out the final probability distribution. The extreme case for the ideal softmax is to give positive infinity for the correct class and negative infinity for the others. So the outputs discussed above should be a fixed value when two inputs belong to the same class (they have the same output results).

Similarly, in the second step, given the same output result, there should be:

T2 X2 = T1 X1    (4)
T2 = (T1 X1) X2^{-1}    (5)

Now the transformations are determined by the first two inputs, and we are given a third input whose extracted pattern can be denoted as X3 = [A X1, B X2], where A and B are both matrices.

Putting this back into the learned system output = [T1, T2] [X1, X2]^T, we get the output result:

output_3 = [T1, T2] [A X1, B X2]^T    (6)
         = [T1, (T1 X1) X2^{-1}] [A X1, B X2]^T    (7)

So the system actually processes the test input into:

output_3 = T1 A X1 + (T1 X1) X2^{-1} B X2    (8)

That indicates an interesting fact: what current neural networks extract in the end are not only local features and their combinations but also a weighted sampling of all training inputs. Visualization techniques (Zeiler & Fergus, 2014) can help show some elements of truth in this respect. We can use the DeepDream tool to visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (Mordvintsev et al., 2015).

The tool returns images that strongly activate the channels within the network to highlight the features learned by the neural network. Nine random channels are chosen: 'balloon', 'container ship', 'grey whale', 'aircraft carrier', 'ashcan', 'radio', 'trolleybus', 'revolver' and 'passenger car'.

Figure 1: Visualizing the last layer with DeepDream.
Figure 2: Independent and hybrid representations.

It can be seen in Figure 1 that, in the last layer with the corresponding labels, the objects can still be recognized from their round and very similar repeating structures. Relatively complex objects like 'container ship' or 'revolver' can only resemble similar shapes that obviously overlap with each other. This, in some way, proves the point that the knowledge storage and representation of current neural networks are not exactly sparse and prototype-based. Current networks actually use a hybrid distribution of all their prototypes from training, together with possible transformations, to represent a certain class of object.

The accumulation of these makes the distances between the decision boundaries of different types of objects become smaller. When they become small enough, even small perturbations can lead the classifier to a wrong result if imposed in certain directions. That may be the origin of adversarial examples.

Figure 2 provides a simple description of this idea. The first panel represents the way of differentiating transformations within the same class. One class includes two normal distributions (after transformations), N(10,3) and N(30,4). There is a simple explanation of why one prototype of a class is reasonably denoted as an object: when a human is learning a class prototype from a physical object, an unlimited number of images from different angles and views stream into the human brain, and a large number of samples from the same class prototype will
And a large number of samples from a same class prototype will beconverged to be a normal distribution. The other class includes one normal distribution N(45,6).In this representation approach, we can actually easily differentiate between these prototypes withrepresentations. Even their mean values are different enough. However, as we have discussed,the representation in current neural networks is more likely to be in the second figures form whenmultiple prototypes and transformations are involved. We can see in the second figure of Figure 2,4Under review as a conference paper at ICLR 2019the representation of the first class combines two subclasses together. From the properties of normaldistributions, the resulting hybrid representation becomes N(40, 5). This becomes more similar tothe second class and makes it more difficult to classify. The absolute difference in means decreasesfrom 45-30=15 to the smaller 45-40=5. The classifier has a lowest precision it can recognize. If rightnow a perturbation is added to make this difference even smaller to a certain degree, the classificationcannot be guaranteed any more.In summary, human beings can detect different objects as well as their transformations at the sametime. CNNs do not separate these two. This makes final layers extracting not pure objects but inreality, a probability density distribution of the objects and its different states after transformations.Adversarial examples can arise from this underlying form of hybrid knowledge representation byimitating the distributions of a target class.3 E XPERIMENTAs discussed above, the hybrid knowledge representation may lead to unclear bounds facing sam-ples with high-dimensional distributions. One way to depict high-dimensional distributions is usingweights from PCA. Similar values of weights, especially the first most important values, meanshigher similarity in distributions. And higher similarity in distributions means higher probability oflying in overwhelming areas between different class of objects. This part first gives an example ofhow the distribution of an adversarial example may vary after the perturbation. Then more systemicexperiments based on one-step distribution imitations are conducted completely regardless of thenetwork. And these experiments show how distribution imitation can impact the classification basedon hybrid storage.3.1 A S AMPLE OF WEIGHT MODIFICATIONBefore diving into the direct distribution imitations, an example is shown here to show weight vari-ations from a more commonly used adversarial attack method. The classifier training is based onhandwriting digits and gets a classifier with 99.8% accuracy. To undermine more universal prop-erties from the adversary, Fast Gradient MethodGoodfellow et al. (2014) is used for generatingadversarial examples and reduce the success rate to 45%.Here a set of examples is shown in Figure 3. The first one is a successful adversarial one, whichmakes the classifier recognizes a number 7 as a number 9. The second one is an unsuccessful one inwhich the classifier could still recognize a number 1 correctly as a number 1.(a) 7 Adding Noise Recognized to be 9. (b) 1 Adding Noise Still Recognized to be 1.Figure 3: A set of examples of digit adversarial images.Here we do PCA on all ten classes from 0 to 9. From these principal components feature space,we can infer what happens when the classifier classifies the data as 9 instead of 7. 
The adversarial example actually puts more weight on the principal components related to 9, and this later misleads the classifier. This can be confirmed by the values of the projections of the normal and adversarial examples.

A simple metric that can be used here is the angle between projections (Fujimaki et al., 2005). It can be computed using the dot product and the arccos function. The angle on basis 7 between the normal and adversarial examples is 0.2286, while on basis 9 it is 0.1140. A smaller angle means higher similarity here. That is one view of why the modified image of 7 is classified as 9.

Table 1: Projections of normal and adversarial images on Basis 7 and 9 (truncated)

| Projection | Score1 | Score2 | Score3 | Score4 | Score5 |
|---|---|---|---|---|---|
| 7 Normal | -193 | 108 | -39 | 582 | 274 |
| 7 Adversarial | -134 | 123 | 96 | 644 | 143 |
| 9 Normal | -199 | 364 | 26 | 812 | 3.7 |
| 9 Adversarial | -131 | 420 | 8.3 | 52 | 0.1 |

3.2 ONE-STEP DISTRIBUTION IMITATION

According to the distribution imitation idea discussed in section 2, we can take a one-step imitation to see whether the new image deviates from the original classification result while still preserving the overall structure. We do this imitation by modifying the weight values from PCA.

Nowadays, PCA is mainly used for dimension reduction in the data pre-processing step, but it can be used as a rough classification method as well. For this classification task, first compute PCA on the training set and get the coefficients and weights for the training images. Then compute the weights of the test image using the coefficients, and choose the nearest neighbor according to a certain number of the first weight values. The class of this nearest neighbor is regarded as the output class of the test image.

Here, we seek a different goal. We know the label of the test image, but we want to modify this image so that a classifier trained on this training set will be more likely to misclassify the modified image.

Overall, we use PCA weights to find the nearest different-class neighbor and imitate this neighbor's patterns with the original image. A formal description of this procedure is given below.

Algorithm 1: Use PCA weights to find the nearest different-class neighbor and imitate it
Data: training image dataset X = {x1, x2, ..., xn}; TrainLabels
Data: testing image y; yLabel
Result: xgoal; ymodified
1: initialization
2: [coeff, score, mu] = pca(X)
3: yscore = (y - mu) · inv(coeff)
4: dist_min = +inf
5: for x in X do
6:   dist = Distance(yscore, score[x])
7:   if dist < dist_min and yLabel ≠ TrainLabels[x] then
8:     dist_min = dist
9:     xgoal = x
10:  else
11:    continue
12:  end
13: end
14: ymodified = weightImitate(y, xgoal)

The distance measure is computed over the first several most important weights, which occupy a high percentage of the total variance.

Given the procedure above, the next question is how to imitate the goal image given the original test image. We implement the weightImitate(y, xgoal) function in two ways:

Ratio Modification. The weight-value difference between the test image and the goal image it needs to imitate is calculated, and then a ratio times this difference is added to the original weight values. It can be denoted as yscore = yscore + (xgoal - yscore) · ratio. This new weight vector is then used to reconstruct the image.

Weight Select Modification. An alternative way is to choose some weight positions and assign the values from xgoal to yscore at the chosen positions, that is, yscore[weight_sel] = xgoal[weight_sel], and then use this new weight vector to reconstruct the image. A sketch of this procedure is given below.
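This sketch implements Algorithm 1 together with the ratio modification in NumPy. It assumes flattened images as the rows of X_train and NumPy-array labels; scikit-learn's PCA stands in for the pca() call, and the number of compared components n_cmp is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

def one_step_imitation(X_train, train_labels, y_img, y_label,
                       n_cmp=50, ratio=0.2):
    # Fit PCA on the flattened training images and project everything.
    pca = PCA()
    scores = pca.fit_transform(X_train)           # weights of training images
    y_score = pca.transform(y_img[None, :])[0]    # weights of the test image

    # Nearest neighbor from a *different* class, compared on the first
    # n_cmp (most important) weight values.
    dists = np.linalg.norm(scores[:, :n_cmp] - y_score[:n_cmp], axis=1)
    dists[train_labels == y_label] = np.inf       # exclude same-class samples
    goal = scores[np.argmin(dists)]

    # Ratio modification: move the weights toward the goal, then reconstruct.
    y_new = y_score + (goal - y_score) * ratio
    return pca.inverse_transform(y_new[None, :])[0]
```

The weight-select variant simply replaces the last update with y_new = y_score.copy(); y_new[sel] = goal[sel] for a chosen index set sel (e.g., positions 50-100).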
Both of the ways discussed above are implemented in the experiment. Experiments are conducted on the first 5,000 images from the cifar-10 dataset. There are two main reasons the experiment is done on 5,000 samples rather than more. First, 5,000 samples (about 500 per category) should be enough to depict distribution differences for the limited number of dimensions required for comparison here. Second, the very first step of the procedure discussed above involves computing PCA on the whole training dataset, and the step of finding the nearest different-class neighbor needs to iterate through all samples; too many samples would cause too heavy a computational burden.

One thing worth noticing is that colored images have three color channels, so the distribution imitation is conducted channel by channel. For a purely gray-scale image dataset like MNIST, it is done on a single channel.

To test the image classification results, a 24-layer network is used. It has 6 stacks of Conv, BatchNorm and ReLU combinations, together with two MaxPooling layers after the second and fourth stacks. This is a very common network structure. Under normal working conditions this network has an accuracy of 85.1% on the test set of cifar-10.

Figure 4 shows the results of the fixed-ratio (ratio = 0.2) method, where all weight values of the original image are moved toward the goal by 0.2 times the distance between them. It reduces the accuracy from 0.851 to 0.756.

Some interesting results are shown here. Two dog images are classified as birds, a cat is classified as a truck, and a horse is classified as a deer. Note that even where the results remain correct, their confidence probabilities drop noticeably.

Figure 4: Ratio = 0.2 weight bias reduces accuracy by 10% ((a) confusion matrix; (b) 9 random samples; (c) modified images).

Figure 5 shows the results of the weight-select modification method, where the weight values at positions 50-100 are assigned the corresponding values from the goal. It reduces the accuracy from 0.851 to 0.532. Similarly, some obviously absurd classifications occur, and even where the results remain correct, their confidence probabilities drop noticeably. Further tests are also conducted using an improved classifier with added ResNet connections. It has an accuracy of 0.588, which is not very different from the previous setting.

Figure 5: Weight imitation on components 50-100 reduces accuracy by 32% ((a) confusion matrix; (b) 9 random samples; (c) modified images).

It can be observed that in both of these two ways, the main structure of the original RGB image is well preserved. But the result differs from the usual adversarial images in that it effectively imposes some color masking on the original image, and this is sometimes visible. The position and visibility of this kind of masking depend on the magnitude and selection of the weight changes. For the second method, if we choose more critical positions, such as weights 10-30, the main structure of the original image can be totally destroyed.

As RGB images have three color channels, the modifications from the original images are sometimes not very visible. We do further experiments on the MNIST digit dataset with more flexible imitation settings.
As the experiments with the two methods above show, different weights have different impacts, and the modification strength decides the overall result pattern.
Ratio Piecewise Modification. One hybrid way is to combine the two modification methods above. Each 28*28 input image is divided into four 14*14 small patches, so the dimension is 196. Instead of one fixed modification ratio for all scores, we apply modifications only to scores from position 10 upward: for scores 10-50, the ratio is linearly increased from 0 to a chosen ratio level, and for scores higher than 50 the ratio is fixed at this level. This idea, shown in Figure 6, serves as a piecewise function (a sketch of this schedule is given at the end of this section).
[Figure 6: Modification ratio and classification results. Panels: (a) ratio setting according to weight positions using a piecewise function; (b) confusion matrix for modified digits.]
Figure 7 shows the result when the selected highest ratio is 5. Under this condition the classification error increases from 1.04% to 34.20%. This error rate, however, requires relatively strong modifications that are quite visible in the black-and-white setting of the MNIST dataset. One thing worth noticing is that the classifier is most easily fooled into recognizing the modified digit as an 8. This concentration can be explained by the digit '8' itself being the most widely connected and stretched, which makes it the most spread-out target in high-dimensional space and the ideal target for imitation in many cases.
We can see from Figure 7 that the modified digit images are still recognizable to human eyes, yet they successfully fool the neural network in our experiments. From a binary view, some noise is added to or removed from the original images. The effect of this kind of perturbation is also supported by a recent paper on face recognition Wang et al. (2018). One main difference is that, compared to RGB images with three color channels, modifications on black-and-white digits are more easily recognizable.
[Figure 7: 25 misclassified samples from the imitation that reduces accuracy by 33%. Panels: (a) original digits; (b) modified digits.]
We conducted 50 sets of batch experiments. Each batch uses the first 500 samples from the test set. In these experiments, the modification ratio increases from 0.5 to 5. As shown in Figure 8, the classification accuracy gradually decreases from the initial 98.96% to 65.80% when the modification ratio equals 5.
[Figure 8: Experiment data for increased modification strength. Panels: (a) classification accuracy decreases with stronger imitation modifications; (b) stronger modification on digit '1' — from row 1 to 5 and column 1 to 5 the modification ratio increases from 0.5 to 5.]
It is worth pointing out that the procedure and experiments conducted here are mainly to show that adversarial images can be caused by distribution imitations on the dataset itself, regardless of the network. They also show that adversarial images can take more forms than small perturbations. This one-step distribution imitation procedure is not a formal method of adversarial attack. But considering that it only needs access to the training data, or even part of the training data, it can be conducted without any knowledge of the network. It still poses some danger if an attacker wants to mislead the classifier over many trials.
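As referenced earlier, here is a minimal sketch of the piecewise ratio schedule used in the MNIST experiments. The breakpoints follow the description of Figure 6 (zero below score 10, a linear ramp between 10 and 50, constant afterwards); the function names and parameterization are illustrative assumptions.

```python
# Piecewise ratio schedule: 0 below score 10, linear ramp from
# score 10 to 50, constant max_ratio afterwards (per Figure 6).
import numpy as np

def piecewise_ratio(n_scores, max_ratio=5.0, start=10, knee=50):
    idx = np.arange(n_scores)
    ramp = (idx - start) / float(knee - start)
    return max_ratio * np.clip(ramp, 0.0, 1.0)

def imitate_piecewise(y_score, goal_score, max_ratio=5.0):
    """Ratio modification with a position-dependent ratio."""
    r = piecewise_ratio(len(y_score), max_ratio)
    return y_score + (goal_score - y_score) * r
```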
4 DISCUSSIONS
In summary, this paper discusses the origin of adversarial examples from an underlying knowledge representation point of view. Neural networks store learned knowledge in a hybrid way, combining all prototypes and transformation distributions into a single whole. This hybrid storage may lead to smaller distances between different classes, so that small modifications may mislead the classifier.
4.1 UNIVERSAL DISTRIBUTION IMITATION VS HIGH EFFICIENCY OF DEEP NETWORKS
The one-step distribution imitation procedure discussed imitates the nearest different-class neighbor. The experiments show that distribution imitation on a training set alone, without any knowledge of the network, can still have an obvious impact on classification results. This also explains why adversarial examples can be universal.
The modified images are sometimes visibly altered but still keep the original structures and information; this shows that adversarial images can take more forms than small perturbations. Also, the robustness of classification is related not only to the classifier but also to the quality of the dataset used for training.
One question that intuitively arises is: if adversarial examples can be universal, why do current deep neural networks show such high efficiency? A rational hypothesis is that adversarial examples are rare from a probabilistic view. Research diving into the properties of layers Peck et al. (2017) shows lower bounds on the magnitudes of the perturbations necessary to change classification results. This supports the point that adversarial examples exist only under strict conditions and cannot be generated in a purely random manner.
4.2 POTENTIAL SOLUTIONS FROM THE KR VIEW
Dataset with More Concentrated Structural Info. Without changing learning frameworks, this path focuses on changing what classifiers can learn from the very beginning.
For human beings, it is possible to learn a class of objects from a single image sample. However, this is not the full story: humans are actually learning all the time while using their eyes. A hidden advantage of human vision is that humans have gained good prior knowledge of various kinds of environments. This makes it easier for them to first judge the relative distance to an object and then more accurately seize a separate form of structural information.
In the same way, for a classifier learning from scratch, a training set with a monotonic background and a limited number of different object states can make the learning space more concentrated, and this in theory can help resist adversarial examples as early as the training step. Within current learning frameworks, this can be realized in two ways.
One way is to add pre-processing steps to datasets to make the data more compact, instead of feeding the data directly to the neural networks. A recent work shows that thermometer encoding Buckman et al. (2018) can help resist adversarial examples. This can be viewed as a frontend sampling step that keeps the most significant information.
The other way is to make use of more prototypical datasets. Instead of giving an object in an image a single class label, a state description is also required. This is equivalent to creating more subclasses and training the classifier to do finer classifications. One potential dataset seeking this purpose chooses learning prototypes from the simplest toys Wang et al. (2017); that work proposed an egocentric, manual, multi-image (EMMI) dataset providing a more structured and denser sampling of viewpoints.
Improved Representation of Learning. The other path is to construct a better network representation.
The success of deep networks points to the importance of hierarchical representations. However, recent research on neural network compression shows that the knowledge representation in current neural networks contains a large number of shared and redundant parameters Han et al. (2015); Iandola et al. (2016). The deep network itself is still worth digging into.
According to the prototype-and-transformation representation model discussed, it is reasonable to separate these two kinds of knowledge in learning frameworks that involve detection and classification. For current neural networks on classification tasks, it is relatively simple to define a stable cost function and train globally. However, if we really want to separate prototypes and transformations at the same time, it is almost impossible to define this as a single optimization problem, and state overlapping between different prototypes will make the final decision fuzzy.
In this way, we still face the dilemma that current hybrid representations in some way make adversarial examples inevitable. It might be necessary to design a new learning framework that represents knowledge in a more compact way. Whichever path is chosen, we should realize that current datasets and representation models are still far from what human beings accumulate through years of vision usage. There is a long way to go before machines can understand and represent visual information as well as, or even better than, human beings do.
SJlH5W753Q
Very hard to read
3: Clear rejection
**First of all, this paper uses 11 pages.** The submission instructions state: "There will be a strict upper limit of 10 pages." The readability of the manuscript should be improved. I am not convinced that Chapter 2 motivates Chapter 3; I think Ch. 2 and Ch. 3 are different stories.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
HygiDTVKPr
ICLR.cc/2020/Conference
2020
A Mention-Pair Model of Annotation with Nonparametric User Communities
["Silviu Paun", "Juntao Yu", "Jon Chamberlain", "Udo Kruschwitz", "Massimo Poesio"]
The availability of large datasets is essential for progress in coreference and other areas of NLP. Crowdsourcing has proven a viable alternative to expert annotation, offering similar quality for better scalability. However, crowdsourcing require adjudication, and most models of annotation focus on classification tasks where the set of classes is predetermined. This restriction does not apply to anaphoric annotation, where coders relate markables to coreference chains whose number cannot be predefined. This gap was recently covered with the introduction of a mention pair model of anaphoric annotation (MPA). In this work we extend MPA to alleviate the effects of sparsity inherent in some crowdsourcing environments. Specifically, we use a nonparametric partially pooled structure (based on a stick breaking process), fitting jointly with the ability of the annotators hierarchical community profiles. The individual estimates can thus be improved using information about the community when the data is scarce. We show, using a recently published large-scale crowdsourced anaphora dataset, that the proposed model performs better than its unpooled counterpart in conditions of sparsity, and on par when enough observations are available. The model is thus more resilient to different crowdsourcing setups, and, further provides insights into the community of workers. The model is also flexible enough to be used in standard annotation tasks for classification where it registers on par performance with the state of the art.
["model of annotation", "coreference resolution", "anaphoric annotation", "mention pair model", "bayesian nonparametrics"]
ABSTRACT
The availability of large datasets is essential for progress in coreference and other areas of NLP. Crowdsourcing has proven a viable alternative to expert annotation, offering similar quality with better scalability. However, crowdsourcing requires adjudication, and most models of annotation focus on classification tasks where the set of classes is predetermined. This restriction does not apply to anaphoric annotation, where coders relate markables to coreference chains whose number cannot be predefined. This gap was recently filled with the introduction of a mention pair model of anaphoric annotation (MPA). In this work we extend MPA to alleviate the effects of sparsity inherent in some crowdsourcing environments. Specifically, we use a nonparametric partially pooled structure (based on a stick-breaking process), fitting hierarchical community profiles jointly with the abilities of the annotators. The individual estimates can thus be improved using information about the community when data is scarce. We show, using a recently published large-scale crowdsourced anaphora dataset, that the proposed model performs better than its unpooled counterpart in conditions of sparsity, and on par when enough observations are available. The model is thus more resilient to different crowdsourcing setups and further provides insights into the community of workers. The model is also flexible enough to be used in standard annotation tasks for classification, where it registers on-par performance with the state of the art.

1 INTRODUCTION
Identifying and resolving anaphoric reference to discourse entities, a task known in NLP as coreference resolution, has long been considered a core aspect of language interpretation (Poesio et al., 2016). Ever since the MUC evaluation campaign in the 1990s (Grishman & Sundheim, 1995; Chinchor, 1998), larger and richer datasets have been made available, pushing the state of the art to new heights. In the last few years, the ONTONOTES corpus (Pradhan et al., 2007; Weischedel et al., 2011), used for the CONLL 2011 and 2012 shared tasks (Pradhan et al., 2012), has become the de facto standard resource for coreference resolution research (Fernandes et al., 2014; Björkelund & Kuhn, 2014; Martschat & Strube, 2015; Clark & Manning, 2015; 2016a;b; Lee et al., 2017; 2018). The corpus was hand-annotated by experts, and remained the largest available dataset up until the recent publication of PRECO (Chen et al., 2018). But there are still many languages and domains for which no such resources are available, or where the annotation scheme is limited (e.g., the lack of singletons in ONTONOTES, no expletives in either PRECO or ONTONOTES).
Annotating data on the scale required to train state of the art systems using traditional expert annotation can quickly become unaffordable. But in recent years crowdsourcing has proved a viable alternative to expert annotation, with studies indicating that expert-level quality can be achieved at much lower cost (Snow et al., 2008; Raykar et al., 2010). Crowdsourced data, however, require aggregation methods to choose the most likely label(s) among the interpretations provided by the crowd. Past research suggests probabilistic models of annotation are one of the most promising approaches to aggregation (Dawid & Skene, 1979; Carpenter, 2008; Whitehill et al., 2009; Raykar et al., 2010; Hovy et al., 2013; Quoc Viet Hung et al., 2013; Sheshadri & Lease, 2013; Passonneau & Carpenter, 2014; Venanzi et al., 2014; Kamar et al., 2015; Paun et al., 2018a).
These models offer a rich framework of interpretation and can employ distinct prior and likelihood structures (pooled, unpooled, and partially pooled) and a diverse set of effects (annotator ability, item difficulty).
Motivation. Most work on models of annotation assumes the set of classes the annotators can choose from is fixed across the annotated items, an assumption not appropriate for anaphoric annotation, where coders relate markables to anaphoric chains. Recently, Paun et al. (2018b) developed a probabilistic model able to aggregate crowdsourced anaphoric annotations. The model was later applied to adjudicate the interpretations from the Phrase Detectives 2 corpus with quality comparable to that of expert annotators (Poesio et al., 2019). The model of Paun et al. (2018b) assumes an unpooled structure, i.e., it models individual annotator parameters. Such models typically require a larger number of observations to properly estimate the ability profile of the coders. In a crowdsourcing environment this requirement may not always be satisfied, e.g., in the initial stages of a crowdsourcing campaign, or in the commonly encountered scenario where the workload of the annotators resembles a power law curve (Ipeirotis, 2010; Chamberlain, 2016); both examples describe a sparse data environment which may prove difficult to handle for an unpooled model. One intuitive solution to this problem is to exploit the similarities found in the behaviour of the annotators. Simpson et al. (2011; 2013) identified, after fitting a model of annotation, distinctive clusters in the ability of the workers; more generally, typical annotator communities found in a crowdsourcing setup include spammers and adversarial, biased, average, or high-quality players. Knowledge of these communities makes it possible to regularize the ability of the annotators towards the profile of the community they are part of. This partially pooled structure can prove effective in conditions of sparsity where there are not enough observations to accurately estimate the ability of the annotators in isolation. The level of pooling is dictated by the data, such that, when enough observations are gathered, the partially pooled and the unpooled models perform similarly.
Contributions. In this work we extend the unpooled mention pair model of annotation (MPA) proposed by Paun et al. (2018b) with hierarchical communities of annotators. We let the number of communities grow with the data, a flexibility that we achieve using a Dirichlet process mixture based on a stick-breaking representation of the underlying Dirichlet process. We conduct the evaluation on the Phrase Detectives 2 corpus, at various levels of sparsity, assessing the accuracy of the inferred mention pairs, the quality of the post-hoc constructed silver chains, and the viability of using silver chains as an alternative to expert-annotated chains when training a state of the art coreference system. We discuss the inferred community profiles of a few known spammers and honest players of the game used to collect the Phrase Detectives 2 corpus. We conclude the evaluation with a performance check on traditional crowdsourcing datasets against several state of the art models.

2 A COMMUNITY MODEL FOR ANAPHORIC ANNOTATIONS
We use the same annotation scheme as Paun et al. (2018b): annotators mark mentions as discourse new if a new entity is being introduced into the discourse (represented internally as DN), with property for predicative noun phrases (PR), with non-referring for expletives (NR), and with discourse old if an already introduced entity is being mentioned, in which case the annotators must also specify the mention's most recent antecedent (DO(ante-id)). We also follow the same notation as in Paun et al. (2018b) and use the term label to refer to a given annotation (i.e., DN, DO(ante-id), PR, NR) and the term class to refer to the general category a given label belongs to (DN, DO, PR, NR).
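To make the label/class distinction concrete, here is a small illustrative example; the annotation values and antecedent ids are hypothetical, not taken from the corpus.

```python
# Hypothetical crowd annotations for a single mention; ids and
# values are made up purely for illustration.
annotations = ["DN", "DO(ante-12)", "DO(ante-12)", "DO(ante-7)", "PR"]

def label_class(label):
    """DO(ante-id) labels all share the class DO; DN, PR, and NR
    are each their own class."""
    return "DO" if label.startswith("DO") else label

candidate_labels = sorted(set(annotations))
# -> ['DN', 'DO(ante-12)', 'DO(ante-7)', 'PR']   (4 distinct labels)
candidate_classes = [label_class(l) for l in candidate_labels]
# -> ['DN', 'DO', 'DO', 'PR']                    (3 distinct classes)
```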
2.1 MODEL SPECIFICATION
Similar to MPA, the model we propose – which we will henceforth refer to as COMMUNITY MPA – assumes a pre-processing step in which the mention-level annotations are transformed into a series of binary decisions with respect to each (distinct) candidate label. Both models assume a similar generative process for these decisions, although for COMMUNITY MPA we used a different parameterization to later accommodate the hierarchical structure:

For every mention $i \in \{1, 2, ..., I\}$:
– For every (distinct) candidate label $m \in \{1, 2, ..., M_i\}$:
  - Draw the true label indicator $c_{i,m} \sim \mathrm{Bern}(\pi_{z_{i,m}})$, where $z_{i,m}$ is the class of the $m$-th candidate label for mention $i$.
  - For every position $n \in \{1, 2, ..., N_{i,m}\}$:
    - If $c_{i,m} = 1$, draw the decision based on the sensitivity of the annotator: $y_{i,m,n} \sim \mathrm{Bern}(\sigma(\alpha_{jj[i,m,n],\,z_{i,m}}))$, where $\sigma(\cdot)$ is the standard logistic function and $jj[i,m,n]$ returns the index of the annotator who made the $n$-th decision on the $m$-th label of mention $i$.
    - Otherwise, draw the decision based on the annotator's specificity: $y_{i,m,n} \sim \mathrm{Bern}(1 - \sigma(\beta_{jj[i,m,n],\,z_{i,m}}))$.

The annotators belong to different communities and have abilities that depend on the class of the mentions they annotate:

For every annotator $j \in \{1, 2, ..., J\}$:
– Draw a community $x_j \sim \mathrm{Cat}(\lambda(v))$, where $\lambda(v)$ is the vector of stick lengths; the length of the $r$-th stick is $\lambda_r(v) = v_r \prod_{r'=1}^{r-1}(1 - v_{r'})$, and the different stick lengths represent the prevalence of the communities.
– For every class $h \in \{1, 2, ..., K\}$:
  - Draw sensitivity $\alpha_{j,h} \sim \mathrm{Normal}(\mu_{x_j,h}, \eta_{x_j,h})$
  - Draw specificity $\beta_{j,h} \sim \mathrm{Normal}(\tau_{x_j,h}, \omega_{x_j,h})$

The communities serve as hierarchical priors on the ability of the annotators, regularizing it towards their mean as strongly as evidenced by the data (an effect captured by the variance parameters):

For every community $r \in \{1, 2, ..., \infty\}$:
– Draw a stick proportion $v_r \sim \mathrm{Beta}(1, b)$; the length of the stick broken at that proportion gives the prevalence of the community.
– For every class $h \in \{1, 2, ..., K\}$:
  - Draw mean sensitivity $\mu_{r,h} \sim \mathrm{Normal}(d_0, d_1)$
  - Draw sensitivity variance $\eta_{r,h} \sim \mathrm{InverseGamma}(e_0, e_1)$
  - Draw mean specificity $\tau_{r,h} \sim \mathrm{Normal}(t_0, t_1)$
  - Draw specificity variance $\omega_{r,h} \sim \mathrm{InverseGamma}(u_0, u_1)$

Finally, the model is completed with conjugate priors:

For every class $h \in \{1, 2, ..., K\}$:
– Draw the class-specific true label likelihood $\pi_h \sim \mathrm{Beta}(a_0, a_1)$
Draw a scale $b \sim \mathrm{Gamma}(s_0, s_1)$; this scaling parameter affects the growth of the number of communities with the data.

Both MPA and its extension COMMUNITY MPA function as components in a standard mention pair framework: each mention is assigned the most likely candidate label based on the posterior of the label indicators, and the coreference chains are built from the mention pair links.
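For intuition, here is a minimal numpy sketch of the community-level part of this generative process, using a truncated stick-breaking approximation; all hyperparameter values and the symbol-to-variable mappings are illustrative assumptions, not the paper's settings.

```python
# Truncated stick-breaking simulation of the community priors and
# annotator abilities; hyperparameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
R, K, J = 20, 4, 50              # truncation level, classes, annotators

# Stick proportions v_r ~ Beta(1, b) and stick lengths lambda_r(v)
b = 1.0
v = rng.beta(1.0, b, size=R)
sticks = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
sticks /= sticks.sum()           # renormalize the truncated sticks

# Community-level profiles per class: means and variances
mu  = rng.normal(1.0, 1.0, size=(R, K))         # mean sensitivities
tau = rng.normal(1.0, 1.0, size=(R, K))         # mean specificities
eta   = 1.0 / rng.gamma(2.0, 1.0, size=(R, K))  # sensitivity variances
omega = 1.0 / rng.gamma(2.0, 1.0, size=(R, K))  # specificity variances

# Annotators: community membership and class-conditional abilities
x = rng.choice(R, size=J, p=sticks)
alpha = rng.normal(mu[x],  np.sqrt(eta[x]))     # sensitivities, (J, K)
beta  = rng.normal(tau[x], np.sqrt(omega[x]))   # specificities, (J, K)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decision(j, h, c):
    """Binary decision of annotator j on a candidate label of class h
    whose true indicator is c (1 = correct label)."""
    p = sigmoid(alpha[j, h]) if c == 1 else 1.0 - sigmoid(beta[j, h])
    return int(rng.binomial(1, p))
```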
2.2 PARAMETER ESTIMATION NOTES
We estimate the parameters of the proposed model using variational inference, which is deterministic, typically fast, and benefits from a clear convergence criterion (Blei et al., 2017).
We parameterized the model with conjugacy in mind, making sure the complete conditionals are part of the exponential family. In this case the corresponding variational distributions take the same form and have natural parameters equal to the expected value (under the variational distribution) of the natural parameters of the complete conditionals (Blei & Jordan, 2006; Hoffman et al., 2013). Conjugacy was not directly obtained in the cases involving the standard logistic function; we addressed that using the bound from Jaakkola & Jordan (2000). Lastly, the stick-breaking representation of the Dirichlet process complies nicely with conjugacy as well (Blei & Jordan, 2006). We approximate the infinite mixture of user communities with truncated variational distributions. The derivations are fairly standard in the machine learning literature and for space constraints are omitted from the main paper but included in Appendix A.

3 EVALUATION
We conducted the experiments on the recently released Phrase Detectives 2 corpus (Poesio et al., 2019). The dataset was annotated in a game-with-a-purpose setting, where 98.67% of the collected judgements come from players who produced more than 40 annotations each. To simulate a sparser evaluation environment, we break the larger player workloads into smaller batches, each of which we further assume was produced by a different player, such that the workload of the players does not exceed a fixed threshold (a sketch of this splitting procedure is given at the end of Section 3.1). Under this procedure the annotations are kept unchanged, offering greater confidence when assessing whether the differences in performance between the partially pooled COMMUNITY MPA model and its unpooled counterpart MPA come from the number of annotator observations that each of these models requires to properly estimate their ability.

3.1 MENTION PAIR ACCURACY
This subsection presents the results for the agreement between the inferred mention pairs and the gold pairs. Both the COMMUNITY MPA and MPA models were trained on the full Phrase Detectives 2 corpus and evaluated on the expert-annotated subset. Both model implementations produce posterior point estimates for each candidate interpretation; we assign each mention the interpretation with the most mass under the posterior.
[Figure 1: A per-class evaluation of the inferred mention pairs matched against expert annotations. Panels: (a) property results (predicative NPs); (b) non-referring results (expletives); (c) discourse new results; (d) discourse old results. The "all" configuration uses the dataset as it is (it does not alter the player workloads).]
Figure 1 shows the results obtained by the models, for each class, under different workload configurations (i.e., maximum player workload). The trend lines clearly indicate a better performance of COMMUNITY MPA across all classes. The gap in performance between the two models is largest when the maximum number of observations (annotations) a player can have is capped at 5, and closes as the number of available observations increases, reaching on-par performance when the dataset is used as it is (the "all" configuration). Remember that when the corpus is used as it is, almost all the data (98.67%) is produced by players with more than 40 annotations each, leaving plenty of observations for an unpooled model like MPA to properly profile their ability. We saw, however, that MPA suffers in conditions of sparsity, which the partially pooled COMMUNITY MPA model alleviates through the hierarchical structure (the annotators' ability is pooled towards the ability of the community they are part of).
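As referenced above, here is a minimal sketch of the workload-splitting used to simulate sparsity; the data layout and names are illustrative assumptions.

```python
# Split each player's workload into batches of at most max_workload
# annotations and treat every batch as a new pseudo-player; the
# annotations themselves are left unchanged.
from collections import defaultdict

def split_workloads(annotations, max_workload):
    """annotations: iterable of (player_id, item_id, label) tuples."""
    per_player = defaultdict(list)
    for player, item, label in annotations:
        per_player[player].append((item, label))
    out = []
    for player, work in per_player.items():
        for batch, start in enumerate(range(0, len(work), max_workload)):
            pseudo_id = f"{player}#{batch}"
            out.extend((pseudo_id, item, label)
                       for item, label in work[start:start + max_workload])
    return out
```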
3.1 MENTION PAIR ACCURACY

This subsection presents the results for the agreement between the inferred mention pairs and the gold pairs. Both the COMMUNITY MPA and MPA models were trained on the full Phrase Detectives 2 corpus and evaluated on the expert-annotated subset. Both model implementations produce posterior point estimates for each candidate interpretation; we assign each mention the interpretation with the most mass under the posterior.

[Figure 1: A per-class evaluation of the inferred mention pairs matched against expert annotations -- (a) property (predicative NPs), (b) non-referring (expletives), (c) discourse new, (d) discourse old. The "all" configuration uses the dataset as is (it does not alter the player workloads).]

Figure 1 shows the results obtained by the models, for each class, under different workload configurations (i.e., maximum player workload). The trend lines clearly indicate a better performance of COMMUNITY MPA across all classes. The gap in performance between the two models is largest when the maximum number of observations (annotations) a player can have is capped at 5, and closes as the number of available observations increases, reaching on-par performance when the dataset is used as is (the "all" configuration). Recall that when the corpus is used as is, almost all the data (98.67%) is produced by players with more than 40 annotations each, giving an unpooled model like MPA plenty of observations to properly profile their ability. We saw, however, that MPA suffers in conditions of sparsity, which the partially pooled COMMUNITY MPA model alleviates through its hierarchical structure (the annotators' ability is pooled towards the ability of the community they are part of).

3.2 SILVER CHAIN QUALITY

We build the silver coreference chains from the inferred mention pairs by following the link structure inherent in the pairs (a sketch of this construction is given below). We assess the quality of the constructed chains by comparing them against gold (expert-annotated) chains using standard coreference metrics. For this evaluation we used the scorer introduced by Poesio et al. (2018) to assess the quality of the chains both in the traditional CONLL style (without singletons) and with singletons included in the evaluation.

[Figure 2: The quality of silver chains evaluated against gold chains -- (a) singletons excluded, (b) singletons included.]

Figure 2 shows the average F1 obtained when assessing the silver chains produced by the two models. The trend lines are similar to those from the mention pair evaluation: a better performance of COMMUNITY MPA that increases with the maximum player workload, with the largest gap over MPA registered at the lowest number of observations considered in the evaluation, and on-par performance when the dataset is used as is. Again, we can see the benefits in low-count data brought by the hierarchical structure of COMMUNITY MPA.
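A minimal sketch of the chain construction step, assuming each discourse-old mention has been assigned its most likely antecedent (any other interpretation starts its own chain); a union-find structure makes the link-following explicit. Names are illustrative.

```python
def build_chains(antecedent_of, mentions):
    """antecedent_of: dict mapping a mention to its inferred antecedent
    (DO(ante-id) interpretations); mentions not in the dict start their own
    chain. Returns the list of coreference chains (including singletons)."""
    parent = {m: m for m in mentions}

    def find(m):  # path-compressing union-find
        while parent[m] != m:
            parent[m] = parent[parent[m]]
            m = parent[m]
        return m

    for mention, antecedent in antecedent_of.items():
        parent[find(mention)] = find(antecedent)

    chains = {}
    for m in mentions:
        chains.setdefault(find(m), []).append(m)
    return list(chains.values())

# e.g., mentions 1..5 where 3 corefers with 1 and 5 with 3:
# build_chains({3: 1, 5: 3}, mentions=[1, 2, 3, 4, 5]) -> [[1, 3, 5], [2], [4]]
```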
3.3 TRAINING ON SILVER CHAINS

In this section we assess the viability of using silver chains as an alternative to expert-annotated chains when training a state-of-the-art coreference system. We use the system of Poesio et al. (2019), as other coreference systems cannot classify singletons and non-referring expressions (i.e., expletives and predicative NPs), one of the distinctive characteristics of the Phrase Detectives 2 corpus. We used as test data the gold chains that come with the corpus, while for training and development we used silver chains constructed a posteriori from the aggregated mention pairs. We report the results obtained using the scorer introduced in the previous section, which can handle both singletons and non-referring expressions in addition to the traditional CoNLL evaluation.

[Figure 3: Results of a state-of-the-art coreference system trained on silver chains -- (a) singletons excluded, (b) singletons included, (c) predicative NPs and expletives.]

The results, presented in Figure 3, paint a similar picture to those from the previous evaluation sections: a better performance of the COMMUNITY MPA model in conditions of sparsity, with MPA closing the gap as more observations are allowed for each coder.

3.4 INFERRED USER COMMUNITIES

The Phrase Detectives 2 dataset also includes an anonymized list of spammers and one of honest, well-established players from the game the corpus was collected with. We use these two lists to introduce and discuss some of the inferred communities these players were assigned to.

[Figure 4: Examples of inferred user communities from the Phrase Detectives 2 corpus -- (a) "DN spammers" (4.5%), (b) "Top players" (14.1%), (c) "Average players" (57.8%), (d) "DN biased players" (9%). The horizontal line refers to the average ability for DN (discourse new), PR (property, i.e., predicative NPs), DO (discourse old) and NR (non-referring, i.e., expletives). Each community is given an intuitive name and is presented together with its prevalence.]

Figure 4 presents the community profiles that the players from the aforementioned lists belong to. Interestingly, all the spammers were assigned to the community from Figure 4a. The model assigns almost full probability mass to the average specificity of this community and negligible mass to the sensitivity for all classes, with the exception of DN, where this mass assignment is reversed. For that to objectively be the case, the players under this community would have to (almost) always choose the DN interpretation, an aspect confirmed by an inspection of their annotations. The annotators from the list of honest players have more diverse ability profiles. Most of them belong to the pool of "Top players" presented in Figure 4b. The players from this community have a solid understanding of anaphoric annotation and generally a large workload; for example, the 10 people from the supplied list that the model assigned to this community have provided 35% of the entire corpus annotations. Figure 4c shows the profile of the average players, named so based on the large prevalence of this community in the population of annotators. Looking at their sensitivity, the estimates confirm the general intuition that PR cases (predicative NPs) are the hardest to spot, mostly because they can easily be confused with DN (discourse new); indefinite NPs (e.g., "a policeman") are the most common type of mention in both classes. Expletives (NR) and the introduction of new entities into the discourse (DN) are the easiest for most people to understand. Another intuitive result is that discourse old mentions are more difficult than discourse new ones; the former require the identification of the most recent antecedent, which could go wrong for various reasons (e.g., ambiguity, negligence). Finally, we could also find in the supplied list players who are biased towards DN, but who occasionally make the effort to supply other interpretations as well (see Figure 4d).
3.5 TRADITIONAL CROWDSOURCING TASKS

Both COMMUNITY MPA and its unpooled counterpart MPA can be applied to traditional crowdsourcing datasets where the set of classes the coders can choose from is the same across the annotated items. Under the modeling framework described in this work (Section 2), for this type of data the labels coincide with the classes they belong to. We compare the aforementioned models against the methods from Moreno et al. (2015), which include a majority vote baseline (sketched below), the iBCC model of Kim & Ghahramani (2012), which is a Bayesian version of the model presented in the seminal work of Dawid & Skene (1979), and two nonparametric community models: the cBCC model, where annotators are assigned to communities and annotate according to the profile of the community they belong to, and the hcBCC model, which assumes both annotator- and community-level structures. The latter model is a nonparametric extension of the Dawid & Skene (1979) model, similarly to how COMMUNITY MPA extends MPA.

The evaluation was conducted on 4 datasets commonly used in the crowdsourcing literature (Moreno et al., 2015; Raykar & Yu, 2012; Snow et al., 2008; Hovy et al., 2013; Paun et al., 2018a).

                    Bluebird          RTE               Valence           Temp
    Majority vote   75.93 (82/108)    91.88 (735/800)   80.00 (80/100)    93.94 (434/462)
    iBCC            89.81 (97/108)    92.88 (743/800)   85.00 (85/100)    94.35 (436/462)
    cBCC            88.89 (96/108)    93.12 (745/800)   88.00 (88/100)    94.37 (436/462)
    hcBCC           88.89 (96/108)    93.12 (745/800)   89.00 (89/100)    94.37 (436/462)
    MPA             88.89 (96/108)    93.00 (744/800)   86.00 (86/100)    94.16 (435/462)
    COMMUNITY MPA   89.81 (97/108)    93.25 (746/800)   85.00 (85/100)    94.16 (435/462)

Table 1: Accuracy (correctly adjudicated items / total number of items) results on traditional crowdsourcing datasets. The results of the first four methods are as reported in Moreno et al. (2015).

Table 1 presents the accuracy results, indicating on-par performance among the probabilistic models and a slight advantage of these models over the majority vote baseline. The simple majority vote baseline implicitly assumes equal expertise among annotators, a well-known shortcoming, with previous studies reporting more significant differences in performance in favour of the probabilistic models of annotation on larger datasets (Venanzi et al., 2014; Paun et al., 2018a).
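For completeness, the majority vote baseline referenced above can be implemented in a few lines; this sketch assumes the crowd labels are grouped per item, with ties broken arbitrarily.

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Baseline from Table 1: assign each item its most frequent crowd label.
    labels_per_item: dict mapping an item id to the list of crowd labels."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_per_item.items()}

# e.g., majority_vote({"item1": ["DN", "DN", "PR"]}) -> {"item1": "DN"}
```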
The model we introduced in this work also assumes a par-tially pooled structure, but it was built as an extension of the MPA model for anaphoric annotations(Paun et al., 2018b). Compared to the nonparametric community model of Moreno et al. (2015), theclosest of the two community models to ours, besides the differences in the data these models weredesigned for, there are also differences in the parameterization and inference. Moreno et al. (2015)use a Chinese restaurant process to allow for a latent number of communities and Gibbs sampling forinference, whereas we use a stick breaking process with variational inference, aiming for conjugacythroughout our parameterization. Since our model can also be used on traditional crowdsourcingdatasets, we showed back in Section 3.5 that we get comparable performance to the state of the artwhich included the nonparametric model of Moreno et al. (2015).5 C ONCLUSIONSThe development of more powerful and versatile coreference resolution systems relies on the avail-ability of larger and linguistically richer datasets, and crowdsourcing has been identified as a viablealternative to expert annotation, offering comparable quality at a fraction of the costs and largerscalability. Although the study of models of annotation, necessary to adjudicating crowd labels, re-ceived much attention over the years, it was mostly aimed at standard classification tasks where theset of classes the coders can choose from is the same across the annotated items. The literature onmodels of anaphoric annotation is scarce, the only previous effort in this direction being the recentlyintroduced mention pair model of Paun et al. (2018b).In this work we extended the unpooled model proposed by Paun et al. (2018b) with hierarchicalcommunities of annotators, using information about the community to improve the estimates of theindividuals. The partially pooled extension is nonparametric, letting the number of communitiesgrow with the data, flexibility we achieved with the help of a Dirichlet process mixture based on astick-breaking representation of the underlying Dirichlet process. The hierarchical structure offersa better resilience to sparsity, making the model a better fit (compared to its unpooled counterpart)for a larger number of crowdsourcing setups. We demonstrated this across a number of coreferenceresolution related tasks, in various levels of sparsity, assessing the accuracy of the inferred mentionpairs, the quality of the post-hoc constructed silver chains, and the viability of using silver chainsas an alternative to expert-annotated chains when training a state of the art coreference system. Wealso included a discussion of the inferred community profiles.The model, although developed for anaphoric annotation, is also flexible enough to be used in tradi-tional crowdsourcing setups where the set of classes the coders can choose from is the same acrossthe annotated items. We showed, in this context, the model is on par with the state of the art.The paper also includes guidance for the estimation of the parameters using variational inference(see appendix) and is accompanied by the code implementing the proposed model.8Under review as a conference paper at ICLR 2020
Byec-4kCYr
Official Blind Review #2
3: Weak Reject
This paper extends the unpooled mention pair model of annotation (MPA) (Paun et al., 2018b) with hierarchical priors (e.g., mean and variance) on the ability of the annotators. The proposed method was evaluated on the Phrase Detectives 2 corpus, which was annotated by players in a game-with-a-purpose setting for coreference resolution. To control the sparsity of the dataset, the authors split annotations from larger player workloads into smaller batches, and assumed that each batch was produced by a different player. The experimental results show that, when the data is sparse, the proposed method (CommunityMPA) worked better than MPA on the Phrase Detectives 2 corpus in terms of mention-pair accuracy, silver-chain quality, and the performance of a state-of-the-art method trained on the aggregated mention pairs. This paper also includes a discussion of the inferred community profiles. The comparison with traditional approaches that consider communities showed that the proposed method is comparable to them.

I am wondering about the connection between community and sparsity. This study assumes that knowledge of communities (spammers, adversarial, biased, average or high-quality players) would allow regularizing the ability of the annotators towards the community profile. In P2, this paper wrote, "This partially pooled structure can prove effective in conditions of sparsity where there are not enough observations to accurately estimate the ability of the annotators in isolation." I have two questions here: why is it effective to consider communities for reducing the problem of sparsity? If the knowledge of communities is useful, why did the advantage of the proposed method disappear in Figures 1, 2, and 3 when all the data was used?

In addition, I'm not convinced by the idea of "breaking the larger player workloads into smaller batches" for simulating sparsity and communities. This treatment introduces quite a few shadow users whose capabilities are exactly the same, and it deviates from the reality of the user community. Does this treatment favor the proposed method over MPA more than necessary? I am also wondering why evaluating portions of the dataset where annotations were made by 'sparse' users would not work to highlight the effectiveness of the proposed method for sparse users.

The impact of this paper would be greater if the experimental results could support the importance of modeling user communities on real data. The authors may justify the simulation procedure for sparsity because 98.67% of the Phrase Detectives 2 corpus was annotated by those who produced more than 40 annotations. However, I also think it is important to show how the proposed method is effective on real data with sparse annotators. Currently, Table 1 shows no improvement over the conventional methods.

Minor comment: In Section 2.1, it was difficult to separate which part is the base model (MPA) and which is the novel proposal without reading Paun et al. (2018b).
uQfOy7LrlTR
ICLR.cc/2021/Conference
2021
Scaling the Convex Barrier with Active Sets
["Alessandro De Palma", "Harkirat Behl", "Rudy R Bunel", "Philip Torr", "M. Pawan Kumar"]
Tight and efficient neural network bounding is of critical importance for the scaling of neural network verification systems. A number of efficient specialised dual solvers for neural network bounds have been presented recently, but they are often too loose to verify more challenging properties. This lack of tightness is linked to the weakness of the employed relaxation, which is usually a linear program of size linear in the number of neurons. While a tighter linear relaxation for piecewise linear activations exists, it comes at the cost of exponentially many constraints and thus currently lacks an efficient customised solver. We alleviate this deficiency via a novel dual algorithm that realises the full potential of the new relaxation by operating on a small active set of dual variables. Our method recovers the strengths of the new relaxation in the dual space: tightness and a linear separation oracle. At the same time, it shares the benefits of previous dual approaches for weaker relaxations: massive parallelism, GPU implementation, low cost per iteration and valid bounds at any time. As a consequence, we obtain better bounds than off-the-shelf solvers in only a fraction of their running time and recover the speed-accuracy trade-offs of looser dual solvers if the computational budget is small. We demonstrate that this results in significant formal verification speed-ups.
["Neural Network Verification", "Neural Network Bounding", "Optimisation for Deep Learning"]
ABSTRACT

Tight and efficient neural network bounding is of critical importance for the scaling of neural network verification systems. A number of efficient specialised dual solvers for neural network bounds have been presented recently, but they are often too loose to verify more challenging properties. This lack of tightness is linked to the weakness of the employed relaxation, which is usually a linear program of size linear in the number of neurons. While a tighter linear relaxation for piecewise linear activations exists, it comes at the cost of exponentially many constraints and thus currently lacks an efficient customised solver. We alleviate this deficiency via a novel dual algorithm that realises the full potential of the new relaxation by operating on a small active set of dual variables. Our method recovers the strengths of the new relaxation in the dual space: tightness and a linear separation oracle. At the same time, it shares the benefits of previous dual approaches for weaker relaxations: massive parallelism, GPU implementation, low cost per iteration and valid bounds at any time. As a consequence, we obtain better bounds than off-the-shelf solvers in only a fraction of their running time, and we recover the speed-accuracy trade-offs of looser dual solvers if the computational budget is small. We demonstrate that this results in significant formal verification speed-ups.

1 INTRODUCTION

Verification requires formally proving or disproving that a given property of a neural network holds over all inputs in a specified domain. We consider properties in their canonical form (Bunel et al., 2018), which requires us to either: (i) prove that no input results in a negative output (the property is true); or (ii) identify a counter-example (the property is false). The search for counter-examples is typically performed by efficient methods such as random sampling of the input domain (Webb et al., 2019) or projected gradient descent (Carlini & Wagner, 2017). In contrast, establishing the veracity of a property requires solving a suitable convex relaxation to obtain a lower bound on the minimum output. If the lower bound is positive, the given property is true. If the bound is negative and no counter-example is found, either: (i) we make no conclusions regarding the property (incomplete verification); or (ii) we further refine the counter-example search and lower bound computation within a branch-and-bound framework until we reach a concrete conclusion (complete verification).

The main bottleneck of branch and bound is the computation of the lower bound for each node of the enumeration tree via convex optimization. While earlier works relied on off-the-shelf solvers (Ehlers, 2017; Bunel et al., 2018), it was quickly established that such an approach does not scale up elegantly with the size of the neural network. This has motivated researchers to design specialized dual solvers (Dvijotham et al., 2019; Bunel et al., 2020a), thereby providing initial evidence that verification can be realised in practice. However, the convex relaxation considered in these dual solvers is itself very weak (Ehlers, 2017), hitting what is now commonly referred to as the "convex barrier" (Salman et al., 2019). In practice, this implies that either several properties remain undecided in incomplete verification, or they take several hours to be verified exactly.

Multiple works have tried to overcome the convex barrier for piecewise linear activations (Raghunathan et al., 2018; Singh et al., 2019).
Here, we focus on the single-neuron Linear Programming (LP) relaxation by Anderson et al. (2020). Unfortunately, its tightness comes at the price of exponentially many (in the number of variables) constraints. Therefore, existing dual solvers (Dvijotham et al., 2018; Bunel et al., 2020a) are not easily applicable, limiting the scaling of the new relaxation. We address this problem by presenting a specialized dual solver for the relaxation by Anderson et al. (2020), which realises its full potential by meeting the following desiderata:

- By keeping an active set of dual variables, we obtain a sparse dual solver that recovers the strengths of the original primal problem (Anderson et al., 2020) in the dual domain. In line with previous dual solvers, our approach yields valid bounds at any time, leverages convolutional network structure and enjoys massive parallelism within a GPU implementation, resulting in better bounds in an order of magnitude less time than off-the-shelf solvers (Gurobi Optimization, 2020).
- We present a unified dual treatment that includes both a linearly sized LP relaxation (Ehlers, 2017) and the tighter formulation. As a consequence, our solver provides a wide range of speed-accuracy trade-offs: (i) it is competitive with dual approaches on the looser relaxation (Dvijotham et al., 2018; Bunel et al., 2020a); and (ii) it yields much tighter bounds if a larger computational budget is available. Owing to this flexibility, we show that our dual algorithm yields large complete verification gains compared to primal approaches (Anderson et al., 2020) and previous dual algorithms.

2 PRELIMINARIES: NEURAL NETWORK RELAXATIONS

We denote vectors by bold lower case letters (for example, x) and matrices by upper case letters (for example, W). We use \odot for the Hadamard product, [[a, b]] for integer ranges, 1_a for the indicator vector on condition a, and brackets for intervals ([l_k, u_k]) and vector or matrix entries (x[i] or W[i,j]). In addition, given W \in R^{m \times n} and x \in R^m, we employ W \odot x and W @ x as shorthands for, respectively, \sum_i col_i(W) \odot x and \sum_i col_i(W)^T x, where col_i(W) denotes the i-th column of matrix W.

Let C be the network input domain. Similar to Dvijotham et al. (2018); Bunel et al. (2020a), we assume that linear minimisation over C can be performed in closed form. Our goal is to compute bounds on the scalar output of a piecewise-linear feedforward neural network. The tightest possible lower bound can be obtained by solving the following optimization problem:

    min_{x, \hat{x}} \hat{x}_n   s.t.   x_0 \in C,                                    (1a)
    \hat{x}_{k+1} = W_{k+1} x_k + b_{k+1},    k \in [[0, n-1]],                       (1b)
    x_k = \sigma(\hat{x}_k),                  k \in [[1, n-1]],                       (1c)

where the activation function \sigma(\hat{x}_k) is piecewise-linear, \hat{x}_k, x_k \in R^{n_k} denote the outputs of the k-th linear layer (fully-connected or convolutional) and activation function respectively, W_k and b_k denote its weight matrix and bias, and n_k is the number of activations at layer k. We will focus on the ReLU case (\sigma(x) = max(x, 0)), as common piecewise-linear functions can be expressed as a composition of ReLUs (Bunel et al., 2020b).

Problem (1) is non-convex due to the activation function's non-linearity (1c). As solving it is NP-hard (Katz et al., 2017), it is commonly approximated by a convex relaxation (see §4). The quality of the corresponding bounds, which is fundamental in verification, depends on the tightness of the relaxation. Unfortunately, tighter relaxations usually correspond to slower bounding procedures. We first review a popular ReLU relaxation in §2.1. We then consider a tighter one in §2.2.

2.1 PLANET RELAXATION

The so-called Planet relaxation (Ehlers, 2017) has enjoyed widespread use due to its amenability to efficient customised solvers (Dvijotham et al., 2018; Bunel et al., 2020a) and is the "relaxation of choice" for many works in the area (Bunel et al., 2020b; Lu & Kumar, 2020). Here, we describe it in its non-projected form M_k, the LP relaxation of the Big-M Mixed Integer Programming (MIP) formulation (Tjeng et al., 2019). Applying M_k to problem (1) results in:

    min_{x, \hat{x}, z} \hat{x}_n
    s.t.  x_0 \in C,
          \hat{x}_{k+1} = W_{k+1} x_k + b_{k+1},                              k \in [[0, n-1]],
          x_k \geq \hat{x}_k,   x_k \leq \hat{u}_k \odot z_k,                                        (2)
          x_k \leq \hat{x}_k - \hat{l}_k \odot (1 - z_k),                      } := M_k,  k \in [[1, n-1]],
          (x_k, \hat{x}_k, z_k) \in [l_k, u_k] \times [\hat{l}_k, \hat{u}_k] \times [0, 1]

where \hat{l}_k, \hat{u}_k and l_k, u_k are intermediate bounds on the pre-activation variables \hat{x}_k and post-activation variables x_k respectively. These constants play an important role in the structure of M_k and, together with the relaxed binary constraints on z, define box constraints on the variables. We detail how to compute intermediate bounds in appendix E.
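The paper computes intermediate bounds with the methods detailed in its appendix E; as an illustration of what such bounds look like, here is a minimal interval-arithmetic sketch, which is one standard (if loose) way to obtain valid \hat{l}_k, \hat{u}_k. It is an assumption-laden stand-in, not the paper's procedure.

```python
import numpy as np

def intermediate_bounds(weights, biases, l0, u0):
    """Propagate elementwise input bounds [l0, u0] through linear + ReLU layers,
    returning pre-activation bounds (l_hat, u_hat) for every layer.
    Plain interval arithmetic: valid but generally loose."""
    l, u = l0, u0
    pre_bounds = []
    for W, b in zip(weights, biases):
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        l_hat = W_pos @ l + W_neg @ u + b   # worst case uses l where W >= 0, u where W < 0
        u_hat = W_pos @ u + W_neg @ l + b   # and vice versa
        pre_bounds.append((l_hat, u_hat))
        l, u = np.maximum(l_hat, 0), np.maximum(u_hat, 0)  # ReLU post-activation box
    return pre_bounds
```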
Projecting out the auxiliary variables z results in the Planet relaxation (cf. appendix B.1 for details), which replaces (1c) by its convex hull.

Problem (2), which is linearly sized, can easily be solved via commercial black-box LP solvers (Bunel et al., 2018). This does not scale up well with the size of the neural network, motivating the need for specialised solvers. Customised dual solvers have been designed by relaxing constraints (1b), (1c) (Dvijotham et al., 2018) or by replacing (1c) with the Planet relaxation and employing Lagrangian Decomposition (Bunel et al., 2020a). Both approaches result in bounds very close to optimality for problem (2) in only a fraction of the runtime of off-the-shelf solvers.

2.2 A TIGHTER RELAXATION

A much tighter approximation of problem (1) than the Planet relaxation (§2.1) can be obtained by representing the convex hull of the composition of (1b) and (1c), rather than the convex hull of (1c) alone. A formulation of this type was recently introduced by Anderson et al. (2020).

Let us define L_{k-1}, U_{k-1} \in R^{n_k \times n_{k-1}} as:
L_{k-1}[i,j] = l_{k-1}[j] 1_{W_k[i,j] \geq 0} + u_{k-1}[j] 1_{W_k[i,j] < 0}, and U_{k-1}[i,j] = u_{k-1}[j] 1_{W_k[i,j] \geq 0} + l_{k-1}[j] 1_{W_k[i,j] < 0}.
Additionally, let us introduce 2^{W_k} = {0, 1}^{n_k \times n_{k-1}}, the set of all possible binary masks of weight matrix W_k, and E_k := 2^{W_k} \ {0, 1}, which excludes the all-zero and all-one masks. The new representation results in the following primal problem:

    min_{x, \hat{x}, z} \hat{x}_n
    s.t.  x_0 \in C,
          \hat{x}_{k+1} = W_{k+1} x_k + b_{k+1},    k \in [[0, n-1]],
          (x_k, \hat{x}_k, z_k) \in M_k,
          x_k \leq ( (W_k \odot I_k) x_{k-1} + z_k \odot b_k                                         (3)
                     - (W_k \odot I_k \odot L_{k-1}) \odot (1 - z_k)           } := A_k,  k \in [[1, n-1]].
                     + (W_k \odot (1 - I_k) \odot U_{k-1}) \odot z_k )   \forall I_k \in E_k

Both M_k and A_k yield valid MIP formulations for problem (1) when integrality constraints are imposed on z. However, the LP relaxation of A_k yields tighter bounds. In the worst case, this tightness comes at the cost of exponentially many constraints: one for each I_k \in E_k. On the other hand, given a set of primal assignments (x, z) that are not necessarily feasible for problem (3), one can efficiently compute the most violated constraint (if any) at that point. The mask associated with such a constraint can be computed in linear time (Anderson et al., 2020) as:

    I_k[i,j] = 1_{ ( (1 - z_k[i]) L_{k-1}[i,j] + z_k[i] U_{k-1}[i,j] - x_{k-1}[j] ) W_k[i,j] \geq 0 }.    (4)
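A minimal NumPy sketch of the separation oracle (4), assuming z_k is indexed by output neuron i and x_{k-1} by input neuron j, so the mask condition is broadcast over the weight matrix; this is our reading of the reconstructed equation rather than reference code.

```python
import numpy as np

def most_violated_mask(W_k, L_prev, U_prev, x_prev, z_k):
    """Entry-wise mask I_k from eq. (4): include weight (i, j) in the mask
    whenever doing so tightens the Anderson constraint at the current (x, z)."""
    # Broadcast: rows are output neurons i, columns are input neurons j.
    gate = (1.0 - z_k)[:, None] * L_prev + z_k[:, None] * U_prev - x_prev[None, :]
    return (gate * W_k >= 0).astype(np.float64)  # I_k[i, j] in {0, 1}
```

Masks equal to the all-zero or all-one matrix correspond to (variants of) the Big-M constraints already present in M_k, which is why only masks in E_k need to be separated.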
Owing to the exponential number of constraints, problem (3) cannot be solved as it is. As outlined by Anderson et al. (2020), the availability of a linear-time separation oracle (4) offers a natural primal cutting plane algorithm, which can then be implemented in off-the-shelf solvers: solve the Big-M LP (2), then iteratively add the most violated constraints from $\mathcal{A}_k$ at the optimal solution. When applied to the verification of small neural networks via off-the-shelf MIP solvers, this leads to substantial gains with respect to the looser Big-M relaxation (Anderson et al., 2020).

3 AN EFFICIENT DUAL SOLVER FOR THE TIGHTER RELAXATION

Inspired by the success of dual approaches on looser relaxations (Bunel et al., 2020a; Dvijotham et al., 2019), we show that the formal verification gains by Anderson et al. (2020) (see §2.2) scale to larger networks if we solve the tighter relaxation in the dual space. Due to the particular structure of the relaxation, a customised solver for problem (3) needs to meet a number of requirements.

Fact 1. In order to replicate the success of previous dual algorithms on looser relaxations, we need a solver for problem (3) with the following properties: (i) sparsity: a memory cost linear in the number of network activations, in spite of the exponentially many constraints; (ii) tightness: the bounds should reflect the quality of those obtained in the primal space; (iii) anytime: low cost per iteration and valid bounds at each step.

The anytime requirement motivates dual solutions: any dual assignment yields a valid bound due to weak duality. Unfortunately, as shown in appendix A, neither of the two dual derivations by Bunel et al. (2020a); Dvijotham et al. (2018) readily satisfies all desiderata at once. Therefore, we need a completely different approach. Let us introduce dual variables $(\lambda, \mu)$ and functions thereof:

$f_k(\lambda, \mu) = \lambda_k - W_{k+1}^T \lambda_{k+1} - \sum_{I_k} \mu_{k, I_k} + \sum_{I_{k+1}} (W_{k+1} \odot I_{k+1})^T \mu_{k+1, I_{k+1}}$,
$g_k(\mu) = \sum_{I_k \in E_k} (W_k \odot (1 - I_k) \odot U_{k-1}) \odot \mu_{k, I_k} + \mu_{k,0} \odot \hat{u}_k + \mu_{k,1} \odot \hat{l}_k + \sum_{I_k \in E_k} (W_k \odot I_k \odot L_{k-1}) \odot \mu_{k, I_k} + \sum_{I_k \in E_k} \mu_{k, I_k} \odot b_k$, (5)

where $\sum_{I_k}$ is a shorthand for $\sum_{I_k \in 2^{W_k}}$. Starting from primal (3), we relax all constraints in $\mathcal{A}_k$ except the box constraints (see §2.1). We obtain the following dual problem (derivation in appendix C), where functions $f_k, g_k$ appear in inner products with primal variables $x_k, z_k$:

$\max_{(\lambda, \mu) \geq 0} d(\lambda, \mu)$, where $d(\lambda, \mu) := \min_{x, z} \mathcal{L}(x, z, \lambda, \mu)$ and
$\mathcal{L}(x, z, \lambda, \mu) = \sum_{k=1}^{n-1} b_k^T \lambda_k - \sum_{k=0}^{n-1} f_k(\lambda, \mu)^T x_k - \sum_{k=1}^{n-1} g_k(\mu)^T z_k + \sum_{k=1}^{n-1} \left[ \sum_{I_k \in E_k} (W_k \odot I_k \odot L_{k-1}) \,@\, \mu_{k, I_k} + \mu_{k,1}^T (\hat{l}_k - b_k) \right]$,
s.t. $x_0 \in \mathcal{C}$, $(x_k, z_k) \in [l_k, u_k] \times [0, 1]$ for $k \in \llbracket 1, n-1 \rrbracket$. (6)

This is again a challenging problem: the exponentially many constraints in the primal (3) are now associated to an exponential number of variables. Nevertheless, we show that the requirements of Fact 1 can be met by operating on a restricted version of dual (6). To this end, we present Active Set, a specialised solver for the relaxation by Anderson et al. (2020) that is sparse, anytime and yields bounds reflecting the tightness of the new relaxation. Starting from the dual of problem (2), our solver iteratively adds variables to a small active set of dual variables $\mu_B$ and solves the resulting reduced version of problem (6). We first describe our solver on a fixed $B$ (§3.1) and then outline how to iteratively modify the active set (§3.2). Pseudo-code can be found in appendix D.
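The minus signs in (5)-(6) are easy to lose track of; as a consistency check (the full derivation is in appendix C), collecting every term of the Lagrangian that involves $x_k$ gives

$-\lambda_k^T x_k + \lambda_{k+1}^T W_{k+1} x_k + \sum_{I_k} \mu_{k, I_k}^T x_k - \sum_{I_{k+1}} \mu_{k+1, I_{k+1}}^T (W_{k+1} \odot I_{k+1})\, x_k = -f_k(\lambda, \mu)^T x_k,$

where the four terms come, respectively, from the lower bound $x_k \geq \hat{x}_k$ at layer $k$, the same bound at layer $k+1$, and the upper-bounding constraints of $\mathcal{A}_k$ and $\mathcal{A}_{k+1}$.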
3.1 ACTIVE SET SOLVER

We want to solve a version of problem (6) for which the sums over the $I_k$ masks of each layer $k$ are restricted to $B_k \subseteq E_k$, with $B = \cup_k B_k$ (as dual variables $\mu_{k, I_k}$ are indexed by $I_k$, $B$ implicitly defines an active set of variables $\mu_B$). By keeping $B = \emptyset$, we recover a novel dual solver for the Big-M relaxation (2) (explicitly described in appendix B), which is employed as initialisation. Setting $\mu_{k, I_k} = 0$ for all $I_k \in E_k \setminus B_k$ in (5), (6) and removing these variables from the formulation, we obtain:

$f_{B,k}(\lambda, \mu_B) = \lambda_k - W_{k+1}^T \lambda_{k+1} - \sum_{I_k \in B_k \cup \{0,1\}} \mu_{k, I_k} + \sum_{I_{k+1} \in B_{k+1} \cup \{0,1\}} (W_{k+1} \odot I_{k+1})^T \mu_{k+1, I_{k+1}}$,
$g_{B,k}(\mu_B) = \sum_{I_k \in B_k} (W_k \odot (1 - I_k) \odot U_{k-1}) \odot \mu_{k, I_k} + \mu_{k,0} \odot \hat{u}_k + \mu_{k,1} \odot \hat{l}_k + \sum_{I_k \in B_k} (W_k \odot I_k \odot L_{k-1}) \odot \mu_{k, I_k} + \sum_{I_k \in B_k} \mu_{k, I_k} \odot b_k$, (7)

along with the reduced dual problem:

$\max_{(\lambda, \mu_B) \geq 0} d_B(\lambda, \mu_B)$, where $d_B(\lambda, \mu_B) := \min_{x, z} \mathcal{L}_B(x, z, \lambda, \mu_B)$ and
$\mathcal{L}_B(x, z, \lambda, \mu_B) = \sum_{k=1}^{n-1} b_k^T \lambda_k - \sum_{k=0}^{n-1} f_{B,k}(\lambda, \mu_B)^T x_k - \sum_{k=1}^{n-1} g_{B,k}(\mu_B)^T z_k + \sum_{k=1}^{n-1} \left[ \sum_{I_k \in B_k} (W_k \odot I_k \odot L_{k-1}) \,@\, \mu_{k, I_k} + \mu_{k,1}^T (\hat{l}_k - b_k) \right]$,
s.t. $x_0 \in \mathcal{C}$, $(x_k, z_k) \in [l_k, u_k] \times [0, 1]$ for $k \in \llbracket 1, n-1 \rrbracket$. (8)

We can maximise $d_B(\lambda, \mu_B)$, which is concave and non-smooth, via projected supergradient ascent or variants thereof, such as Adam (Kingma & Ba, 2015). In order to obtain a valid supergradient, we need to perform the inner minimisation over the primals. Thanks to the structure of problem (8), the optimisation decomposes over the layers. For $k \in \llbracket 1, n-1 \rrbracket$, we can perform the minimisation in closed-form by driving the primals to their upper or lower bounds depending on the sign of their coefficients:

$x_k^* = \mathbb{1}_{f_{B,k}(\lambda, \mu_B) \geq 0} \odot u_k + \mathbb{1}_{f_{B,k}(\lambda, \mu_B) < 0} \odot l_k$; $\quad z_k^* = \mathbb{1}_{g_{B,k}(\mu_B) \geq 0}$. (9)

The subproblem corresponding to $x_0$ is different, as it involves a linear minimisation over $x_0 \in \mathcal{C}$:

$x_0^* \in \mathrm{argmin}_{x_0} \; -f_{B,0}(\lambda, \mu_B)^T x_0$ s.t. $x_0 \in \mathcal{C}$. (10)

We assumed in §2 that (10) can be performed efficiently. We refer the reader to Bunel et al. (2020a) for descriptions of the minimisation when $\mathcal{C}$ is a $\ell_\infty$ or $\ell_2$ ball, as is common for adversarial examples. Given $(x^*, z^*)$ as above, the supergradient of $d_B(\lambda, \mu_B)$ is a subset of the one for $d(\lambda, \mu)$, given by:

$\nabla_{\lambda_k} d = W_k x_{k-1}^* + b_k - x_k^*$,
$\nabla_{\mu_{k,0}} d = x_k^* - z_k^* \odot \hat{u}_k$,
$\nabla_{\mu_{k,1}} d = x_k^* - W_k x_{k-1}^* - b_k + (1 - z_k^*) \odot \hat{l}_k$,
$\nabla_{\mu_{k,I_k}} d = x_k^* - (W_k \odot I_k)\, x_{k-1}^* + (W_k \odot I_k \odot L_{k-1}) \odot (1 - z_k^*) - z_k^* \odot b_k - (W_k \odot (1 - I_k) \odot U_{k-1}) \odot z_k^*$ for $I_k \in B_k$, (11)

for each $k \in \llbracket 1, n-1 \rrbracket$. At each iteration, after taking a step in the supergradient direction, the dual variables are projected onto the non-negative orthant by clipping negative values.
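A minimal PyTorch sketch of one ascent iteration on (8)-(11), in plain projected-supergradient form (the solver in this paper uses Adam-style updates; `f_Bk`, `g_Bk` and the supergradient assembly are assumed to be computed elsewhere from (7) and (11)):

    import torch

    def inner_minimisation(f_Bk, g_Bk, l_k, u_k):
        # Eq. (9): drive each primal to the box bound that minimises the linear Lagrangian.
        x_k = torch.where(f_Bk >= 0, u_k, l_k)
        z_k = (g_Bk >= 0).float()
        return x_k, z_k

    def ascent_step(duals, supergrads, step_size):
        # Supergradient step (eq. (11)) followed by projection onto the non-negative orthant.
        return [(d + step_size * g).clamp(min=0) for d, g in zip(duals, supergrads)]

By weak duality, the bound obtained by plugging the current duals back into (8) is valid after every such step, which is what makes the solver anytime.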
3.2 EXTENDING THE ACTIVE SET

We initialise the dual (6) with a tight bound on the Big-M relaxation by solving (8) with $B = \emptyset$. To satisfy the tightness requirement in Fact 1, we then need to include constraints (via their Lagrangian multipliers) from the exponential family of $\mathcal{A}_k$ into $B_k$. Our goal is to tighten the bounds as much as possible while keeping the active set small to save memory and compute.

The active set strategy is defined by a selection criterion for the $I_k$ to be added to $B_k$ (adding a single $I_k$ mask to $B_k$ extends $B$ by $n_k$ variables: one for each neuron at layer $k$), and by the frequency of addition. In practice, we add the variables maximising the entries of the supergradient $\nabla_{\mu_{k, I_k}} d(\lambda, \mu)$ after a fixed number of dual iterations. We now provide motivation for both choices.

Selection criterion. The selection criterion needs to be computationally efficient. Thus, we proceed greedily and focus only on the immediate effect at the current iteration. Let us map a restricted set of dual variables $\mu_B$ to a set of dual variables for the full dual (6). We do so by setting the variables not in the active set to 0: $\mu_{\setminus B} = 0$, and $\mu = \mu_B \cup \mu_{\setminus B}$. Then, for each layer $k$, we add the set of variables $\mu_{k, I_k^*}$ maximising the corresponding entries of the supergradient of the full dual problem (6): $I_k^* \in \mathrm{argmax}_{I_k} \{ \nabla_{\mu_{k, I_k}} d(\lambda, \mu)^T \mathbb{1} \}$. Therefore, we use the subderivatives as a proxy for the short-term improvement on the full dual objective $d(\lambda, \mu)$. Under a primal interpretation, our selection criterion involves a call to the separation oracle (4) by Anderson et al. (2020).

Proposition 1. $\mu_{k, I_k^*}$ (as defined above) represents the Lagrangian multipliers associated to the most violated constraints from $\mathcal{A}_k$ at $(x^*, z^*) \in \mathrm{argmin}_{x,z} \mathcal{L}_B(x, z, \lambda, \mu_B)$, the primal minimiser of the current restricted Lagrangian.

Proof. See appendix D.1.

Frequency. Finally, we need to decide the frequency at which to add variables to the active set.

Fact 2. Assume we obtained a dual solution $(\lambda^\dagger, \mu_B^\dagger) \in \mathrm{argmax}\, d_B(\lambda, \mu_B)$ using Active Set on the current $B$. Then $(x^*, z^*) \in \mathrm{argmin}_{x,z} \mathcal{L}_B(x, z, \lambda^\dagger, \mu_B^\dagger)$ is not necessarily an optimal primal solution for the primal of the current restricted dual problem (Sherali & Choi, 1996).

The primal of $d_B(\lambda, \mu_B)$ (the restricted primal) is the problem obtained by setting $E_k \leftarrow B_k$ in problem (3). While the primal cutting plane algorithm by Anderson et al. (2020) calls the separation oracle (4) at the optimal solution of the current restricted primal, Fact 2 shows that our selection criterion leads to a different behaviour even at dual optimality for $d_B(\lambda, \mu_B)$. Therefore, as we have no theoretical incentive to reach (approximate) subproblem convergence, we add variables after a fixed, tunable number of supergradient iterations. Furthermore, we can add more than one variable "at once" by running the oracle (4) repeatedly for a number of iterations.

We conclude this section by pointing out that, while recovering primal optima is possible in principle (Sherali & Choi, 1996), doing so would require dual convergence on each restricted dual problem (8). As the main advantage of dual approaches (Dvijotham et al., 2018; Bunel et al., 2020a) is their ability to quickly achieve tight bounds (rather than formal optimality), adapting the selection criterion to mirror the primal cutting plane algorithm would defeat the purpose of Active Set.
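Putting §3.1 and §3.2 together, the outer loop can be sketched as follows. The `state` object and its methods are hypothetical stand-ins for the quantities of (7)-(11) and for the pseudo-code of appendix D, chosen only to make the control flow concrete; `most_violated_mask` is the oracle sketch from §2.2:

    def active_set_solve(state, n_outer=4, n_inner=100, step_size=1e-2):
        # Alternate supergradient ascent on the restricted dual (8) with active set extension.
        for _ in range(n_outer):
            for _ in range(n_inner):
                x, z = state.inner_minimisation()              # eqs. (9)-(10)
                state.ascent_and_project(state.supergradient(x, z), step_size)
            for k in range(1, state.n_layers):                 # selection criterion (Prop. 1)
                I_star = most_violated_mask(state.W[k], state.L[k], state.U[k], x[k - 1], z[k])
                state.add_mask(k, I_star)                      # new mu_{k, I*} initialised at 0
        return state.dual_bound()                              # valid at any time by weak duality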
The degree of tightness varies greatly:from looser relaxations associated to closed-form methods (Gowal et al., 2018; Weng et al., 2018;Wong & Kolter, 2018) to tighter formulations based on Semi-Definite Programming (SDP) (Raghu-nathan et al., 2018).The speed of closed-form approaches results from simplifying the triangle-shaped feasible regionof the Planet relaxation ( x2.1) (Singh et al., 2018; Wang et al., 2018). On the other hand, tighterrelaxations are more expressive than the linearly-sized LP by (Ehlers, 2017). The SDP formulationby Raghunathan et al. (2018) can represent interactions between activations in the same layer. Simi-larly, Singh et al. (2019) tighten the Planet relaxation by considering the convex hull of the union ofpolyhedra relative to kReLUs of a given layer at once. Alternatively, tighter LPs can be obtained byconsidering the ReLU together with the affine operator before it: standard MIP techniques (Jeroslow,1987) lead to a formulation that is quadratic in the number of variables (see appendix F.2). The relax-ation by Anderson et al. (2020) detailed in x2.2 is a more convenient representation of the same set.By projecting out the auxiliary zvariables, (Tjandraatmadja et al., 2020) recently introduced anotherformulation equivalent to the one by Anderson et al. (2020), with half as many variables and a linearfactor more constraints compared to what described in x2.2. Therefore, the relationship between thetwo formulations mirrors the one between the Planet and Big-M relaxations (see appendix B.1). Ourdual derivation and the Active Set algorithm can be adapted to operate on the projected relaxations.Specialised dual solvers significantly improve in bounding efficiency with respect to off-the-shelfsolvers for both LP (Bunel et al., 2020a) and SDP formulations (Dvijotham et al., 2019). Therefore,the design of similar solvers for other tight relaxations is an interesting line of future research. Wecontribute with a specialised dual solver for the relaxation by Anderson et al. (2020) ( x3). In whatfollows, we demonstrate empirically that by seamlessly transitioning from the Planet relaxation tothe tighter formulation, we can obtain large incomplete and complete verification improvements.5 E XPERIMENTSWe empirically demonstrate the effectiveness of our method under two settings. On incomplete ver-ification (x5.1), we assess the speed and quality of bounds compared to other bounding algorithms.On complete verification ( x5.2), we examine whether our speed-accuracy trade-offs correspond tofaster exact verification. Our implementation is based on Pytorch (Paszke et al., 2017) and is avail-able at https://github.com/oval-group/scaling-the-convex-barrier .5.1 I NCOMPLETE VERIFICATIONWe evaluate incomplete verification performance by upper bounding the robustness margin (the dif-ference between the ground truth logit and the other logits) to adversarial perturbations (Szegedyet al., 2014) on the CIFAR-10 test set (Krizhevsky & Hinton, 2009). If the upper bound is negative,we can certify the network’s vulnerability to adversarial perturbations. We replicate the experimen-6Published as a conference paper at ICLR 2021Method101102TimeGurobiPlanetGurobi1 cutBDD+400 stepsBig-M850 stepsActive Set80 stepsActive Set600 stepsActive Set1050 stepsActive Set1650 stepsActive Set CPU600 steps1.00.50.00.5Improvement from 1 cutFigure 1: Upper plot: distribution of runtime in seconds. Lower plot: difference with the bounds obtainedby Gurobi with a cut from Akper neuron; higher is better. 
4 RELATED WORK

In addition to those described in §2, many other relaxations have been proposed in the literature. In fact, all bounding methods are equivalent to solving some convex relaxation of a neural network. This holds for conceptually different ideas such as bound propagation (Gowal et al., 2018), specific dual assignments (Wong & Kolter, 2018), and dual formulations based on Lagrangian Relaxation (Dvijotham et al., 2018) or Decomposition (Bunel et al., 2020a). The degree of tightness varies greatly: from looser relaxations associated to closed-form methods (Gowal et al., 2018; Weng et al., 2018; Wong & Kolter, 2018) to tighter formulations based on Semi-Definite Programming (SDP) (Raghunathan et al., 2018).

The speed of closed-form approaches results from simplifying the triangle-shaped feasible region of the Planet relaxation (§2.1) (Singh et al., 2018; Wang et al., 2018). On the other hand, tighter relaxations are more expressive than the linearly-sized LP by Ehlers (2017). The SDP formulation by Raghunathan et al. (2018) can represent interactions between activations in the same layer. Similarly, Singh et al. (2019) tighten the Planet relaxation by considering the convex hull of the union of polyhedra relative to $k$ ReLUs of a given layer at once. Alternatively, tighter LPs can be obtained by considering the ReLU together with the affine operator before it: standard MIP techniques (Jeroslow, 1987) lead to a formulation that is quadratic in the number of variables (see appendix F.2). The relaxation by Anderson et al. (2020) detailed in §2.2 is a more convenient representation of the same set. By projecting out the auxiliary $z$ variables, Tjandraatmadja et al. (2020) recently introduced another formulation equivalent to the one by Anderson et al. (2020), with half as many variables and a linear factor more constraints compared to the one described in §2.2. Therefore, the relationship between the two formulations mirrors the one between the Planet and Big-M relaxations (see appendix B.1). Our dual derivation and the Active Set algorithm can be adapted to operate on the projected relaxations.

Specialised dual solvers significantly improve bounding efficiency with respect to off-the-shelf solvers for both LP (Bunel et al., 2020a) and SDP formulations (Dvijotham et al., 2019). Therefore, the design of similar solvers for other tight relaxations is an interesting line of future research. We contribute a specialised dual solver for the relaxation by Anderson et al. (2020) (§3). In what follows, we demonstrate empirically that by seamlessly transitioning from the Planet relaxation to the tighter formulation, we can obtain large incomplete and complete verification improvements.

5 EXPERIMENTS

We empirically demonstrate the effectiveness of our method under two settings. On incomplete verification (§5.1), we assess the speed and quality of bounds compared to other bounding algorithms. On complete verification (§5.2), we examine whether our speed-accuracy trade-offs correspond to faster exact verification. Our implementation is based on PyTorch (Paszke et al., 2017) and is available at https://github.com/oval-group/scaling-the-convex-barrier.

5.1 INCOMPLETE VERIFICATION

We evaluate incomplete verification performance by upper bounding the robustness margin (the difference between the ground truth logit and the other logits) to adversarial perturbations (Szegedy et al., 2014) on the CIFAR-10 test set (Krizhevsky & Hinton, 2009). If the upper bound is negative, we can certify the network's vulnerability to adversarial perturbations. We replicate the experimental setting from Bunel et al. (2020a). The networks correspond to the small network architecture from Wong & Kolter (2018). Here, we present results for a network trained via standard SGD and cross entropy loss, with no modification to the objective for robustness. Perturbations for this network lie in a $\ell_\infty$ norm ball with radius $\epsilon_{\mathrm{ver}} = 5/255$ (which is hence lower than the radii commonly employed for robustly trained networks). In appendix I, we provide additional CIFAR-10 results on an adversarially trained network using the method by Madry et al. (2018), and results on MNIST (LeCun et al., 1998) for a network adversarially trained with the algorithm by Wong & Kolter (2018).

We compare both against previous dual iterative methods and against Gurobi (Gurobi Optimization, 2020), the commercial black-box solver employed by Anderson et al. (2020). For Gurobi-based baselines, Planet means solving the Planet (Ehlers, 2017) relaxation of the network, while Gurobi cut starts from the Big-M relaxation and adds constraints from $\mathcal{A}_k$ in a cutting-plane fashion, as in the original primal algorithm by Anderson et al. (2020). We run both on 4 CPU threads. Amongst dual iterative methods, run on an Nvidia Titan Xp GPU, we compare with BDD+, the recent proximal-based solver by Bunel et al. (2020a), operating on a Lagrangian Decomposition dual of the Planet relaxation. As we operate on (a subset of) the data by Bunel et al. (2020a), we omit both their supergradient-based approach and the one by Dvijotham et al. (2018), as they both perform worse than BDD+ (Bunel et al., 2020a). For the same reason, we omit cheaper (and looser) methods, like interval propagation (Gowal et al., 2018) and the one by Wong & Kolter (2018). Active Set denotes our solver for problem (3), described in §3.1. By keeping $B = \emptyset$, Active Set reduces to Big-M, a solver for the non-projected Planet relaxation (appendix B), which can be seen as Active Set's initialiser. In line with previous bounding algorithms (Bunel et al., 2020a), we employ Adam updates (Kingma & Ba, 2015) for supergradient-type methods due to their faster empirical convergence. Finally, we complement the comparison with Gurobi-based methods by running Active Set on 4 CPU threads (Active Set CPU). Further details, including hyper-parameters, can be found in appendix I.

[Figure 1: Upper plot: distribution of runtime in seconds. Lower plot: difference with the bounds obtained by Gurobi with a cut from $\mathcal{A}_k$ per neuron; higher is better. Results for the SGD-trained network from Bunel et al. (2020a). The width at a given value represents the proportion of problems for which this is the result. Comparing Active Set with 1650 steps to Gurobi 1 cut, tighter bounds are achieved with a smaller runtime. Methods shown: Gurobi Planet; Gurobi 1 cut; BDD+ (400 steps); Big-M (850 steps); Active Set (80, 600, 1050 and 1650 steps); Active Set CPU (600 steps).]

Figure 1 shows the distribution of runtime and the bound improvement with respect to Gurobi cut for the SGD-trained network. For Gurobi cut, we only add the single most violated cut from $\mathcal{A}_k$ per neuron, due to the cost of repeatedly solving the LP. We tuned BDD+ and Big-M, the dual methods operating on the weaker relaxation (2), to have the same average runtime. They obtain bounds comparable to Gurobi Planet in one order less time.
Initialised from 500 Big-M iterations, at 600 iterations Active Set already achieves better bounds on average than Gurobi cut in around 1/20th of the time. With a computational budget twice as large (1050 iterations) or four times as large (1650 iterations), the bounds significantly improve over Gurobi cut in still a fraction of the time. As we empirically demonstrate in appendix I, the tightness of the Active Set bounds is strongly linked to our active set strategy (§3.2). Remarkably, even if our method is specifically designed to take advantage of GPU acceleration, executing it on CPU proves to be strongly competitive with Gurobi cut, producing better bounds in less time for the benchmark of Figure 1.

[Figure 2: Pointwise comparison for a subset of the methods on the data presented in Figure 1. Darker colour shades mean higher point density (on a logarithmic scale); the oblique dotted line corresponds to equality. (a) BDD+ (400 steps) vs Big-M (850 steps): comparison of runtime (left) and gap to the Gurobi Planet bounds (right); for the latter, lower is better. (b) Big-M (850 steps) vs Active Set (80 steps): comparison of runtime (left) and difference with the Gurobi Planet bounds (right); for the latter, higher is better.]

Figure 2 shows pointwise comparisons for a subset of the methods of Figure 1, on the same data. Figure 2a shows the gap to the (Gurobi) Planet bound for BDD+ and our Big-M solver. Surprisingly, our Big-M solver is competitive with BDD+, achieving on average better bounds than BDD+ in the same time. Figure 2b shows the improvement over the Planet bounds for Big-M and Active Set. The latter achieves markedly better bounds than Big-M in the same time, demonstrating the benefit of operating (at least partly) on the tighter dual (6).

5.2 COMPLETE VERIFICATION

We next evaluate performance on complete verification, verifying the adversarial robustness of a network to perturbations in $\ell_\infty$ norm on a subset of the dataset by Lu & Kumar (2020), replicating the experimental setting from Bunel et al. (2020a). The dataset associates a different perturbation radius $\epsilon_{\mathrm{verif}}$ to each CIFAR-10 image, so as to create challenging verification properties. Its difficulty makes the dataset an appropriate testing ground for tighter relaxations like the one by Anderson et al. (2020) (§2.2). Further details, including network architectures, can be found in appendix I.

Here, we aim to solve the non-convex problem (1) directly, rather than an approximation of it as in §5.1. In order to do so, we use BaBSR, a branch and bound algorithm from Bunel et al. (2020b). Branch and Bound works by dividing the problem domain into subproblems (branching) and bounding the local minimum over those domains. Any domain which cannot contain the global lower bound is pruned away, whereas the others are kept and branched over. In BaBSR, branching is carried out by splitting an unfixed ReLU into its passing and blocking phases. The ReLU which induces the maximum change in the domain's lower bound, when made unambiguous, is selected for splitting.
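A minimal sketch of this scheme for the canonical property "the network output is positive on the whole domain"; `lower_bound` is any of the bounding algorithms compared below (e.g. Big-M or Active Set), while `branch` and `find_counterexample` are hypothetical placeholders for the ReLU-splitting and attack components:

    def bab_verify(root, lower_bound, find_counterexample, branch):
        # Returns True iff the property holds over the whole input domain.
        stack = [root]
        while stack:
            dom = stack.pop()
            if lower_bound(dom) >= 0:     # the bound proves the property on this subdomain
                continue
            if find_counterexample(dom):  # a negative output falsifies the property
                return False
            stack.extend(branch(dom))     # e.g. split the highest-scoring ambiguous ReLU
        return True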
A fundamental component of a BaB method is the bounding algorithm, which is, in general, the computational bottleneck (Lu & Kumar, 2020). Therefore, we compare the effect on final verification time of using the different bounding methods of §5.1 within BaBSR. In addition, we evaluate MIP $\mathcal{A}_k$, which encodes problem (1) as a Big-M MIP (Tjeng et al., 2019) and solves it in Gurobi by adding cutting planes from $\mathcal{A}_k$, analogously to the original experiments by Anderson et al. (2020). Finally, we also compare against ERAN (Singh et al., 2020), a state-of-the-art complete verification toolbox: its results on the dataset by Lu & Kumar (2020) are taken from the recent VNN-COMP competition (VNN-COMP, 2020). We use 100 iterations for Active Set, 100 iterations for BDD+ and 180 iterations for Big-M. For dual iterative algorithms, we solve 300 subproblems at once for the base network and 200 for the deep and wide networks (see §3.3). Additionally, dual variables are initialised from their parent node's bounding computation. As in Bunel et al. (2020a), the time limit is kept at one hour. Due to the difference in computational cost between algorithms operating on the tighter relaxation by Anderson et al. (2020) and the other bounding algorithms (for Active Set, this is partly due to the masked forward/backward pass described in appendix G), we also experiment with a stratified version of the bounding within BaBSR. We devise a set of heuristics to determine whether a given subproblem is easy (so that looser bounds are sufficient) or whether we need to operate on the tighter relaxation; a minimal sketch of such a stratification closes this section. Instances of this approach are Big-M + Active Set and Gurobi Planet + Gurobi 1 cut. Further details are provided in appendix H.

[Figure 3: Cactus plots on properties from Lu & Kumar (2020) for the base, wide and deep models, displaying the percentage of solved properties as a function of runtime (computation time in seconds vs. percentage of properties verified). Methods: BDD+ BaBSR; Big-M BaBSR; Active Set BaBSR; Big-M + Active Set BaBSR; MIP $\mathcal{A}_k$; G. Planet + G. 1 cut BaBSR; ERAN. Baselines are represented by dotted lines.]

Table 1: We compare average solving time, average number of solved sub-problems and the percentage of timed-out properties on data from Lu & Kumar (2020). The best dual iterative method is highlighted in bold. Each cell reports time(s) / sub-problems / %timeout.

Method | Base | Wide | Deep
BDD+ BaBSR | 883.55 / 82,699.40 / 22.00 | 568.25 / 43,751.88 / 13.00 | 281.47 / 10,763.48 / 5.00
Big-M BaBSR | 826.60 / 68,582.00 / 19.00 | 533.79 / 35,877.24 / 12.00 | 253.37 / 9,346.78 / 4.00
Active Set BaBSR | 422.32 / 9,471.90 / 7.00 | 169.73 / 1,873.36 / 3.00 | 227.26 / 2,302.16 / 2.00
Big-M + Active Set BaBSR | 402.88 / 11,408.84 / 7.00 | 179.73 / 3,712.62 / 3.00 | 197.99 / 3,086.62 / 2.00
G. Planet + G. 1 cut BaBSR | 1191.48 / 2,044.28 / 14.00 | 1272.99 / 1,352.42 / 10.00 | 704.59 / 677.74 / 3.00
MIP $\mathcal{A}_k$ | 3227.50 / 226.24 / 82.00 | 2500.70 / 100.93 / 64.00 | 3339.37 / 434.57 / 91.00
ERAN | 805.89 / - / 5.00 | 632.12 / - / 9.00 | 545.72 / - / 0.00

Figure 3 and Table 1 show that Big-M performs competitively with BDD+. Active Set verifies a larger share of properties than the methods operating on the looser formulation (2), demonstrating the benefit of tighter bounds (§5.1) in complete verification. On the other hand, the poor performance of MIP $\mathcal{A}_k$ and of Gurobi Planet + Gurobi 1 cut, tied to the scaling limitations of off-the-shelf solvers, shows that tighter bounds are effective only if they can be computed efficiently. Nevertheless, the difference in performance between the two Gurobi-based methods confirms that customised Branch and Bound solvers (BaBSR) are preferable to generic MIP solvers, as observed by Bunel et al. (2020b) on the looser Planet relaxation. Moreover, the stratified bounding system allows us to retain the speed of Big-M on easier properties, without excessively sacrificing Active Set's gains on the harder ones. Finally, while ERAN verifies 2% more properties than Active Set on two networks, BaBSR (with any dual bounding algorithm) is faster on most of the properties. BaBSR-based results could be further improved by employing the learned branching strategy presented by Lu & Kumar (2020): in this work, we focused on the bounding component of branch and bound.
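The heuristics themselves are detailed in appendix H; purely as an illustration of the stratified scheme, a hypothetical two-tier bounding routine might look as follows:

    def stratified_lower_bound(dom, big_m_bound, active_set_bound, threshold=0.0):
        # Cheap, looser bound first; pay for the tighter relaxation only when it matters.
        lb = big_m_bound(dom)
        if lb >= threshold:           # easy subproblem: the looser bound already prunes it
            return lb
        return active_set_bound(dom)  # hard subproblem: switch to the tighter relaxation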
6 DISCUSSION

The vast majority of neural network bounding algorithms focuses on (solving or loosening) a popular triangle-shaped relaxation, referred to as the "convex barrier" for verification. Relaxations that are tighter than this convex barrier have been recently introduced, but their complexity hinders applicability. We have presented Active Set, a sparse dual solver for one such relaxation, and empirically demonstrated that it yields significant formal verification speed-ups. Our results show that scalable tightness is key to the efficiency of neural network verification and instrumental in the definition of a more appropriate "convex barrier". We believe that new customised solvers for similarly tight relaxations are a crucial avenue for future research in the area, possibly beyond piecewise-linear networks. Finally, as it is inevitable that tighter bounds will come at a larger computational cost, future verification systems will be required to recognise a priori whether tight bounds are needed for a given property. A possible solution to this problem could rely on learning algorithms.

ACKNOWLEDGMENTS

ADP was supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems, grant EP/L015987/1, and an IBM PhD fellowship. HSB was supported using a Tencent studentship through the University of Oxford.
Ue-TsCMawlK
Review for "Scaling the Convex Barrier with Active Sets"
7: Good paper, accept
The authors present a custom solver for verifying properties of neural networks (such as robustness properties). Prior work for neural network verification relies on generating bounds by solving convex relaxations ("convex barrier"). The authors describe a sparse dual solver for a new relaxation which is tighter (but has higher computational complexity). The solver is represented (for the most part) as standard operations built into PyTorch, and so it can be easily run on GPUs (they do require a specialized operator to support masked forward/backward passes, and they describe how this is done efficiently for convolutional networks). The solver involves repeatedly solving modified instances of a problem, where only a small active set of dual variables (instead of exponentially many) is considered at each step. Experimental results are promising in that it outperforms generic solvers in terms of both the bounds achieved and the time taken to do so. This does seem to be a promising approach.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Scaling the Convex Barrier with Active Sets ### Paper Abstract Tight and efficient neural network bounding is of critical importance for the scaling of neural network verification systems. A number of efficient specialised dual solvers for neural network bounds have been presented recently, but they are often too loose to verify more challenging properties. This lack of tightness is linked to the weakness of the employed relaxation, which is usually a linear program of size linear in the number of neurons. While a tighter linear relaxation for piecewise linear activations exists, it comes at the cost of exponentially many constraints and thus currently lacks an efficient customised solver. We alleviate this deficiency via a novel dual algorithm that realises the full potential of the new relaxation by operating on a small active set of dual variables. Our method recovers the strengths of the new relaxation in the dual space: tightness and a linear separation oracle. At the same time, it shares the benefits of previous dual approaches for weaker relaxations: massive parallelism, GPU implementation, low cost per iteration and valid bounds at any time. As a consequence, we obtain better bounds than off-the-shelf solvers in only a fraction of their running time and recover the speed-accuracy trade-offs of looser dual solvers if the computational budget is small. We demonstrate that this results in significant formal verification speed-ups. ### Paper Keywords ["Neural Network Verification", "Neural Network Bounding", "Optimisation for Deep Learning"] ### Paper Content ABSTRACTTight and efficient neural network bounding is of critical importance for the scal-ing of neural network verification systems. A number of efficient specialised dualsolvers for neural network bounds have been presented recently, but they are oftentoo loose to verify more challenging properties. This lack of tightness is linked tothe weakness of the employed relaxation, which is usually a linear program of sizelinear in the number of neurons. While a tighter linear relaxation for piecewiselinear activations exists, it comes at the cost of exponentially many constraints andthus currently lacks an efficient customised solver. We alleviate this deficiency viaa novel dual algorithm that realises the full potential of the new relaxation by op-erating on a small active set of dual variables. Our method recovers the strengthsof the new relaxation in the dual space: tightness and a linear separation oracle. Atthe same time, it shares the benefits of previous dual approaches for weaker relax-ations: massive parallelism, GPU implementation, low cost per iteration and validbounds at any time. As a consequence, we obtain better bounds than off-the-shelfsolvers in only a fraction of their running time and recover the speed-accuracytrade-offs of looser dual solvers if the computational budget is small. We demon-strate that this results in significant formal verification speed-ups.1 I NTRODUCTIONVerification requires formally proving or disproving that a given property of a neural network holdsover all inputs in a specified domain. We consider properties in their canonical form (Bunel et al.,2018), which requires us to either: (i) prove that no input results in a negative output (propertyis true); or (ii) identify a counter-example (property is false). 
The search for counter-examples istypically performed by efficient methods such as random sampling of the input domain (Webb et al.,2019), or projected gradient descent (Carlini & Wagner, 2017). In contrast, establishing the veracityof a property requires solving a suitable convex relaxation to obtain a lower bound on the minimumoutput. If the lower bound is positive, the given property is true. If the bound is negative and nocounter-example is found, either: (i) we make no conclusions regarding the property (incompleteverification); or (ii) we further refine the counter-example search and lower bound computationwithin a branch-and-bound framework until we reach a concrete conclusion (complete verification).The main bottleneck of branch and bound is the computation of the lower bound for each nodeof the enumeration tree via convex optimization. While earlier works relied on off-the-shelfsolvers (Ehlers, 2017; Bunel et al., 2018), it was quickly established that such an approach doesnot scale-up elegantly with the size of the neural network. This has motivated researchers to designspecialized dual solvers (Dvijotham et al., 2019; Bunel et al., 2020a), thereby providing initial ev-idence that verification can be realised in practice. However, the convex relaxation considered inthe dual solvers is itself very weak (Ehlers, 2017), hitting what is now commonly referred to as the“convex barrier” (Salman et al., 2019). In practice, this implies that either several properties remainundecided in incomplete verification, or take several hours to be verified exactly.Multiple works have tried to overcome the convex barrier for piecewise linear activations (Raghu-nathan et al., 2018; Singh et al., 2019). Here, we focus on the single-neuron Linear Programming(LP) relaxation by Anderson et al. (2020). Unfortunately, its tightness comes at the price of expo-nentially many (in the number of variables) constraints. Therefore, existing dual solvers (Dvijothamet al., 2018; Bunel et al., 2020a) are not easily applicable, limiting the scaling of the new relaxation.Equal contribution.1Published as a conference paper at ICLR 2021We address this problem by presenting a specialized dual solver for the relaxation by Anderson et al.(2020), which realises its full potential by meeting the following desiderata:By keeping an active set of dual variables, we obtain a sparse dual solver that recovers thestrengths of the original primal problem (Anderson et al., 2020) in the dual domain. In line withprevious dual solvers, our approach yields valid bounds at anytime, leverages convolutional net-work structure and enjoys massive parallelism within a GPU implementation, resulting in betterbounds in an order of magnitude less time than off-the-shelf solvers (Gurobi Optimization, 2020).We present a unified dual treatment that includes both a linearly sized LP relaxation (Ehlers,2017) and the tighter formulation. As a consequence, our solver provides a wide range of speed-accuracy trade-offs : (i) it is competitive with dual approaches on the looser relaxation (Dvijothamet al., 2018; Bunel et al., 2020a); and (ii) it yields much tighter bounds if a larger computa-tional budget is available. Owing to this flexibility, we show that our dual algorithm yields largecomplete verification gains compared to primal approaches (Anderson et al., 2020) and previousdual algorithms.2 P RELIMINARIES : NEURAL NETWORK RELAXATIONSWe denote vectors by bold lower case letters (for example, x) and matrices by upper case letters(for example, W). 
We usefor the Hadamard product, JKfor integer ranges, 1afor the indi-cator vector on condition aand brackets for intervals ( [lk;uk]) and vector or matrix entries ( x[i]orW[i;j]). In addition, given W2Rmnandx2Rm, we will employ WxandW@xasshorthands for respectivelyPicoli(W)xandPicoli(W)Tx, where col i(W)denotes the i-thcolumn of matrix W.LetCbe the network input domain. Similar to Dvijotham et al. (2018); Bunel et al. (2020a), weassume that linear minimisation over Ccan be performed in closed-form. Our goal is to computebounds on the scalar output of a piecewise-linear feedforward neural network. The tightest possiblelower bound can be obtained by solving the following optimization problem:minx;^ x^xn s.t. x02C; (1a)^ xk+1=Wk+1xk+bk+1k2J0;n1K; (1b)xk=(^ xk) k2J1;n1K; (1c)where the activation function (^ xk)is piecewise-linear, ^ xk;xk2Rnkdenote the outputs of the k-th linear layer (fully-connected or convolutional) and activation function respectively, Wkandbkdenote its weight matrix and bias, nkis the number of activations at layer k. We will focus onthe ReLU case ( (x) = max ( x;0)), as common piecewise-linear functions can be expressed as acomposition of ReLUs (Bunel et al., 2020b).Problem (1) is non-convex due to the activation function’s non-linearity (1c). As solving it is NP-hard (Katz et al., 2017), it is commonly approximated by a convex relaxation (see x4). The qualityof the corresponding bounds, which is fundamental in verification, depends on the tightness of therelaxation. Unfortunately, tight relaxations usually correspond to slower bounding procedures. Wefirst review a popular ReLU relaxation in x2.1). We then consider a tighter one in x2.2.2.1 P LANET RELAXATIONThe so-called Planet relaxation (Ehlers, 2017) has enjoyed widespread use due to its amenability toefficient customised solvers (Dvijotham et al., 2018; Bunel et al., 2020a) and is the “relaxation ofchoice” for many works in the area (Bunel et al., 2020b; Lu & Kumar, 2020). Here, we describe itin its non-projected form Mk, the LP relaxation of the Big-M Mixed Integer Programming (MIP)formulation (Tjeng et al., 2019). Applying Mkto problem (1) results in:minx;^ x;z^xns.t. x02C^ xk+1=Wk+1xk+bk+1 k2J0;n1K;xk^ xk;xk^ukzk;xk^ xk^lk(1zk);(xk;^ xk;zk)2[lk;uk][^lk;^uk][0;1]9=;:=Mkk2J1;n1K;(2)where ^lk;^ukandlk;ukareintermediate bounds respectively on pre-activation variables ^ xkandpost-activation variables xk. These constants play an important role in the structure of Mkand,2Published as a conference paper at ICLR 2021together with the relaxed binary constraints on z, define box constraints on the variables. We detailhow to compute intermediate bounds in appendix E. Projecting out auxiliary variables zresults inthe Planet relaxation (cf. appendix B.1 for details), which replaces (1c) by its convex hull.Problem (2), which is linearly-sized, can be easily solved via commercial black-box LPsolvers (Bunel et al., 2018). This does not scale-up well with the size of the neural network, mo-tivating the need for specialised solvers. Customised dual solvers have been designed by relaxingconstraints (1b), (1c) (Dvijotham et al., 2018) or replacing (1c) by the Planet relaxation and employ-ing Lagrangian Decomposition (Bunel et al., 2020a). 
Both approaches result in bounds very closeto optimality for problem (2) in only a fraction of the runtime of off-the-shelf solvers.2.2 A T IGHTER RELAXATIONA much tighter approximation of problem (1) than the Planet relaxation ( x2.1) can be obtained byrepresenting the convex hull of the composition of (1b) and (1c) rather than the convex hull of (1c)alone. A formulation of this type was recently introduced by Anderson et al. (2020).Let us define Lk1;Uk12Rnknk1as:Lk1[i;j] =lk1[j]1Wk[i;j]0+uk1[j]1Wk[i;j]<0,and Uk1[i;j] =uk1[j]1Wk[i;j]0+lk1[j]1Wk[i;j]<0. Additionally, let us introduce2Wk=f0;1gnknk1, the set of all possible binary masks of weight matrix Wk, andEk:= 2Wknf0;1g, which excludes the all-zero and all-one masks. The new representationresults in the following primal problem:minx;^ x;z^xns.t.x02C^ xk+1=Wk+1xk+bk+1 k2J0;n1K;(xk;^ xk;zk)2M kxk0@(WkIk)xk1+zkbkWkIkLk1(1zk)+Wk(1Ik)Uk1zk1A8Ik2Ek9>>=>>;:=Akk2J1;n1K:(3)BothMkandAkyield valid MIP formulations for problem (1) when imposing integrality con-straints on z. However, the LP relaxation of Akwill yield tighter bounds. In the worst case, thistightness comes at the cost of exponentially many constraints: one for each Ik2Ek. On the otherhand, given a set of primal assignments (x;z)that are not necessarily feasible for problem (3), onecan efficiently compute the most violated constraint (if any) at that point. The mask associated tosuch constraint can be computed in linear-time (Anderson et al., 2020) as:Ik[i;j] =1T((1zk[i])Lk1[i;j]+zk[i]Uk1[i;j]xk1[i])Wk[i;j]0: (4)We point out that Akslightly differs from the original formulation of Anderson et al. (2020),which does not explicitly include pre-activation bounds ^lk;^uk(which we treat via Mk). While thiswas implicitly addressed in practical applications (Botoeva et al., 2020), not doing so has a strongnegative effect on bound tightness, possibly to the point of yielding looser bounds than problem (2).In appendix F, we provide an example in which this is the case and extend the original derivationby Anderson et al. (2020) to recover Akas in problem (3).Owing to the exponential number of constraints, problem (3) cannot be solved as it is. As outlinedby Anderson et al. (2020), the availability of a linear-time separation oracle (4) offers a naturalprimal cutting plane algorithm, which can then be implemented in off-the-shelf solvers: solve theBig-M LP (2), then iteratively add the most violated constraints from Akat the optimal solution.When applied to the verification of small neural networks via off-the-shelf MIP solvers, this leadsto substantial gains with respect to the looser Big-M relaxation (Anderson et al., 2020).3 A NEFFICIENT DUAL SOLVER FOR THE TIGHTER RELAXATIONInspired by the success of dual approaches on looser relaxations (Bunel et al., 2020a; Dvijothamet al., 2019), we show that the formal verification gains by Anderson et al. (2020) (see x2.2) scale tolarger networks if we solve the tighter relaxation in the dual space. Due to the particular structure ofthe relaxation, a customised solver for problem (3) needs to meet a number of requirements.Fact 1. 
In order to replicate the success of previous dual algorithms on looser relaxations, we needa solver for problem (3)with the following properties: (i) sparsity : a memory cost linear in thenumber of network activations in spite of exponentially many constraints, (ii) tightness : the bounds3Published as a conference paper at ICLR 2021should reflect the quality of those obtained in the primal space, (iii) anytime : low cost per iterationand valid bounds at each step.The anytime requirement motivates dual solutions: any dual assignment yields a valid bound due toweak duality. Unfortunately, as shown in appendix A, neither of the two dual derivations by Bunelet al. (2020a); Dvijotham et al. (2018) readily satisfy all desiderata at once. Therefore, we need acompletely different approach. Let us introduce dual variables ;and functions thereof:fk(;) =kWTk+1k+1PIkk;Ik+PIk+1(Wk+1Ik+1)Tk+1;Ik+1;gk() =PIk2EkWk(1Ik)Uk1k;Ik+k;0^uk+k;1^lk+PIk2EkWkIkLk1k;Ik+PIk2Ekk;Ikbk;(5)wherePIkis a shorthand forPIk22Wk. Starting from primal (3), we relax all constraints in Akexcept box constraints (see x2.1). We obtain the following dual problem (derivation in appendix C),where functions fk;gkappear in inner products with primal variables xk;zk:max(;)0d(;) where:d(;) := minx;zL(x;z;;);L(x;z;;) ="Pn1k=1bTkkPn1k=0fk(;)TxkPn1k=1gk()Tzk+Pn1k=1PIk2Ek(WkIkLk1)@k;Ik+Tk;1(^lkbk)s.t. x02C; (xk;zk)2[lk;uk][0;1]k2J1;n1K:(6)This is again a challenging problem: the exponentially many constraints in the primal (3) are nowassociated to an exponential number of variables. Nevertheless, we show that the requirements ofFact 1 can be met by operating on a restricted version of dual (6). To this end, we present Active Set,a specialised solver for the relaxation by Anderson et al. (2020) that is sparse, anytime and yieldsbounds reflecting the tightness of the new relaxation. Starting from the dual of problem (2), oursolver iteratively adds variables to a small active set of dual variables Band solves the resultingreduced version of problem (6). We first describe our solver on a fixed Band then outline how toiteratively modify the active set ( x3.2). Pseudo-code can be found in appendix D.3.1 A CTIVE SETSOLVERWe want to solve a version of problem (6) for which the sums over the Ikmasks of each layer kare restricted toBkEk1, withB=[kBk. By keepingB=;, we recover a novel dual solver forthe Big-M relaxation (2) (explicitly described in appendix B), which is employed as initialisation.Settingk;Ik= 0;8Ik2EknBkin (5), (6) and removing these from the formulation, we obtain:fB;k(;B) =kWTk+1k+1PIk2Bk[f0;1gk;Ik;+PIk+12Bk+1[f0;1g(Wk+1Ik+1)Tk+1;Ik+1gB;k(B) =PIk2BkWk(1Ik)Uk1k;Ik+k;0^uk+k;1^lk+PIk2BkWkIkLk1k;Ik+PIk2Bkk;Ikbk;(7)along with the reduced dual problem:max(;B)0dB(;B) where:dB(;B) := minx;zLB(x;z;;B);LB(x;z;;B) ="Pn1k=1bTkkPn1k=0fB;k(;B)TxkPn1k=1gB;k(B)Tzk+Pn1k=1PIk2Bk(WkIkLk1)@k;Ik+Tk;1(^lkbk)s.t. x02C; (xk;zk)2[lk;uk][0;1]k2J1;n1K: (8)We can maximize dB(;B), which is concave and non-smooth, via projected supergradient ascentor variants thereof, such as Adam (Kingma & Ba, 2015). In order to obtain a valid supergradient,we need to perform the inner minimisation over the primals. Thanks to the structure of problem (8),the optimisation decomposes over the layers. 
For k2J1;n1K, we can perform the minimisationin closed-form by driving the primals to their upper or lower bounds depending on the sign of theircoefficients:xk=1fB;k(;B)0^uk+1fB;k(;B)<0^lk; zk=1gB;k(B)01: (9)1As dual variables k;Ikare indexed by Ik,B=[kBkimplicitly defines an active set of variables B.4Published as a conference paper at ICLR 2021The subproblem corresponding to x0is different, as it involves a linear minimization over x02C:x02argminx0fB;0(;B)Tx0 s.t. x02C: (10)We assumed inx2 that (10) can be performed efficiently. We refer the reader to Bunel et al. (2020a)for descriptions of the minimisation when Cis a`1or`2ball, as common for adversarial examples.Given (x;z)as above, the supergradient of dB(;B)is a subset of the one for d(;), given by:rkd(;) =Wkxk1+bkxk;rk;0d(;) =xkzk^uk;rk;1d(;) =xkWkxk1+bk+ (1zk)^lk;rk;Ikd(;) =xk(WkIk)xk1+WkIkLk1(1zk)zkbk+Wk(1Ik)Uk1zkIk2Bk;(11)for eachk2J0;n1K. At each iteration, after taking a step in the supergradient direction, the dualvariables are projected to the non-negative orthant by clipping negative values.3.2 E XTENDING THE ACTIVE SETWe initialise the dual (6) with a tight bound on the Big-M relaxation by solving for d;(;;)in (8). To satisfy the tightness requirement in Fact 1, we then need to include constraints (via theirLagrangian multipliers) from the exponential family of AkintoBk. Our goal is to tighten them asmuch as possible while keeping the active set small to save memory and compute.The active set strategy is defined by a selection criterion for theIkto be added2toBk, and thefrequency of addition. In practice, we add the variables maximising the entries of supergradientrk;Ikd(;)after a fixed number of dual iterations. We now provide motivation for both choices.Selection criterion The selection criterion needs to be computationally efficient. Thus, we pro-ceed greedily and focus only on the immediate effect at the current iteration. Let us map a restrictedset of dual variables Bto a set of dual variables for the full dual (6). We do so by setting vari-ables not in the active set to 0:B= 0, and=B[B. Then, for each layer k, we add theset of variables k;Ikmaximising the corresponding entries of the supergradient of the full dualproblem (6):k;Ik2argmaxk;Ikfrk;Ikd(;)T1g. Therefore, we use the subderivatives as aproxy for short-term improvement on the full dual objective d(;). Under a primal interpretation,our selection criterion involves a call to the separation oracle (4) by Anderson et al. (2020).Proposition 1. k;Ik(as defined above) represents the Lagrangian multipliers associated to themost violated constraints from Akat(x;z)2argminx;zLB(x;z;;B), the primal minimiserof the current restricted Lagrangian.Proof. See appendix D.1.Frequency Finally, we need to decide the frequency at which to add variables to the active set.Fact 2. Assume we obtained a dual solution (y;yB)2argmaxdB(;B)using Active Set onthe currentB. Then (x;z)2argminx;zLB(x;z;y;yB)is not necessarily an optimal primalsolution for the primal of the current restricted dual problem (Sherali & Choi, 1996).The primal of dB(;B)(restricted primal) is the problem obtained by setting Ek B kin prob-lem (3). While the primal cutting plane algorithm by Anderson et al. (2020) calls the separationoracle (4) at the optimal solution of the current restricted primal, Fact 2 shows that our selectioncriterion leads to a different behaviour even at dual optimality for dB(;B). 
Therefore, as we haveno theoretical incentive to reach (approximate) subproblem convergence, we add variables after afixed tunable number of supergradient iterations. Furthermore, we can add more than one variable“at once” by running the oracle (4) repeatedly for a number of iterations.We conclude this section by pointing out that, while recovering primal optima is possible in prin-ciple (Sherali & Choi, 1996), doing so would require dual convergence on each restricted dualproblem (8). As the main advantage of dual approaches (Dvijotham et al., 2018; Bunel et al., 2020a)is their ability to quickly achieve tight bounds (rather than formal optimality), adapting the selectioncriterion to mirror the primal cutting plane algorithm would defeat the purpose of Active Set.2adding a single Ikmask toBkextends Bbynkvariables: one for each neuron at layer k.5Published as a conference paper at ICLR 20213.3 I MPLEMENTATION DETAILS , TECHNICAL CHALLENGESAnalogously to previous dual algorithms (Dvijotham et al., 2018; Bunel et al., 2020a), our approachcan leverage the massive parallelism offered by modern GPU architectures in three different ways.First, we execute in parallel the computations of lower and upper bounds relative to all the neuronsof a given layer. Second, in complete verification, we can batch over the different Branch andBound (BaB) subproblems. Third, as most of our solver relies on standard linear algebra operationsemployed during the forward and backward passes of neural networks, we can exploit the highlyoptimised implementations commonly found in modern deep learning frameworks.An exception are what we call “masked” forward/backward passes: operations of the form(WkIk)xkor(WkIk)Txk+1, which are needed whenever dealing with constraints from Ak.In our solver, they appear if Bk6=;(see equations (8), (11)). Masked passes require a customisedlower-level implementation for a proper treatment of convolutional layers, detailed in appendix G.4 R ELATED WORKIn addition to those described in x2, many other relaxations have been proposed in the literature. Infact, all bounding methods are equivalent to solving some convex relaxation of a neural network.This holds for conceptually different ideas such as bound propagation (Gowal et al., 2018), specificdual assignments (Wong & Kolter, 2018), dual formulations based on Lagrangian Relaxation (Dvi-jotham et al., 2018) or Decomposition (Bunel et al., 2020a). The degree of tightness varies greatly:from looser relaxations associated to closed-form methods (Gowal et al., 2018; Weng et al., 2018;Wong & Kolter, 2018) to tighter formulations based on Semi-Definite Programming (SDP) (Raghu-nathan et al., 2018).The speed of closed-form approaches results from simplifying the triangle-shaped feasible regionof the Planet relaxation ( x2.1) (Singh et al., 2018; Wang et al., 2018). On the other hand, tighterrelaxations are more expressive than the linearly-sized LP by (Ehlers, 2017). The SDP formulationby Raghunathan et al. (2018) can represent interactions between activations in the same layer. Simi-larly, Singh et al. (2019) tighten the Planet relaxation by considering the convex hull of the union ofpolyhedra relative to kReLUs of a given layer at once. Alternatively, tighter LPs can be obtained byconsidering the ReLU together with the affine operator before it: standard MIP techniques (Jeroslow,1987) lead to a formulation that is quadratic in the number of variables (see appendix F.2). The relax-ation by Anderson et al. 
(2020) detailed in x2.2 is a more convenient representation of the same set.By projecting out the auxiliary zvariables, (Tjandraatmadja et al., 2020) recently introduced anotherformulation equivalent to the one by Anderson et al. (2020), with half as many variables and a linearfactor more constraints compared to what described in x2.2. Therefore, the relationship between thetwo formulations mirrors the one between the Planet and Big-M relaxations (see appendix B.1). Ourdual derivation and the Active Set algorithm can be adapted to operate on the projected relaxations.Specialised dual solvers significantly improve in bounding efficiency with respect to off-the-shelfsolvers for both LP (Bunel et al., 2020a) and SDP formulations (Dvijotham et al., 2019). Therefore,the design of similar solvers for other tight relaxations is an interesting line of future research. Wecontribute with a specialised dual solver for the relaxation by Anderson et al. (2020) ( x3). In whatfollows, we demonstrate empirically that by seamlessly transitioning from the Planet relaxation tothe tighter formulation, we can obtain large incomplete and complete verification improvements.5 E XPERIMENTSWe empirically demonstrate the effectiveness of our method under two settings. On incomplete ver-ification (x5.1), we assess the speed and quality of bounds compared to other bounding algorithms.On complete verification ( x5.2), we examine whether our speed-accuracy trade-offs correspond tofaster exact verification. Our implementation is based on Pytorch (Paszke et al., 2017) and is avail-able at https://github.com/oval-group/scaling-the-convex-barrier .5.1 I NCOMPLETE VERIFICATIONWe evaluate incomplete verification performance by upper bounding the robustness margin (the dif-ference between the ground truth logit and the other logits) to adversarial perturbations (Szegedyet al., 2014) on the CIFAR-10 test set (Krizhevsky & Hinton, 2009). If the upper bound is negative,we can certify the network’s vulnerability to adversarial perturbations. We replicate the experimen-6Published as a conference paper at ICLR 2021Method101102TimeGurobiPlanetGurobi1 cutBDD+400 stepsBig-M850 stepsActive Set80 stepsActive Set600 stepsActive Set1050 stepsActive Set1650 stepsActive Set CPU600 steps1.00.50.00.5Improvement from 1 cutFigure 1: Upper plot: distribution of runtime in seconds. Lower plot: difference with the bounds obtainedby Gurobi with a cut from Akper neuron; higher is better. Results for the SGD-trained network from Bunelet al. (2020a). The width at a given value represents the proportion of problems for which this is the result.Comparing Active Sets with 1650 steps to Gurobi 1 Cut, tighter bounds are achieved with a smaller runtime.tal setting from Bunel et al. (2020a). The networks correspond to the small network architecturefrom Wong & Kolter (2018). Here, we present results for a network trained via standard SGDand cross entropy loss, with no modification to the objective for robustness. Perturbations for thisnetwork lie in a `1norm ball with radius ver= 5=255(which is hence lower than commonly em-ployed radii for robustly trained networks). In appendix I, we provide additional CIFAR-10 resultson an adversarially trained network using the method by Madry et al. (2018), and on MNIST (LeCunet al., 1998), for a network adversarially trained with the algorithm by Wong & Kolter (2018).We compare both against previous dual iterative methods and Gurobi (Gurobi Optimization, 2020),the commercial black-box solver employed by Anderson et al. 
(2020). For Gurobi-based baselines,Planet means solving the Planet Ehlers (2017) relaxation of the network, while Gurobi cut startsfrom the Big-M relaxation and adds constraints from Akin a cutting-plane fashion, as the originalprimal algorithm by Anderson et al. (2020). We run both on 4CPU threads. Amongst dual iterativemethods, run on an Nvidia Titan Xp GPU, we compare with BDD+ , the recent proximal-based solverby Bunel et al. (2020a), operating on a Lagrangian Decomposition dual of the Planet relaxation. Aswe operate on (a subset of) the data by Bunel et al. (2020a), we omit both their supergradient-based approach and the one by Dvijotham et al. (2018), as they both perform worse than BDD+(Bunel et al., 2020a). For the same reason, we omit cheaper (and looser) methods, like intervalpropagation Gowal et al. (2018) and the one by Wong & Kolter (2018). Active Set denotes oursolver for problem (3), described in x3.1. By keepingB=;, Active Set reduces to Big-M , a solverfor the non-projected Planet relaxation (appendix B), which can be seen as Active Set’s initialiser.In line with previous bounding algorithms (Bunel et al., 2020a), we employ Adam updates (Kingma& Ba, 2015) for supergradient-type methods due to their faster empirical convergence. Finally, wecomplement the comparison with Gurobi-based methods by running Active Set on 4CPU threads(Active Set CPU ). Further details, including hyper-parameters, can be found in appendix I.Figure 1 shows the distribution of runtime and the bound improvement with respect to Gurobi cutfor the SGD-trained network. For Gurobi cut, we only add the single most violated cut from Akperneuron, due to the cost of repeatedly solving the LP. We tuned BDD+ and Big-M, the dual methodsoperating on the weaker relaxation (2), to have the same average runtime. They obtain boundscomparable to Gurobi Planet in one order less time. Initialised from 500Big-M iterations, at 6001 2 3 4BDD+ 400 steps1.01.52.02.53.03.54.0Big-M 850 stepsTiming (in s)102101BDD+ 400 steps102101Big-M 850 stepsGap to Planet(a) Comparison of runtime (left) and gap to Gurobi Planetbounds (right). For the latter, lower is better.1 2 3 4Big-M 850 steps1.01.52.02.53.03.54.0Active Set 80 stepsTiming (in s)0.0 0.2 0.4Big-M 850 steps0.10.00.10.20.30.40.5Active Set 80 stepsImprovement from Planet(b) Comparison of runtime (left) and difference with the GurobiPlanet bounds (right). For the latter, higher is better.Figure 2: Pointwise comparison for a subset of the methods on the data presented in Figure 1. Darker colourshades mean higher point density (on a logarithmic scale). The oblique dotted line corresponds to the equality.7Published as a conference paper at ICLR 2021iterations Active Set already achieves better bounds on average than Gurobi cut in around 1=20thof the time. With a computational budget twice as large ( 1050 iterations) or four times as large(1650 iterations), the bounds significantly improve over Gurobi cut in still a fraction of the time. Aswe empirically demonstrate in appendix I, the tightness of the Active Set bounds is strongly linkedto our active set strategy ( x3.2). Remarkably, even if our method is specifically designed to takeadvantage of GPU acceleration, executing it on CPU proves to be strongly competitive with Gurobicut, producing better bounds in less time for the benchmark of Figure 1.Figure 2 shows pointwise comparisons for a subset of the methods of Figure 1, on the same data.Figure 2a shows the gap to the (Gurobi) Planet bound for BDD+ and our Big-M solver. 
Figure 1: Upper plot: distribution of runtime in seconds. Lower plot: difference with the bounds obtained by Gurobi with a cut from A_k per neuron; higher is better. Results for the SGD-trained network from Bunel et al. (2020a). The width at a given value represents the proportion of problems for which this is the result. Comparing Active Set with 1650 steps to Gurobi 1 cut, tighter bounds are achieved with a smaller runtime. (Methods shown: Gurobi Planet; Gurobi 1 cut; BDD+ 400 steps; Big-M 850 steps; Active Set 80/600/1050/1650 steps; Active Set CPU 600 steps.)

Figure 1 shows the distribution of runtime and the bound improvement with respect to Gurobi 1 cut for the SGD-trained network. For Gurobi 1 cut, we only add the single most violated cut from A_k per neuron, due to the cost of repeatedly solving the LP. We tuned BDD+ and Big-M, the dual methods operating on the weaker relaxation (2), to have the same average runtime. They obtain bounds comparable to Gurobi Planet in an order of magnitude less time. Initialised from 500 Big-M iterations, at 600 iterations Active Set already achieves better bounds on average than Gurobi 1 cut in around 1/20th of the time. With a computational budget twice as large (1050 iterations) or four times as large (1650 iterations), the bounds significantly improve over Gurobi 1 cut in still a fraction of the time. As we empirically demonstrate in appendix I, the tightness of the Active Set bounds is strongly linked to our active set strategy (§3.2). Remarkably, even if our method is specifically designed to take advantage of GPU acceleration, executing it on CPU proves to be strongly competitive with Gurobi 1 cut, producing better bounds in less time for the benchmark of Figure 1.

Figure 2: Pointwise comparison for a subset of the methods on the data presented in Figure 1. Darker colour shades mean higher point density (on a logarithmic scale). The oblique dotted line corresponds to equality. (a) BDD+ 400 steps vs. Big-M 850 steps: runtime (left) and gap to the Gurobi Planet bounds (right); for the latter, lower is better. (b) Big-M 850 steps vs. Active Set 80 steps: runtime (left) and difference with the Gurobi Planet bounds (right); for the latter, higher is better.

Figure 2 shows pointwise comparisons for a subset of the methods of Figure 1, on the same data. Figure 2a shows the gap to the (Gurobi) Planet bound for BDD+ and our Big-M solver. Surprisingly, our Big-M solver is competitive with BDD+, achieving on average better bounds than BDD+ in the same time. Figure 2b shows the improvement over the Planet bounds for Big-M and Active Set. The latter achieves markedly better bounds than Big-M in the same time, demonstrating the benefit of operating (at least partly) on the tighter dual (6).

5.2 COMPLETE VERIFICATION
We next evaluate the performance on complete verification, verifying the adversarial robustness of a network to perturbations in ℓ∞ norm on a subset of the dataset by Lu & Kumar (2020), replicating the experimental setting from Bunel et al. (2020a). The dataset associates a different perturbation radius ε_verif to each CIFAR-10 image, so as to create challenging verification properties. Its difficulty makes the dataset an appropriate testing ground for tighter relaxations like the one by Anderson et al. (2020) (§2.2). Further details, including network architectures, can be found in appendix I.
Here, we aim to solve the non-convex problem (1) directly, rather than an approximation as in §5.1. In order to do so, we use BaBSR, a branch and bound algorithm from Bunel et al. (2020b). Branch and bound works by dividing the problem domain into subproblems (branching) and bounding the local minimum over those domains. Any domain which cannot contain the global lower bound is pruned away, whereas the others are kept and branched over. In BaBSR, branching is carried out by splitting an unfixed ReLU into its passing and blocking phases. The ReLU which induces the maximum change in the domain's lower bound, when made unambiguous, is selected for splitting.
A fundamental component of a BaB method is the bounding algorithm, which is, in general, the computational bottleneck (Lu & Kumar, 2020). Therefore, we compare the effect on final verification time of using the different bounding methods of §5.1 within BaBSR. In addition, we evaluate MIP + A_k, which encodes problem (1) as a Big-M MIP (Tjeng et al., 2019) and solves it in Gurobi by adding cutting planes from A_k, analogously to the original experiments by Anderson et al. (2020). Finally, we also compare against ERAN (Singh et al., 2020), a state-of-the-art complete verification toolbox: results on the dataset by Lu & Kumar (2020) are taken from the recent VNN-COMP competition (VNN-COMP, 2020). We use 100 iterations for Active Set, 100 iterations for BDD+ and 180 iterations for Big-M. For dual iterative algorithms, we solve 300 subproblems at once for the base network and 200 for the deep and wide networks (see §3.3). Additionally, dual variables are initialised from their parent node's bounding computation. As in Bunel et al. (2020a), the time limit is kept at one hour. Due to the difference in computational cost between algorithms operating on the tighter relaxation by Anderson et al. (2020) and the other bounding algorithms (for Active Set, this is partly due to the masked forward/backward pass described in appendix G), we also experiment with a stratified version of the bounding within BaBSR. We devise a set of heuristics to determine whether a given subproblem is easy (so that looser bounds are sufficient) or whether we need to operate on the tighter relaxation. Instances of this approach are Big-M + Active Set and Gurobi Planet + Gurobi 1 cut. Further details are provided in appendix H.
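To make the BaBSR procedure above concrete, here is a stylised branch-and-bound loop; `lower_bound`, `upper_bound`, and `split_on_relu` are placeholders (the first could pick a looser or tighter bounding algorithm per subproblem, as in the stratified variant), not the released implementation.

```python
import heapq

def babsr_verify(root, lower_bound, upper_bound, split_on_relu):
    """Decide whether the minimum of the verification objective over
    `root` is >= 0, branching on ReLU phases. `lower_bound`/`upper_bound`
    return a sound lower bound and a feasible value on a subdomain;
    `split_on_relu` fixes one ambiguous ReLU to passing / blocking."""
    frontier = [(lower_bound(root), 0, root)]   # min-heap keyed on lower bound
    best_ub, tie = upper_bound(root), 1
    while frontier:
        if best_ub < 0:
            return False                        # counter-example found
        lb, _, dom = heapq.heappop(frontier)
        if lb >= 0:
            continue                            # property certified here: prune
        for child in split_on_relu(dom):
            best_ub = min(best_ub, upper_bound(child))
            heapq.heappush(frontier, (lower_bound(child), tie, child))
            tie += 1
    return True                                 # all subdomains pruned
```

In this scheme the bounding call dominates the cost, which is why the choice among the §5.1 algorithms directly shapes the complete verification times reported next.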
Figure 3: Cactus plots on properties from Lu & Kumar (2020), displaying the percentage of solved properties as a function of runtime. Baselines are represented by dotted lines. (Panels: base, wide large, and deep large models; x-axis: computation time [s]; y-axis: % of properties verified; methods: BDD+ BaBSR, Big-M BaBSR, Active Set BaBSR, Big-M + Active Set BaBSR, MIP + A_k, G. Planet + G. 1 cut BaBSR, ERAN.)

Table 1: We compare average solving time, average number of solved sub-problems and the percentage of timed-out properties on data from Lu & Kumar (2020). The best dual iterative method per network is Big-M + Active Set BaBSR on the base network and Active Set BaBSR otherwise (marked in bold in the original).

                              Base                              Wide                              Deep
Method                        time(s)   sub-problems  %Timeout  time(s)   sub-problems  %Timeout  time(s)   sub-problems  %Timeout
BDD+ BaBSR                    883.55    82,699.40     22.00     568.25    43,751.88     13.00     281.47    10,763.48      5.00
Big-M BaBSR                   826.60    68,582.00     19.00     533.79    35,877.24     12.00     253.37     9,346.78      4.00
Active Set BaBSR              422.32     9,471.90      7.00     169.73     1,873.36      3.00     227.26     2,302.16      2.00
Big-M + Active Set BaBSR      402.88    11,408.84      7.00     179.73     3,712.62      3.00     197.99     3,086.62      2.00
G. Planet + G. 1 cut BaBSR   1191.48     2,044.28     14.00    1272.99     1,352.42     10.00     704.59       677.74      3.00
MIP + A_k                    3227.50       226.24     82.00    2500.70       100.93     64.00    3339.37       434.57     91.00
ERAN                          805.89          -        5.00     632.12          -        9.00     545.72          -        0.00

Figure 3 and Table 1 show that Big-M performs competitively with BDD+. Active Set verifies a larger share of properties than the methods operating on the looser formulation (2), demonstrating the benefit of tighter bounds (§5.1) in complete verification. On the other hand, the poor performance of MIP + A_k and of Gurobi Planet + Gurobi 1 cut, tied to the scaling limitations of off-the-shelf solvers, shows that tighter bounds are effective only if they can be computed efficiently. Nevertheless, the difference in performance between the two Gurobi-based methods confirms that customised branch and bound solvers (BaBSR) are preferable to generic MIP solvers, as observed by Bunel et al. (2020b) on the looser Planet relaxation. Moreover, the stratified bounding system allows us to retain the speed of Big-M on easier properties, without excessively sacrificing Active Set's gains on the harder ones. Finally, while ERAN verifies 2% more properties than Active Set on two networks, BaBSR (with any dual bounding algorithm) is faster on most of the properties. BaBSR-based results could be further improved by employing the learned branching strategy presented by Lu & Kumar (2020): in this work, we focused on the bounding component of branch and bound.

6 DISCUSSION
The vast majority of neural network bounding algorithms focuses on (solving or loosening) a popular triangle-shaped relaxation, referred to as the "convex barrier" for verification. Relaxations that are tighter than this convex barrier have been recently introduced, but their complexity hinders applicability. We have presented Active Set, a sparse dual solver for one such relaxation, and empirically demonstrated that it yields significant formal verification speed-ups. Our results show that scalable tightness is key to the efficiency of neural network verification and instrumental in the definition of a more appropriate "convex barrier".
We believe that new customised solvers for similarly tight relaxations are a crucial avenue for future research in the area, possibly beyond piecewise-linear networks. Finally, as it is inevitable that tighter bounds will come at a larger computational cost, future verification systems will be required to recognise a priori whether tight bounds are needed for a given property. A possible solution to this problem could rely on learning algorithms.

ACKNOWLEDGMENTS
ADP was supported by the EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines and Systems, grant EP/L015987/1, and an IBM PhD fellowship. HSB was supported using a Tencent studentship through the University of Oxford.<|im_end|>
<|im_start|>assistant
### Review Title
Review for "Scaling the Convex Barrier with Active Sets"
### Review Text
The authors present a custom solver for verifying properties of neural networks (such as robustness properties). Prior work for neural network verification relies on generating bounds by solving convex relaxations ("convex barrier"). The authors describe a sparse dual solver for a new relaxation which is tighter (but has higher computational complexity). The solver is represented (for the most part) as standard operations built into PyTorch, and so it can be easily run on GPUs (they do require a specialized operator to support masked forward/backward passes, and they describe how this is done efficiently for convolutional networks). The solver involves repeatedly solving modified instances of a problem, where only a small active set of dual variables (instead of exponentially many) is considered at each step. Experimental results are promising in that it outperforms generic solvers in terms of both the bounds achieved and the time taken to do so. This does seem to be a promising approach.
### Review Rating
7: Good paper, accept
### Review Confidence
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper<|im_end|>
<|im_end|>
ByetnC4FwS
ICLR.cc/2020/Conference
2020
Recurrent Chunking Mechanisms for Conversational Machine Reading Comprehension
["Hongyu Gong", "Yelong Shen", "Dian Yu", "Jianshu Chen", "Dong Yu"]
In this paper, we focus on the conversational machine reading comprehension (MRC) problem, where the input to a model could be a lengthy document and a series of interconnected questions. To deal with long inputs, previous approaches usually chunk them into equally-spaced segments and predict answers based on each chunk independently without considering the information from other chunks. As a result, they may form chunks that fail to cover complete answers or have insufficient contexts around the correct answer required for question answering. Moreover, they are less capable of answering questions that need cross-chunk information. We propose to let a model learn to chunk in a more flexible way via reinforcement learning: a model can decide the next chunk that it wants to process in either reading direction. We also apply recurrent mechanisms to allow information to be transferred between chunks. Experiments on two conversational MRC tasks -- CoQA and QuAC -- demonstrate the effectiveness of our recurrent chunking mechanisms: we can obtain chunks that are more likely to contain complete answers and at the same time provide sufficient contexts around the ground truth answers for better predictions. Specifically, our proposed mechanisms can lead to up to 7.5% improvement in F1 over the baseline when addressing extremely long texts.
["Recurrent Chunking Policy", "Machine Reading Comprehension", "Reinforcement Learning"]
ABSTRACT
In this paper, we focus on the conversational machine reading comprehension (MRC) problem, where the input to a model could be a lengthy document and a series of interconnected questions. To deal with long inputs, previous approaches usually chunk them into equally-spaced segments and predict answers based on each chunk independently, without considering the information from other chunks. As a result, they may form chunks that fail to cover complete answers or have insufficient contexts around the correct answer required for question answering. Moreover, they are less capable of answering questions that need cross-chunk information.
We propose to let a model learn to chunk in a more flexible way via reinforcement learning: a model can decide the next chunk that it wants to process in either reading direction. We also apply recurrent mechanisms to allow information to be transferred between chunks. Experiments on two conversational MRC tasks – CoQA and QuAC – demonstrate the effectiveness of our recurrent chunking mechanisms: we can obtain chunks that are more likely to contain complete answers and at the same time provide sufficient contexts around the ground truth answers for better predictions. Specifically, our proposed mechanisms can lead to up to 7.5% improvement in F1 over the baseline when addressing extremely long texts.

1 INTRODUCTION
Recently, we have seen a surge of interest towards extractive and abstractive machine reading comprehension (MRC) tasks (Hermann et al., 2015; Hill et al., 2016; Rajpurkar et al., 2016; Shen et al., 2016; Huang et al., 2017; Trischler et al., 2017; Zhang et al., 2018; Kočiský et al., 2018): given a document and questions, answers can be spans from the document or free-form texts. In this paper, we focus on conversational MRC tasks such as CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018), in which a series of interconnected (instead of independent) questions are designed based on the given documents. In consequence, these questions together with their answers form conversations.
There is also a growing trend of building MRC readers (Hu et al., 2018; Xu et al., 2019; Yang et al., 2019; Keskar et al., 2019) based on pre-trained language models such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019). Since these models only allow a fixed-length (e.g., 512) input, it is often the case that an input sequence exceeds the length constraint. This is especially the case for conversational MRC tasks, as we may need to combine previous questions to answer the current question, and these tasks have relatively long documents (e.g., 401 tokens in QuAC vs. 117 tokens in SQuAD (Rajpurkar et al., 2016)). Therefore, dealing with lengthy inputs is an important challenge in conversational MRC tasks.
There are two major limitations in previous MRC readers when dealing with lengthy documents. First, they typically chunk a lengthy document into multiple equally-spaced segments by moving the model from the current chunk to the next one using a pre-determined stride. This chunking strategy can be problematic since it may result in incomplete answers.
Moreover, we also observe that a model tends to make a better prediction when a chunk provides richer contexts around the ground truth answer.
To confirm our observation, we first fine-tune a BERT-based reader on the CoQA dataset and then evaluate the obtained model on chunks with different center distances from the answer span (Figure 1).

Figure 1: The influence of the distance between the center of the answer span and the center of the chunk. The test performance (in F1 score) is evaluated on the CoQA dataset using a BERT-based reader.

The best performance is achieved when the chunk center coincides with the answer span center. Within a distance of 80 (in tokens), while 99% of answers are completely covered, the performance degrades as the chunk center moves away from the answer center and the chunk contains fewer relevant contexts. When the distance reaches 96, more than half of the predicted spans are incomplete. Therefore, we argue that a good chunking policy should generate chunks that not only fully cover the correct answer span but also provide sufficient contexts around the correct answer.
Second, besides the fixed-length chunking, most existing methods predict answers by only reading the local information within each chunk. However, in practice, information from different chunks is essential for answering questions that involve global contextual information, such as coreferential name mentions within a document.
We propose to let a machine reader learn how to chunk intelligently via reinforcement learning. Instead of using a fixed stride in one direction, we allow the model to decide the next chunk to be processed in either direction (i.e., forward or backward). Henceforth, the model is capable of making better predictions based on the carefully selected chunks (Section 2.3). We also apply recurrent mechanisms to allow the information to flow between chunks. As a result, the model can have access to information beyond the current chunk (Section 2.2).
In our experiments, we evaluate our model (the implementation will be released publicly) on two conversational machine reading comprehension datasets: CoQA and QuAC. Experimental results demonstrate that our method can generate chunks that are more likely to cover complete answer spans and provide richer contextual information around the ground truth answers. The proposed chunking mechanisms lead to performance gains on the benchmark datasets, especially on the cases with extremely long documents.

2 METHOD
2.1 BASELINE MODEL
As shown in Figure 2, our model consists of a pre-trained model, an answer extractor, a policy network, and a chunk scorer. We use BERT as the pre-trained model that generates representations for document chunks and questions (Devlin et al., 2019).

Figure 2: Model overview: BERT generates a representation for each input chunk, and recurrence accumulates information over chunks. Based on these representations, the answer extractor extracts answers from the current chunk, and the policy network takes a chunking action and moves to the next chunk. The chunk scorer scores each chunk by giving its likelihood of containing an answer and selects answers among predictions from multiple chunks.

Following the input format of BERT, each input sequence starts with a "CLS" token, which is followed by previous questions (PQ), the current question (CQ), and the document chunk. The three parts are separated by "SEP" tokens.
The maximum input length in BERT is restricted to 512. However, the documents of 14.1% of test questions in QuAC already exceed this input length constraint. A popular approach is to segment the long document into multiple chunks (Reddy et al., 2018; Choi et al., 2018).
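As a concrete (hypothetical) rendering of this input layout: the paper fixes only the layout and, in its experimental setting, a 64-token question budget; the newest-first packing of previous questions below is our assumption.

```python
def build_chunk_input(prev_q_tokens, cur_q_tokens, doc_tokens, start, chunk_len,
                      max_q_tokens=64):
    """Assemble one input sequence: [CLS] PQ [SEP] CQ [SEP] chunk [SEP].

    `prev_q_tokens` is a list of tokenised previous questions; previous
    questions are kept newest-first until the question-token budget is
    exhausted (the packing order is an assumption)."""
    budget = max_q_tokens - len(cur_q_tokens)
    pq = []
    for q in reversed(prev_q_tokens):        # most recent question first
        if len(q) > budget:
            break
        pq = q + pq
        budget -= len(q)
    chunk = doc_tokens[start:start + chunk_len]
    return ["[CLS]"] + pq + ["[SEP]"] + cur_q_tokens + ["[SEP]"] + chunk + ["[SEP]"]
```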
Answer Extractor. Following previous work on extractive machine reading comprehension, we predict the start and the end positions of the answer span in the given document. BERT first generates a vector representation $h_{c,i}$ for the $i$-th token in the $c$-th chunk. Given $h_{c,i}$, the model scores each token by giving the likelihood of it being the start token of the answer span:

$l^s_{c,i} = w_s^T h_{c,i}$,  (1)

where $w_s$ is a model parameter. The probability that the answer starts at the $i$-th token is computed by applying the softmax operation to $l^s_{c,i}$:

$p^s_{c,i} = \mathrm{softmax}(l^s_{c,i})$.  (2)

Likewise, the model scores how likely the answer ends at the $i$-th token in chunk $c$ using

$l^e_{c,i} = w_e^T h_{c,i}$,  (3)

where $w_e$ is a model parameter. The probability of the $i$-th token being the end of the answer (denoted as $p^e_{c,i}$) is calculated in a similar manner as Eq. (2).

2.2 RECURRENT MECHANISMS
Given a question, existing BERT-based models only access the local information from each chunk and predict answers independently. We argue that document-level information is essential for answer prediction, especially in conversational MRC tasks. In this work, we apply recurrent mechanisms to allow information flow between chunks, so that a model can have access to information beyond the current chunk.
Suppose that the chunk-level representation of chunk $c$ is $v_c$, which is generated without accessing knowledge from other chunks. As mentioned earlier, "CLS" is the first token of the input sequence to the BERT model, and it has been used to capture the information of the whole chunk (Devlin et al., 2019). We thus use the vector of the "CLS" token as the chunk representation $v_c$ in this work.
The recurrence is applied to the chunk representation $v_c$ so that the information of previous chunks can be transferred afterwards. The enriched chunk representation $\tilde{v}_c$ is defined as

$\tilde{v}_c = f(v_c, \tilde{v}_{c-1})$,  (4)

where $f(\cdot)$ is the recurrence function. We consider two recurrent mechanisms here: linear recurrence and Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) recurrence. Linear recurrence is simply a weighted sum of its inputs:

$f_{\mathrm{linear}}(v_c, \tilde{v}_{c-1}) = \alpha v_c + \beta \tilde{v}_{c-1}$,  (5)

where the coefficients $\alpha$ and $\beta$ depend on the inputs: $\alpha, \beta = \mathrm{softmax}(w_r^T [v_c; \tilde{v}_{c-1}])$, where $w_r$ is a model parameter.
The LSTM recurrence, which uses an LSTM unit as the recurrence function, takes $v_c$ as the current input and $\tilde{v}_{c-1}$ as the previous hidden state:

$f_{\mathrm{LSTM}}(v_c, \tilde{v}_{c-1}) = \mathrm{LSTM}(v_c, \tilde{v}_{c-1})$.  (6)
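A PyTorch sketch of the two recurrence functions of Eqs. (4)-(6) follows; the shape of $w_r$ (a two-way projection) and the LSTM cell-state handling are our assumptions, since the equations leave them implicit.

```python
import torch
import torch.nn as nn

class LinearRecurrence(nn.Module):
    """Eq. (5): v~_c = alpha * v_c + beta * v~_{c-1}, with (alpha, beta)
    a softmax over two input-dependent scores (shape of w_r assumed)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.w_r = nn.Linear(2 * hidden_size, 2)

    def forward(self, v_c, v_prev):
        coeffs = torch.softmax(self.w_r(torch.cat([v_c, v_prev], dim=-1)), dim=-1)
        alpha, beta = coeffs[..., :1], coeffs[..., 1:]
        return alpha * v_c + beta * v_prev

class LSTMRecurrence(nn.Module):
    """Eq. (6): one LSTM step per chunk over the [CLS] vectors; carrying
    the cell state alongside the hidden state is an assumption."""
    def __init__(self, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(hidden_size, hidden_size)

    def forward(self, v_c, state=None):
        h, c = self.cell(v_c, state)     # state = (h_{c-1}, c_{c-1}) or None
        return h, (h, c)
```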
Chunk Scorer. We extract a candidate answer from each chunk. Given multiple chunks from a lengthy document, we need to decide the answer among all the candidate answers from multiple chunks. Besides the answer prediction within each chunk, the model also assigns a confidence score to each chunk. This score predicts the probability $q_c$ that chunk $c$ contains an answer:

$q_c = \sigma(W_c \tilde{v}_c + b_c)$,  (7)

where $W_c$ and $b_c$ are model parameters, and $\sigma(\cdot)$ is the sigmoid function.
For a candidate answer span that starts with the $i$-th token and ends with the $j$-th token in chunk $c$, its probability $p^A_{i,j,c}$ is based on both the estimation within the chunk and the confidence score $q_c$ of this chunk:

$p^A_{i,j,c} = p^s_{c,i} \, p^e_{c,j} \, q_c$.  (8)

Candidate answer spans selected from different chunks are ranked in terms of the predicted answer probability, and the one with the highest probability is selected as the answer by the model.

2.3 LEARNING TO CHUNK
In this section, we introduce how we let a model learn to chunk a document. Existing approaches usually generate chunks from left to right of a document (Devlin et al., 2019), and models move from the current chunk to the next chunk with a fixed stride size $m$ (in tokens). More specifically, if the current chunk starts with the $i$-th token in the document, the next chunk will start at the $(i+m)$-th token. However, one problem with this kind of method is that a document may be inappropriately segmented. For instance, an answer span may cross the chunk boundary after segmentation. Also, the segmentation method might generate a chunk where the answer span is close to the chunk boundary. As such, a model may fail to make good predictions due to a lack of sufficient contexts around the answer.
We propose to allow the baseline model (Section 2.1) to learn to chunk. Instead of fixing its stride to a given size and one direction, we enable the model to decide the next chunk that it wants to process and allow it to move back and forth. Henceforth, the model is capable of making better predictions based on the carefully selected chunks. Learning to chunk is done with reinforcement learning (RL).
To formulate learning-to-chunk as an RL problem, we define the state and the action space as follows. The state $s$ is defined to be the chunks that the model has processed up to the current time, i.e., $s = \{1, 2, \ldots, c\}$. The action $a$ is the size and direction of the stride for moving to the next chunk. We define the action space $\mathcal{A}$ of chunking as a set of strides. A negative stride allows the model to look backwards at already-seen chunks, and a positive stride allows it to move forward to process unseen chunks.

Policy Network. We use a neural network to model the policy for selecting the actions. Specifically, the policy network uses a feedforward neural network to model the probability distribution $p_{\mathrm{act}}(a|s)$ over all stride actions given the current state $s$ encoded in the chunk representation $\tilde{v}_c$, which is enriched with document-level information as described in Eq. (4):

$p_{\mathrm{act}}(a|s) = \mathrm{softmax}(W_a \tilde{v}_c + b_a)$,  (9)

where $W_a$ and $b_a$ are model parameters.
During training, the action is randomly sampled with probability $p_{\mathrm{act}}(a|s)$. This allows a better exploration-exploitation tradeoff (Sutton & Barto, 2018) and increases the diversity of chunks.

Rewards. The policy network sequentially selects actions to chunk a document and receives a reward that reflects the quality of the model's final answer prediction. Note that this is a delayed reward problem since we do not know whether actions are good or bad until the end, after the model finishes reading all chunks. We first define the rating of a chunk in the following manner. Suppose a chunk $c$ contains the ground truth answer, which starts at the $i$-th token and ends at the $j$-th token. Then the rating of the chunk, denoted as $r_c$, is equal to $p^s_{c,i} p^e_{c,j}$. Otherwise, it is zero. Formally,

$r_c = \begin{cases} p^s_{c,i} \, p^e_{c,j}, & \text{answer included,} \\ 0, & \text{else.} \end{cases}$  (10)
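The pieces introduced so far, the chunk scorer of Eq. (7), the span probability of Eq. (8) and the stride policy of Eq. (9), amount to two small heads on top of $\tilde{v}_c$; a PyTorch sketch follows, with layer shapes assumed and the stride list mirroring the CoQA action space reported in the experiments.

```python
import torch
import torch.nn as nn

class ChunkHeads(nn.Module):
    """Chunk scorer (Eq. 7) and stride policy (Eq. 9) over the
    recurrence-enriched chunk representation."""
    def __init__(self, hidden_size, strides=(-16, 16, 32, 64, 128)):
        super().__init__()
        self.strides = strides
        self.scorer = nn.Linear(hidden_size, 1)
        self.policy = nn.Linear(hidden_size, len(strides))

    def forward(self, v_tilde):
        q_c = torch.sigmoid(self.scorer(v_tilde)).squeeze(-1)   # Eq. (7)
        p_act = torch.softmax(self.policy(v_tilde), dim=-1)     # Eq. (9)
        return q_c, p_act

def span_probability(p_start, p_end, q_c):
    """Eq. (8): answer probability of the span (i, j) in chunk c."""
    return p_start * p_end * q_c
```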
Suppose that each chunking action $a$ results in a new chunk $c$. A sequence of actions generates a sequence of chunks for a document, from which we can compute the reward of each action using dynamic programming. Specifically, recall from Eq. (7) that $q_c$ is the probability that chunk $c$ contains an answer. We define the reward $R(s,a)$ for taking action $a$ in state $s$ in a recursive manner:

$R(s,a) = q_c r_c + (1 - q_c) R(s', a')$,  (11)

where $(s', a')$ denotes the next state-action pair.
Ideally, the chunking actions are taken so that the model can maximize the rewards of predicted answers. Mathematically, we define the expected reward as $J(\theta)$, where $\theta$ denotes the model parameters related to the chunking decision:

$J(\theta) = \mathbb{E}_{p_{\mathrm{act}}(a|s)}[R(s,a)]$.  (12)

The action probability $p_{\mathrm{act}}(a|s)$ is the chunking policy learned by the model. This policy guides the model to generate chunks on which the model can make good predictions of answer spans.

2.4 TRAINING
As shown in Figure 2, our model comprises a policy network that chunks a given document, an answer extractor that extracts a candidate answer from the current chunk, and a chunk scorer that selects the answer among chunks. The training loss of our model consists of three parts, answer loss $L_{\mathrm{ans}}$, chunking policy loss $L_{\mathrm{cp}}$, and chunk scoring loss $L_{\mathrm{cs}}$, to take care of training all these modules. We now discuss these losses separately.

Answer Loss. For training instances, the ground truth answer of a given question is marked in the associated document. We already know the start and end tokens of the answer in a chunk, and thus the answer extractor can be directly trained to identify answers within the chunk via supervised learning. As has been described, the answer extractor predicts the probability distribution of the answer start/end over all tokens in a chunk. Suppose that the $i$-th and $j$-th tokens in the chunk are the answer start and end, respectively. We aim to optimize the probability of the two tokens and thus use the cross-entropy loss as the answer loss $L_{\mathrm{ans}}$:

$L_{\mathrm{ans}} = -\sum_{c,i} y^s_{c,i} \log p^s_{c,i} - \sum_{c,j} y^e_{c,j} \log p^e_{c,j}$,

where $y^s_{c,i}$ is a binary label indicating whether the $i$-th token in chunk $c$ is the answer start, and $p^s_{c,i}$ is its predicted start probability as shown in Eq. (2). Similarly, $y^e_{c,j}$ and $p^e_{c,j}$ are the label and prediction of the $j$-th token being the answer end in chunk $c$, respectively.

Chunking Policy Loss. The chunking policy network, which is trained with the action reward via reinforcement learning, enables more flexible document chunking. The chunking policy loss $L_{\mathrm{cp}}$ is the negative of the expected reward $J(\theta)$ in Eq. (12), i.e., $L_{\mathrm{cp}} = -J(\theta)$.

Chunk Scoring Loss. It is known whether a given chunk contains an answer in the training stage. Again we apply the cross-entropy loss to optimize the chunk scoring. The chunk scoring loss $L_{\mathrm{cs}}$ is

$L_{\mathrm{cs}} = -\sum_c y_c \log q_c$,  (13)

where $y_c$ is a binary label indicating whether chunk $c$ contains an answer, and $q_c$ is the model's predicted probability as shown in Eq. (7).
In summary, the training loss $L$ of our model is $L = L_{\mathrm{ans}} + L_{\mathrm{cp}} + L_{\mathrm{cs}}$. Since the losses $L_{\mathrm{ans}}$ and $L_{\mathrm{cs}}$ are differentiable, the model parameters can simply be updated with their gradients $\nabla L_{\mathrm{ans}}$ and $\nabla L_{\mathrm{cs}}$. As for the non-differentiable chunking policy loss $L_{\mathrm{cp}}$, we optimize it by applying the idea from the REINFORCE algorithm (Williams, 1992), which uses a sample approximation to its gradient:

$\nabla L_{\mathrm{cp}} = -\sum_t \mathbb{E}[\nabla \log p_{\mathrm{act}}(a_t|s_t) R(s_t, a_t)]$,

where $p_{\mathrm{act}}(a_t|s_t)$ is given in Eq. (9).
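A minimal sketch of the reward recursion of Eq. (11) and of the REINFORCE surrogate follows; treating the reward after the final chunk as zero, and treating the returns as constants in the surrogate so that gradients flow only through $\log p_{\mathrm{act}}$, are our reading of the estimator.

```python
import torch

def chunk_returns(q, r):
    """Eq. (11) unrolled right-to-left: R_c = q_c r_c + (1 - q_c) R_{c+1}.
    `q` and `r` are per-chunk lists of scalar tensors; the reward after
    the last chunk is taken to be zero (a boundary assumption)."""
    R, returns = torch.zeros(()), []
    for q_c, r_c in zip(reversed(q), reversed(r)):
        R = q_c * r_c + (1.0 - q_c) * R
        returns.append(R)
    return returns[::-1]

def chunking_policy_loss(log_p_actions, returns):
    """REINFORCE surrogate whose gradient matches the sample estimator
    above; returns are detached so only the policy receives gradients."""
    return -sum(lp * R.detach() for lp, R in zip(log_p_actions, returns))
```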
2.5 TESTING
During testing, the model starts from the beginning of the document in its first chunk. Given the current chunk $c$, the model uses the chunk representation $\tilde{v}_c$ to select the optimal stride action $a^*$ with the policy network, where

$a^* = \arg\max_{a \in \mathcal{A}} \mathrm{softmax}(W_a \tilde{v}_c + b_a)$.  (14)

Stride action $a^*$ is taken to generate the next chunk $c'$. Similarly, a set of chunks $\mathcal{C}$ is sequentially extracted from a document. We score an answer span spanning from the $i$-th to the $j$-th token in chunk $c$ with the model's estimated likelihood $p^A_{i,j,c}$ as shown in Eq. (8). The best answer span $(i^*, j^*)$ from chunk $c^*$ is the one with the highest likelihood:

$(i^*, j^*, c^*) = \arg\max_{i \le j,\, c \in \mathcal{C}} p^A_{i,j,c}$.  (15)

3 EXPERIMENT

           Train                                        Validation
Dataset    Question #   Avg token #   Max token #      Question #   Avg token #   Max token #
CoQA       108,647      352           1323             7,983        341           1037
QuAC       83,568       516           2310             7,354        576           2146

Table 1: Statistics of CoQA and QuAC data. We consider the number of sub-tokens generated by the BERT tokenizer.

Datasets. We use two conversational machine reading comprehension datasets (i.e., CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018)) in our experiments. A background document is provided for each conversation, which involves a set of questions to be answered based on the given document sequentially.
(1) Conversational Question Answering (CoQA). Answers in the CoQA dataset can be free-form texts written by annotators. It is reported that an extractive MRC approach can achieve an upper bound as high as 97.8% in F1 score (Yatskar, 2018). Therefore, we preprocess the CoQA training data and select a text span from the document as the extractive answer that achieves the highest F1 score compared with the given ground truth answer.
(2) Question Answering in Context (QuAC). All the answers in the QuAC dataset are text spans highlighted by annotators in the given document.
The dataset statistics are summarized in Table 1, including the data sizes and the number of sub-tokens in documents. Details of data processing are available in the supplementary material.

Baselines. We have two baselines based on BERT, which has achieved state-of-the-art performance in a wide range of natural language understanding tasks including machine reading comprehension.
(1) Basic BERT model. It achieves competitive performance on extractive machine reading comprehension tasks such as SQuAD (Rajpurkar et al., 2016; 2018). It adopts a simple chunking policy: moving to the next document chunk with a fixed stride size. In the experiments, we select the stride size to be 64 in CoQA and QuAC from (16, 32, 64, 128), which gives the best performance on both datasets; please see appendix A.2 for details.
(2) Sentence selector. We use a state-of-the-art sentence selector for MRC as our baseline (Htut et al., 2018). Given a question, the selector chooses a subset of sentences that are likely to contain an answer. The selected sentences are then fed to the BERT-based baseline for answer extraction. Since a question is correlated with its previous questions within a conversation, we apply the sentence selector to select sentences based on the current question alone or on the concatenation of the previous and current questions. See the results of the two baseline implementations in the rows "Sent selector (with previous questions)" and "Sent selector (only current questions)" in Table 2, respectively. Following the setting of previous work (Htut et al., 2018), we train the selector with the margin ranking loss. The top-ranked sentences are selected under the length constraint and concatenated as the new document.

Evaluation Metric. The main evaluation metric is macro-average word-level F1 score. We compare each prediction with the reference answer. Precision is defined by the percentage of predicted answer tokens that appear in the reference answer, and recall is the percentage of reference answer tokens captured in the prediction. F1 is the harmonic mean of precision and recall. When multiple reference answers are provided, the maximum F1 score is used for evaluation.
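A minimal implementation of the word-level F1 just described (answer-text normalisation, which official CoQA/QuAC evaluation scripts additionally apply, is omitted for brevity):

```python
from collections import Counter

def word_f1(prediction, references):
    """Word-level F1 between a predicted answer string and a list of
    reference answer strings; the maximum over references is returned."""
    def f1(pred_tokens, ref_tokens):
        common = Counter(pred_tokens) & Counter(ref_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)
    pred = prediction.split()
    return max(f1(pred, ref.split()) for ref in references)
```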
Setting. We perform a set of experiments with different maximum sequence lengths of 192, 256, 384, and 512. Our system and the two baseline systems are built upon the pre-trained BERT model. We use the 24-layer BERT model released by Devlin et al. (2019) and tune it in each system with a learning rate of 3e-5. Our model fixes the number of chunks read from a document for each question. It generates 4, 3, 3, and 2 chunks under the length limits of 192, 256, 384, and 512, respectively. Considering that questions are highly correlated due to the existence of coreferential mentions across questions, we concatenate each question with as many of its previous questions as possible, subject to a length limit of 64 question tokens. The action space of the model strides is set as [-16, 16, 32, 64, 128] for CoQA and [-16, 32, 64, 128, 256] for QuAC, considering that documents in CoQA are shorter than those in QuAC. The first chunk always starts with the first token of the document, and the model takes stride actions after the first chunk.

                                          CoQA                       QuAC
Max sequence length                 192   256   384   512      192   256   384   512
Basic BERT (Devlin et al., 2019)    72.8  76.2  81.0  81.4     34.5  50.6  56.7  61.5
Sent selector (with previous questions)  54.5  63.8  75.3  79.4     33.9  38.8  47.6  55.4
Sent selector (only current questions)   57.5  66.5  76.5  79.5     34.3  39.1  47.6  56.4
Linear recurrence w/o RL chunking   74.5  78.6  81.0  81.4     48.8  51.4  56.2  61.4
Linear recurrence w. RL chunking    76.0  79.2  81.3  81.8     51.6  55.2  59.9  62.0
LSTM recurrence w/o RL chunking     74.1  78.5  81.0  81.3     49.2  51.5  56.4  61.6
LSTM recurrence w. RL chunking      75.4  79.5  81.3  81.8     53.9  55.6  60.4  61.8

Table 2: F1 score (%) of different algorithms on conversational reading comprehension datasets.

Results. We experiment with a set of maximum sequence lengths to evaluate the impact of the input length on models' performance in machine reading comprehension. Table 2 presents F1 scores achieved by our methods and the baselines.
The performance of the basic BERT model drops drastically as the chunk length decreases. We see a drop of 8.6% in F1 score on the CoQA dataset and a drop of 27.0% on the QuAC dataset when the chunk size decreases from 512 to 192 and more chunks are generated from documents.
Followed by the same BERT-based reader, the sentence selector baseline that only considers the current question achieves better performance than the selector fed with the combination of the current question and its previous questions. The selector that only considers the current question performs well in selecting sentences containing answers from documents. For around 90.4% of questions in CoQA and 81.2% of questions in QuAC, the top-ranked 12 sentences in the document include at least one complete answer. However, the selector does not improve upon basic BERT despite its high precision in sentence selection. This might be because the selected sentences do not provide sufficient contexts for a reader to identify answers accurately.
Our model with recurrent chunking mechanisms performs consistently better than both basic BERT and the same baseline with a sentence selector.
On the CoQA dataset, our chunking model with linear recurrence improves upon the basic BERT model by 3.2%, 3.0%, 0.3%, and 0.4% for chunk lengths of 192, 256, 384, and 512, respectively. The improvement brought by LSTM recurrence and RL chunking is 2.6%, 3.3%, 0.3%, and 0.4% on the CoQA dataset. On the QuAC dataset, linear recurrence combined with RL chunking leads to improvements of 17.1%, 4.6%, 3.2%, and 0.5%, and LSTM recurrence has gains of 19.4%, 5.0%, 3.7%, and 0.3% under the different chunk lengths. We notice that our model is less sensitive to the chunk length, and linear recurrence has performance comparable to LSTM recurrence.
Our model is shown to enhance the performance on both datasets, and the gain is significant when more chunks are generated at a smaller chunk length. We note that the gain is relatively small on the CoQA dataset under the lengths of 384 and 512. This is because the average document length in CoQA is 352; in that case, a single chunk could cover the whole document for most questions. Similarly, the gain is small on the QuAC data at the input length of 512.
To evaluate how well our model extracts answers from long documents, we report its performance on documents of different lengths in Table 3. The maximum sequence length is set as 512 for both the CoQA and QuAC datasets. Although our gain is small over all documents, we note that the gain is more obvious on long documents. For documents containing more than 400 words in the CoQA dataset, RL chunking with linear recurrence has an improvement of 7.3% over the basic BERT baseline, and RL chunking with LSTM recurrence enhances F1 score by 7.5%. As for the QuAC data, the improvement of linear recurrence with RL chunking is 4.5%, and the improvement of LSTM recurrence is 2.6%.

                                        CoQA                                       QuAC
Document len                     <=200   (200,300]  (300,400]  >400       <=300   (300,450]  (450,600]  >=600
Query percentage (%)             15.3    63.3       18.9       2.5        20.5    52.0       19.7       7.8
Basic BERT                       81.0    81.9       81.8       67.2       66.2    62.8       62.2       38.7
Linear recurrence w. RL chunking 81.1    82.1       82.3       74.5       66.1    62.6       63.6       43.2
LSTM recurrence w. RL chunking   81.1    82.0       82.3       74.7       66.4    62.6       63.0       41.3

Table 3: F1 score on documents of different lengths.

Ablation Analysis. We have shown the performance gains brought by the combination of recurrent mechanisms and the chunking policy. Here we evaluate the improvements brought by the chunking policy alone. By comparing the rows "LSTM recurrence w. RL chunking" and "LSTM recurrence w/o RL chunking" in Table 2, we find that RL chunking alone improves F1 score by 1.3%, 1.0%, 0.3%, and 0.5% under the chunk length constraints of 192, 256, 384, and 512, respectively, on the CoQA dataset. Its improvements are 4.7%, 4.1%, 4.0%, and 0.2% on the QuAC dataset. The learned chunking policy brings non-trivial gains. We also study the effect of recurrence alone, without RL chunking. As shown in the rows "Basic BERT" and "Linear recurrence w/o RL chunking" in Table 2, linear recurrence alone can improve F1 score by 2.4%, and LSTM recurrence gives an improvement of 2.3% without RL chunking when the maximum chunk length is 256. We provide more discussions and quantitative analysis of the learned chunking policy in Appendix A.4.

4 RELATED WORK
4.1 CONVERSATIONAL READING COMPREHENSION
Conversational MRC tasks require the understanding of conversations, which either contain a series of questions and answers (Saeidi et al., 2018; Choi et al., 2018; Reddy et al., 2018; Xu et al., 2019) or serve as documents (Ma et al., 2018; Moghe et al., 2018; Sun et al., 2019).
In this paper, we focus on large-scale extractive and abstractive conversational MRC tasks with background documents: QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2018). Very recently we have seen significant improvements in performance on conversational MRC tasks from leveraging additional extractive non-conversational datasets such as SQuAD (Rajpurkar et al., 2018) and NewsQA (Trischler et al., 2017), which is beyond the scope of this paper.

4.2 ADDRESSING LONG CONTEXTS IN MACHINE READING COMPREHENSION TASKS
To deal with lengthy documents in machine reading comprehension tasks, some previous work skips certain tokens (Yu et al., 2017; Seo et al., 2018) or selects a set of sentences as input based on the given questions (Hewlett et al., 2017; Min et al., 2018; Lin et al., 2018). However, they mainly focus on tasks in which most of the questions are formed by a single informative sentence or are limited to multiple-choice settings (Wang et al., 2019).

5 CONCLUSION
We propose to let a model learn to chunk in a more flexible way via reinforcement learning: a model can decide the next chunk that it wants to process in either direction. We also apply recurrent mechanisms to allow information transfer between chunks. Experiments on two conversational machine reading comprehension tasks – CoQA and QuAC – demonstrate the effectiveness of our mechanisms. We can obtain chunks that are more likely to contain complete answers and at the same time cover sufficient contexts around the correct answer for better answer predictions.
rJedWjNpYB
Official Blind Review #3
6: Weak Accept
The paper considers the task of extractive QA where the document is longer than the document encoder's size limit. (The conversational part does not play a big role in the method.) One solution is to look at a chunk of document tokens per time step. The paper proposes (1) a way to propagate information between time steps, and (2) an RL policy that selects how many tokens to skip when locating the next chunk. The method was evaluated on CoQA and QuAC, with gains observed when the document is much longer than the chunk size.

The observation about the answer's location in the chunk is eye-opening, and the need to select the right document chunk is presented well. The recurrent mechanism and chunk scores look correct. However, the paper has two potential weaknesses:

1. The chunking policy looks too complex for the task and might also be incorrect.
- Most of the time, the number of possible chunks (with 16-token strides) is small. One could score all such chunks for relevance at once without having to use RL to look at chunks sequentially. Efficiency-wise, scoring multiple chunks in a batch might be even cheaper than embedding a single chunk at each time step. This process of selecting relevant document sections is related to retrieval-based reading comprehension [https://arxiv.org/abs/1704.00051 | https://arxiv.org/abs/1808.10628 | https://arxiv.org/abs/1808.06528], where the relevant documents are retrieved for extractive QA.
- In Equation 11, q_c contains parameters theta to be optimized, which I think makes the REINFORCE gradient incorrect.
- Equation 13 is missing a term for the negative class (sum_c (1 - y_c) log (1 - q_c)), but this could simply be a typo.

2. There are a few baselines that could have been tested:
- As stated above, instead of using a policy-based chunk selector, just score q_c on all spans, and use it to either select a span or to compute the span score q_c * p_start * p_end. This is similar to the sentence selector baseline but instead selects chunks (which is more comparable).
- A model with no recurrence but with RL chunking is missing.
- One upper-bound experiment to try is to just select the span with the correct answer in the middle. This will indicate the amount winnable from doing better chunking.

Questions and comments:
- Page 2: What is the chunk size for the plot?
- Page 2: "... the predicted spans are incomplete" -- Does this mean the span is chopped in the middle?
- What is the distribution of the actions that the policy takes? In particular, does it use the "-16" action at all?
- Appendix A.4: Despite "farmer roast" not appearing, the answer chunk still has a strong prior (due to features such as similarity to the question and negative words).

[EDIT] I am changing my score to weak accept. Refer to the replies for details.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Recurrent Chunking Mechanisms for Conversational Machine Reading Comprehension ### Paper Abstract In this paper, we focus on the conversational machine reading comprehension (MRC) problem, where the input to a model could be a lengthy document and a series of interconnected questions. To deal with long inputs, previous approaches usually chunk them into equally-spaced segments and predict answers based on each chunk independently without considering the information from other chunks. As a result, they may form chunks that fail to cover complete answers or have insufficient contexts around the correct answer required for question answering. Moreover, they are less capable of answering questions that need cross-chunk information. We propose to let a model learn to chunk in a more flexible way via reinforcement learning: a model can decide the next chunk that it wants to process in either reading direction. We also apply recurrent mechanisms to allow information to be transferred between chunks. Experiments on two conversational MRC tasks -- CoQA and QuAC -- demonstrate the effectiveness of our recurrent chunking mechanisms: we can obtain chunks that are more likely to contain complete answers and at the same time provide sufficient contexts around the ground truth answers for better predictions. Specifically, our proposed mechanisms can lead to up to 7.5% improvement in F1 over the baseline when addressing extremely long texts. ### Paper Keywords ["Recurrent Chunking Policy", "Machine Reading Comprehension", "Reinforcement Learning"] ### Paper Content ABSTRACTIn this paper, we focus on the conversational machine reading comprehension(MRC) problem, where the input to a model could be a lengthy document and aseries of interconnected questions. To deal with long inputs, previous approachesusually chunk them into equally-spaced segments and predict answers based oneach chunk independently without considering the information from other chunks.As a result, they may form chunks that fail to cover complete answers or haveinsufficient contexts around the correct answer required for question answering.Moreover, they are less capable of answering questions that need cross-chunkinformation.We propose to let a model learn to chunk in a more flexible way via reinforce-ment learning: a model can decide the next chunk that it wants to process in eitherreading direction. We also apply recurrent mechanisms to allow information tobe transferred between chunks. Experiments on two conversational MRC tasks –CoQA and QuAC – demonstrate the effectiveness of our recurrent chunking mech-anisms: we can obtain chunks that are more likely to contain complete answersand at the same time provide sufficient contexts around the ground truth answersfor better predictions. Specifically, our proposed mechanisms can lead to up to7:5%improvement in F1 over the baseline when addressing extremely long texts.1 I NTRODUCTIONRecently, we have seen a surge of interest towards extractive and abstractive machine reading com-prehension (MRC) tasks (Hermann et al., 2015; Hill et al., 2016; Rajpurkar et al., 2016; Shen et al.,2016; Huang et al., 2017; Trischler et al., 2017; Zhang et al., 2018; Ko ˇcisk`y et al., 2018): given adocument and questions, answers can be spans from the document or free-form texts. 
In this pa-per, we focus on conversational MRC tasks such as CoQA (Choi et al., 2018) and QuAC (Reddyet al., 2018), in which a series of interconnected (instead of independent) questions are designedbased on the given documents. In consequence, these questions together with their answers formconversations.There is also a growing trend of building MRC readers (Hu et al., 2018; Xu et al., 2019; Yang et al.,2019; Keskar et al., 2019) based on pre-trained language models such as GPT (Radford et al., 2018)and BERT (Devlin et al., 2019). Since these models only allow a fixed-length (e.g., 512) input, itis often the case that an input sequence exceeds the length constraint. This is especially the casefor conversational MRC tasks as we may need to combine previous questions to answer the currentquestion, and these tasks have relatively long documents (e.g., 401tokens in QuAC v.s 117tokens inSQuAD (Rajpurkar et al., 2016)). Therefore, dealing with lengthy inputs is an important challengein conversational MRC tasks.There are two major limitations in previous MRC readers when dealing with lengthy documents.First, they typically chunk a lengthy document into multiple equally-spaced segments by movingthe model from the current chunk to the next one using a pre-determined stride. This chunkingstrategy can be problematic since it may result in incomplete answers. Moreover, we also observethat a model tends to make a better prediction when a chunk provides richer contexts around theground truth answer.To confirm our observation, we first fine-tune a BERT-based reader on the CoQA dataset and thenevaluate the obtained model on chunks with different center distances from the answer span (Fig-1Under review as a conference paper at ICLR 2020Figure 1: The influence of the distance between the center of the answer span and the center of thechunk. The test performance (in F1 score) is evaluated on the CoQA dataset using a BERT-basedreader.ure 1). The best performance is achieved when the chunk center coincides with the answer spancenter. Within the distance of 80(in tokens), while 99% answers are completely covered, the per-formance degrades as the chunk center moves away from the answer center and the chunk containsfewer relevant contexts. When the distance reaches 96, more than half of the predicted spans areincomplete. Therefore, we argue that a good chunking policy should generate chunks that not onlyfully cover the correct answer span but also provide sufficient contexts around the correct answer.Second, besides the fixed-length chunking, most existing methods predict answers by only readingthe local information within each chunk. However, in practice, information from different chunksis essential for answering questions that involve global contextual information such as coreferentialname mentions within a document.We propose to let a machine reader learn how to chunk intelligently via reinforcement learning.Instead of using a fixed stride in one direction, we allow the model to decide the next chunk to beprocessed in either direction (i.e., forward or backward). Henceforth, the model is capable of makingbetter predictions based on the carefully selected chunks (Section 2.3). We also apply recurrentmechanisms to allow the information to flow between chunks. As a result, the model can haveaccess to information beyond the current chunk (Section 2.2).In our experiments, we evaluate our model1on two conversational machine reading comprehen-sion datasets: CoQA and QuAC. 
Experimental results demonstrate that our method can generatechunks that are more likely to cover complete answer spans and provide richer contextual infor-mation around the ground truth answers. The proposed chunking mechanisms lead to performancegains on the benchmark datasets, especially on the cases with extremely long documents.2 M ETHOD2.1 B ASELINE MODELAs shown in Figure 2, our model consists of a pre-trained model, an answer extractor, a policynetwork, and a chunk scorer. We use BERT as the pre-trained model that generates representationsfor document chunks and questions (Devlin et al., 2019). Following the input format of BERT, eachinput sequence starts with a “CLS” token, which is followed by previous questions (PQ), the currentquestion (CQ), and the document chunk. The three parts are separated by “SEP” tokens.The maximum input length in BERT is restricted to be 512. However, the document length of14:1%of test questions in QuAC already exceeds this input length constraint. A popular approachis to segment the long document into multiple chunks (Reddy et al., 2018; Choi et al., 2018).Answer Extractor . Following previous work on extractive machine reading comprehension, wepredict the start and the end positions of the answer span in the given document. BERT first generatesa vector representation hc;ifor eachi-th token in the c-th chunk. Given hc;i, the model scores each1The implementation will be released publicly.2Under review as a conference paper at ICLR 2020Figure 2: Model overview: BERT generates a representation for each input chunk, and recurrenceaccumulates information over chunks. Based on these representations, the answer extractor extractsanswers from the current chunk, and the policy network takes chunking action and moves to the nextchunk. Chunk scorer scores each chunk by giving its likelihood of containing an answer and selectsanswers among predictions from multiple chunks.token by giving the likelihood of it being the start token of the answer span:lsc;i=wTshc;i; (1)where wsis model parameter. The probability that the answer starts at the i-th token is computedby applying the softmax operation to lsc;i:psc;i= softmax( lsc;i) (2)Likewise, the model scores how likely the answer ends at the i-th token in chunk cusinglec;i=wTehc;i; (3)where weis model parameter. The probability of the i-th token being the end of the answer (denotedaspec;i) is calculated in a similar manner as equation 2.2.2 R ECURRENT MECHANISMSGiven a question, existing BERT-based models only access the local information from each chunkand predict answers independently. We argue that document-level information is essential for answerprediction, especially in conversational MRC tasks. In this work, we apply recurrent mechanismsto allow information flow between chunks, and thus a model can have access to information beyondthe current chunk.Suppose that the chunk-level representation of chunk cisvc, which is generated without accessingknowledge from other chunks. As mentioned earlier, “CLS” is the first token of the input sequenceto the BERT model, which has been used to capture the information of the whole chunk (Devlinet al., 2019). We thus use the vector of the “CLS” token as the chunk representation vcin this work.The recurrence is applied to the chunk representation vcso that the information of previous chunkscan be transferred afterwards. The enriched chunk representation ~vcis defined as~vc=f(vc;~vc1); (4)wheref()is the recurrence function. 
We consider two recurrent mechanisms here: linear recurrenceand Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) recurrence. Linear re-currence is simply a weighted sum of its inputs:flinear(vc;~vc1) =vc+~vc1; (5)where coefficients anddepend on inputs. We have ;=softmax (wTr[vc;~vc1]), where wris model parameter.The LSTM recurrence, which uses LSTM unit as the recurrence function, takes vcas the currentinput and ~vc1as the previous hidden states:fLSTM(vc;~vc1) =LSTM (vc;~vc1): (6)3Under review as a conference paper at ICLR 2020Chunk Scorer . We extract a candidate answer from each chunk. Given multiple chunks from alengthy document, we need to decide the answer among all the candidate answers from multiplechunks. Besides the answer prediction within each chunk, the model also assigns a confidence scoreto each chunk. This score predicts the probability qcthat chunkccontains an answer.qc=(Wc~vc+bc); (7)where Wcandbcare model parameters, and ()is sigmoid function.For a candidate answer span that starts with the i-th token and ends with the j-th token in chunk c,its probability pAi;j;cis based on both the estimation within the chunk and the confidence score qcofthis chunk:pAi;j;c=psc;ipec;jqc: (8)Candidate answer spans selected from different chunks are ranked in terms of the predicted answerprobability, and the one with the highest probability is selected as the answer by the model.2.3 L EARNING TO CHUNKIn this section, we introduce how we let a model learn to chunk a document. Existing approachesusually generate chunks from left to right of a document (Devlin et al., 2019), and models movefrom the current chunk to the next chunk with a fixed stride size m(in tokens). More specifically, ifthe current chunk starts with the i-th token in the document, the next chunk will start at the (i+m)-thtoken. However, one problem with this kind of methods is that a document may be inappropriatelysegmented. For instance, an answer span may cross the chunk boundary after segmentation. Also,the segmentation method might generate a chunk where the answer span is close to the chunk bound-ary. As such, a model may fail to make good predictions due to a lack of sufficient contexts aroundthe answer.We propose to allow the baseline model (Section 2.1) to learn to chunk. Instead of fixing its strideto be a given size and be in one direction, we enable the model to decide the next chunk that it wantsto process and allow it to move back and forth. Henceforth, the model is capable of making betterpredictions based on the carefully selected chunks. Learning to chunk is done with reinforcementlearning (RL).To formulate the learning-to-chunk as an RL problem, we define the state and the action space asfollows. The statesis defined to be the chunks that the model has processed up to the current time,i.e.,s=f1;2;:::;cg. The actionais the size and direction of the stride for moving to the nextchunk. We define the action space Aof chunking as a set of strides. The negative stride allowsthe model to look backwards at the already seen chunks, and the positive stride allows it to moveforward to process the unseen chunks.Policy Network . We use a neural network to model the policy for selecting the actions. Specifically,the policy network uses a feedforward neural network to model the probability distribution pact(ajs)over all stride actions given the current state sencoded in the chunk representation ~vc, which isenriched with document-level information as described in Eq. 
(4):pact(ajs) =softmax (Wa~vc+ba); (9)where Waandbaare model parameters.During training, the action is randomly sampled with the probability pact(ajs). This allows a betterexploration-exploitation tradeoff (Sutton & Barto, 2018) and increases the diversity of chunks.Rewards . The policy network sequentially selects actions to chunk a document and receives areward that reflects the quality of model’s final answer prediction. Note that this is a delayed rewardproblem since we do not know whether actions are good or bad until the end after the model finishesreading all chunks. We first define the rating of a chunk in the following manner. Suppose a chunkccontains the ground truth answer, which starts at the i-th token and ends at the j-th token. Thenthe rating of the chunk, denoted as rc, is equal topsc;ipec;j. Otherwise, it is zero. Formally,rc=psc;ipec;j;answer included,0; else.(10)4Under review as a conference paper at ICLR 2020Suppose that each chunking action aresults in a new chunk c. A sequence of actions generates asequence of chunks for a document, from which we can compute the reward of each action usingdynamic programming. Specifically, recall from Eq. (7) that qcis the probability that chunk ccontains an answer. We define the reward R(s;a)for taking action ain statesin a recursive manner:R(s;a) =qcrc+ (1qc)R(s0;a0); (11)where (s0;a0)denotes next state-action pair.Ideally, the chunking actions are taken so that the model can maximize the rewards of predictedanswers. Mathematically, we define the expected reward as J, andare model parameters relatedto the chunking decision.J=Epact(ajs)[R(s;a)]: (12)The action probability pact(ajs)is the chunking policy learned by the model. This policy guides themodel to generate chunks on which the model can make good predictions of answer spans.2.4 T RAININGAs shown in Figure 2, our model comprises of a policy network that chunks a given document, ananswer extractor that extracts a candidate answer from the current chunk, and a chunk scorer thatselects the answer among chunks. The training loss of our model consists of three parts — answerlossLans, chunking policy loss Lcp, and chunk scoring loss Lcs— to take care of training all thesemodules. We now discuss these losses separately.Answer Loss . For training instances, the ground truth answer of a given question is marked inthe associated document. We already know the start and end tokens of the answer in a chunk, andthus the answer extractor can be directly trained to identify answers within the chunk via supervisedlearning. As has been described, the answer extractor predicts the probability distribution of answerstart/end over all tokens in a chunk. Suppose that the i-th andj-th token in the chunk are answerstart and end, respectively. We aim to optimize the probability of the two tokens and thus use cross-entropy loss as the answer loss Lans:Lans=Xc;iysilogpsc;iXc;jyejlogpec;j;whereysc;iis a binary label indicating whether the i-th token in chunk cis answer start, and psc;iisits predicted start probability as shown in Eq. (2). Similarly, yec;jandpec;jis the label and predictionof thej-th end token in chunk c, repectively.Chunking Policy Loss . Chunking Policy network, which is trained with the action reward viareinforcement learning, enables more flexible document chunking. The chunking policy loss Lcpisthe negative of expected reward Jin Eq. (12), i.e., Lcp=J.Chunk Scoring Loss . 
2.4 TRAINING

As shown in Figure 2, our model comprises a policy network that chunks a given document, an answer extractor that extracts a candidate answer from the current chunk, and a chunk scorer that selects the answer among chunks. The training loss of our model consists of three parts — the answer loss L_{ans}, the chunking policy loss L_{cp}, and the chunk scoring loss L_{cs} — to train all of these modules. We now discuss these losses separately.

Answer Loss. For training instances, the ground-truth answer to a given question is marked in the associated document. We already know the start and end tokens of the answer in a chunk, and thus the answer extractor can be directly trained to identify answers within the chunk via supervised learning. As described above, the answer extractor predicts the probability distribution of the answer start/end over all tokens in a chunk. Suppose that the i-th and j-th tokens in the chunk are the answer start and end, respectively. We aim to optimize the probability of these two tokens and thus use cross-entropy loss as the answer loss L_{ans}:

L_{ans} = -\sum_{c,i} y^s_{c,i} \log p^s_{c,i} - \sum_{c,j} y^e_{c,j} \log p^e_{c,j},

where y^s_{c,i} is a binary label indicating whether the i-th token in chunk c is the answer start, and p^s_{c,i} is its predicted start probability as shown in Eq. (2). Similarly, y^e_{c,j} and p^e_{c,j} are the label and prediction for the j-th end token in chunk c, respectively.

Chunking Policy Loss. The chunking policy network, which is trained with the action reward via reinforcement learning, enables more flexible document chunking. The chunking policy loss L_{cp} is the negative of the expected reward J in Eq. (12), i.e., L_{cp} = -J.

Chunk Scoring Loss. It is known whether a given chunk contains an answer in the training stage. Again we apply cross-entropy loss to optimize the chunk scoring. The chunk scoring loss L_{cs} is

L_{cs} = -\sum_c y_c \log q_c,   (13)

where y_c is a binary label indicating whether chunk c contains an answer, and q_c is the model's predicted probability as shown in Eq. (7).

In summary, the training loss L of our model is L = L_{ans} + L_{cp} + L_{cs}. Since the losses L_{ans} and L_{cs} are differentiable, the model parameters can be simply updated with their gradients \nabla L_{ans} and \nabla L_{cs}. As for the non-differentiable chunking policy loss L_{cp}, we optimize it by applying the idea of the REINFORCE algorithm (Williams, 1992), which uses a sample approximation to its gradient:

\nabla L_{cp} = -\sum_t E[\nabla \log p_{act}(a_t|s_t) R(s_t, a_t)],

where p_{act}(a_t|s_t) is given in Eq. (9).
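A sketch of one training step combining L = L_{ans} + L_{cp} + L_{cs}, with the REINFORCE surrogate for the non-differentiable part, is shown below. The tensor layout and the function signature are our assumptions, and Eq. (13) is implemented exactly as written (positive class only).

```python
import torch

def training_step(answer_log_probs, chunk_log_q, chunk_labels,
                  action_log_probs, action_rewards):
    """One step of L = L_ans + L_cp + L_cs (sketch).

    answer_log_probs: log p^s_{c,i*} + log p^e_{c,j*} for the gold spans.
    chunk_log_q: log q_c for each chunk (Eq. 7); chunk_labels: y_c in {0, 1}.
    action_log_probs: log p_act(a_t | s_t) for the sampled actions (Eq. 9).
    action_rewards: R(s_t, a_t) from Eq. (11), treated as constants.
    """
    # Answer loss: cross-entropy over gold start/end tokens.
    l_ans = -answer_log_probs.sum()
    # Chunk scoring loss, Eq. (13) as written (positive-class term only).
    l_cs = -(chunk_labels * chunk_log_q).sum()
    # REINFORCE surrogate: its gradient matches -sum_t E[grad log pi * R].
    l_cp = -(action_log_probs * action_rewards.detach()).sum()
    return l_ans + l_cp + l_cs
```

Calling `.backward()` on the returned scalar then yields the supervised gradients plus the sample-based policy gradient in one pass.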
2.5 TESTING

During testing, the model starts from the beginning of the document with its first chunk. Given the current chunk c, the model uses the chunk representation \tilde{v}_c to select the optimal stride action a^* with the policy network, where

a^* = \arg\max_{a \in A} \text{softmax}(W_a \tilde{v}_c + b_a).   (14)

Stride action a^* is taken to generate the next chunk c'. In this way, a set of chunks C is sequentially extracted from a document. We score an answer span spanning from the i-th to the j-th token in chunk c with the model's estimated likelihood p^A_{i,j,c} as shown in Eq. (8). The best answer span (i^*, j^*) is the one with the highest likelihood across all chunks:

(i^*, j^*, c^*) = \arg\max_{i \le j, c \in C} p^A_{i,j,c}.   (15)

3 EXPERIMENT

Table 1: Statistics of CoQA and QuAC data. We count the number of sub-tokens generated by the BERT tokenizer.

Dataset | Train: Question # | Train: Avg tokens | Train: Max tokens | Validation: Question # | Validation: Avg tokens | Validation: Max tokens
CoQA    | 108,647           | 352               | 1323              | 7,983                  | 341                    | 1037
QuAC    | 83,568            | 516               | 2310              | 7,354                  | 576                    | 2146

Datasets. We use two conversational machine reading comprehension datasets (CoQA (Reddy et al., 2018) and QuAC (Choi et al., 2018)) in our experiments. A background document is provided for each conversation, which involves a set of questions to be answered sequentially based on the given document.

(1) Conversational Question Answering (CoQA). Answers in the CoQA dataset can be free-form texts written by annotators. It is reported that an extractive MRC approach can achieve an upper bound as high as 97.8% in F1 score (Yatskar, 2018). Therefore, we preprocess the CoQA training data and select as the extractive answer the text span from the document that achieves the highest F1 score compared with the given ground-truth answer.

(2) Question Answering in Context (QuAC). All the answers in the QuAC dataset are text spans highlighted by annotators in the given document.

The dataset statistics are summarized in Table 1, including the data sizes and the number of sub-tokens in documents. Details of data processing are available in the supplementary material.

Baselines. We have two baselines based on BERT, which has achieved state-of-the-art performance on a wide range of natural language understanding tasks including machine reading comprehension.

(1) BASIC BERT MODEL. It achieves competitive performance on extractive machine reading comprehension tasks such as SQuAD (Rajpurkar et al., 2016; 2018). It adopts a simple chunking policy — moving to the next document chunk with a fixed stride size. In the experiments, we select the stride size to be 64 on CoQA and QuAC from (16, 32, 64, 128), which gives the best performance on both datasets; please see Appendix A.2 for details.

(2) SENTENCE SELECTOR. We use a state-of-the-art sentence selector for MRC as our baseline (Htut et al., 2018). Given a question, the selector chooses a subset of sentences that are likely to contain an answer. The selected sentences are then fed to the BERT-based baseline for answer extraction. Since a question is correlated with its previous questions within a conversation, we apply the sentence selector to select sentences based on either the current question alone or the concatenation of previous questions and the current question. See the results of the two baseline implementations in the rows Sent selector (with previous questions) and Sent selector (only current questions) in Table 2, respectively. Following the setting of previous work (Htut et al., 2018), we train the selector with the margin ranking loss. The top-ranked sentences are selected under the length constraint and concatenated as the new document.

Evaluation Metric. The main evaluation metric is the macro-average word-level F1 score. We compare each prediction with the reference answer. Precision is defined as the percentage of predicted answer tokens that appear in the reference answer, and recall is the percentage of reference answer tokens captured in the prediction. The F1 score is the harmonic mean of precision and recall. When multiple reference answers are provided, the maximum F1 score is used for evaluation.

Setting. We perform a set of experiments with maximum sequence lengths of 192, 256, 384, and 512. Our system and the two baseline systems are built upon the pre-trained BERT model. We use the 24-layer BERT model released by Devlin et al. (2019) and tune it in each system with a learning rate of 3e-5. Our model fixes the number of chunks read from a document for each question. It generates 4, 3, 3, and 2 chunks under the length limits of 192, 256, 384, and 512, respectively. Considering that questions are highly correlated due to the existence of coreferential mentions across questions, we concatenate each question with as many of its previous questions as allowed by the length limit of 64 question tokens. The action space of the model strides is set to [-16, 16, 32, 64, 128] for CoQA and [-16, 32, 64, 128, 256] for QuAC, considering that documents in CoQA are shorter than those in QuAC. The first chunk always starts with the first token of the document, and the model takes stride actions after the first chunk.

Table 2: F1 score (%) of different algorithms on the conversational reading comprehension datasets.

Dataset                                  |        CoQA         |        QuAC
Max sequence length                      | 192  256  384  512  | 192  256  384  512
Basic BERT (Devlin et al., 2019)         | 72.8 76.2 81.0 81.4 | 34.5 50.6 56.7 61.5
Sent selector (with previous questions)  | 54.5 63.8 75.3 79.4 | 33.9 38.8 47.6 55.4
Sent selector (only current questions)   | 57.5 66.5 76.5 79.5 | 34.3 39.1 47.6 56.4
Linear recurrence o. RL chunking         | 74.5 78.6 81.0 81.4 | 48.8 51.4 56.2 61.4
Linear recurrence w. RL chunking         | 76.0 79.2 81.3 81.8 | 51.6 55.2 59.9 62.0
LSTM recurrence o. RL chunking           | 74.1 78.5 81.0 81.3 | 49.2 51.5 56.4 61.6
LSTM recurrence w. RL chunking           | 75.4 79.5 81.3 81.8 | 53.9 55.6 60.4 61.8
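For reference, a minimal sketch of the macro-average word-level F1 described above; whitespace tokenization and the absence of answer normalization are simplifying assumptions of this sketch.

```python
from collections import Counter

def word_f1(prediction: str, references: list) -> float:
    """Token-level F1 against multiple references; the maximum is returned."""
    best = 0.0
    pred_tokens = prediction.split()
    for ref in references:
        ref_tokens = ref.split()
        # Count tokens shared between prediction and reference.
        common = Counter(pred_tokens) & Counter(ref_tokens)
        overlap = sum(common.values())
        if overlap == 0:
            continue
        precision = overlap / len(pred_tokens)
        recall = overlap / len(ref_tokens)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best
```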
Results. We experiment with a set of maximum sequence lengths to evaluate the impact of the input length on models' performance in machine reading comprehension. Table 2 presents the F1 scores achieved by our methods and the baselines.

The performance of the basic BERT model drops drastically as the chunk length decreases. We see a drop of 8.6% in F1 score on the CoQA dataset and a drop of 27.0% on the QuAC dataset when the chunk size decreases from 512 to 192 and more chunks are generated from the documents.

Followed by the same BERT-based reader, the sentence selector baseline that only considers the current question achieves better performance than the selector fed with the combination of the current question and its previous questions. The selector that only considers the current question performs well in selecting sentences containing answers from documents. For around 90.4% of questions in CoQA and 81.2% of questions in QuAC, the top-ranked 12 sentences in the document include at least one complete answer. However, the selector does not improve upon basic BERT despite its high precision in sentence selection. This might be because the selected sentences do not provide sufficient context for a reader to identify answers accurately.

Our model with recurrent chunking mechanisms performs consistently better than both basic BERT and the same baseline with a sentence selector. On the CoQA dataset, our chunking model with linear recurrence improves upon the basic BERT model by 3.2%, 3.0%, 0.3%, and 0.4% for chunk lengths of 192, 256, 384, and 512, respectively. The improvement brought by LSTM recurrence and RL chunking is 2.6%, 3.3%, 0.3%, and 0.4% on the CoQA dataset. On the QuAC dataset, linear recurrence combined with RL chunking leads to improvements of 17.1%, 4.6%, 3.2%, and 0.5%, and LSTM recurrence has gains of 19.4%, 5.0%, 3.7%, and 0.3% under the different chunk lengths. We notice that our model is less sensitive to the chunk length, and that linear recurrence has performance comparable to LSTM recurrence.

Our model is shown to enhance performance on both datasets, and the gain is significant when more chunks are generated at a smaller chunk length. We note that the gain is relatively small on the CoQA dataset under the lengths of 384 and 512. This is because the average document length in CoQA is 352; in that case, a single chunk can cover the whole document for most questions. Similarly, the gain is small on the QuAC data at the input length of 512.

To evaluate how well our model extracts answers from long documents, we report its performance on documents of different lengths in Table 3. The maximum sequence length is set to 512 for both the CoQA and QuAC datasets. Although our gain is small over all documents, we note that the gain is more pronounced on long documents. For documents containing more than 400 words in the CoQA dataset, RL chunking with linear recurrence has an improvement of 7.3% over the basic BERT baseline, and RL chunking with LSTM recurrence enhances the F1 score by 7.5%. As for the QuAC data, the improvement of linear recurrence with RL chunking is 4.5%, and the improvement of LSTM recurrence is 2.6%.

Table 3: F1 score on documents of different lengths.

Dataset                          |              CoQA                |              QuAC
Document len                     | <=200 (200,300] (300,400] >400   | <=300 (300,450] (450,600] >=600
Query percentage (%)             | 15.3  63.3      18.9      2.5    | 20.5  52.0      19.7      7.8
Basic BERT                       | 81.0  81.9      81.8      67.2   | 66.2  62.8      62.2      38.7
Linear recurrence w. RL chunking | 81.1  82.1      82.3      74.5   | 66.1  62.6      63.6      43.2
LSTM recurrence w. RL chunking   | 81.1  82.0      82.3      74.7   | 66.4  62.6      63.0      41.3

Ablation Analysis. We have shown the performance gains brought by the combination of recurrent mechanisms and the chunking policy. Here we evaluate the improvements brought by the chunking policy alone.
By comparing the rows LSTM recurrence w. RL chunking and LSTM recurrence o. RL chunking in Table 2, we find that RL chunking alone improves the F1 score by 1.3%, 1.0%, 0.3%, and 0.5% under the chunk length constraints of 192, 256, 384, and 512, respectively, on the CoQA dataset. Its improvements are 4.7%, 4.1%, 4.0%, and 0.2% on the QuAC dataset. The learned chunking policy brings non-trivial gains. We also study the effect of recurrence alone, without RL chunking. As shown in the rows basic BERT and Linear recurrence o. RL chunking in Table 2, linear recurrence alone can improve the F1 score by 2.4%, and LSTM recurrence gives an improvement of 2.3% without RL chunking when the maximum chunk length is 256. We provide more discussion and quantitative analysis of the learned chunking policy in Appendix A.4.

4 RELATED WORK

4.1 CONVERSATIONAL READING COMPREHENSION

Conversational MRC tasks require the understanding of conversations, which either contain a series of questions and answers (Saeidi et al., 2018; Choi et al., 2018; Reddy et al., 2018; Xu et al., 2019) or serve as documents (Ma et al., 2018; Moghe et al., 2018; Sun et al., 2019). In this paper, we focus on large-scale extractive and abstractive conversational MRC tasks with background documents: QuAC (Choi et al., 2018) and CoQA (Reddy et al., 2018). Very recently, significant improvements in performance on conversational MRC tasks have been obtained by leveraging additional extractive non-conversational datasets such as SQuAD (Rajpurkar et al., 2018) and NewsQA (Trischler et al., 2017), which is beyond the scope of this paper.

4.2 ADDRESSING LONG CONTEXTS IN MACHINE READING COMPREHENSION TASKS

To deal with lengthy documents in machine reading comprehension tasks, some previous work skips certain tokens (Yu et al., 2017; Seo et al., 2018) or selects a set of sentences as input based on the given questions (Hewlett et al., 2017; Min et al., 2018; Lin et al., 2018). However, this work mainly focuses on tasks in which most questions are formed by a single informative sentence or are limited to multiple-choice settings (Wang et al., 2019).

5 CONCLUSION

We propose to let a model learn to chunk in a more flexible way via reinforcement learning: the model decides the next chunk that it wants to process, in either direction. We also apply recurrent mechanisms to allow information transfer between chunks. Experiments on two conversational machine reading comprehension tasks — CoQA and QuAC — demonstrate the effectiveness of our mechanisms. We obtain chunks that are more likely to contain complete answers and that, at the same time, cover sufficient context around the correct answer for better answer prediction.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #3 ### Review Text The paper considers the task of extractive QA where the document is longer than the document encoder's size limit. (The conversational part does not play a big role in the method.) One solution is to look at a chunk of document tokens per time step. The paper proposes (1) a way to propagate information between time steps, and (2) an RL policy that selects how many tokens to skip when locating the next chunk. The method was evaluated on CoQA and QuAC, with gains observed when the document is much longer than the chunk size. The observation about the answer's location in the chunk is eye-opening, and the need to select the right document chunk is presented well. The recurrent mechanism and chunk scores look correct. However, the paper has two potential weaknesses:
1. The chunking policy looks too complex for the task and might also be incorrect.
- Most of the time, the number of possible chunks (with 16-token strides) is small. One could score all such chunks for relevance at once without having to use RL to look at chunks sequentially. Efficiency-wise, scoring multiple chunks in a batch might be even cheaper than embedding a single chunk at each time step. This process of selecting relevant document sections is related to retrieval-based reading comprehension [https://arxiv.org/abs/1704.00051 | https://arxiv.org/abs/1808.10628 | https://arxiv.org/abs/1808.06528], where the relevant documents are retrieved for extractive QA.
- In Equation 11, q_c contains parameters theta to be optimized, which I think makes the REINFORCE gradient incorrect.
- Equation 13 is missing a term for the negative class (sum_c (1 - y_c) log (1 - q_c)), but this could simply be a typo.
2. There are a few baselines that could have been tested:
- As stated above, instead of using a policy-based chunk selector, just score q_c on all spans, and use it either to select a span or to compute the span score q_c * p_start * p_end. This is similar to the sentence selector baseline but selecting chunks instead (which is more comparable).
- A model with no recurrence but with RL chunking is missing.
- One upper-bound experiment to try is to just select the span with the correct answer in the middle. This would indicate the amount winnable from doing better chunking.
Questions and comments:
- Page 2: What is the chunk size for the plot?
- Page 2: "... the predicted spans are incomplete" -- Does this mean the span is chopped in the middle?
- What is the distribution of the actions that the policy takes? In particular, does it use the "-16" action at all?
- Appendix A.4: Despite "farmer roast" not appearing, the answer chunk still has a strong prior (due to features such as similarity to the question and negative words).
[EDIT] I am changing my score to weak accept. Refer to the replies for details. ### Review Rating 6: Weak Accept ### Review Confidence <|im_end|> <|im_end|>
rkYgAJWCZ
ICLR.cc/2018/Conference
2018
One-shot and few-shot learning of word embeddings
["Andrew Kyle Lampinen", "James Lloyd McClelland"]
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.
["One-shot learning", "embeddings", "word embeddings", "natural language processing", "NLP"]
ABSTRACT

Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tell us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.

1 INTRODUCTION

Humans are often able to infer approximate meanings of new words from context. For example, consider the following stanza from the poem "Jabberwocky" by Lewis Carroll:

He took his vorpal sword in hand:
Long time the manxome foe he sought
So rested he by the Tumtum tree,
And stood awhile in thought.

Despite the fact that there are several nonsense words, we can follow the narrative of the poem and understand approximately what many of the words mean by how they relate to other words. This is a vital skill for interacting with the world — we constantly need to learn new words and ideas from context. Even beyond language, humans are often able to adapt quickly and gracefully accommodate situations that differ radically from what they have seen before. Complementary learning systems theory (Kumaran et al., 2016) suggests that it is the interaction between a slow-learning system that learns structural features of the world (i.e. a deep-learning-like system) and a fast-learning system (i.e. a memory-like system) that allows humans to adapt rapidly from few experiences.

By comparison, standard deep learning systems usually require much more data to learn a concept or task, and sometimes generalize poorly (Lake et al., 2017). They can be trained to learn a concept in one shot if this is their sole task (Vinyals et al., 2016, e.g.), but this limits the types of tasks that can be performed. Furthermore, these models typically discard this information after a single use. In order for deep learning systems to be adaptable, they will need to build on their prior knowledge to learn effectively from a few pieces of information. In other words, they will need to integrate learning experiences across different timescales, as complementary learning systems theory suggests that humans and other animals do. In this paper, we explore this broad issue in the specific context of creating a useful representation for a new word based on its context.

1.1 BACKGROUND

Continuous representations of words have proven to be very effective (Mikolov et al., 2013; Pennington et al., 2014, e.g.). These approaches represent words as vectors in a space, which are learned from large corpora of text data. Using these vectors, deep learning systems have achieved success on tasks ranging from natural language translation (Wu et al., 2016, e.g.) to question answering (Santoro et al., 2017, e.g.).
However, these word vectors are typically trained on very large datasets, and there has been surprisingly little prior work on how to learn embeddings for new words once the system has been trained. Cotterell et al. (2016) proposed a method for incorporating morphological information into word embeddings that allows for limited generalization to new words (e.g. generalizing to unseen conjugations of a known verb). However, this is not a general system for learning representations for new words, and it requires building in rather strong structural assumptions about the language and having appropriately labelled data to exploit them.

More recently, Lazaridou et al. (2017) explored multi-modal word learning (similar to what children do when an adult points out a new object), and in the process suggested a simple heuristic for inferring a word vector from context: simply average together all the surrounding word vectors. This is sensible, since the surrounding words will likely occur in similar contexts. However, it ignores all syntactic information, by treating all the surrounding words identically, and it relies on the semantic information being linearly combinable between different word embeddings. Both of these factors will likely limit its performance.

Can we do better? A deep learning system which has been trained to perform a language task must have learned a great deal of semantic and syntactic structure which would be useful for inferring and representing the meaning of a new word. However, this knowledge is opaquely encoded in its weights. Is there a way we can exploit this knowledge when learning about a new word?

2 APPROACH

We suggest that we already have a way to update the representations of a network while accounting for its current knowledge and inferences — this is precisely what backpropagation was invented for! Of course, we cannot simply train the whole network to accommodate this new word; this would lead to catastrophic interference. However, Rumelhart & Todd (1993) showed that a simple network could be taught about a new input by freezing all its weights except for those connecting the new input to the first hidden layer, and optimizing these by gradient descent as usual. They showed that this resulted in the network making appropriate generalizations about the new input, and by design the training procedure does not interfere with the network's prior knowledge. They used this as a model for human concept learning (as have other authors, for example Rogers & McClelland (2004)).

We take inspiration from this work to guide our approach. To learn from one sentence (or a few) containing a new word, we freeze all the weights in the network except those representing the new word (in a complex NLP system, there may be more than one such set of weights; for example, the model we evaluate has distinct input and output embeddings for each word). We then use stochastic gradient descent (\eta = 0.01) to update the weights for the new word using 100 epochs of training over the sentence(s) containing it.

Of course, there are a variety of possible initializations for the embeddings before optimizing. In this paper, we consider three possibilities:

1. Beginning with an embedding for a token that was placed in the softmax but never used during training (for the purpose of having a useful initialization for new words). This might help separate the new embedding from other embeddings.
2. Beginning with a vector of zeros.
3. Beginning with the centroid of the other words in the sentence, which Lazaridou et al. (2017) suggested was a useful estimate of an appropriate embedding.

We compare these to two baselines:

1. The centroid of the embeddings of the other words in the sentence (Lazaridou et al., 2017).
2. Training the model with the 10 training sentences included in the corpus from the beginning (i.e. the "standard" deep-learning approach).

(A reader of an earlier draft of this paper noted that Herbelot & Baroni (2017) independently tried some similar strategies, but our results go farther in a variety of ways; see Appendix E.)
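A minimal PyTorch sketch of this procedure is given below, under our own assumptions about the model interface (the attribute names `input_emb`, `output_emb`, and `next_word_loss` are hypothetical): freeze everything, initialize the new word's row from the centroid, and mask gradients so only that row is updated. The output-embedding bias is omitted for brevity.

```python
import torch

def learn_new_word(model, new_word_id, context_ids, sentences,
                   epochs=100, lr=0.01):
    """Optimize only the new word's embedding rows (sketch).

    model: a trained LM; `input_emb` / `output_emb` expose weight matrices
           with one row per vocabulary item (names are our assumption).
    new_word_id: vocabulary index reserved for the new word.
    context_ids: indices of the other words in the sentence(s), used for
                 the centroid initialization (Lazaridou et al., 2017).
    sentences: iterable of token-id tensors containing the new word.
    """
    # Freeze all parameters first.
    for p in model.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        # Centroid initialization of the new word's input embedding.
        model.input_emb.weight[new_word_id] = \
            model.input_emb.weight[context_ids].mean(dim=0)

    # Re-enable gradients on the embedding matrices; the update is then
    # masked so that only the new word's rows actually change.
    model.input_emb.weight.requires_grad_(True)
    model.output_emb.weight.requires_grad_(True)
    opt = torch.optim.SGD(
        [model.input_emb.weight, model.output_emb.weight], lr=lr)

    for _ in range(epochs):
        for sent in sentences:
            opt.zero_grad()
            loss = model.next_word_loss(sent)  # assumed LM-loss interface
            loss.backward()
            # Zero the gradient of every row except the new word's.
            for w in (model.input_emb.weight, model.output_emb.weight):
                keep = w.grad[new_word_id].clone()
                w.grad.zero_()
                w.grad[new_word_id] = keep
            opt.step()
```

The never-seen-token and zero-vector initializations differ only in the line that sets `model.input_emb.weight[new_word_id]`.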
Figure 1: Percent change in perplexity on 10 test sentences containing the new word, plotted vs. the number of training sentences, across four different words, comparing optimizing from three different starting points to the centroid and training-with-the-word baselines. Averages across 10 permutations are shown as dark lines; individual results are shown as light lines. (Note that full training with the word was run only once, with a single permutation of all 10 training sentences.)

2.1 TASK, MODEL, AND APPROACH

The framework we have described for updating embeddings could be applied very generally, but for the sake of this paper we ground it in a simple task: predicting the next word of a sentence based on the previous words, on the Penn Treebank dataset (Marcus et al., 1993). Specifically, we use the (large) model architecture and approach of Zaremba et al. (2014); see Appendix B.1 for details.

Of course, human language understanding is much more complex than just prediction, and grounding language in situations and goals is likely important for achieving deeper understanding of language (Gauthier & Mordatch, 2016). More recent work has begun to do this (Hermann et al., 2017, e.g.), and it is likely that our approach would be more effective in settings like these. The ability of humans to make rich inferences about text stems from the richness of our knowledge. However, for simplicity, we have chosen to first demonstrate our approach on the prediction task.

In order to test our one-shot word-learning algorithm on the Penn Treebank, we chose a word which appeared only 20 times in the training set. We removed the sentences containing this word and then trained the model with the remaining sentences for 55 epochs using the learning-rate decay strategy of Zaremba et al. (2014). Because the PTB dataset contains over 40,000 sentences, the 20 missing ones had essentially no effect on the network's overall performance. We then split the 20 sentences containing the new word into 10 train and 10 test sentences, and trained on 1–10 of these in 10 different permutations (via a balanced Latin square (Campbell & Geller, 1980), which ensures that each sentence was used for one-shot learning once, and enforces diversity in the multi-shot examples). In other words, we performed 100 training runs for each word: 10 training runs each with a distinct single sentence, 10 with a distinct pair of sentences, etc.
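The reported numbers in the results below are percent changes in perplexity on held-out sentences; a sketch of that computation follows, assuming the same hypothetical `next_word_loss` interface returning the mean per-token negative log-likelihood.

```python
import math
import torch

@torch.no_grad()
def percent_change_in_perplexity(model, test_sentences, baseline_ppl):
    """Perplexity = exp(mean token NLL); report change vs. a baseline."""
    total_nll, total_tokens = 0.0, 0
    for sent in test_sentences:
        n_targets = len(sent) - 1  # one next-word target per position
        total_nll += model.next_word_loss(sent).item() * n_targets
        total_tokens += n_targets
    ppl = math.exp(total_nll / total_tokens)
    return 100.0 * (ppl - baseline_ppl) / baseline_ppl
```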
3 RESULTS

We first evaluated our approach on the words "bonuses," "explained," "marketers," and "strategist," either initializing from the never-seen embedding that the network is optimized to never produce, the zero vector, or the centroid of the surrounding words, and compared to the baselines of just taking the centroid of the surrounding words and of full training with the words; see Fig. 1. Optimizing from the centroid outperforms all other approaches for learning a new word on all datasets, including the centroid approach of Lazaridou et al. (2017), and even outperforms full training with the word (see footnote 1) in three of the four cases (however, this likely comes with a tradeoff; see below). The optimizing approaches are strongly affected by the embedding initialization with few training sentences (e.g. one-shot learning), but by 10 sentences they all perform quite similarly.

Of course, this learning might still cause interference with the network's prior knowledge. In order to evaluate this, we replicated these findings with four new words ("borrow," "cowboys," "immune," and "rice"), but also evaluated the change in perplexity on the PTB test corpus (see Appendix A, Fig. 5). Indeed, we found that while the centroid method does not substantially change the test perplexity on the corpus, the optimizing method causes increasingly more interference as more training sentences are provided, up to a 1% decline in the case of training on all 10 sentences.

This is not necessarily surprising: the base rate of occurrence of the new word in the training data is artificially inflated relative to its true base-rate probability. This problem of learning from data which is locally highly correlated and biased has been solved in many recent domains by the use of replay buffers to interleave learning, especially in reinforcement learning (Mnih et al., 2015, e.g.). The importance of replay is also highlighted in the complementary learning systems theory (Kumaran et al., 2016) that helped inspire this work. We therefore tested whether using a replay buffer while learning the new word would ameliorate this interference. Specifically, we sampled 100 negative sentences from the without-word corpus the network was pre-trained on, and interleaved these at random with the new-word training sentences (the same negative samples were used every epoch).

Indeed, interleaving sentences without the word substantially reduced the interference caused by the new word; see Fig. 2. The maximum increase in perplexity was 0.06%. This interleaving did result in somewhat less improvement on the new-word test sentences, but this is probably simply because the test sentences over-represent the new word, and the network was overfitting to this and predicting the new word much more often than is warranted. The optimizing approach still reduces perplexity on the new-word dataset by up to 33% (about 10 percentage points better than the centroid approach).
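A sketch of the replay-style training set described above: the new-word sentences interleaved at random with 100 negative sentences drawn once from the original corpus. The sampling and shuffling details are our assumptions.

```python
import random

def build_replay_training_set(new_word_sentences, corpus_without_word,
                              n_negative=100, seed=0):
    """Interleave new-word sentences with fixed negative samples.

    The same negative sentences are reused every epoch, as in the paper;
    only the interleaving order is re-shuffled per epoch.
    """
    rng = random.Random(seed)
    negatives = rng.sample(corpus_without_word, n_negative)

    def epoch():
        batch = list(new_word_sentences) + negatives
        rng.shuffle(batch)
        return batch

    return epoch  # call once per epoch to get a shuffled training order
```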
3.1 WHERE IS THE MAGIC HAPPENING?

Because the model we are using has distinct input and output embeddings, we are able to evaluate their distinct contributions to learning about the new word. Specifically, we compared learning only the softmax weights and bias (the output embedding) to learning only the input embedding, as well as to learning both, for one- and ten-shot learning (see footnote 2). See Fig. 3 for our results.

We found that the changes in the output embeddings were almost entirely responsible for the overall improvement. In one-shot learning, changing the input embedding alone causes almost no improvement, and changing both embeddings does not seem substantially different from changing just the output embedding. However, with ten training sentences the updated input embedding produces some improvement, both alone and when trained together with the output embedding. Even in this case, however, the effect of the input embedding is still much smaller than the effect of the output embedding. From this evidence, it seems likely that the model is mostly improving at predicting the new word in context, rather than predicting context based on the new word.

This is sensible for several reasons. First, whatever the new word conveys about the context will already be partly conveyed by the other words in the context. Second, our training approach was more unnatural than the situations in which a human might experience a new word, or even than the pretraining of the network, in that the sentences were presented without the context of surrounding sentences. This means that the model has less data to learn how the new word predicts the surrounding context, and less information about the context which predicts this word. This may also explain why full training with the word still produced better results in some cases than updating for it. Finally, efficiently passing information about the new word through the model from the input might require adjustments to the intermediate weights, which were frozen.

[Footnote 1: It is interesting to note that only one of these words appears in the PTB test data ("cowboys"), and it appears only once. Why then does full training with the word result in lowered test perplexity on the PTB test data in three out of four cases? This may just be chance variation, of course, but in general we would expect learning about a word to be useful not just for the sake of learning about that word, but because each word is a small piece of the signal by which the network learns about language more generally.]

[Footnote 2: Note that these analyses were conducted before we incorporated the replay buffer, but we expect the general pattern of the results would not be altered by including replay.]

Figure 2: (a) Percent change in perplexity on 10 test sentences containing the new word. (b) Percent change in perplexity on the full PTB test corpus. Comparing the full-training-with-the-word, centroid, and optimizing-from-the-centroid approaches on both the new-word dataset and the full test corpus (to assess interference), while using 100 negatively sampled sentences for replay. When using a replay buffer, learning new words does not interfere substantially with prior knowledge.

Figure 3: Comparing the change in perplexity on the new-word test set when optimizing the input embedding, the output embedding, or both, on either 1 or 10 sentences containing the new word. Light lines are 10 independent runs; dark lines are averages.

3.2 DIGGING DEEPER INTO MODEL PERFORMANCE

Table 1: Average log-probabilities of the new word when: the word is the current target, the new word is not the current target but does appear in the current sentence, and the word does not appear in the sentence or context. (Analysis computed with 10 training sentences; patterns are similar but less severe with fewer sentences, see Appendix C.)

                              | New word is correct | Wrong but relevant | Wrong and irrelevant
Full training with the word   | -9.21               | -12.75             | -15.13
Centroid                      | -9.16               | -9.46              | -10.44
Optimizing from centroid      | -6.20               | -9.32              | -10.91

In order to dig more deeply into the effects of learning the new word's embedding, we conducted more detailed analyses of the model's predictions of the probability of the new word in different contexts. Specifically, we evaluated how well the word was predicted in three cases: when it was the actual target word, when it was not the current target but did appear in the sentence ("wrong but relevant"), and when it did not appear in the sentence at all ("wrong and irrelevant").
This allowed us to investigate whether the model was learning something useful, or simply overfitting to the new data. We compared the average log-probability the model assigned to the new word in each of these cases for the full-training-with-the-word baseline, the centroid approach to learning from 10 sentences, and our approach (with a replay buffer). The relevant cases were evaluated on our held-out test data for that word; the irrelevant case was evaluated on the first 10 sentences of the PTB test corpus (an article that does not contain any of the words we used). See Table 1 for our results.

The model fully trained with the word shows clear distinctions between the three conditions — the word is estimated to be about 10 times more likely in contexts where it appears than in irrelevant contexts, and about 25 times more likely again when it actually appears. However, the model severely underestimates the probability of the word when it does appear; the word would have a similar log-probability under a uniform distribution over the whole vocabulary. The centroid method also has this issue, but in addition it does not distinguish particularly well between contexts. The word is only estimated to be about 4 times more likely when it is the target than in completely irrelevant contexts.

By contrast, our approach results in a good distinction between contexts — the word is predicted to be about 5 times as likely in contexts where it appears compared to the irrelevant context, and about 25 times as likely again when the word actually appears. These relative probabilities are quite similar to those exhibited by the model fully trained with the word. In both respects, our approach appears superior to the centroid approach. When compared to full training with the word, however, it appears that the base-rate estimates of the prevalence of the word are inflated (which is sensible: even with 100 negative samples per positive sample in our training data, the prevalence of the word is much higher than in the actual corpus). This explains the residual small increase in test perplexity on the dataset not containing the new word. It is possible that this could be ameliorated either by setting a prior on the bias for the new word (perhaps penalizing the \ell_2 norm of the distance of the bias from the values for other rare words), or with a validation set, or just by using more negative samples during training. In any case, the optimized embeddings are capturing some of the important features of when the word does and does not appear, and are doing so more effectively than the centroid approach.

3.3 MORE WORDS

Up to this point, we have presented all the results in this paper broken down by word rather than as averages, because there were large word-by-word differences on almost every analysis, and we evaluated on relatively few new words. In order to establish the generality of our findings with a larger sample of words, we ran an additional experiment that spans the space of words more broadly.

In this experiment, we selected 100 of the 150 words that appear exactly 20 times in the PTB train corpus (omitting the words we used in prior experiments; see Appendix B.1.3 for a complete list). Instead of training separate models without each word as we had previously, we trained a single model with none of these words included in the train set.
We then tested our few-shot learning technique with a replay buffer (optimizing from the centroid) and the centroid technique on these sentences, and compared to results obtained from full training with all words — a model trained with the entire train corpus, including the train sentences for each of the hundred words. (In all cases, the same 10 of the 20 sentences containing the new word were used in training, and the other 10 were used for testing.) Notice that the comparison to "full training with all words" is not as precise as in our previous experiments — the model receives about 2.5% more training data overall than any of the few-shot learning models, which means it will have more linguistic structure from which to learn the new words, as well as the advantage of interleaving them. However, the comparisons between our technique and the centroid technique are still valid, and the comparison to full training with all words gives a worst-case bound on how poorly the one-shot methods will do compared to full training. With this in mind, see Fig. 4 (and Appendix A, Fig. 6) for our results.

As before, optimizing from the centroid performed much better than simply using the centroid — on average it produced a 64% (11 percentage point) improvement over the centroid result, and on none of the 100 words did it perform worse than the centroid method. More quantitatively, the optimizing method performed significantly better (paired t-test, t(99) = 20.5, p < 1×10^{-16}). Furthermore, despite the disadvantage of being exposed to less total data, the optimizing approach did approximately as well as the full-training approach on average — although the full-training approach sometimes resulted in much larger improvements, the results did not significantly differ (paired t-test, t(99) = 0.9, p = 0.39). Across a wide variety of words, optimizing improves on the centroid approach and performs comparably to full training.

Figure 4: Percent change in perplexity on 100 new words from applying the centroid, optimizing from the centroid, and full training with all words. Ten sentences containing the new word were used in training and 10 in testing. Large solid dots indicate the change in the mean; smaller dots indicate the change for individual words.

4 DISCUSSION

Overall, our technique of updating only the embedding vectors of a word — training on sentences containing it together with negative sentences sampled from the network's past experience — seems quite effective. It allows for substantial reductions in perplexity on text containing the new word without greatly interfering with knowledge about other words. Furthermore, it seems to capture more useful structure about how the word is used in context than previous approaches, and it performs close to as well as full training with the word. These results are exciting beyond their potential applications to natural language processing — this technique could easily be extended to adapting systems to other types of new experiences; for example, a vision network for an RL agent could have a few new filters per layer added and trained to accommodate a new type of object.
Under what circumstances will this strategy fail? Complementary learning systems theory (Kumaran et al., 2016), from which we drew inspiration, suggests that information which is schema-consistent (i.e. fits in with the network's previous knowledge) can be integrated easily, whereas schema-inconsistent knowledge (i.e. knowledge that differs from the network's previous experience) will cause interference. Similar principles should apply here. Our approach should work for learning a new word on a topic which is already somewhat familiar, but would likely fail to learn from a new word in a context that is not well understood. For example, it would be difficult to learn a new German word from context if the model has only experienced English.

On the other hand, this perspective also offers promise. We expect that our technique would perform even better in a system that had a more sophisticated understanding of language, because it would have more prior knowledge from which to bootstrap understanding of new words. Thus it would be very interesting to apply our technique to more complicated tasks like question answering, such as Santoro et al. (2017), or in a grounded context, such as Hermann et al. (2017).

5 CONCLUSIONS

We have presented a technique for doing one- or few-shot learning of word embeddings from text data: freeze all the weights in the network except the embeddings for the new word, and then optimize these embeddings for the sentence, interleaving with negative examples from the network's prior experience and stopping early. This results in substantial improvement in the ability to predict the word in context, with minimal impairment of the prediction of other words. This technique could allow natural language processing systems to adapt more flexibly to a changing world, as humans do. More generally, it could serve as a model for how to integrate rapid adaptation into deep learning systems.
HJnrNAtlG
Review
3: Clear rejection
The paper proposes a technique for exploiting prior knowledge to learn embedding representations for new words with minimal data. The authors provide a good motivation for the task, and it is also a nice step in the general direction of learning deep nets and other systems with minimal supervision. The problem is useful and very relevant to natural language applications, especially considering the widespread use of word embeddings within NLP systems. However, the demonstrated experimental results do not match the claims, which seem a little grand. Overall, the empirical results are unsatisfactory. The authors pick a few example words and provide a detailed analysis. This is useful for understanding how the test perplexity varies with the number of training examples in these individual settings. However, it is hardly enough to draw conclusions about the general applicability of the technique or the effectiveness of the results. Why were these specific words chosen? If the reason is some statistical property (e.g., frequency) observed in the corpus, then why not generalize this idea and demonstrate empirical results for a class of words exhibiting the property? Such an analysis would be useful for understanding the effectiveness of the overall approach. Another idea would be to use the one/few-shot learning to learn embeddings and evaluate their quality on a semantic task (as suggested in Section 3.3), but on a larger scale. The technical contributions are also not novel. Coupled with the narrow experimentation protocol, this does not make the paper's contributions or proposed claims convincing.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title One-shot and few-shot learning of word embeddings ### Paper Abstract Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter. ### Paper Keywords ["One-shot learning", "embeddings", "word embeddings", "natural language processing", "NLP"] ### Paper Content ABSTRACTStandard deep learning systems require thousands or millions of examples tolearn a concept, and cannot integrate new concepts easily. By contrast, humanshave an incredible ability to do one-shot or few-shot learning. For instance, fromjust hearing a word used in a sentence, humans can infer a great deal about it,by leveraging what the syntax and semantics of the surrounding words tells us.Here, we draw inspiration from this to highlight a simple technique by which deeprecurrent networks can similarly exploit their prior knowledge to learn a usefulrepresentation for a new word from little data. This could make natural languageprocessing systems much more flexible, by allowing them to learn continually fromthe new words they encounter.1 I NTRODUCTIONHumans are often able to infer approximate meanings of new words from context. For example,consider the following stanza from the poem “Jabberwocky” by Lewis Carroll:He took his vorpal sword in hand:Long time the manxome foe he soughtSo rested he by the Tumtum tree,And stood awhile in thought.Despite the fact that there are several nonsense words, we can follow the narrative of the poem andunderstand approximately what many of the words mean by how they relate other words. This a vitalskill for interacting with the world – we constantly need to learn new words and ideas from context.Even beyond language, humans are often able adapt quickly to gracefully accomodate situations thatdiffer radically from what they have seen before. Complementary learning systems theory (Kumaranet al., 2016) suggests that it is the interaction between a slow-learning system that learns structuralfeatures of the world (i.e. a deep-learning like system) and a fast-learning system (i.e. a memory-likesystem) that allows humans to adapt rapidly from few experiences.By comparison, standard deep learning systems usually require much more data to learn a conceptor task, and sometimes generalize poorly Lake et al. (2017). They can be trained to learn a conceptin one-shot if this is their sole task (Vinyals et al., 2016, e.g.), but this limits the types of tasks thatcan be performed. Furthermore, these models typically discard this information after a single use.In order for deep learning systems to be adaptible, they will need to build on their prior knowledgeto learn effectively from a few pieces of information. 
In other words, they will need to integratelearning experiences across different timescales, as complementary learning systems theory suggeststhat humans and other animals do. In this paper, we explore this broad issue in the specific context ofcreating a useful representation for a new word based on its context.1.1 B ACKGROUNDContinuous representations of words have proven to be very effective (Mikolov et al., 2013; Penning-ton et al., 2014, e.g.). These approaches represent words as vectors in a space, which are learned fromlarge corpuses of text data. Using these vectors, deep learning systems have achieved success ontasks ranging from natural language translation (Wu et al., 2016, e.g.) to question answering (Santoroet al., 2017, e.g.).1Under review as a conference paper at ICLR 2018However, these word vectors are typically trained on very large datasets, and there has been sur-prisingly little prior work on how to learn embeddings for new words once the system has beentrained. Cotterell et al. (2016) proposed a method for incorporating morphological information intoword embeddings that allows for limited generalization to new words (e.g. generalizing to unseenconjugations of a known verb). However, this is not a general system for learning representationsfor new words, and requires building in rather strong structural assumptions about the language andhaving appropriately labelled data to exploit them.More recently Lazaridou et al. (2017) explored multi-modal word learning (similar to what childrendo when an adult points out a new object), and in the process suggested a simple heuristic forinferring a word vector from context: simply average together all the surrounding word vectors. Thisis sensible, since the surrounding words will likely occur in similar contexts. However, it ignores allsyntactic information, by treating all the surrounding words identically, and it relies on the semanticinformation being linearly combinable between different word embeddings. Both of these factorswill likely limit its performance.Can we do better? A deep learning system which has been trained to perform a language task musthave learned a great deal of semantic and syntactic structure which would be useful for inferringand representing the meaning of a new word. However, this knowledge is opaquely encoded in itsweights. Is there a way we can exploit this knowledge when learning about a new word?2 A PPROACHWe suggest that we already have a way to update the representations of a network while accountingfor its current knowledge and inferences – this is precisely what backpropagation was invented for!Of course, we cannot simply train the whole network to accomodate this new word, this would leadto catastrophic interference. However, Rumelhart & Todd (1993) showed that a simple network couldbe taught about a new input by freezing all its weights except for those connecting the new inputto the first hidden layer, and optimizing these by gradient descent as usual. They showed that thisresulted in the network making appropriate generalizations about the new input, and by design thetraining procedure does not interfere with the network’s prior knowledge. They used this as a modelfor human concept learning (as have other authors, for example Rogers & McClelland (2004)).We take inspiration from this work to guide our approach. 
To learn from one sentence (or a few)containing a new word, we freeze all the weights in the network except those representing the newword (in a complex NLP system, there may be more than one such set of weights, for example themodel we evaluate has distinct input and output embeddings for each word). We then use stochasticgradient descent ( = 0:01) to update the weights for the new word using 100 epochs of training overthe sentence(s) containing it.Of course, there are a variety of possible initializations for the embeddings before optimizing. In thispaper, we consider three possibilities:1.Beginning with an embedding for a token that was placed in the softmax but never usedduring training (for the purpose of having a useful initialization for new words). This mighthelp separate the new embedding from other embeddings.2. Beginning with a vector of zeros.3.Beginning with the centroid of the other words in the sentence, which Lazaridou et al. (2017)suggested was a useful estimate of an appropriate embedding.We compare these to two baselines:1. The centroid of the embeddings of the other words in the sentence Lazaridou et al. (2017).2.Training the model with the 10 training sentences included in the corpus from the beginning(i.e. the “standard” deep-learning approach).(A reader of an earlier draft of this paper noted that Herbelot & Baroni (2017) independently triedsome similar strategies, but our results go farther in a variety of ways, see Appendix E.)2Under review as a conference paper at ICLR 2018Figure 1: Percent change in perplexity on 10 test sentences containing new word, plotted vs. thenumber of training sentences, across four different words, comparing optimizing from three differentstarting points to centroid and training with the word baselines. Averages across 10 permutations areshown in the dark lines, the individual results are shown in light lines. (Note that the full trainingwith the word was only run once, with a single permutation of all 10 training sentences.)2.1 T ASK, MODEL ,AND APPROACHThe framework we have described for updating embeddings could be applied very generally, but forthe sake of this paper we ground it in a simple task: predicting the next word of a sentence based onthe previous words, on the Penn Treebank dataset (Marcus et al., 1993). Specifically, we will use the(large) model architecture and approach of Zaremba et al. (2014), see Appendix B.1 for details.Of course, human language understanding is much more complex than just prediction, and groundinglanguage in situations and goals is likely important for achieving deeper understanding of language(Gauthier & Mordatch, 2016). More recent work has begun to do this (Hermann et al., 2017, e.g),and it’s likely that our approach would be more effective in settings like these. The ability of humansto make rich inferences about text stems from the richness of our knowledge. However, for simplicity,we have chosen to first demonstrate it on the prediction task.In order to test our one-shot word-learning algorithm on the Penn Treebank, we chose a word whichappeared only 20 times in the training set. We removed the sentences containing this word andthen trained the model with the remaining sentences for 55 epochs using the learning rate decaystrategy of Zaremba et al. (2014). Because the PTB dataset contains over 40,000 sentences, the 20missing ones had essentially no effect on the networks overall performance. 
We then split the 20sentences containing the new word into 10 train and 10 test, and trained on 1 - 10 of these in 10different permutations (via a balanced Latin square (Campbell & Geller, 1980), which ensures thateach sentence was used for one-shot learning once, and enforces diversity in the multi-shot examples).In other words, we performed 100 training runs for each word, 10 training runs each with a distinctsingle sentence, 10 with a distinct pair of sentences, etc.3Under review as a conference paper at ICLR 20183 R ESULTSWe first evaluated our approach on the words “bonuses,” “explained,” “marketers,” and “strategist,”either initializing from the never-seen embedding that the network is optimized to never produce, thezero vector, or the centroid of the surrounding words, and compared to the baselines of just takingthe centroid of the surrounding words and full training with the words, see Fig. 1. Optimizing fromthe centroid outperforms all other approaches for learning a new word for all datasets, including thecentroid approach of Lazaridou et al. (2017), and even outperforms full training with the word1in threeof the four cases (however, this likely comes with a tradeoff, see below). The optimizing approachesare strongly affected by embedding initialization with few training sentences (e.g. one-shot learning),but by 10 sentences they all perform quite similarly.Of course, this learning might still cause interference with the networks prior knowledge. In order toevaluate this, we replicated these findings with four new words (“borrow,” “cowboys”, “immune,” and“rice”), but also evaluated the change in perplexity on the PTB test corpus (see Appendix A, Fig. 5).Indeed, we found that while the centroid method does not substantially change the test perplexity onthe corpus, the optimizing method causes increasingly more interference as more training sentencesare provided, up to a 1% decline in the case of training on all 10 sentences.This is not necessarily surprising, the base rate of occurrence of the new word in the training datais artificially inflated relative to its true base rate probability. This problem of learning from datawhich is locally highly correlated and biased has been solved in many recent domains by the use ofreplay buffers to interleave learning, especially in reinforcement learning (Mnih et al., 2015, e.g).The importance of replay is also highlighted in the complementary learning systems theory (Kumaranet al., 2016) that helped inspire this work. We therefore tested whether using a replay buffer whilelearning the new word would ameliorate this interference. Specifically, we sampled 100 negativesentences from the without-word corpus the network was pre-trained on, and interleaved these atrandom with the new word training sentences (the same negative samples were used every epoch).Indeed, interleaving sentences without the word substantially reduced the interference caused by thenew word, see Fig. 2. The maximum increase in perplexity was 0:06%. This interleaving did resultin somewhat less improvement on the new-word test sentences, but this is probably simply becausethe test sentences over-represent the new word and the network was overfitting to this and predictingthe new word much more than is warranted. 
The optimizing approach still reduces perplexity on the new-word dataset by up to 33% (about 10 percentage points better than the centroid approach).

Figure 2: Comparing the full-training-with-the-word, centroid, and optimizing-from-the-centroid approaches on (a) the 10 test sentences containing the new word and (b) the full PTB test corpus (to assess interference), while using 100 negatively sampled sentences for replay. When using a replay buffer, learning new words does not interfere substantially with prior knowledge.

3.1 WHERE IS THE MAGIC HAPPENING?

Because the model we are using has distinct input and output embeddings, we are able to evaluate their distinct contributions to learning about the new word. Specifically, we compared learning only the softmax weights and bias (output embedding), to learning only the input embedding, as well as to learning both, for one- and ten-shot learning. [2] See Fig. 3 for our results.

[Footnote 2: Note that these analyses were conducted before we incorporated the replay buffer, but we expect the general pattern of the results would not be altered by including replay.]

Figure 3: Comparing change in perplexity on the new-word test set when optimizing the input embedding, output embedding, or both on either 1 or 10 sentences containing the new word. Light lines are 10 independent runs; dark lines are averages.

We found that the changes in the output embeddings were almost entirely responsible for the overall improvement. In one-shot learning, changing the input embedding alone causes almost no improvement, and changing both embeddings does not seem substantially different from changing just the output embedding. However, with ten training sentences the updated input embedding is producing some improvement, both alone and when trained together with the output embedding. Even in this case, however, the effect of the input embedding is still much smaller than the effect of the output embedding. From this evidence, it seems likely that the model is mostly improving in predicting the new word in context, rather than predicting context based on the new word.
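Concretely, the three conditions amount to choosing which parameter subsets to unfreeze; a hypothetical sketch reusing the module names assumed in the Section 2 sketch:

```python
def trainable_tensors(model, condition):
    """Pick which embedding tensors to optimize (names are assumptions).

    condition: "input"  -> input embedding only
               "output" -> softmax weights and bias only
               "both"   -> all three tensors
    """
    groups = {
        "input": [model.encoder.weight],
        "output": [model.decoder.weight, model.decoder.bias],
        "both": [model.encoder.weight,
                 model.decoder.weight, model.decoder.bias],
    }
    return groups[condition]

# The row-masked SGD loop from the earlier sketch can then be run with,
# e.g., torch.optim.SGD(trainable_tensors(model, "output"), lr=0.01).
```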
This conclusion is sensible for several reasons. First, whatever the new word conveys about the context will already be partly conveyed by the other words in the context. Second, our training approach was more unnatural than the situations in which a human might experience a new word, or even than the pretraining for the network, in that the sentences were presented without the context of surrounding sentences. This means that the model has less data to learn about how the new word predicts surrounding context, and less information about the context which predicts this word. This may also explain why full training with the word still produced better results in some cases than updating for it. Finally, efficiently passing information about the new word through the model from the input might require adjustments to the intermediate weights, which were frozen.

3.2 DIGGING DEEPER INTO MODEL PERFORMANCE

In order to dig more deeply into the effects of learning the new word's embedding, we conducted more detailed analyses of the model's predictions of the probability of the new word in different contexts. Specifically, we evaluated how well the word was predicted in three cases: when it was the actual target word, when it was not the current target but did appear in the sentence ("wrong but relevant"), and when it did not appear in the sentence at all ("wrong and irrelevant"). This allowed us to investigate whether the model was learning something useful, or simply overfitting to the new data. We compared the average log-probability the model assigned to the new word in each of these cases for the full-training-with-the-word baseline, the centroid approach to learning from 10 words, and our approach (with a replay buffer). The relevant cases were evaluated on our held-out test data for that word; the irrelevant case was evaluated on the first 10 sentences of the PTB test corpus (an article that does not contain any of the words we used). See Table 1 for our results.

Table 1: Average log-probabilities of the new word when: the word is the current target; the new word is not the current target but does appear in the current sentence; and the word doesn't appear in the sentence or context. (Analysis computed with 10 training sentences; patterns are similar but less severe with fewer sentences, see Appendix C.)

                              New word      Wrong but    Wrong and
                              is correct    relevant     irrelevant
Full training with the word   -9.21         -12.75       -15.13
Centroid                      -9.16         -9.46        -10.44
Optimizing from centroid      -6.20         -9.32        -10.91

The model fully trained with the word shows clear distinctions between the three conditions: the word is estimated to be about 10 times more likely in contexts where it appears than in irrelevant contexts, and is estimated to be about 25 times more likely again when it actually appears. However, the model severely underestimates the probability of the word when it does appear; the word would have a similar log probability under a uniform distribution over the whole vocabulary. The centroid method also has this issue, but in addition it does not even distinguish particularly well between contexts. The word is only estimated to be about 4 times more likely when it is the target than in completely irrelevant contexts.

By contrast, our approach results in a good distinction between contexts: the word is predicted to be about 5 times as likely in contexts where it appears compared to the irrelevant context, and about 25 times as likely again when the word actually appears. These relative probabilities are quite similar to those exhibited by the model fully trained with the word. In both respects, it appears superior to the centroid approach. When compared to the full training with the word, however, it appears that the base-rate estimates of the prevalence of the word are inflated (which is sensible: even with 100 negative samples per positive sample in our training data, the prevalence of the word is much higher than in the actual corpus). This explains the residual small increase in test perplexity on the dataset not containing the new word.
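The context-conditioned log-probabilities in Table 1 might be computed along these lines; a sketch, assuming the model returns per-position logits for a single sentence:

```python
import torch

@torch.no_grad()
def avg_new_word_logprob(model, sentences, new_word_id, mode):
    """Mean log p(new_word) at selected positions (a sketch).

    mode: "target"     -> positions where the new word is the target
          "relevant"   -> other positions (pass sentences containing it)
          "irrelevant" -> all positions (pass sentences without the word)
    """
    vals = []
    for x, y in sentences:               # x: inputs, y: next-word targets
        logp = model(x).log_softmax(-1)  # (T, vocab) per-position log-probs
        for t in range(len(y)):
            is_target = int(y[t]) == new_word_id
            if mode == "target" and is_target:
                vals.append(logp[t, new_word_id])
            elif mode == "relevant" and not is_target:
                vals.append(logp[t, new_word_id])
            elif mode == "irrelevant":
                vals.append(logp[t, new_word_id])
    return torch.stack(vals).mean().item()
```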
It is possible that this could be ameliorated either by setting a prior on the bias for the new word (perhaps penalizing the L2 norm of the distance of the bias from the values for other rare words), or with a validation set, or just by using more negative samples during training. In any case, the optimized embeddings are capturing some of the important features of when the word does and does not appear, and are doing so more effectively than the centroid approach.

3.3 MORE WORDS

Up to this point, we have presented all the results in this paper broken down by word rather than as averages, because there were large word-by-word differences on almost every analysis, and we evaluated on relatively few new words. In order to establish the generality of our findings with a larger sample of words, we ran an additional experiment that spans the space of words more broadly.

In this experiment, we selected 100 of the 150 words that appear exactly 20 times in the PTB train corpus (omitting the words we used in prior experiments; see Appendix B.1.3 for a complete list). Instead of training separate models without each word as we had previously, we trained a single model with none of these words included in the train set. We then tested our few-shot learning technique with a replay buffer (optimizing from centroid) and the centroid technique on these sentences, and compared to results obtained from full training with all words: a model trained with the entire train corpus, including the train sentences for each of the hundred words. (In all cases, the same 10 of the 20 sentences containing the new word were used in training, and the other 10 were used for testing.) Notice that the comparison to "full training with all words" is not as precise as in our previous experiments: the model receives about 2.5% more training data overall than any of the few-shot learning models, which means it will have more linguistic structure to learn the new words from, as well as the advantage of interleaving them. However, the comparisons between our technique and the centroid technique are still valid, and the comparison to full training with all words gives a worst-case bound on how poorly the one-shot methods will do compared to full training. With this in mind, see Fig. 4 (and Appendix A, Fig. 6) for our results.

As before, optimizing from the centroid performed much better than simply using the centroid: on average it produced a 64% (11 percentage point) improvement over the centroid result, and on none of the 100 words did it perform worse than the centroid method. More quantitatively, the optimizing method performed significantly better (paired t-test, t(99) = 20.5, p < 1x10^-16). Furthermore, despite the disadvantage of being exposed to less total data, the optimizing approach seems to do approximately as well as the full training approach on average: although the full training approach sometimes results in much larger improvements, the results did not significantly differ (paired t-test, t(99) = 0.9, p = 0.39). Across a wide variety of words, optimizing improves on the centroid approach, and performs comparably to full training.
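The significance tests quoted above are standard paired t-tests over the same 100 words; for example (with simulated placeholder data standing in for the real per-word results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder per-word percent perplexity changes (100 words each);
# in the paper these come from the experiments, not simulation.
optimizing = rng.normal(-28, 8, size=100)  # optimizing from the centroid
centroid = rng.normal(-17, 8, size=100)    # centroid baseline

t, p = stats.ttest_rel(optimizing, centroid)  # paired across the same words
print(f"t({len(optimizing) - 1}) = {t:.1f}, p = {p:.3g}")
```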
Figure 4: Percent change in perplexity on 100 new words from applying the centroid, optimizing from the centroid, and full training with all words. Ten sentences containing the new word were used in training and 10 were used in testing. Large solid dots indicate the change in the mean; smaller dots indicate the change for individual words.

4 DISCUSSION

Overall, using our technique of updating only the embedding vectors of a word while training on sentences containing it and negative sampled sentences from the network's past experience seems quite effective. It allows for substantial reductions in perplexity on text containing the new word, without greatly interfering with knowledge about other words. Furthermore, it seems to be capturing more useful structure about how the word is used in context than previous approaches, and performs close to as well as full training with the word. These results are exciting beyond their potential applications to natural language processing: this technique could easily be extended to adapting systems to other types of new experiences; for example, a vision network for an RL agent could have a few new filters per layer added and trained to accommodate a new type of object.

Under what circumstances will this strategy fail? Complementary learning systems theory (Kumaran et al., 2016), from which we drew inspiration, suggests that information which is schema-consistent (i.e., fits in with the network's previous knowledge) can be integrated easily, whereas schema-inconsistent knowledge (i.e., knowledge that differs from the network's previous experience) will cause interference. Similar principles should apply here. Our approach should work for learning a new word on a topic which is already somewhat familiar, but would likely fail to learn from a new word in a context that is not well understood. For example, it would be difficult to learn a new German word from context if the model has only experienced English.

On the other hand, this perspective also offers promise. We expect that our technique would perform even better in a system that had a more sophisticated understanding of language, because it would have more prior knowledge from which to bootstrap understanding of new words. Thus it would be very interesting to apply our technique on more complicated tasks like question answering, such as Santoro et al. (2017), or in a grounded context, such as Hermann et al. (2017).

5 CONCLUSIONS

We have presented a technique for doing one- or few-shot learning of word embeddings from text data: freeze all the weights in the network except the embeddings for the new word, and then optimize these embeddings for the sentence, interleaving with negative examples from the network's prior experience and stopping early. This results in substantial improvement of the ability to predict the word in context, with minimal impairment of prediction of other words. This technique could allow natural language processing systems to adapt more flexibly to a changing world, like humans do. More generally, it could serve as a model for how to integrate rapid adaptation into deep learning systems.<|im_end|> <|im_start|>assistant ### Review Title Review ### Review Text The paper proposes a technique for exploiting prior knowledge to learn embedding representations for new words with minimal data. The authors provide a good motivation for the task, and it is also a nice step in the general direction of learning deep nets and other systems with minimal supervision. The problem is useful and very relevant to natural language applications, especially considering the widespread use of word embeddings within NLP systems.
However, the demonstrated experimental results do not match the claims, which seem a little grand. Overall, the empirical results are unsatisfactory. The authors pick a few example words and provide a detailed analysis. This is useful for understanding how the test perplexity varies with the number of training examples for these individual settings. However, it is hardly enough to draw conclusions about the general applicability of the technique or the effectiveness of the results. Why were these specific words chosen? If the reason is due to some statistical property (e.g., frequency) observed in the corpus, then why not generalize this idea and demonstrate empirical results for a class of words exhibiting the property? Such an analysis would be useful to understand the effectiveness of the overall approach. Another idea would be to use the one/few-shot learning to learn embeddings and evaluate their quality on a semantic task (as suggested in Section 3.3), but on a larger scale. The technical contributions are also not novel. Coupled with the narrow experimentation protocol, this does not make the paper's contributions or proposed claims convincing. ### Review Rating 3: Clear rejection ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
Sklyn6EYvH
ICLR.cc/2020/Conference
2020
Disentangled Representation Learning with Sequential Residual Variational Autoencoder
["Nanxiang Li", "Shabnam Ghaffarzadegan", "Liu Ren"]
Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction. We propose Sequential Residual Variational Autoencoder (SR-VAE) that defines a "Residual learning" mechanism as the training regime instead of the augmented objective function. Our proposed solution deploys two important ideas in a single framework: (1) learning from the residual between the input data and the accumulated reconstruction of sequentially added latent variables; (2) decomposing the reconstruction into decoder output and a residual term. This formulation encourages the disentanglement in the latent space by inducing explicit dependency structure, and reduces the bottleneck of VAE by adding the residual term to facilitate reconstruction. More importantly, SR-VAE eliminates the hyperparameter tuning, a crucial step for the prior state-of-the-art performance using the objective function augmentation approach. We demonstrate both qualitatively and quantitatively that SR-VAE improves the state-of-the-art unsupervised disentangled representation learning on a variety of complex datasets.
["Disentangled Representation Learning", "Variational Autoencoder", "Residual Learning"]
ABSTRACT

Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction. We propose Sequential Residual Variational Autoencoder (SR-VAE) that defines a "Residual learning" mechanism as the training regime instead of the augmented objective function. Our proposed solution deploys two important ideas in a single framework: (1) learning from the residual between the input data and the accumulated reconstruction of sequentially added latent variables; (2) decomposing the reconstruction into decoder output and a residual term. This formulation encourages the disentanglement in the latent space by inducing explicit dependency structure, and reduces the bottleneck of VAE by adding the residual term to facilitate reconstruction. More importantly, SR-VAE eliminates the hyperparameter tuning, a crucial step for the prior state-of-the-art performance using the objective function augmentation approach. We demonstrate both qualitatively and quantitatively that SR-VAE improves the state-of-the-art unsupervised disentangled representation learning on a variety of complex datasets. [1]

[Footnote 1: Codes available at: https://www.dropbox.com/s/5hkfn8xy5r8w5sz/Code.zip?dl=0]

1 INTRODUCTION

Learning a sparse and interpretable representation of data is a critical component of a generalized, robust and explanatory intelligent system. This concept is inspired by humans' ability to generalize knowledge with abstract concepts and use them to reason about unseen environments (Gupta et al., 2018). Despite recent advances in representation learning, it was shown that deep convolutional neural networks (CNNs) have a tendency to learn superficial statistics of data associated with given tasks, rather than important generative factors embedded in the physical world (Jo & Bengio, 2017; Goodfellow et al., 2014). One way towards this goal is disentangled representation learning, which aims to capture the independent and interpretable generative factors of the data. Bengio et al. (2013) defines the disentangled representation intuitively as a representation where changes in one dimension correspond to changes in only one generative factor of the data, while being relatively invariant to changes in other factors. Recently, Higgins et al. (2018) assigned a principled definition by connecting symmetry transformations to vector representations using group and representation theory.

Based on these definitions, a disentangled representation can be learned in a supervised fashion where explicit and/or implicit prior knowledge on the generative factors of data is available. However, it is ideal to achieve this in an unsupervised learning setting to take advantage of the large amount of available unlabeled data. Along with the recent development of generative models, many unsupervised disentangled learning approaches have been proposed based on either generative adversarial networks (GANs) (proposed as InfoGAN in Chen et al. (2016)) or variational autoencoders (VAEs) (proposed as β-VAE in Higgins et al. (2017)). While β-VAE achieves better results and does not suffer from the training stability issue of InfoGAN, it faces a trade-off between disentanglement and reconstruction due to its information bottleneck. The current state-of-the-art approaches extend β-VAE with augmented objective functions to reduce this trade-off (Burgess et al., 2017; Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2017). A recent study by Locatello
A recent study by Locatello1Codes available at: https://www.dropbox.com/s/5hkfn8xy5r8w5sz/Code.zip?dl=01Under review as a conference paper at ICLR 2020et al. (2018) carefully compared these approaches based on extensive experiments. They found thatthe performance of these approaches is very sensitive to the hyperparameter tuning associated withthe augmented objective function and the initial random seed during training. More importantly, theyproved that unsupervised learning of disentangled representation is impossible without introducinginductive bias on either the model or the data. We believe the trade-off between disentanglement andreconstruction in V AE-based approaches can be addressed by a different training approach. The ideaof relying on modified training approaches, instead of augmented objective function, to encouragenetwork behavior is commonly used for different problems. Take model over-fitting prevention forexample, one way to address this is to augment the objective function with regularization terms, suchasL1orL2regularization. An alternative solution is to apply special operations during training toenforce the generalization of the network representations, such as Dropout Srivastava et al. (2014) orBatch Normalization Ioffe & Szegedy (2015).Our main contribution in this work is four-fold: 1) We propose Sequential Residual VariationalAutoencoder (SR-V AE) that uses a novel “Residual learning” mechanism to learn disentangledrepresentation with the original V AE objective. This is different from previous V AE-based approachesthat merely focus on objective function design where hyperparameter tuning is crucial. 2) We showthe proposed “Residual learning” mechanism defines an explicit dependency structure among thelatent variables via sequential latent variable update. This encourages learning the disentangledrepresentation. 3) We highlight that SR-V AE decomposes the reconstruction into residual andnetwork decoder output via skip connection. This relaxation of reconstruction reduces the trade-offbetween disentanglement and reconstruction of V AE. 4) We demonstrate both qualitatively andquantitatively that SR-V AE improves the current state-of-the-art disentanglement representationlearning performance on a variety of complex datasets.2 C HALLENGES OF DISENTANGLING WITH AUGMENTED VAE O BJECTIVEIn this section, we first briefly review the V AE framework, followed by the -V AE and its extensionsfor disentangled representation learning. We highlight the challenges of using an augmented objectivefunction to balance the V AE’s trade-off between the disentanglement and reconstruction. From thesediscussions, we then motivate the proposed SR-V AE framework.V AE is a deep directed graphical model consisting of an encoder and a decoder Kingma & Welling(2013). The encoder maps the input data xto a latent representation q(zjx)and the decoder mapsthe latent representation back to the data space q(xjz), whereandrepresent model parameters.The loss function of the V AE is defined as following:LVAE =Eq(zjx)[logq(xjz)]KL(q(zjx)kp(z)); (1)where KL(:k:)stands for the Kullback-Leibler divergence. By regularizing the posterior q(zjx)with a prior over the latent representation p(z)N(0;I), where Iis identity matrix, V AE learnsa latent representation q(zjx)that contains the variations in the data. The goal of disentangledrepresentation learning is to identify the latent representation z2Rdwhere each latent variable onlycorresponds to one of the generative factors for given data x. 
To achieve this, β-VAE augments the VAE objective with an adjustable hyperparameter β:

L_β-VAE = E_{q_φ(z|x)}[ log q_θ(x|z) ] − β · KL( q_φ(z|x) ‖ p(z) ).    (2)

The addition of β encourages the posterior q_φ(z|x) to match the factorized unit Gaussian prior p(z). It enhances the independence among the latent variables, thus disentangling the representation. On the other hand, it reduces the amount of information about x stored in z, which can lead to poor reconstruction, especially for high values of β. This trade-off is further discussed from the rate-distortion theory perspective in Burgess et al. (2017).

To reduce the trade-off, different augmentations of the β-VAE objective have been proposed (Burgess et al., 2017; Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2017). Locatello et al. (2018) categorized these methods into three main categories: bottleneck capacity, penalizing the total correlation, and disentangled priors. Burgess et al. (2017) focused on bottleneck capacity and proposed to gradually increase the average KL divergence from zero for each generative factor. This method relaxes the information bottleneck during training by increasing the encoding capacity through a parameter C that is linearly dependent on the training iteration.

Figure 1: The "Residual learning" mechanism consists of d steps in a single forward pass with the same encoder q_φ(z|x) and decoder q_θ(x|z). Latent variables are sequentially sampled from the encoder. In step i, only the i-th latent variable z_i follows the distribution learned from the current residual. Previous latent variables follow the same distribution learned from their corresponding residuals. The latent variables z_{i+1} to z_d have a fixed value of 0. The final output x′ consists of the decoder output using all the latent variables, x̂_d, and the skip connection Δ_d.
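For concreteness, the objectives in Eqs. 1 and 2 might be written as follows in PyTorch; a minimal sketch, where `mu` and `logvar` are assumed encoder outputs parameterizing a diagonal-Gaussian posterior, and β = 1 recovers the plain VAE loss:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_hat, mu, logvar, beta=1.0):
    """Negative ELBO of Eq. 1; beta > 1 gives the beta-VAE loss of Eq. 2."""
    recon = F.mse_loss(x_hat, x, reduction="sum")  # reconstruction term
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```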
Kim & Mnih (2018) aimed to solve disentangled representation learning by minimizing the total correlation (TC) term. They proposed FactorVAE, where the objective is augmented with a TC term controlled by a hyperparameter γ. Minimizing the TC term forces the hidden representation to be factorial and hence independent. Chen et al. (2018) looked into an alternative way to minimize the TC term in the augmented objective, named β-TCVAE. They used a mini-batch based alternative instead of the density-ratio-trick based method from FactorVAE. However, the results from both methods are sensitive to the hyperparameter associated with the TC term. Kumar et al. (2017) studied the disentangled prior and introduced a regularizer to the objective that is associated with a hyperparameter λ. This regularizer encourages the covariance of q_φ(z) to match the identity matrix. While all the aforementioned approaches have shown promising results, they rely on careful tuning of the hyperparameter introduced in the augmented objective functions, such as β in Higgins et al. (2017), C in Burgess et al. (2017), γ in Kim & Mnih (2018) and Chen et al. (2018), and λ in Kumar et al. (2017). Finding the optimal hyperparameter setting can be challenging, especially in an unsupervised learning setting where the evaluation of the results mainly relies on visualization and human inspection. More importantly, Locatello et al. (2018) found that hyperparameter tuning is more important for state-of-the-art performance than the choice of augmented objective function.

In this work, we propose SR-VAE to address the aforementioned challenge. Instead of the forward pass in the original VAE, SR-VAE uses a "Residual learning" forward pass, illustrated in Fig. 1, as an inductive bias on the model. SR-VAE consists of two components: 1) explicit dependency in the latent space via a multi-step sequential forward pass where one latent variable is updated at each step; 2) decomposition of the reconstruction via a skip connection between the input and the network output to relax the network reconstruction constraint. Together, these two components enable SR-VAE to address the trade-off between reconstruction and disentanglement in VAE using the original objective in Eq. 1. In the next section, we first describe the details of the "Residual learning" forward pass and SR-VAE training. We then discuss the two aforementioned components in detail. In Section 5, we demonstrate the effectiveness of SR-VAE and investigate the effect of each component separately. The experimental results show our approach achieves better disentanglement and reconstruction results compared to the current state-of-the-art approaches, including β-VAE (Higgins et al., 2017) and FactorVAE (Kim & Mnih, 2018).

3 SEQUENTIAL RESIDUAL VARIATIONAL AUTOENCODER - SR-VAE

As in the original VAE, SR-VAE consists of an encoder network, denoted q_φ(z̃|x), and a decoder network, denoted q_θ(x|z̃). Here x and z̃ stand for the input data and the latent representation vector; φ and θ represent the encoder and decoder network parameters. Letting the dimension of the latent representation be d, SR-VAE learns z̃ = [z_1, z_2, ..., z_d] ∈ R^d as the latent representation of the data. Its forward pass follows a "Residual learning" mechanism that consists of d steps. Each step updates only one latent variable. In the first step, the input data x passes through the encoder to compute the parameterized posterior, denoted μ̃_1 and σ̃_1. Instead of drawing samples for all latent variables z̃ ~ N(μ̃_1, σ̃_1), we only sample the first latent variable z_1 ~ N(μ̃_1[1], σ̃_1[1]) and set the remaining latent variables to 0. The modified latent vector z̃ = [z_1, 0, ..., 0] then passes through the decoder to generate the output, denoted x̂_1. We subtract the decoder output from the skip connection (defined as an identity function) to obtain the input for the second pass, Δ_2 = Δ_1 − x̂_1 (with Δ_1 = x). In the second pass, Δ_2 passes through the same encoder to generate a new parameterized posterior (μ̃_2 and σ̃_2). This time, we sample only the second latent variable from this parameterized posterior, z_2 ~ N(μ̃_2[2], σ̃_2[2]). We re-sample the first latent variable with z_1 ~ N(μ̃_1[1], σ̃_1[1]) while setting the remaining latent variables to 0. The modified latent vector z̃ = [z_1, z_2, 0, ..., 0] is then used to generate the new reconstruction x̂_2. We then compute the corresponding residual Δ_3 = Δ_2 − x̂_2 as the input for the third pass. In the i-th pass, the i-th latent variable is sampled from the encoder, z_i ~ N(μ̃_i[i], σ̃_i[i]).
The previously updated latent variables follow their corresponding residual encodings and the remaining latent variables are set to zero, z̃ = [z_1, z_2, ..., z_i, 0, ..., 0]. The process repeats d times so that all the latent variables are sampled. In step d, the final output of SR-VAE, x′, consists of the decoder output x̂_d and the residual term Δ_d: x′ = x̂_d + Δ_d. In the case where d = 1, SR-VAE follows the last step and the input is connected to the output through the skip connection. Algorithm 1 shows the pseudo code of the "Residual learning" forward pass in SR-VAE.

Algorithm 1: SR-VAE Forward Pass
Input: observation x, latent dimension d, VAE encoder q_φ(z|x), VAE decoder q_θ(x|z)
Output: reconstruction x′, latent parameters μ̃′, σ̃′
1:  Δ_1 ← x
2:  μ̃′ = [0, ..., 0] ∈ R^d
3:  σ̃′ = [0, ..., 0] ∈ R^d
4:  for i = 1 to d do
5:      {μ̃_i, σ̃_i} ← Encoder q_φ(Δ_i)
6:      μ̃′[i] = μ̃_i[i]
7:      σ̃′[i] = σ̃_i[i]
8:      z̃ ← Reparameterize(μ̃′, σ̃′)   (entries j > i have μ̃′[j] = σ̃′[j] = 0, so z_j = 0)
9:      x̂_i ← Decoder q_θ(z̃)
10:     if i < d then
11:         Δ_{i+1} ← Δ_i − x̂_i
12:     end if
13: end for
14: x′ ← x̂_d + Δ_d

We train SR-VAE with the original VAE objective defined in Eq. 1. The parameters are updated using standard back-propagation, as demonstrated in Algorithm 2. The prior p(z) is set to the isotropic unit Gaussian N(0, I) and the posterior q_φ(z|x) is parameterized as a Gaussian with a diagonal covariance matrix. The "reparametrization trick" is used to transform each random variable z_i ~ q_φ(z|x) as a differentiable transformation of a noise variable ε ~ N(0, 1), with z_i = μ_i + σ_i ε.

Algorithm 2: SR-VAE Learning
Input: dataset X, batch size B, latent dimension d. Initialize VAE parameters θ, φ.
1: repeat
2:     Randomly select a batch x = {x^(j)}, j = 1, ..., B
3:     {x′, (μ̃, σ̃)} ← Forward_pass(x)
4:     L_recon ← MSE_loss(x, x′)
5:     L_KL ← −(1/2) Σ_{j=1}^{B} Σ_{i=1}^{d} [ 1 + log((σ_j[i])²) − (μ_j[i])² − (σ_j[i])² ]
6:     L ← L_KL + L_recon
7:     {θ, φ} ← Backward(L)
8: until convergence of objective

Due to the sequential update process, SR-VAE can generate a sequence of images during the forward pass. As we shall see in Section 5, these images reflect image transformations corresponding to the disentangled factors at different steps. Compared to other VAE-based approaches that directly generate a single reconstruction output by sampling all latent variables from the joint distribution z̃ ~ q_φ(z̃|x), this step-by-step visual inspection allows for a better understanding of the learned generative factors. As a result, SR-VAE provides a new way to understand disentangled representation results.

Explicit Dependency in the Latent Space: The SR-VAE forward pass defines a sequential update of the latent variables: the latent variable z_i added at step i learns from the residual between the input data and the previously updated latent variables z_j, for all j ∈ {1, ..., i−1}. This procedure defines an explicit dependency among the latent variables in the posterior, which can be written as q(z_1, z_2, ..., z_d | x) = q(z_1|x) q(z_2|z_1, x) ... q(z_d|z_1, ..., z_{d−1}, x). The KL loss term of the original VAE objective in Eq. 1 encourages the posterior q(z_1, z_2, ..., z_d | x) to match the factorized unit Gaussian prior p(z̃). With the explicit dependency added by the "Residual learning" mechanism, the SR-VAE objective can be seen as a modified VAE objective:

maximize_{φ,θ}  L_SR-VAE = E_{q_φ(z̃|x)}[ log q_θ(x|z̃) ] − KL( q_φ(z̃|x) ‖ p(z̃) ),
subject to  p(z_1) ≈ q(z_1|x),  p(z_2) ≈ q(z_2|z_1, x),  ...,  p(z_d) ≈ q(z_d|z_1, ..., z_{d−1}, x).    (3)

These constraints encourage each newly added latent variable to be independent of the ones already added, thus enhancing the disentanglement of the latent representation. Moreover, the solution space of Eq. 3 is a subset of that of the original VAE. The constrained objective limits the optimization search space to regions where a better local optimum exists. We empirically verify this result in terms of both performance and stability to random initialization in Section 5.
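To complement Algorithms 1 and 2, here is a minimal PyTorch sketch of the residual forward pass; `encoder` (returning mean and log-variance) and `decoder` are assumed modules, not the authors' released code:

```python
import torch

def sr_vae_forward(encoder, decoder, x, d):
    """Sequential residual forward pass (Algorithm 1, a sketch).

    Returns the final decoder output x_hat_d and the last residual
    Delta_d (so that x' = x_hat_d + Delta_d), plus the accumulated
    posterior parameters for the KL term of Algorithm 2.
    """
    mus, logvars = [], []                        # one entry per step
    delta = x                                    # Delta_1 = x
    x_hat = None
    for i in range(d):
        mu_i, logvar_i = encoder(delta)          # posterior of the residual
        mus.append(mu_i[:, i])                   # keep only entry i
        logvars.append(logvar_i[:, i])
        mu = torch.stack(mus, dim=1)             # (B, i+1)
        std = (0.5 * torch.stack(logvars, dim=1)).exp()
        z = mu + std * torch.randn_like(std)     # re-sample z_1 .. z_i
        pad = x.new_zeros(x.size(0), d - i - 1)  # z_{i+1} .. z_d stay 0
        x_hat = decoder(torch.cat([z, pad], dim=1))
        if i < d - 1:
            delta = delta - x_hat                # Delta_{i+1} = Delta_i - x_hat_i
    return x_hat, delta, torch.stack(mus, 1), torch.stack(logvars, 1)

# x_prime = x_hat + delta gives the final SR-VAE reconstruction.
```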
Decomposition of the Reconstruction: The final output of SR-VAE, x′, consists of the decoder output and the residual term: x′ = x̂_d + Δ_d. This formulation relaxes the reconstruction constraint on the network's decoder output compared with other VAE-based approaches. More importantly, it creates a balancing measure between data generation and reconstruction. In one extreme case, the input x passes directly through the first d−1 steps and reaches step d as the input. In this case, SR-VAE becomes the original VAE model with an added skip connection between input and output (see the last step in Fig. 1). This architecture relaxes the VAE reconstruction and hence reduces the VAE reconstruction-disentanglement trade-off. We will show in Section 5.1 that this architecture alone can reach performance similar to FactorVAE. In the other extreme case, if the first d−1 steps have learned a perfect disentangled representation of the data, the input for step d would be 0. In this case, the reconstruction loss encourages SR-VAE to generate the input data from the learned latent representation vector. Combining these two extreme cases, SR-VAE can be understood as a training mechanism that balances between a VAE model with emphasis on reconstruction quality (the first case) and a data generation model given the learned latent variables (the latter case).

Notice that each of the aforementioned components can be separately added to VAE as a modified model. To add the explicit dependency in the latent space, we can apply the sequential forward pass of SR-VAE with the output x′ = x̂_d. We refer to this model as SeqVAE. The decomposition of the reconstruction, as mentioned earlier, is equivalent to adding a skip connection to the original VAE between the input and the output. We refer to this model as ResVAE. Using these two models, we perform an ablation study to understand the effectiveness of each individual component in Section 5.1.

Computational Complexity: SR-VAE replaces the standard forward pass of VAE with d forward passes, and thus increases the computational complexity. However, in addition to the improved state-of-the-art performance, it eliminates the hyperparameter tuning associated with prior works. As mentioned earlier, hyperparameter tuning was shown to be critical for state-of-the-art performance. It is a difficult and time-consuming process, especially for unlabeled data, due to: 1) the large hyperparameter search space of continuous values; 2) the lack of an evaluation metric. As a result, we believe the increased computational complexity of SR-VAE is reasonable. Moreover, we will show that each of the d forward passes in SR-VAE corresponds to a disentangled generative factor. Visualization of these intermediate steps provides a new way to understand the results.

4 RELATED WORK

Connection to Other VAE-based Approaches: We highlight the similarities and advantages of SR-VAE over the VAE-based approaches introduced in Section 2. The sequential update of latent variables in SR-VAE is similar to the idea of gradually increasing the KL divergence in Burgess et al. (2017). Instead of introducing an augmented objective, SR-VAE achieves this directly by learning one latent variable at a time. Compared to the work in Kumar et al. (2017), SR-VAE encourages independence among the latent variables by defining an explicit latent variable dependency rather than emphasizing individual statistics (the covariance between the latent representations in Kumar et al. (2017)). Finally, the explicit latent variable dependency defined by SR-VAE also encourages a factorial latent representation, serving the same purpose as lowering the TC term in Kim & Mnih (2018); Chen et al. (2018). It is worth noticing that a low TC term is necessary but not sufficient for a disentangled representation.

Connection to Residual Deep Neural Networks: ResNet (He et al.,
2016) introduces the idea of learning from residuals by adding skip connections between layers such that the input can propagate through the layers. The key idea of ResNets is to replace learning the direct mapping between input and output, H(x): x → y, with learning a residual formulation, H(x) = F(x) + x, where F(x) represents stacked non-linear layers. This formulation reduces the loss of important information while propagating through the network. In addition, it was suggested that learning the residual mapping is easier than learning the direct mapping (He et al., 2016). The proposed SR-VAE shares a similar skip connection structure with ResNets. Here F(x) represents VAE_i in SR-VAE. As in ResNets, F(x) can learn useful abstractions of the data while the skip connection Δ_i allows for circumventing difficulties in reconstructing the input data.

Connection to Deep Recurrent Attentive Writer (DRAW): DRAW (Gregor et al., 2015) uses a sequential variational auto-encoding framework to achieve iterative construction of an image. DRAW deploys a recurrent neural network with an attention mechanism to dynamically determine where and what to generate. The attention mechanism serves a similar purpose to the skip connection in the "Residual learning" mechanism. Moreover, the idea of successively adding the decoder outputs for image generation in DRAW is similar to the reconstruction decomposition in SR-VAE. One main difference between the two approaches is that DRAW relies on the recurrent network framework to model the iterative image generation in the image space, whereas SR-VAE uses the latent dependency to emphasize iterative generation of the image in the latent space.

5 EXPERIMENTS

We compare SR-VAE with β-VAE and FactorVAE on four different datasets, both quantitatively and qualitatively. The datasets used in this study include 2D Shape (Higgins et al., 2017), Teapots (Eastwood & Williams, 2018), CelebA (Liu et al., 2014) and Chairs (Aubry et al., 2014). Appendix A introduces the details of these datasets. For all datasets, we use visualization for qualitative evaluation by observing the changes in the reconstruction while altering only one latent dimension, known as the traversal of the latent variable. A good disentangled representation reveals interpretable changes in the reconstructed image corresponding to one generative factor. Moreover, the 2D Shape and Teapots datasets contain ground-truth generative factors that were used to synthesize the data, which allows us to conduct quantitative evaluations. To compare with previous studies, we use the metric proposed in Kim & Mnih (2018) (denoted the FactorVAE metric) for 2D Shape, and the metric proposed in Eastwood & Williams (2018) (denoted the Disentanglement-Informativeness-Completeness metric) for Teapots. These two metrics were found to cover similar notions to other disentanglement metrics in Locatello et al. (2018). We implemented our approach using PyTorch (Paszke et al., 2017), with the experiments run on several machines, each with 4 GTX 1080 Ti GPUs.
See Appendix C for details on the model architecture.

5.1 QUANTITATIVE EVALUATION

Metrics: The FactorVAE metric in Kim & Mnih (2018) is calculated as follows: 1) select a latent factor k; 2) generate new data y with factor k fixed and the other factors varying randomly; 3) calculate the mean of q_φ(z|x); 4) normalize each dimension by its empirical standard deviation over all the data, or a large enough subset; 5) build a majority-vote classifier whose input is the index of the dimension with the lowest variance and whose output is the factor k. The classifier accuracy is used as the evaluation metric. Eastwood & Williams (2018) define three criteria of disentangled representations, namely disentanglement, completeness and informativeness. Disentanglement is the degree to which the learned representation disentangles the underlying generative factors; completeness is the degree to which each generative factor is captured by one latent representation; and informativeness is the amount of information about the generative factors that is captured by the latent representation. Disentanglement and completeness can be perceived by visualizing the rows and columns of the Hinton diagram; informativeness is calculated based on the mapping error between the learned latent representation and the ground-truth factors.

Comparison to β-VAE and FactorVAE: Similar to previous studies, we set d = 10 for all datasets except for CelebA, where d = 32 due to its complexity. We use the optimal parameter settings from the original studies in Higgins et al. (2017); Eastwood & Williams (2018) for β-VAE and FactorVAE. Figs. 2(a) and 2(b) show that SR-VAE outperforms β-VAE and FactorVAE in terms of both the reconstruction error and the disentanglement metric of Kim & Mnih (2018) on 2D Shape. The best mean disentanglement measurement of SR-VAE is around 0.86, significantly higher than 0.72 for β-VAE and 0.81 for FactorVAE. For reconstruction error, β-VAE and FactorVAE converge to similar results while SR-VAE achieves better performance. In Figs. 2(c)-(e), we compare SR-VAE with β-VAE and FactorVAE using the metric proposed in Eastwood & Williams (2018) on Teapots. The three criteria used in this metric are disentanglement, completeness and informativeness. Note that the informativeness term in this metric is highly dependent on the regressor type. In Eastwood & Williams (2018), Lasso and Random Forest regressors are used, which resulted in a different ordering of methods in the informativeness score. Random Forest is used in our experiments to be comparable with the original paper.

Figure 2: Quantitative evaluations. Similar to previous studies, all results are reported on the best 10 of 30 runs with random seeds except for (j) and (k). The line and shaded area correspond to the mean and confidence intervals. (a) and (b): the metric in Kim & Mnih (2018) and the reconstruction error for 2D Shape; (c), (d) and (e): the three metrics in Eastwood & Williams (2018) for Teapots; (f) and (g): ablation study comparing SeqVAE, ResVAE and SR-VAE using the FactorVAE metric and reconstruction error on 2D Shape; (h) and (i): comparison between SR-VAE and SR-β-VAE, using the FactorVAE metric and reconstruction error on 2D Shape; (j) and (k): sensitivity-to-initialization study comparing the worst 10 of 30 runs with random seeds using the FactorVAE metric and reconstruction error on 2D Shape.
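A rough sketch of the FactorVAE metric just described; the ground-truth factor samplers, the encoder wrapper, and `rng` are assumed to be provided by the dataset and model, and the vote counts are illustrative defaults:

```python
import numpy as np

def factor_vae_metric(encode_mean, sample_batch, sample_fixed, rng,
                      n_votes=800, n_per_vote=64, n_factors=5):
    """Majority-vote disentanglement metric of Kim & Mnih (a sketch).

    encode_mean(x): mean of q(z|x), shape (n, d).
    sample_batch(n): n images with all factors varying randomly.
    sample_fixed(k, n): n images with ground-truth factor k held fixed.
    """
    scales = encode_mean(sample_batch(10000)).std(axis=0)  # step 4

    d = scales.shape[0]
    votes = np.zeros((n_factors, d), dtype=int)
    for _ in range(n_votes):
        k = rng.integers(n_factors)                        # step 1
        z = encode_mean(sample_fixed(k, n_per_vote)) / scales
        j = z.var(axis=0).argmin()    # dimension with the lowest variance
        votes[k, j] += 1
    # The majority-vote classifier maps dimension j to argmax_k votes[k, j];
    # its accuracy on the collected votes is the reported metric (step 5).
    return votes.max(axis=0).sum() / votes.sum()
```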
The results show that SR-VAE achieves higher disentanglement and completeness scores compared to β-VAE and FactorVAE, while FactorVAE achieves the best informativeness. However, we believe the informativeness ranking could differ if a more complex regressor were used.

Ablation Study: To investigate the individual effect of the two main components of SR-VAE, as discussed in Section 3, we compare the performance of SeqVAE, ResVAE and SR-VAE on the 2D Shape dataset. As before, the top 10 of 30 runs with random initialization of all models are reported in Figs. 2(f) and 2(g). The results show that both ResVAE and SeqVAE perform worse than SR-VAE in terms of the FactorVAE metric. Compared with β-VAE and FactorVAE, ResVAE performs similarly to FactorVAE while SeqVAE performs similarly to β-VAE. One interesting result we noticed is that the reconstruction error of ResVAE is similar to, if not better than, that of SR-VAE. These results verify our analysis in Section 3 that the decomposition of the reconstruction relaxes the reconstruction constraint of the network, and that adding the explicit dependency in the latent space improves disentangled representation learning. While both components are important for the superior performance of SR-VAE, relaxing the reconstruction constraint on the network with the skip connection is more important, as it directly addresses the bottleneck of VAE.

SR-VAE with the β-VAE objective: We also examined whether using the β-VAE objective in Eq. 2 with the "Residual learning" mechanism would improve performance; we refer to this variant as SR-β-VAE. If so, the proposed "Residual learning" mechanism would benefit from the augmented objective to achieve better performance. Figures 2(h) and 2(i) show that the best disentanglement score is obtained by SR-VAE, and higher β values do not help improve performance. These results further verify the effectiveness of SR-VAE in solving the trade-off between disentanglement and reconstruction in VAE-based approaches.
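For reference, the ablated variants differ from SR-VAE only in how the final output is assembled; a sketch building on `sr_vae_forward` above, where the `variant` flag and the reading of ResVAE as an input-to-output skip follow our understanding of Section 3, not released code:

```python
import torch

def ablation_forward(encoder, decoder, x, d, variant="sr"):
    """SR-VAE and its two ablations (a sketch; names from Section 3)."""
    if variant == "res":                      # ResVAE: one joint pass + skip
        mu, logvar = encoder(x)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
        return decoder(z) + x, mu, logvar     # skip connects input to output
    x_hat, delta, mu, logvar = sr_vae_forward(encoder, decoder, x, d)
    if variant == "seq":                      # SeqVAE: sequential, no skip
        return x_hat, mu, logvar              # x' = x_hat_d
    return x_hat + delta, mu, logvar          # SR-VAE: x' = x_hat_d + Delta_d
```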
However, it also mixes thesize with the shape between oval and square in the third latent variable. We also experiment the latenttraversal with the Teapots dataset and observe superior performance as shown in Appendix D.For datasets without ground truth generative factors, such as CelebA andChairs , inspecting latenttraversals is the only evaluation method. Similar as before, we used the optimal parameter settingfor-V AE and FactorV AE from the original studies in Higgins et al. (2017); Eastwood & Williams(2018). As seen in Figure 4 for CelebA dataset, SR-V AE is able to learn interpretable factors ofvariation such as background, face and hair characteristics, skin color, etc. Compared to -V AE andFactorV AE, we observe some common factors as well as some unique ones. Note that only the mostobvious factors are presented in this figure. Moreover, the interpretation of each latent dimension isbased on our best judgment. We also observe better reconstruction quality with more details usingSR-V AE method. Reconstruction losses also confirm this observation with the converged values of300, 252 and 158 for -V AE, FactorV AE and SR-V AE, respectively. Admittedly, careful tuning ofparameters in -V AE and FactorV AE could potentially reveal more latent variables. However, findingthe optimal value is a difficult task especially when there is no prior information about the generativefactors of data and a quantitative metric is not available.Visualizing the “Residual learning” Mechanism: To gain a better understanding of the internalprocess of the “Residual learning” mechanism, we show the decoder output, the residual mapping,of each internal step ( ^x1;:::;^x10) and their skip connections ( 1;:::; 10) for2D Shape in Fig. 3(d).Each row presents the internal steps when setting different latent variables ( z1toz5) to value 3. Thefinal outputs of this process correspond to the last column of Fig. 3(a). In this figure, we observe thestep by step transition to the final transformed image during the latent traversal. The result shows that8Under review as a conference paper at ICLR 2020Figure 4: Left: Latent traversals across each latent dimension with d= 64 for CelebA using SR-V AE, FactorV AEand-V AE, respectively. Right: Decomposition of decoder output and skip connection at each step in SR-V AEduring latent traversal towards the first column of the corresponding row.the two terms are working together to capture the learned disentangled factor at each step. Based onFig. 3(a), we know the learned factors in each step are: X-position, Y-position, Size, Rotation+shape ,andShape , respectively. In Fig. 3(d), we observe that X-position of the reconstructed image aregenerated during the first step. In step two, both X-position and Y-position are generated. Thisprocess continues and at each step the decoder output and the residual transform the image accordingto the learned latent encoding.Similarly, we show the step-by-step visualization for CelebA dataset along with its latent traversalresult in Fig 4. We highlight a few factors due to the space limit. Although CelebA presents challengein the complexity of real-world image, we observe similar results as the 2D Shape dataset. Thestep-by-step visualization shows how the latent factors are related to the transformed face imageduring the latent traversal. For example, the gender factor can be identified as the fifth latent factoras we observe major changes in the eyes and hair style from step five. 
For example, the gender factor can be identified as the fifth latent factor, as we observe major changes in the eyes and hair style from step five. Another example is the background contrast factor, where major changes can be observed in step eight. These step-by-step visualizations provide an alternative way to understand and interpret the learned disentangled factors, and can be interesting for data generation tasks.

6 CONCLUSIONS

In this work, we propose SR-VAE for disentangled representation learning in an unsupervised setting. The proposed solution defines the "Residual learning" mechanism in the training regime, instead of an augmented objective, to solve the trade-off between disentanglement and reconstruction in VAE-based approaches. SR-VAE defines an explicit dependency structure between latent variables and decomposes the reconstruction via a skip connection. We showed that SR-VAE achieves state-of-the-art results compared to previous approaches, including β-VAE and FactorVAE. Moreover, SR-VAE can be directly applied to any VAE architecture without additional hyperparameter tuning. The step-by-step process of SR-VAE provides novel ways to visualize the results and understand the internal process of learning disentangled factors. We believe this can open a new direction for future research towards disentangled representation learning.
rylCNq2htS
Official Blind Review #2
8: Accept
The authors of this paper present a novel model for unsupervised disentangled representation learning. The model, named sequential residual VAE (SR-VAE), gradually activates individual latent variables to reconstruct residuals. Quantitative and qualitative experiments show that the proposed model outperforms beta-VAE and Factor-VAE. Since training involves a sequence of encoder-decoder passes, SR-VAE certainly consumes more time than other VAEs. Minor: citations in the main text should be put in brackets.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Disentangled Representation Learning with Sequential Residual Variational Autoencoder ### Paper Abstract Recent advancements in unsupervised disentangled representation learning focus on extending the variational autoencoder (VAE) with an augmented objective function to balance the trade-off between disentanglement and reconstruction. We propose Sequential Residual Variational Autoencoder (SR-VAE) that defines a "Residual learning" mechanism as the training regime instead of the augmented objective function. Our proposed solution deploys two important ideas in a single framework: (1) learning from the residual between the input data and the accumulated reconstruction of sequentially added latent variables; (2) decomposing the reconstruction into decoder output and a residual term. This formulation encourages the disentanglement in the latent space by inducing explicit dependency structure, and reduces the bottleneck of VAE by adding the residual term to facilitate reconstruction. More importantly, SR-VAE eliminates the hyperparameter tuning, a crucial step for the prior state-of-the-art performance using the objective function augmentation approach. We demonstrate both qualitatively and quantitatively that SR-VAE improves the state-of-the-art unsupervised disentangled representation learning on a variety of complex datasets. ### Paper Keywords ["Disentangled Representation Learning", "Variational Autoencoder", "Residual Learning"] ### Paper Content ABSTRACTRecent advancements in unsupervised disentangled representation learning focuson extending the variational autoencoder (V AE) with an augmented objective func-tion to balance the trade-off between disentanglement and reconstruction. Wepropose Sequential Residual Variational Autoencoder (SR-V AE) that defines a“Residual learning” mechanism as the training regime instead of the augmentedobjective function. Our proposed solution deploys two important ideas in a singleframework: (1) learning from the residual between the input data and the accu-mulated reconstruction of sequentially added latent variables; (2) decomposingthe reconstruction into decoder output and a residual term. This formulation en-courages the disentanglement in the latent space by inducing explicit dependencystructure, and reduces the bottleneck of V AE by adding the residual term to facil-itate reconstruction. More importantly, SR-V AE eliminates the hyperparametertuning, a crucial step for the prior state-of-the-art performance using the objectivefunction augmentation approach. We demonstrate both qualitatively and quan-titatively that SR-V AE improves the state-of-the-art unsupervised disentangledrepresentation learning on a variety of complex datasets.11 I NTRODUCTIONLearning a sparse and interpretable representation of data is a critical component of a generalized,robust and explanatory intelligent system. This concept is inspired by human’s ability to generalizethe knowledge with abstract concepts and use them to reason the unseen environments Gupta et al.(2018). Despite recent advances on representation learning, it was shown that deep convolutionalneural networks (CNN’s) have a tendency to learn superficial statistics of data associated with giventasks, rather than important generative factors embedded in the physical world Jo & Bengio (2017);Goodfellow et al. (2014). 
One way towards this goal is disentangled representation learning whichaims to capture the independent and interpretable generative factors of the data. Bengio et al. (2013)defines the disentangled representation intuitively as a representation where changes in one dimensioncorrespond to changes in only one generative factor of the data, while being relatively invariant tochanges in other factors. Recently, Higgins et al. (2018) assigned a principled definition by connectingsymmetry transformations to vector representations using the group and representation theory.Based on these definitions, disentangled representation can be learned in a supervised fashion whereexplicit and/or implicit prior knowledge on the generative factors of data are available. However,it is ideal to achieve this in an unsupervised learning setting to take advantage of the large amountof available unlabeled data. Along with the recent development of the generative models, manyunsupervised disentangled learning approaches have been proposed based on either the generativeadversarial networks (GAN) (proposed as InfoGAN in Chen et al. (2016)) or the variational autoen-coders (V AE) (proposed as -V AE in Higgins et al. (2017)). While -V AE achieves better resultsand does not suffer from the training stability issue of InfoGAN, it faces a trade-off between thedisentanglement and reconstruction due to its information bottleneck. The current state-of-the-artapproaches extend the -V AE with augmented objective function to reduce this trade-off Burgess et al.(2017); Kim & Mnih (2018); Chen et al. (2018); Kumar et al. (2017). A recent study by Locatello1Codes available at: https://www.dropbox.com/s/5hkfn8xy5r8w5sz/Code.zip?dl=01Under review as a conference paper at ICLR 2020et al. (2018) carefully compared these approaches based on extensive experiments. They found thatthe performance of these approaches is very sensitive to the hyperparameter tuning associated withthe augmented objective function and the initial random seed during training. More importantly, theyproved that unsupervised learning of disentangled representation is impossible without introducinginductive bias on either the model or the data. We believe the trade-off between disentanglement andreconstruction in V AE-based approaches can be addressed by a different training approach. The ideaof relying on modified training approaches, instead of augmented objective function, to encouragenetwork behavior is commonly used for different problems. Take model over-fitting prevention forexample, one way to address this is to augment the objective function with regularization terms, suchasL1orL2regularization. An alternative solution is to apply special operations during training toenforce the generalization of the network representations, such as Dropout Srivastava et al. (2014) orBatch Normalization Ioffe & Szegedy (2015).Our main contribution in this work is four-fold: 1) We propose Sequential Residual VariationalAutoencoder (SR-V AE) that uses a novel “Residual learning” mechanism to learn disentangledrepresentation with the original V AE objective. This is different from previous V AE-based approachesthat merely focus on objective function design where hyperparameter tuning is crucial. 2) We showthe proposed “Residual learning” mechanism defines an explicit dependency structure among thelatent variables via sequential latent variable update. This encourages learning the disentangledrepresentation. 
3) We highlight that SR-VAE decomposes the reconstruction into a residual and the network decoder output via a skip connection. This relaxation of the reconstruction reduces the trade-off between disentanglement and reconstruction of VAE. 4) We demonstrate both qualitatively and quantitatively that SR-VAE improves the current state-of-the-art disentangled representation learning performance on a variety of complex datasets.

2 CHALLENGES OF DISENTANGLING WITH AUGMENTED VAE OBJECTIVE
In this section, we first briefly review the VAE framework, followed by β-VAE and its extensions for disentangled representation learning. We highlight the challenges of using an augmented objective function to balance the VAE's trade-off between disentanglement and reconstruction. From these discussions, we then motivate the proposed SR-VAE framework.

VAE is a deep directed graphical model consisting of an encoder and a decoder Kingma & Welling (2013). The encoder maps the input data $x$ to a latent representation $q_\phi(z|x)$ and the decoder maps the latent representation back to the data space $q_\theta(x|z)$, where $\phi$ and $\theta$ represent model parameters. The loss function of the VAE is defined as follows:
$$\mathcal{L}_{VAE} = \mathbb{E}_{q_\phi(z|x)}[\log q_\theta(x|z)] - KL(q_\phi(z|x) \,\|\, p(z)), \quad (1)$$
where $KL(\cdot\|\cdot)$ stands for the Kullback-Leibler divergence. By regularizing the posterior $q_\phi(z|x)$ with a prior over the latent representation $p(z) \sim \mathcal{N}(0, I)$, where $I$ is the identity matrix, VAE learns a latent representation $q_\phi(z|x)$ that contains the variations in the data. The goal of disentangled representation learning is to identify the latent representation $z \in \mathbb{R}^d$ where each latent variable only corresponds to one of the generative factors for given data $x$. To achieve this, β-VAE augments the VAE objective with an adjustable hyperparameter $\beta$ as:
$$\mathcal{L}_{\beta\text{-}VAE} = \mathbb{E}_{q_\phi(z|x)}[\log q_\theta(x|z)] - \beta\, KL(q_\phi(z|x) \,\|\, p(z)). \quad (2)$$
The addition of $\beta$ encourages the posterior $q_\phi(z|x)$ to match the factorized unit Gaussian prior $p(z)$. It enhances the independence among the latent variables, thus disentangling the representation. On the other hand, it reduces the amount of information about $x$ stored in $z$, which can lead to a poor reconstruction, especially for high values of $\beta$. This trade-off is further discussed from the rate-distortion theory perspective in Burgess et al. (2017).
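For concreteness, the objectives in Eq. 1 and Eq. 2 can be written as a short PyTorch loss function (the negated objective, to be minimized). This is a minimal illustrative sketch, not code from the paper: the function name, the Bernoulli decoder likelihood, and the default β value are our own assumptions.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    # Reconstruction term of Eq. 2 (Bernoulli decoder; a Gaussian decoder
    # would use MSE instead).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl  # beta = 1 recovers the plain VAE loss of Eq. 1

Setting beta above 1 strengthens the KL pressure toward the factorized prior, which is exactly the disentanglement-versus-reconstruction dial discussed above.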
To reduce the trade-off, different augmentations of the β-VAE objective have been proposed Burgess et al. (2017); Kim & Mnih (2018); Chen et al. (2018); Kumar et al. (2017). Locatello et al. (2018) categorized these methods into the three main categories of bottleneck capacity, penalizing the total correlation, and disentangled priors. Burgess et al. (2017) focused on bottleneck capacity and proposed to gradually increase the average KL divergence from zero for each generative factor. This method relaxes the information bottleneck during training by increasing the encoding capacity through a parameter C that is linearly dependent on the training iteration. Kim & Mnih (2018) aimed to solve disentangled representation learning by minimizing the total correlation (TC) term. They proposed FactorVAE, where the objective is augmented with a TC term controlled by a hyperparameter γ. Minimizing the TC term forces the hidden representation to be factorial and hence independent. Chen et al. (2018) looked into an alternative way to minimize the TC term in the augmented objective, named β-TCVAE. They used a mini-batch based alternative instead of the density-ratio-trick based method from FactorVAE. However, the results from both methods are sensitive to the hyperparameter associated with the TC term. Kumar et al. (2017) studied the disentangled prior and introduced a regularizer to the objective that is associated with a hyperparameter λ. This regularizer encourages the covariance of q(z) to match the identity matrix. While all aforementioned approaches have shown promising results, they rely on a careful tuning of the hyperparameter introduced in the augmented objective functions, such as β in Higgins et al. (2017), C in Burgess et al. (2017), γ in Kim & Mnih (2018) and Chen et al. (2018), and λ in Kumar et al. (2017). Finding the optimal hyperparameter setting can be challenging, especially in an unsupervised learning setting where the evaluation of the results mainly relies on visualization and human inspection. More importantly, Locatello et al. (2018) found that hyperparameter tuning is more important for state-of-the-art performance than the choice of augmented objective functions.

In this work, we propose SR-VAE to address the aforementioned challenge. Instead of the forward pass of the original VAE, SR-VAE uses a "Residual learning" forward pass, illustrated in Fig. 1, as an inductive bias on the model.

Figure 1: The "Residual learning" mechanism consists of d steps in a single forward pass with the same encoder $q_\phi(z|x)$ and decoder $q_\theta(x|z)$. Latent variables are sequentially sampled from the encoder. In step i, only the i-th latent variable $z_i$ follows the distribution learned from the current residual. Previous latent variables follow the same distribution learned from their corresponding residuals. The latent variables $z_{i+1}$ to $z_d$ have a fixed 0 value. The final output $x'$ consists of the decoder output using all the latent variables, $\hat{x}_d$, and the skip connection $\delta_d$.

SR-VAE consists of two components: 1) explicit dependency in the latent space via a multi-step sequential forward pass where one latent variable is updated at each step; 2) decomposition of the reconstruction via a skip connection between the input and network output to relax the network reconstruction constraint. Together, these two components enable SR-VAE to address the trade-off between reconstruction and disentanglement in VAE using the original objective in Eq. 1. In the next section, we first describe the details of the "Residual learning" forward pass and SR-VAE training. We then discuss the two aforementioned components in detail. In Section 5, we demonstrate the effectiveness of SR-VAE and investigate the effect of each component separately. The experimental results show our approach achieves better disentanglement and reconstruction results compared to the current state-of-the-art approaches, including β-VAE Higgins et al. (2017) and FactorVAE Kim & Mnih (2018).

3 SEQUENTIAL RESIDUAL VARIATIONAL AUTOENCODER – SR-VAE
Same as the original VAE, SR-VAE consists of an encoder network, noted as $q_\phi(\vec{z}|x)$, and a decoder network, noted as $q_\theta(x|\vec{z})$. Here $x$ and $\vec{z}$ stand for the input data and the latent representation vector; $\phi$ and $\theta$ represent the encoder and decoder network parameters. Let the dimension of the latent representation be d; SR-VAE learns $\vec{z} = [z_1, z_2, \ldots, z_d] \in \mathbb{R}^d$ as the latent representation of the data. Its forward pass follows a "Residual learning" mechanism that consists of d steps. Each step only updates one latent variable. In the first step, the input data $x$ passes through the encoder to calculate the parameterized posterior, noted as $\vec{\mu}_1$ and $\vec{\sigma}_1$.
Instead of drawing samples for all latent variables $\vec{z} \sim \mathcal{N}(\vec{\mu}_1, \vec{\sigma}_1)$, we only sample the first latent variable $z_1 \sim \mathcal{N}(\vec{\mu}_1[1], \vec{\sigma}_1[1])$ and set the remaining latent variables to 0. The modified latent vector $\vec{z} = [z_1, 0, \ldots, 0]$ then passes the decoder to generate the output, noted as $\hat{x}_1$. We subtract the decoder output from the skip connection (defined as an identity function) to form the input for the second pass, noted as $\delta_2 = \delta_1 - \hat{x}_1$. In the second pass, $\delta_2$ passes the same encoder to generate a new parameterized posterior ($\vec{\mu}_2$ and $\vec{\sigma}_2$). This time, we sample only the second latent variable from this parameterized posterior as $z_2 \sim \mathcal{N}(\vec{\mu}_2[2], \vec{\sigma}_2[2])$. We re-sample the first latent variable with $z_1 \sim \mathcal{N}(\vec{\mu}_1[1], \vec{\sigma}_1[1])$ while setting the remaining latent variables to 0. The modified latent vector $\vec{z} = [z_1, z_2, 0, \ldots, 0]$ is then used to generate the new reconstruction $\hat{x}_2$. We then calculate the corresponding residual $\delta_3 = \delta_2 - \hat{x}_2$ as the input for the third pass. In the i-th pass, the i-th latent variable is sampled from the encoder, thus $z_i \sim \mathcal{N}(\vec{\mu}_i[i], \vec{\sigma}_i[i])$. The previously updated latent variables follow their corresponding residual encodings and the remaining latent variables are set to zeros, $\vec{z} = [z_1, z_2, \ldots, z_i, 0, \ldots, 0]$. The process repeats d times such that all the latent variables are sampled. In step d, the final output of SR-VAE, $x'$, consists of the decoder output $\hat{x}_d$ and the residual term $\delta_d$ as $x' = \hat{x}_d + \delta_d$. In the case where d = 1, SR-VAE follows the last step and the input is connected to the output through the skip connection. Algorithm 1 shows the pseudo code of the "Residual learning" forward pass in SR-VAE.

Algorithm 1 SR-VAE Forward Pass
Input: observation x, latent dimension d, VAE encoder q_φ(z|x), VAE decoder q_θ(x|z)
Output: reconstruction x', latent representation (μ', σ')
 1: δ_1 ← x
 2: μ' = [0, ..., 0] ∈ R^d
 3: σ' = [0, ..., 0] ∈ R^d
 4: for i = 1 to d do
 5:   {μ_i, σ_i} ← Encoder q_φ(δ_i)
 6:   μ'[i] = μ_i[i]
 7:   σ'[i] = σ_i[i]
 8:   z ← Reparameterize(μ', σ')
 9:   x̂_i ← Decoder q_θ(z)
10:   if i < d then
11:     δ_{i+1} ← δ_i − x̂_i
12:   end if
13: end for
14: x' ← x̂_d + δ_d

Algorithm 2 SR-VAE Learning
Input: dataset X, batch size m, latent dimension d
Initialize VAE parameters φ, θ
1: repeat
2:   Randomly select batch x = {x^(i)}_{i∈B} of size m
3:   {x', (μ, σ)} ← Forward_pass(x)
4:   L_recon ← MSE_loss(x, x')
5:   L_KL ← −(1/2) Σ_{j∈B} Σ_{i=1}^{d} [1 + log(σ_j[i])² − (μ_j[i])² − (σ_j[i])²]
6:   L ← L_KL + L_recon
7:   {φ, θ} ← Backward(L)
8: until convergence of objective

We train SR-VAE with the original VAE objective defined in Eq. 1. The parameters are updated using standard back-propagation, as demonstrated in Algorithm 2. The prior p(z) is set to the isotropic unit Gaussian N(0, I) and the posterior $q_\phi(z|x)$ is parameterized as a Gaussian with a diagonal covariance matrix. The "reparametrization trick" is used to transform each random variable $z_i \sim q_\phi(z|x)$ into a differentiable transformation of a noise variable $\epsilon \sim \mathcal{N}(0, 1)$ with $z_i = \mu_i + \sigma_i \epsilon$.

Due to the sequential update process, SR-VAE can generate a sequence of images during the forward pass. As we shall see in Section 5, these images reflect image transformations corresponding to the disentangled factors at different steps. Compared to other VAE-based approaches that directly generate a single reconstruction output by sampling from the joint distribution $\vec{z} \sim q_\phi(\vec{z}|x)$ for all latent variables, this step-by-step visual inspection allows for a better understanding of the learned generative factors. As a result, SR-VAE provides a new way to understand the disentangled representation results.
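The following is a minimal PyTorch sketch of the "Residual learning" forward pass in Algorithm 1. It is our own illustration rather than the authors' released code; the encoder is assumed to return the full posterior parameters (mu, log_var) of shape (batch, d), and the decoder is assumed to map a (batch, d) latent vector back to the input space.

import torch

def sr_vae_forward(x, encoder, decoder, d):
    delta = x                                   # delta_1 <- x (Alg. 1, line 1)
    mus, log_vars = [], []
    x_hat = None
    for i in range(d):
        mu_i, log_var_i = encoder(delta)        # posterior of current residual
        mus.append(mu_i[:, i])                  # keep only the i-th coordinate
        log_vars.append(log_var_i[:, i])
        mu = torch.stack(mus, dim=1)            # (batch, i+1) active coordinates
        log_var = torch.stack(log_vars, dim=1)
        z_active = mu + (0.5 * log_var).exp() * torch.randn_like(mu)
        pad = torch.zeros(x.size(0), d - i - 1, device=x.device)
        z = torch.cat([z_active, pad], dim=1)   # latents z_{i+1}..z_d stay at 0
        x_hat = decoder(z)                      # x_hat_i
        if i < d - 1:
            delta = delta - x_hat               # delta_{i+1} <- delta_i - x_hat_i
    x_prime = x_hat + delta                     # x' = x_hat_d + delta_d
    return x_prime, torch.stack(mus, 1), torch.stack(log_vars, 1)

Note how every iteration re-samples all previously activated latent variables from their stored residual posteriors, exactly as described in the text above.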
Explicit Dependency in the Latent Space: The SR-VAE forward pass defines a sequential update of latent variables: the added latent variable $z_i$ at step i learns from the residual between the input data and the previously updated latent variables $z_j, \forall j \in \{1, \ldots, i-1\}$. This procedure defines an explicit dependency among the latent variables in the posterior that can be written as $q(z_1, z_2, \ldots, z_d | x) = q(z_1|x)\, q(z_2|z_1, x) \cdots q(z_d|z_1, \ldots, z_{d-1}, x)$. The KL loss term of the original VAE objective in Eq. 1 encourages the posterior $q(z_1, z_2, \ldots, z_d | x)$ to match the factorized unit Gaussian prior $p(\vec{z})$. Adding the explicit dependency by the "Residual learning" mechanism, the SR-VAE objective can be seen as a modified VAE objective:
$$\underset{\phi,\theta}{\text{maximize}}\;\; \mathcal{L}_{SR\text{-}VAE} = \mathbb{E}_{q_\phi(\vec{z}|x)}[\log q_\theta(x|\vec{z})] - KL(q_\phi(\vec{z}|x) \,\|\, p(\vec{z})),$$
$$\text{subject to}\;\; p(z_1) \approx q(z_1|x),\; p(z_2) \approx q(z_2|z_1, x),\; \ldots,\; p(z_d) \approx q(z_d|z_1, \ldots, z_{d-1}, x). \quad (3)$$
These constraints encourage the newly added latent variable to be independent of the ones already added, thus enhancing the disentanglement of the latent representation. Moreover, the solution space of Eq. 3 is a subset of that of the original VAE. The constrained objective limits the optimization search space to regions where a better local optimal solution exists. We empirically verify this result in terms of both performance and stability to random initialization in Section 5.

Decomposition of the Reconstruction: The final output of SR-VAE, $x'$, consists of the decoder output and the residual term as $x' = \hat{x}_d + \delta_d$. This formulation relaxes the reconstruction constraint on the network's decoder output when compared with other VAE-based approaches. More importantly, it creates a balancing measure between data generation and reconstruction. In one extreme case, the input $x$ directly passes through the first d−1 steps and reaches step d as the input. In this case, SR-VAE becomes the original VAE model with an added skip connection between input and output (see the last step in Fig. 1). This architecture relaxes the VAE reconstruction, hence reducing the VAE reconstruction and disentanglement trade-off. We will show in Section 5.1 that this architecture alone can reach similar performance to FactorVAE. In the other extreme case, if the first d−1 steps have learned a perfect disentangled representation of the data, the input for step d would be 0. In this case, the reconstruction loss encourages SR-VAE to generate the input data from the learned latent representation vectors. Combining these two extreme cases, SR-VAE can be understood as a training mechanism that balances between a VAE model with emphasis on reconstruction quality (the first case) and a data generation model given the learned latent variables (the latter case).

Notice that each of the aforementioned components can be separately added to VAE as a modified model. To add the explicit-dependency component, we can apply the sequential forward pass of SR-VAE with the output $x' = \hat{x}_d$. We refer to this model as SeqVAE. The decomposition-of-the-reconstruction component, as mentioned earlier, is equivalent to adding a skip connection to the original VAE between the input and output. We refer to this model as ResVAE. Using these two models, we perform an ablation study to understand the effectiveness of each individual component in Section 5.1.

Computational Complexity: SR-VAE replaces the standard forward pass of VAE with d forward passes, thus increasing the computational complexity.
However, in addition to the improved state-of-the-art performance, it eliminates the hyperparameter tuning associated with prior works. As mentioned earlier, hyperparameter tuning was shown to be critical for state-of-the-art performance. It is a difficult and time-consuming process, especially for unlabeled data, due to: 1) the large hyperparameter search space of continuous values; 2) the lack of an evaluation metric. As a result, we believe that the increased computational complexity of SR-VAE is reasonable. Moreover, we will show that each of the d forward passes in SR-VAE corresponds to a disentangled generative factor. Visualization of these intermediate steps provides a new way to understand the result.

4 RELATED WORK
Connection to Other VAE-based Approaches: We highlight the similarities and advantages of SR-VAE over the VAE-based approaches introduced in Section 2. The sequential update of latent variables in SR-VAE is similar to the idea of gradually increasing the KL divergence in Burgess et al. (2017). Instead of introducing an augmented objective, SR-VAE directly achieves this by learning one latent variable at a time. When compared to the work in Kumar et al. (2017), SR-VAE encourages the independence among the latent variables by defining an explicit latent variable dependency rather than emphasizing individual statistics (the covariance between the latent representations in Kumar et al. (2017)). Finally, the explicit latent variable dependency defined by SR-VAE also encourages a factorial latent representation, serving the same purpose as lowering the TC term in Kim & Mnih (2018); Chen et al. (2018). It is worth noticing that a low TC term is necessary but not sufficient for disentangled representation.

Connection to Residual Deep Neural Networks: ResNet He et al. (2016) introduces the idea of learning from residuals by adding skip connections between layers such that the input can propagate through layers. The key idea of ResNets is to replace learning the direct mapping between input and output, $H(x) = x \rightarrow y$, with learning a residual formulation, $H(x) = F(x) + x \rightarrow y$, where $F(x)$ represents stacked non-linear layers. This formulation reduces the loss of important information while propagating through the network. In addition, it was suggested that learning the residual mapping is easier compared to learning the direct mapping He et al. (2016). The proposed SR-VAE shares a similar skip connection structure with ResNets. Here $F(x)$ represents the VAE at step i in SR-VAE. As in ResNets, $F(x)$ can learn useful abstractions of data while the skip connection $\delta_i$ allows for circumventing difficulties in reconstructing the input data.

Connection to Deep Recurrent Attentive Writer (DRAW): DRAW Gregor et al. (2015) uses a sequential variational auto-encoding framework to achieve iterative construction of an image. DRAW deploys a recurrent neural network with an attention mechanism to dynamically determine where and what to generate. The attention mechanism serves a similar purpose to the skip connection in the "Residual learning" mechanism. Moreover, the idea of successively adding the decoder output for image generation in DRAW is similar to the reconstruction decomposition in SR-VAE. One main difference between the two approaches is that DRAW relies on the recurrent network framework to model the iterative image generation in the image space, whereas SR-VAE uses the latent dependency to emphasize iterative generation in the latent space.
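To make the ResNet analogy above concrete, the residual formulation H(x) = F(x) + x corresponds to the following minimal PyTorch block; this is our own illustrative sketch, not code from either paper:

import torch.nn as nn

class ResidualBlock(nn.Module):
    """H(x) = F(x) + x: the block only has to learn the residual F."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.f(x) + x  # skip connection carries x through unchanged

In SR-VAE, the analogous skip is the residual term δ_d added to the decoder output in x' = x̂_d + δ_d.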
5 EXPERIMENTS
We compare SR-VAE with β-VAE and FactorVAE on four different datasets, both quantitatively and qualitatively. The datasets used in this study include: 2D Shape Higgins et al. (2017), Teapots Eastwood & Williams (2018), CelebA Liu et al. (2014), and Chairs Aubry et al. (2014). Appendix A introduces the details of these datasets. For all datasets, we use visualization for qualitative evaluation by observing the changes in the reconstruction while altering only one latent dimension, known as the traversal of the latent variable. A good disentangled representation reveals interpretable changes in the reconstruction image corresponding to one generative factor. Moreover, the 2D Shape and Teapots datasets contain the ground truth generative factors that are used to synthesize the data, which allows us to conduct quantitative evaluations. To compare with previous studies, we use the metric proposed in Kim & Mnih (2018) (noted as the FactorVAE metric) for 2D Shape, and the metric proposed in Eastwood & Williams (2018) (noted as the Disentanglement-Informativeness-Completeness metric) for Teapots. These two metrics were found to cover similar notions to other disentanglement metrics in Locatello et al. (2018). We implemented our approach using PyTorch Paszke et al. (2017), with the experiments run on several machines, each with 4 GTX1080 Ti GPUs. See Appendix C for details on the model architecture.

5.1 QUANTITATIVE EVALUATION
Metrics: The FactorVAE metric in Kim & Mnih (2018) is calculated as follows: 1) select a latent factor k; 2) generate new data y with factor k fixed and other factors varying randomly; 3) calculate the mean of q(z|x); 4) normalize each dimension by its empirical standard deviation over all the data or a large enough subset; 5) build a majority-vote classifier with the input being the index of the dimension with the lowest variance and the output being factor k. The classifier accuracy is used as the evaluation metric. Eastwood & Williams (2018) define three criteria of disentangled representation, namely disentanglement, completeness and informativeness. Disentanglement is the degree to which the learned representation disentangles the underlying generative factors; completeness is the degree to which the generative factors are captured by one latent representation; and finally, informativeness is the amount of information about the generative factors that is captured by the latent representation. Disentanglement and completeness can be perceived by visualizing the rows and columns of the Hinton diagram; informativeness is calculated based on the mapping error between the learned latent representation and the ground truth factors.

Comparison to β-VAE and FactorVAE: Similar to previous studies, we set d = 10 for all datasets except for CelebA, where d = 32 due to its complexity. We use the optimal parameter settings from the original studies in Higgins et al. (2017); Eastwood & Williams (2018) for β-VAE and FactorVAE. Fig. 2(a) and 2(b) show that SR-VAE outperforms β-VAE and FactorVAE in terms of both the reconstruction error and the disentanglement metric in Kim & Mnih (2018) on 2D Shape. The best mean disentanglement measurement of SR-VAE is around 0.86, significantly higher than β-VAE at 0.72 and FactorVAE at 0.81. For reconstruction error, β-VAE and FactorVAE converge to similar results while SR-VAE achieves better performance.
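To make the five-step FactorVAE metric procedure above concrete, the sketch below outlines one way to compute it. Everything here is an assumption for illustration: sample_fixed_factor(k) is a hypothetical helper returning observations generated with ground-truth factor k fixed (and all factors random when k is None), encode returns posterior means, and the vote count and number of factors are placeholders.

import numpy as np

def factor_vae_metric(sample_fixed_factor, encode, d=10,
                      num_factors=5, num_votes=800):
    # Step 4: empirical std of each latent dimension over a large sample.
    global_std = encode(sample_fixed_factor(None)).std(axis=0) + 1e-8
    table = np.zeros((d, num_factors), dtype=int)  # votes[dim, factor]
    for _ in range(num_votes):
        k = np.random.randint(num_factors)          # step 1: pick a factor
        z = encode(sample_fixed_factor(k))          # steps 2-3: encode the batch
        dim = int((z / global_std).var(axis=0).argmin())  # lowest-variance dim
        table[dim, k] += 1                          # step 5: one training vote
    # Majority-vote classifier accuracy: each dimension predicts its modal factor.
    return table.max(axis=1).sum() / table.sum()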
In Fig. 2(c)-(e), we compare SR-VAE with β-VAE and FactorVAE using the metric proposed in Eastwood & Williams (2018) on Teapots. The three criteria used in this metric are disentanglement, completeness and informativeness. Note that the informativeness term in this metric is highly dependent on the regressor type. In Eastwood & Williams (2018), Lasso and Random Forest regressors are used, which resulted in a different ordering of methods in the informativeness score. Random Forest is used in our experiments to be comparable with the original paper. The results show SR-VAE achieves higher disentanglement and completeness scores compared to β-VAE and FactorVAE, while FactorVAE achieves the best informativeness. However, we believe the informativeness could be different if a more complex regressor were used.

Figure 2: Quantitative evaluations. Similar to previous studies, all results are reported on the best 10 of 30 runs with random seeds except for (j) and (k). The line and shaded area correspond to the mean and confidence intervals. (a) and (b): the metric in Kim & Mnih (2018) and the reconstruction error for 2D Shape; (c), (d) and (e): the three metrics in Eastwood & Williams (2018) for Teapots; (f) and (g): ablation study comparing SeqVAE, ResVAE and SR-VAE using the FactorVAE metric and reconstruction error on 2D Shape; (h) and (i): comparison between SR-VAE and SR-β-VAE, using the FactorVAE metric and reconstruction error on 2D Shape; (j) and (k): sensitivity-to-initialization study that compares the worst 10 of 30 runs with random seeds using the FactorVAE metric and reconstruction error on 2D Shape.

Ablation Study: To investigate the individual effects of the two main components in SR-VAE as discussed in Section 3, we compare the performance among SeqVAE, ResVAE and SR-VAE on the 2D Shape dataset. As before, the top 10 of 30 runs with random initialization of all models are reported in Fig. 2(f) and 2(g). The results show that both ResVAE and SeqVAE perform worse than SR-VAE in terms of the FactorVAE metric. When compared with β-VAE and FactorVAE, ResVAE performs similarly to FactorVAE while SeqVAE performs similarly to β-VAE. One interesting result we noticed is that the reconstruction error from ResVAE is similar to, if not better than, SR-VAE's. These results verify our analysis in Section 3 that the decomposition of reconstruction relaxes the reconstruction constraint of the network, and adding the explicit dependency in the latent space improves disentangled representation learning. While both components are important for the superior performance of SR-VAE, relaxing the reconstruction constraint on the network with the skip connection is more important, as it directly addresses the bottleneck of VAE.

SR-VAE with β-VAE objective: We also examined whether using the β-VAE objective in Eq. 2 with the "Residual learning" mechanism would improve the performance, referred to as SR-β-VAE. If so, the proposed "Residual learning" mechanism would benefit from the augmented objective to achieve better performance. Figures 2(h) and 2(i) show that the best disentanglement score is obtained by SR-VAE, and higher β values do not help to improve the performance. These results further verify the effectiveness of SR-VAE in solving the trade-off between disentanglement and reconstruction in VAE-based approaches.

Sensitivity to initialization: The study in Locatello et al. (2018) showed that existing approaches are sensitive to initialization in addition to the hyperparameter tuning.
One advantage of SR-VAE is that it reduces the solution space and improves training stability. To verify this, we compare the worst 10 runs out of 30 for 2D Shape in Fig. 2(j) and Fig. 2(k). We consistently observe better performance and smaller variances with SR-VAE, suggesting its robustness against random initialization.

Figure 3: (a)-(c) Latent traversals with the same input image across each latent dimension with d = 10 for the 2D Shape dataset, using SR-VAE, β-VAE and FactorVAE respectively; (d) decomposition of decoder output and skip connection at each step in SR-VAE during the latent traversal towards the last column of (a).

5.2 QUALITATIVE EVALUATION
Figures 3(a)-(c) show the latent traversals of SR-VAE, β-VAE and FactorVAE for a fixed input image of 2D Shape. $z_i$ values are chosen from the range of -3 to 3 as shown in the figure. We see that while all three models are capable of finding the data generative factors – X-position, Y-position, Shape, Rotation and Scale – β-VAE and FactorVAE struggle to disentangle these factors completely. As mentioned in Kim & Mnih (2018), shape is a discrete variation factor in 2D Shape. Ideally, this factor should be modeled with a discrete rather than a Gaussian latent variable. Despite this mismatched assumption, SR-VAE still captures the shape in the fifth latent variable. However, it also mixes the size with the shape between oval and square in the third latent variable. We also experimented with the latent traversal on the Teapots dataset and observe superior performance, as shown in Appendix D.

For datasets without ground truth generative factors, such as CelebA and Chairs, inspecting latent traversals is the only evaluation method. As before, we used the optimal parameter settings for β-VAE and FactorVAE from the original studies in Higgins et al. (2017); Eastwood & Williams (2018). As seen in Figure 4 for the CelebA dataset, SR-VAE is able to learn interpretable factors of variation such as background, face and hair characteristics, skin color, etc. Compared to β-VAE and FactorVAE, we observe some common factors as well as some unique ones. Note that only the most obvious factors are presented in this figure. Moreover, the interpretation of each latent dimension is based on our best judgment. We also observe better reconstruction quality with more details using the SR-VAE method. Reconstruction losses also confirm this observation, with converged values of 300, 252 and 158 for β-VAE, FactorVAE and SR-VAE, respectively. Admittedly, careful tuning of parameters in β-VAE and FactorVAE could potentially reveal more latent variables. However, finding the optimal value is a difficult task, especially when there is no prior information about the generative factors of the data and a quantitative metric is not available.

Visualizing the "Residual learning" Mechanism: To gain a better understanding of the internal process of the "Residual learning" mechanism, we show the decoder output (the residual mapping) of each internal step ($\hat{x}_1, \ldots, \hat{x}_{10}$) and their skip connections ($\delta_1, \ldots, \delta_{10}$) for 2D Shape in Fig. 3(d). Each row presents the internal steps when setting a different latent variable ($z_1$ to $z_5$) to value 3. The final outputs of this process correspond to the last column of Fig. 3(a). In this figure, we observe the step-by-step transition to the final transformed image during the latent traversal.
The result shows that8Under review as a conference paper at ICLR 2020Figure 4: Left: Latent traversals across each latent dimension with d= 64 for CelebA using SR-V AE, FactorV AEand-V AE, respectively. Right: Decomposition of decoder output and skip connection at each step in SR-V AEduring latent traversal towards the first column of the corresponding row.the two terms are working together to capture the learned disentangled factor at each step. Based onFig. 3(a), we know the learned factors in each step are: X-position, Y-position, Size, Rotation+shape ,andShape , respectively. In Fig. 3(d), we observe that X-position of the reconstructed image aregenerated during the first step. In step two, both X-position and Y-position are generated. Thisprocess continues and at each step the decoder output and the residual transform the image accordingto the learned latent encoding.Similarly, we show the step-by-step visualization for CelebA dataset along with its latent traversalresult in Fig 4. We highlight a few factors due to the space limit. Although CelebA presents challengein the complexity of real-world image, we observe similar results as the 2D Shape dataset. Thestep-by-step visualization shows how the latent factors are related to the transformed face imageduring the latent traversal. For example, the gender factor can be identified as the fifth latent factoras we observe major changes in the eyes and hair style from step five. Another example is thebackground contrast factor where major changes can be observed in step eight. These step-by-stepvisualizations provide an alternative way to understand and interpret the learned disentangled factorsand can be interesting for data generation tasks.6 C ONCLUSIONSIn this work, we propose SR-V AE for disentangled representation learning in an unsupervised setting.The proposed solution defines the “Residual learning” mechanism in the training regime, insteadof augmented objective, to solve the trade-off between disentanglement and reconstruction of theV AE-based approaches. SR-V AE defines explicit dependency structure between latent variables anddecomposes the reconstruction via skip connection. We showed that SR-V AE achieves state-of-the-art results compared to previous approaches including -V AE and FactorV AE. Moreover, SR-V AEcan be directly applied to any V AE architecture without an additional hyperparameter tuning. Thestep-by-step process of the SR-V AE provides novel ways to visualize the results and understand theinternal process of learning disentangled factors. We believe this can open a new direction for futureresearch towards disentangled representation learning.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text The authors of this paper present a novel that for unsupervised disentangled representation learning. The model, named sequential residual VAE (SR-VAE), gradually activates individual latent variables to reconstruct residuals. Quantitative and qualitative experiments show that the proposed model outperforms beta-VAE and Factor-VAE. Since the training involves a sequence of model training, SR-VAE certainly consumes more time than other VAEs. Minors: citations in the main text should be put in brackets. ### Review Rating 8: Accept ### Review Confidence <|im_end|> <|im_end|>
6VPl9khIMz
ICLR.cc/2021/Conference
2021
Adaptive Stacked Graph Filter
["Hoang NT", "Takanori Maehara", "Tsuyoshi Murata"]
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
["Graph Convolutional Network", "vertex classification", "graph signal processing", "adaptive graph filter"]
ABSTRACT
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.

1 INTRODUCTION
The semi-supervised vertex classification problem (Weston et al., 2012; Yang et al., 2016) in attributed graphs has become one of the most fundamental machine learning problems in recent years. This problem is often associated with its most popular recent solution, namely Graph Convolutional Networks (Kipf & Welling, 2017). Since the GCN proposal, there has been a vast amount of research to improve its scalability (Hamilton et al., 2017; Chen et al., 2018; Wu et al., 2019) as well as performance (Liao et al., 2019; Li et al., 2019; Pei et al., 2020).

Existing vertex classification models often (implicitly) assume that the graph has large vertex homophily (Pei et al., 2020), or equivalently, the low-frequency property (Li et al., 2019; Wu et al., 2019); see Section 2.1 for graph frequency. However, this assumption is not true in general. For instance, let us take the Wisconsin dataset (Table 1), which captures a network of students, faculty, staff, courses, and projects. These categories naturally exhibit different frequency patterns[1]. Connections between people are often low-frequency, while connections between topics and projects are often midrange. This problem becomes apparent as GCN-like models show low accuracies on this dataset; for example, see (Pei et al., 2020; Chen et al., 2020b; Liu et al., 2020).

This paper aims at establishing a GCN model for the vertex classification problem (Definition 1) that does not rely on any frequency assumption. Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure.

Contributions. By observing the relation between label frequency and the performance of existing GCN-like models, we propose to learn the graph filter coefficients directly rather than learning the MLP part of a GCN-like layer. We use filter stacking to implement a trainable graph filter, which is capable of learning any filter function. Our stacked filter construction with novel learnable filter parameters is easy to implement, sufficiently expressive, and less sensitive to the filters' degree. By using only one hyper-parameter setting, we show that our model is more adaptive than existing work on a wide range of benchmark datasets.

The rest of our paper is organized as follows. Section 2 introduces notations and analytical tools. Section 3 provides insights into the vertex classification problem and motivations for our model's design. Section 4 presents an implementation of our model. Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models. Section 6 compares our model and other existing methods empirically.
We also provide additional experimental results in Appendix A.

[1] "Frequency" is an equivalent concept to "homophily" and will be explained in Section 2.1.

2 PRELIMINARIES
We consider a simple undirected graph $G = (V, E)$, where $V = \{1, \ldots, n\}$ is a set of $n$ vertices and $E \subseteq V \times V$ is a set of edges. A graph $G$ is called an attributed graph, denoted by $G(X)$, when it is associated with a vertex feature mapping $X: V \mapsto \mathbb{R}^d$, where $d$ is the dimension of the features. We define the following vertex classification problem, also known in the literature as the semi-supervised vertex classification problem (Yang et al., 2016).

Definition 1 (Vertex Classification Problem). We are given an attributed graph $G(X)$, a set of training vertices $V_{tr} \subset V$, training labels $Y_{tr}: V_{tr} \to \mathcal{C}$, and a label set $\mathcal{C}$. The task is to find a model $h: V \to \mathcal{C}$ using the training data $(V_{tr}, Y_{tr})$ that approximates the true labeling function $Y: V \to \mathcal{C}$.

Let $A$ be the adjacency matrix of the graph $G$, i.e., $A_{i,j} = 1$ if $(i,j) \in E$ and $0$ otherwise. Let $d_i = \sum_j A_{ij}$ be the degree of vertex $i \in V$, and let $D = \mathrm{diag}(d_1, \ldots, d_n)$ be the $n \times n$ diagonal matrix of degrees. Let $L = D - A$ be the combinatorial graph Laplacian. Let $\mathcal{L} = D^{-1/2} L D^{-1/2}$ be the symmetric normalized graph Laplacian. We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties: (1) its eigenvalues range from 0 to 2; and (2) the spectral properties can be compared between different graphs (Chung & Graham, 1997). In recent literature, the normalized adjacency matrix with added self-loops, $\tilde{A} = I - \mathcal{L} + c$, is often used as the propagation matrix, where $c$ is some diagonal matrix.

2.1 GRAPH FREQUENCY
Graph signal processing (Shuman et al., 2012) extends "frequency" concepts in classical signal processing to graphs using the graph Laplacian. Let $\mathcal{L} = U \Lambda U^\top$ be the eigendecomposition of the Laplacian, where $U \in \mathbb{R}^{n \times n}$ is the orthogonal matrix consisting of the orthonormal eigenvectors of $\mathcal{L}$ and $\Lambda$ is the diagonal matrix of eigenvalues. Then, we can regard each eigenvector $u_k$ as an "oscillation pattern" and its eigenvalue $\lambda_k$ as the "frequency" of the oscillation. This intuition is supported by the Rayleigh quotient as follows:
$$r(\mathcal{L}, x) \triangleq \frac{x^\top \mathcal{L} x}{x^\top x} = \frac{\sum_{u \sim v} -\mathcal{L}_{u,v}\,(x(u) - x(v))^2}{\sum_{u \in V} x(u)^2}, \quad (1)$$
where $\sum_{u \sim v}$ sums over all unordered pairs for which $u$ and $v$ are adjacent, $x(u)$ denotes the entry of vector $x$ corresponding to vertex $u$, and $\mathcal{L}_{u,v}$ is the $(u,v)$-entry of $\mathcal{L}$. From the definition we see that $r(x)$ is non-negative and $\mathcal{L}$ is positive semi-definite. $r(x)$ is also known as a variational characterization of the eigenvalues of $\mathcal{L}$ (Horn & Johnson, 2012, Chapter 4); hence $0 \le r(x) \le 2$ for any non-zero real vector $x$. We use the notation $r(x)$ to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context. The Rayleigh quotient $r(x)$ measures how the data $x$ is oscillating. Hence, in this study, we use the terms "frequency" and "Rayleigh quotient" interchangeably. By the definition, the eigenvector $u_i$ has the frequency of $\lambda_i$.

The labeling $y$ of the vertices is low-frequency if adjacent vertices are more likely to have the same label. This is a common assumption made by spectral clustering algorithms (Shi & Malik, 2000; Ng et al., 2002; Shaham et al., 2018). The commonly used terms homophily and heterophily from network science correspond to low-frequency and high-frequency, respectively.
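As a sketch of how the quantity in equation 1 can be computed, the following NumPy function evaluates the Rayleigh quotient of a signal x on a graph given by a dense adjacency matrix A. This is our own illustration; for instance, r(Y) in Table 1 can be read as this quantity evaluated per label indicator vector.

import numpy as np

def rayleigh_quotient(A, x):
    # Normalized Laplacian L_sym = I - D^{-1/2} A D^{-1/2}.
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    L_sym = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # r(L, x) = x^T L x / x^T x, a frequency measure in [0, 2] (Eq. 1).
    return float(x @ L_sym @ x) / float(x @ x)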
2.2 GRAPH FILTERING
In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response $f(\lambda)$ of the filter, and then apply the filter to the signal in the sense that each frequency component $\hat{x}(\lambda)$ of the data is modulated as $f(\lambda)\hat{x}(\lambda)$. Graph signal processing extends this concept as follows. Same as in classical signal processing, we design a filter $f(\lambda)$. Then, we represent a given graph signal $x \in \mathbb{R}^{|V|}$ as a linear combination of the eigenvectors, $x = \sum_i \tilde{x}_i u_i$. Then, we modulate each frequency component by $f(\lambda)$ as $x = \sum_i f(\lambda_i)\,\tilde{x}_i u_i$. An important fact is that this can be done without performing the eigendecomposition explicitly. Let $f(\mathcal{L})$ be the matrix function induced from $f(\lambda)$. Then, the filter is represented by $f(\mathcal{L})x$.

As an extension of signal processing, graph signal processing deals with signals defined on graphs. In Definition 1, each column of the feature matrix $X \in \mathbb{R}^{n \times d}$ is a "graph signal". Let $\mathcal{L} = U \Lambda U^\top$ be the eigendecomposition where $U \in \mathbb{R}^{n \times n}$ consists of orthonormal eigenvectors. Signal $X$ is filtered by a function $f$ of the eigenvalues as follows:
$$\bar{X} = U f(\Lambda) U^\top X = f(\mathcal{L}) X. \quad (2)$$
In general, different implementations of $f(\mathcal{L})$ lead to different graph convolution models. For instance, GCN and SGC (Wu et al., 2019) are implemented by $f(\mathcal{L}) = (I - (D+I)^{-1/2} L (D+I)^{-1/2})^k$, where the constant term stems from the fact that self-loops are added to vertices and $k$ is the filter order. Generally, the underlying principle is to learn or construct the appropriate filter function $f$ such that it transforms $X$ into a more expressive representation. The filter in GCN is called a low-pass filter because it amplifies low-frequency components (Li et al., 2018; NT & Maehara, 2019).

3 SPECTRAL PROPERTIES OF FILTERS
Towards building a ubiquitous solution, we take an intermediate step to study the vertex classification problem. Similar to the unsupervised clustering problem, an (implicit) low-frequency assumption is commonly made. However, the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns. Table 1 shows three groups of datasets, each with a different label frequency range. Notably, the WebKB datasets (Wisconsin, Cornell, Texas) have mixed label frequencies; some labels have low frequencies while others have midrange frequencies. Therefore, in order to relax the frequency assumptions, we need to learn the filtering function $f(\lambda)$ in a similar way as proposed by Defferrard et al. (2016).

The filtering function $f(\lambda)$ is often approximated using a polynomial of the graph Laplacian as
$$f(\mathcal{L}) \approx \mathrm{poly}(\mathcal{L}) = \sum_{i=0}^{K} \theta_i \mathcal{L}^i. \quad (3)$$
Because polynomials can uniformly approximate any real continuous function on a compact interval (see, e.g., (Brosowski & Deutsch, 1981)), such an approximation scheme is well-justified.

Kipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter $g_\theta$ by Chebyshev polynomials $T_k$ as
$$g_\theta \star x \approx \sum_{k=0}^{K} \theta_k\, T_k(-D^{-1/2} A D^{-1/2})\, x. \quad (4)$$
Then, they took the first two terms and shared the parameters as $\theta = \theta_0 = -\theta_1$ to obtain their equation 7:
$$g_\theta \star x \approx \theta\,(I_N + D^{-1/2} A D^{-1/2})\, x = \theta\,(2 I_N - \mathcal{L})\, x. \quad (5)$$
Finally, they extended the scalar $\theta$ to a matrix $\Theta$ to accommodate multiple feature dimensions as
$$Z = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta. \quad (6)$$
Kipf & Welling (2017) claimed that the weight matrix $\Theta$ can learn different filters, and subsequent works (e.g., (Veličković et al., 2018; Spinelli et al., 2020; Chen et al., 2020b)) also learned filters by $\Theta$. However, neither in theory nor in practice is this the case (Oono & Suzuki, 2020). As the construction suggests, a GCN layer only represents a filter of the form $f(\lambda) \approx \theta(2 - \lambda)$. To properly learn different graph filters, we should learn the multiplying parameters $\theta_0, \theta_1, \ldots, \theta_K$ in equation 3. In the next section, we propose a learning model which directly learns these multiplying parameters.
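As an illustration of equation 3, a polynomial filter can be applied without any eigendecomposition by accumulating matrix products; the sketch below is our own and assumes a (sparse or dense) normalized Laplacian L:

import numpy as np

def apply_poly_filter(L, X, theta):
    # f(L) X = sum_i theta[i] * L^i X, computed iteratively (Eq. 3).
    out = theta[0] * X
    power = X
    for t in theta[1:]:
        power = L @ power          # next Laplacian power applied to X
        out = out + t * power
    return out

Learning the coefficient vector theta directly is exactly what the model in the next section does, via its stacked parameterization.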
4 MODEL DESCRIPTION
The previous discussion provided several insights: (1) a vertex classification model's frequency is decided by its filter; (2) a mechanism to match the frequencies of the data is necessary; and (3) directly learning the polynomial filter's coefficients is more desirable if we do not want to make any frequency assumption. Based on these observations, we implemented an adaptive Stacked Graph Filter (SGF) model. Figure 1 visually describes SGF.

Figure 1: Block description of SGF. $\tilde{A} = \mathcal{L}$ means we can plug either the augmented normalized adjacency matrix or the symmetric normalized Laplacian into this model. In each filter layer, the scalar $\alpha_\ell$ controls the filter's tangent and the scalar $\beta_\ell$ controls the filter's vertical translation.

Design decisions. The novelty of our model is the stacked filter, and we directly learn the filtering function by the filter coefficients $\alpha$ and $\beta$, which makes SGF work well universally without frequency hyper-parameters. The deep filter module consists of filters stacked on top of each other with skip-connections to implement the ideas in Proposition 2. Each filter layer has two learnable scalars, $\alpha_\ell$ and $\beta_\ell$, which control the shape of the linear filter (Figure 1). Two learnable linear layers $W_{in}$ and $W_{out}$ with a non-linear activation serve as a non-linear classifier (NT & Maehara, 2019).

The input part of our architecture resembles APPNP (Klicpera et al., 2019) in the sense that the input signals (vertex features) are passed through a learned weight, then fed into filtering. The output part of our architecture resembles SGC (Wu et al., 2019), where we learn the vertex labels with filtered signals. This combination naturally takes advantage of both the bottom-up (APPNP) and top-down (SGC) approaches. Compared to APPNP and SGC, besides the difference in filter learning, our model performs filtering (propagation) on the latent representation and classifies the filtered representation, whereas APPNP propagates the predicted features and SGC classifies the filtered features.

From the spectral filtering viewpoint, our approach is most similar to ChebyNet (Defferrard et al., 2016), since both models aim to learn the filtering polynomial via its coefficients. The Chebyshev polynomial basis is often used in signal processing because it provides optimal interpolation points (Cheney, 1966; Hammond et al., 2011). However, since we are learning the coefficients of an unknown polynomial filter, all polynomial bases are equivalent. To demonstrate this point, we implement the Stacked Filter module (Figure 1) using ChebNet's recursive formula in Section 6. We find that the Chebyshev polynomial basis approach has similar performance to the stacked approach, with one slight caveat on choosing $\lambda_{max}$. We empirically show this problem by setting the scaling factor $\lambda_{max} = 1.5$.
Note that, as pointed out by Kipf & Welling (2017), such a problem can be mitigated simply by assuming $\lambda_{max} = 2$ so that all eigenvalues stay in $[-1, 1]$.

Given an instance of Problem 1, let $\sigma$ be an activation function (e.g., ReLU), $\tilde{A} = I - (D+I)^{-1/2} L (D+I)^{-1/2}$ be the augmented adjacency matrix, and $\alpha_\ell$ and $\beta_\ell$ be the filter parameters at layer $\ell$. A K-layer SGF is given by:

SGF (input $\tilde{A}$):  $H_0 = \sigma(X W_{in})$;  $H_\ell = \alpha_\ell \tilde{A} H_{\ell-1} + \beta_\ell H_0$ for $\ell = 1 \ldots K$;  $\hat{y} = H_K W_{out}$.
SGF (input $\mathcal{L}$):  $H_0 = \sigma(X W_{in})$;  $H_\ell = \alpha_\ell \mathcal{L} H_{\ell-1} + \beta_\ell H_0$ for $\ell = 1 \ldots K$;  $\hat{y} = H_K W_{out}$.

SGF can be trained with conventional objectives (e.g., negative log-likelihood) to obtain a solution to Problem 1. We present our model using the augmented adjacency matrix to show its similarity to existing literature. However, as noted in Figure 1, we can replace $\tilde{A}$ with $\mathcal{L}$.

The stacked filter is easy to implement. Moreover, it can learn any polynomial of order K as follows. The closed form of the stacked filter (Figure 1) is given by
$$\beta_K I + \sum_{i=1}^{K} \Big(\prod_{j=i}^{K} \alpha_j\Big)\, \beta_{i-1}\, \mathcal{L}^{K-i+1}, \quad (7)$$
where $\beta_0 = 1$. Because each term of equation 7 contains a unique parameter, we obtain the following.

Proposition 2. Any polynomial $\mathrm{poly}(\mathcal{L})$ of order K can be represented by the form of equation 7.

Note that the same result holds if we replace $\mathcal{L}$ in equation 7 by $\tilde{A}$. In practice, we typically set the initial values of $\alpha_i = 0.5$ and update them via back-propagation. The learned $\alpha_i$ are then likely to satisfy $|\alpha_i| < 1$, which yields a further property of the stacked filter: it prefers a low-degree filter, because the coefficients of the higher-order terms are higher-order in $\alpha_i$, and so vanish exponentially faster. This advantage is relevant when we compare with a trivial implementation of the polynomial filter that learns $\theta_i$ directly (this approach corresponds to horizontal stacking and ChebyNet (Defferrard et al., 2016)). In Appendix A.1, we compare these two implementations and confirm that the stacked filter is more robust in terms of filter degree than the trivial implementation.
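A minimal PyTorch sketch of the K-layer SGF above could look as follows. This is an illustration under our own assumptions: dropout, weight initialization details, and the training specifics of Section 6 are omitted, and prop stands for either the augmented adjacency matrix or the normalized Laplacian as a sparse tensor.

import torch
import torch.nn as nn

class SGF(nn.Module):
    def __init__(self, in_dim, hidden, num_classes, K=16):
        super().__init__()
        self.w_in = nn.Linear(in_dim, hidden)
        self.w_out = nn.Linear(hidden, num_classes)
        # One (alpha, beta) pair per filter layer, initialized at 0.5.
        self.alpha = nn.Parameter(torch.full((K,), 0.5))
        self.beta = nn.Parameter(torch.full((K,), 0.5))

    def forward(self, x, prop):
        h0 = torch.relu(self.w_in(x))            # H_0 = sigma(X W_in)
        h = h0
        for a, b in zip(self.alpha, self.beta):  # H_l = a*prop@H_{l-1} + b*H_0
            h = a * torch.sparse.mm(prop, h) + b * h0
        return self.w_out(h)                     # y_hat = H_K W_out

Because each layer contributes only two scalars, unrolling the recurrence reproduces the closed form of equation 7 term by term.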
5 RELATED WORK
GCN-like models cover a subset of an increasingly large literature on graph-structured data learning with graph neural networks (Gori et al., 2005; Scarselli et al., 2008). In general, vertex classification and graph classification are the two main benchmark problems. The principles for representation learning behind modern graph learning models can also be split into two views: graph propagation/diffusion and graph signal filtering. In this section, we briefly summarize recent advances in the vertex classification problem with a focus on propagation and filtering methods. For a more comprehensive view, readers can refer to the review articles by Wu et al. (2020) and Grohe (2020), and also recent workshops on graph representation learning.[2]

Feature Propagation. Feature propagation/message-passing and graph signal filtering are two equivalent views on graph representation learning (Defferrard et al., 2016; Kipf & Welling, 2017). From the viewpoint of feature propagation (Scarselli et al., 2008; Gilmer et al., 2017), researchers focus on novel ways to propagate and aggregate vertex features to their neighbors. Klicpera et al. (2019) proposed the PPNP and APPNP models, which propagate the hidden representation of vertices. More importantly, they pioneered the decoupling of the graph part (propagation) and the classifier part (prediction). Abu-El-Haija et al. (2019) also proposed to use skip-connections to distinguish between 1-hop and 2-hop neighbors. Zeng et al. (2020) later proposed GraphSAINT to aggregate features from random subgraphs to further improve their model's expressivity. Pei et al. (2020) proposed a more involved geometric aggregation scheme named Geom-GCN to address weaknesses of GCN-like models. Most notably, they discussed the relation between network homophily and GCN's performance, which is similar to the label frequency r(Y) in Table 1. Spinelli et al. (2020) introduced an adaptive model named AP-GCN, in which each vertex can learn the number of "hops" to propagate its feature via a trainable halting probability. Similar to our discussion in Section 3, they still use a fully-connected layer to implement the halting criterion, which controls feature propagation. AP-GCN's architecture resembles a horizontal stacking of graph filters where the coefficients are learned directly; however, their construction only allows for binary coefficients.[3] We later show that full horizontal stacking models (more expressive than AP-GCN) are less stable in terms of polynomial order than our approach (Appendix A.1). More recently, Liu et al. (2020) continued to address the difficulty of low-homophily datasets and proposed a non-local aggregation based on 1D convolution and the attention mechanism, which has a "reconnecting" effect that increases homophily.

Graph Filtering. GCN-like models can also be viewed as graph signal filters where the vertex feature vectors are signals and the graph structure defines the graph Fourier bases (Shuman et al., 2012; Defferrard et al., 2016; Li et al., 2018; Wu et al., 2019). This graph signal processing view addresses label efficiency (Li et al., 2019) and provides an analogue for understanding graph signal processing using traditional signal processing techniques. For example, the Lanczos algorithm is applied to learning graph filters by Liao et al. (2019). Bianchi et al. (2019) apply the ARMA filter to graph neural networks. Similar to (Klicpera et al., 2019), Wu et al. (2019) and NT & Maehara (2019) also follow the decoupling principle, but in a reversed way (filter-then-classify). Chen et al. (2020b) built a deep GCN named GCNII which holds the current best results for the original splits of Cora, Citeseer, and Pubmed. They further showed that their model can estimate any filter function, under the assumption that the fully-connected layers can learn the filter coefficients (Chen et al., 2020b, Proof of Theorem 2).

[2] See, e.g., https://grlplus.github.io/
[3] In the manuscript, they showed a construction using coefficients of the graph Laplacian, but the actual implementation used GCNConv (which is $I - \mathcal{L} + c$) from pytorch-geometric.

6 EXPERIMENTAL RESULTS
We conduct experiments on benchmark and synthetic data to empirically evaluate our proposed models. First, we compare our models with several existing models in terms of average classification accuracy. Our experimental results show that our single model can perform well across all frequency ranges. Second, we plot the learned filter functions of our model to show that our model can learn the frequency range from the data; such visualization is difficult in existing works as the models' filters are fixed before the training process.

6.1 DATASETS
We use three groups of datasets corresponding to three types of label frequency (low, midrange, high). The first group is low-frequency labeled data, which consists of the citation networks Cora, Citeseer, and Pubmed (Sen et al., 2008), and the co-purchase networks Amazon-Photo and Amazon-Computer (Shchur et al., 2018). The second group consists of network datasets with midrange label frequency (close to 1): Wisconsin, Cornell, Texas (Pei et al., 2020), and Chameleon (Rozemberczki et al., 2019).
The last group consists of a synthetic dataset with high label frequency (close to 2). For the Bipartite dataset, we generate a connected bipartite graph on 2,000 vertices (1,000 on each part) with an edge density of 0.025. We then use the bipartite parts as binary vertex labels. Table 1 gives an overview of these datasets; see Appendix B.3 for more detail.

Table 1: Overview of graph datasets, divided into three frequency groups

Dataset        |V|     |E|      d      |C|  r(Y)       r(X)       Type
Cora           2,708   5,278    1,433  7    0.23±0.04  0.91±0.10  Citation
Citeseer       3,327   4,676    3,703  6    0.27±0.03  0.81±0.19  Citation
Pubmed         19,717  44,327   500    3    0.55±0.02  0.87±0.07  Citation
Amz-Photo      7,487   119,043  745    8    0.25±0.04  0.82±0.04  Co-purchase
Amz-Computer   13,381  245,778  767    10   0.27±0.05  0.83±0.04  Co-purchase
Wisconsin      251     450      1,703  5    0.87±0.08  0.89±0.23  Web
Cornell        183     277      1,703  5    0.86±0.11  0.86±0.32  Web
Texas          183     279      1,703  5    0.98±0.03  0.84±0.32  Web
Chameleon      2,277   31,371   2,325  5    0.81±0.05  0.99±0.01  Wikipedia
Bipartite      2,000   50,182   50     2    2.0±0.00   1.0±0.00   Synthetic

6.2 VERTEX CLASSIFICATION
We compare our method with some of the best models in the current literature. A two-layer MLP (our model without graph filters), GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019) are used as baselines. Geom-GCN-(I,P,S) (Pei et al., 2020), JKNet+DE (Xu et al., 2018; Rong et al., 2019), and GCNII (Chen et al., 2020a) are currently among the best models. We implement the Chebyshev polynomial filter as in (Defferrard et al., 2016) and set $\lambda_{max} = 1.5$. The Literature section of Tables 2 and 3 shows the best results found in the literature, where these models are set at the recommended hyper-parameters and recommended variants for each dataset. In our experiment, we fix the graph-related hyper-parameters of each model and report the classification results. Our model contains 16 layers of stacked filters ($\tilde{A}$) and has 64 hidden dimensions. The learning rate is set at 0.01, the weight decay is 5×10^-4, and the dropout rate for linear layers is 0.7. From the intuition that the filter should discover the required frequency pattern before the linear layers, we set the learning rate of the linear layers to be one-fourth of the main learning rate. This experimental setup shows that SGF can adapt to the label frequency without setting specific hyper-parameters. In Table 2, SGF performs comparably with the current state-of-the-art. On the other hand, in Table 3, SGF is not only better than the others in our experiments but also surpasses the best results in the literature. Note that we also use the exact same SGF model across all experiments.

Table 2: Vertex classification accuracy for low-frequency datasets

Method                     Cora        Citeseer    Pubmed      Photo       Computer
Our experiments (average over 10 runs of stratified 0.6/0.2/0.2 splits):
MLP                        75.01±1.33  73.24±1.28  83.56±0.44  85.05±1.62  80.42±0.73
SGC (k=2)                  87.15±1.57  75.00±0.93  87.97±0.35  93.67±0.68  90.87±0.43
APPNP (α=0.2)              88.07±1.32  76.71±0.88  88.21±0.37  94.70±0.50  91.16±0.44
GCNII (0.5, 0.5)           86.21±1.40  76.86±1.29  89.77±0.52  92.57±0.61  88.71±0.55
SGF-Cheby (λmax=2.0)       88.42±1.60  76.85±1.01  87.74±0.37  91.26±1.76  89.71±0.55
SGF-Cheby (λmax=1.5)       30.05±0.60  21.11±0.03  41.72±2.99  26.79±1.82  36.99±0.03
SGF                        88.97±1.21  77.58±1.11  90.12±0.40  95.58±0.55  92.15±0.41
Literature (best result among their variants):
GCN                        85.77       73.68       88.13       (not avail.) (not avail.)
GAT                        86.37       74.32       87.62       (not avail.) (not avail.)
Geom-GCN                   85.27       77.99       90.05       (not avail.) (not avail.)
APPNP                      87.87       76.53       89.40       (not avail.) (not avail.)
JKNet+DE                   87.46       75.96       89.45       (not avail.) (not avail.)
GCNII                      88.49       77.13       90.30       (not avail.) (not avail.)
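The two-speed learning-rate scheme described in Section 6.2 (filter coefficients at the main rate, linear layers at one-fourth of it) can be expressed with optimizer parameter groups. The sketch below assumes the SGF module sketched in Section 4 and uses Adam, which is our assumption since the optimizer is not named here:

import torch

def make_optimizer(model, lr=0.01, weight_decay=5e-4):
    linear = list(model.w_in.parameters()) + list(model.w_out.parameters())
    return torch.optim.Adam(
        [{"params": [model.alpha, model.beta], "lr": lr},  # filter coefficients
         {"params": linear, "lr": lr / 4}],                # linear layers: 1/4 lr
        weight_decay=weight_decay)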
Table 3: Vertex classification accuracy for midrange and high frequency datasets

Method                     Wisconsin   Cornell     Texas       Chameleon   Bipartite
Our experiments (average over 10 runs of stratified 0.6/0.2/0.2 splits):
MLP                        83.72±3.40  80.13±4.59  80.30±5.55  45.63±1.88  48.34±1.67
SGC (k=2)                  56.27±6.79  53.37±5.41  51.49±6.75  26.51±2.44  48.07±1.47
APPNP (α=0.2)              71.02±5.98  74.55±4.49  66.95±6.02  54.58±1.67  50.89±1.08
GCNII (0.5, 0.5)           71.57±5.13  74.47±5.42  73.78±6.72  55.81±1.55  49.70±1.75
SGF-Cheby (λmax=2.0)       76.28±4.23  69.32±5.67  77.59±4.36  70.16±2.08  100.0±0.00
SGF-Cheby (λmax=1.5)       52.34±6.11  59.25±3.14  62.22±5.43  28.71±3.19  100.0±0.00
SGF                        87.06±4.66  82.45±6.19  80.56±5.63  58.77±1.90  100.0±0.00
Literature (best results among their variants):
GCN                        45.88       52.70       52.16       28.18       (not avail.)
GAT                        49.41       54.32       58.38       42.93       (not avail.)
Geom-GCN                   64.12       60.81       67.57       60.90       (not avail.)
APPNP                      69.02       73.51       65.41       54.30       (not avail.)
JKNet+DE                   50.59       61.08       57.30       62.08       (not avail.)
GCNII                      81.57       76.49       77.84       62.48       (not avail.)

The results in Table 3 also suggest that the ability of the state-of-the-art model GCNII to adapt is sensitive to its parameters α and λ. In our experiment, we fix these parameters to 0.5 for all datasets, while in their manuscript the recommended values are around 1.5 depending on the dataset. With the recommended hyper-parameters, GCNII can achieve an average accuracy of 81.57% on the Wisconsin data. However, its performance dropped around 3–10% with different values. This comparison highlights our model's ability to adapt to a wider range of datasets without any graph-related hyper-parameters.

The Chebyshev polynomial basis performs comparably to the stacking implementation, as we discussed in the previous sections. The value $\lambda_{max} = 1.5$ is chosen because the typical maximum eigenvalue of real-world networks is often around this value. However, in practice, one should set $\lambda_{max} = 2$, as

Figure 2: Learned filtering functions f(λ) on three datasets (Cora, Wisconsin, Bipartite) corresponding to three frequency ranges. Each row shows the learning results for each initialization. Lightened lines represent the learned filtering functions of 10 different runs. The average accuracy is shown in the top right corner of each panel.
We expect the visualization here can be used as an effective exploratory tool and a baseline method for future graph data.

6.4 ADAPTIVITY TO STRUCTURAL NOISE

Recently, Fox & Rajamanickam (2019) raised a problem regarding the structural robustness of graph neural networks for graph classification. Zügner et al. (2018) posed a similar problem related to adversarial attacks on graphs via perturbations of vertex features or graph structure in the vertex classification setting (Dai et al., 2018; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2019). Here, we evaluate the robustness of the models against structural noise, where we perturb a fraction of the edges while preserving the degree sequence [4]. This structural noise collapses the relation between the features and the graph structure; hence, it pushes the dataset toward the midrange frequency. This experimental setting shows that adaptive models like ours and GCNII are more robust to structural noise. In the worst-case scenario (90% of edges are swapped), the adaptive models are at least as good as an MLP on vertex features. Figure 3 shows vertex classification results at each amount of edge perturbation, from 10% to 90%. APPNP with α = 0.2 and SGC with k = 2 have similar behavior under structural noise, since these models give more weight to filtered features. On the other hand, APPNP with α = 0.8 is much more robust to structural noise, as it depends more on the vertex features. This result suggests that adaptive models like ours and GCNII can be a good baseline for future graph adversarial attack studies (SGF's advantage here is being much simpler).

[4] https://en.wikipedia.org/wiki/Degree-preserving_randomization

[Figure 3: Vertex classification accuracy on Cora, Citeseer, and Pubmed for each amount of edge perturbation (SGF, APPNP (0.8), APPNP (0.2), SGC, and MLP). Since GCNII has performance similar to our model in this setting, we only plot the results for SGF.]

6.5 DYNAMICS OF α'S AND β'S

In addition to Section 6.3, this section studies the dynamics of α and β during training for two representative datasets: Cora (low-frequency) and Wisconsin (midrange-frequency). We record the values of α and β in SGF (Ã) every 20 training epochs and plot the results. Figure 4 shows the values of α and β in the 16 layers of SGF in top-to-bottom, then left-to-right order (reshaped to 4-by-4 blocks).

[Figure 4: Dynamics of α's and β's with fixed initialization at 0.5, for (a) Cora and (b) Wisconsin, shown every 20 epochs from initialization to epoch 300.]

For the Cora dataset, we see that the over-smoothing effect is quickly mitigated, as the α's automatically go to zero with the exception of the last three layers. Similarly, the weights for the skip-connections, the β's, quickly go to zero with the exception of a few last layers. For the Wisconsin dataset, we can see that there is almost no filtering, because all the α's go to zero quickly and there is only one active skip-connection, in the last layer. This single active skip-connection phenomenon is further confirmed by the MLP experiment (Table 3), where the MLP performed comparably to graph-based models. These results further explain our model's ability to adapt.
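For reference, the two scalar families enter through the stacked-filter recursion H_ℓ = α_ℓ Ã H_{ℓ-1} + β_ℓ H_0 from Section 4, so the α's control filtering and the β's control the skip-connections. A minimal sketch of the forward pass (our paraphrase; a dense Ã is used only for brevity):

    import torch

    def sgf_forward(A_tilde, X, w_in, w_out, alpha, beta):
        # H0 = relu(X W_in); H_l = alpha_l * (A~ H_{l-1}) + beta_l * H0;
        # predictions are H_K W_out.
        H0 = torch.relu(X @ w_in)
        H = H0
        for a, b in zip(alpha, beta):
            H = a * (A_tilde @ H) + b * H0
        return H @ w_out

With α_ℓ = 0 everywhere and a single nonzero β in the last layer, this collapses to the MLP-like behavior observed on Wisconsin.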
Additional Experiments. We provide several other experimental results in Appendix A. Section A.1 discusses the advantages of vertical stacking (SGF) versus a naïve horizontal stacking (learning the coefficients in equation 3 directly). Section A.2 discusses the difficulty of estimating the frequency range (Rayleigh quotient) of vertex labels when the training set is small. Section A.3 provides additional experiments where the α's and β's are initialized randomly; we show that our model is still adaptive even with uniform [-1, 1] initialization.

7 CONCLUSION

We show that simply learning the polynomial coefficients, rather than the linear layers, in the formulation of GCN can lead to a highly adaptive vertex classification model. Our experiments show that, using only one setting, SGF is comparable with all current state-of-the-art methods. Furthermore, SGF can also adapt to structural noise extremely well, promising a robust model in practice. Since our objective is to relax the frequency assumption, one could expect our model to perform weakly when the amount of training data is limited. Because the estimation of label frequency becomes difficult with a small number of samples (Appendix A.2), designing a learning model that is both adaptive and data-efficient is an exciting challenge. We believe an unbiased estimator (Proposition 4) with a more involved filter-learning scheme is needed to address this problem in the future.
VXgr7O_yV5p
The Review
5: Marginally below acceptance threshold
This paper proposes to stack graph filters with learnable polynomial parameters to construct a new graph neural network model. Generally, this paper is well organized and easy to read. Here are my concerns.

1. Essentially, this paper argues that the approximation of Chebyshev polynomials in GCN can only capture the low-frequency features in the spectral domain, and proposes a more general approximation scheme by stacking the graph filter in the spatial domain. However, the low-frequency property of GCN is highly related to the localized first-order approximation of graph convolutions. Without this first-order approximation, the GCN model can capture the high-frequency information in graphs, e.g., ChebyNet [2] with a large enough order K. It is better to add more discussion of, and comparisons with, this kind of GCN. Moreover, my core concern is whether, and why, the proposed polynomial approximation (in Equation 7) is superior to the previous Chebyshev approximation, from both theoretical and practical justifications. In graph signal processing, using a polynomial series to approximate the graph filter has been well studied in the literature. As pointed out by [1], the Chebyshev polynomial is a good approximator for graph filters. It is better to add more justification (e.g., numerical analysis) of the proposed approximation scheme.

2. Another concern is the experiment. Dataset splitting: It seems that this paper adopts a new splitting plan (stratified 0.6/0.2/0.2 splits) for all datasets. Meanwhile, the paper also reports the best results reported in the literature. However, I think it's improper to put them in the same table, since we can't make a fair comparison under different data splittings. Moreover, I would like to see the results of SGF on the public splits of these datasets. Hyperparameters: In Appendix B.4, the authors claim that they follow the hyperparameter recommendations in the original papers of the baselines. However, it seems that some of the given hyperparameters are not the best ones. For example, for Cora, \alpha of GCNII is set to 0.2, while in Appendix B.4, \alpha=0.5, which is inconsistent with the original paper [3]. On the other hand, in Appendix B.2, the authors adopt a random strategy to search the hyperparameters of SGF. Since the authors re-run all the experiments of the baselines on the new splits, it's better to conduct the same hyper-parameter search process for each baseline to ensure a fair comparison. Filter parameter visualization: From the model construction perspective, the only difference between SGF and GCNII/APPNP is the trainable filter parameters. Therefore, I'm curious about the values of \alpha and \beta after training. Could you visualize the values of the two parameters in each layer of SGF?

Overall, I think this paper is marginally below the acceptance threshold.

[1] David K. Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
[2] Defferrard, Michaël, Xavier Bresson, and Pierre Vandergheynst. "Convolutional neural networks on graphs with fast localized spectral filtering." Advances in Neural Information Processing Systems. 2016.
[3] Chen, M., Wei, Z., Huang, Z., Ding, B., & Li, Y. (2020). Simple and deep graph convolutional networks. arXiv preprint arXiv:2007.02133.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HklldOe6sV
ICML.cc/2019/Workshop/RL4RealLife
2019
Neural Heterogeneous Scheduler
["Tegg Taekyong Sung", "Valliappa Chockalingam", "Alex Yahja", "Bo Ryu"]
Access to massive computation allows researchers and developers to succeed in using technology to enhance processes in many applications. However, there have been claims as to the tapering of the exponential decrease in the cost of hardware (following Moore's law) due to physical hardware limitations. Next-generation special-purpose systems making use of multiple kinds of coprocessors, known as heterogeneous system-on-chips, have been under active research recently. In this paper, we introduce a method to intelligently schedule a stream of tasks to the available processing elements in such a system. We use deep reinforcement learning, which allows for complex decision making, and demonstrate that machine learning can be used for scheduling decisions, providing a viable, and likely better, alternative for reducing execution time on a given set of tasks.
["deep reinforcement learning", "resource allocation", "actor-critic"]
Neural Heterogeneous Scheduler

Tegg Taekyong Sung 1 *, Valliappa Chockalingam 1 2 *, Alex Yahja 1, Bo Ryu 1

*Equal contribution. 1 EpiSys Science, San Diego, USA. 2 Department of Computer Science, University of Alberta, Edmonton, Canada. Correspondence to: Tegg Taekyong Sung <tegg@episyscience.com>, Valliappa Chockalingam <valli@episyscience.com>. Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019. Copyright 2019 by the author(s).

Abstract
Access to massive computation allows researchers and developers to succeed in using technology to enhance processes in many applications. However, there have been claims as to the tapering of the exponential decrease in the cost of hardware (following Moore's law) due to physical hardware limitations. Next-generation special-purpose systems making use of multiple kinds of coprocessors, known as heterogeneous system-on-chips, have been under active research recently. In this paper, we introduce a method to intelligently schedule a stream of tasks to the available processing elements in such a system. We use deep reinforcement learning, which allows for complex decision making, and demonstrate that machine learning can be used for scheduling decisions, providing a viable, and likely better, alternative for reducing execution time on a given set of tasks.

1. Introduction
Numerous breakthroughs in deep learning would not be possible without the help of hardware capable of massive amounts of computation. Especially in machine learning, neural networks have taken advantage of parallelization through GPUs and, more recently, special-purpose hardware like TPUs has been used to perform millions of computations in a quicker manner. However, when we consider recent real-world applications built with embedded hardware, different considerations have to be made. For instance, custom ASICs and FPGAs have different performance capabilities and power and energy consumption. So, essential care must be taken in choosing which applications to develop on which types of hardware. Concurrently, hardware performance has been becoming cheaper, following Moore's law (Schaller, 1997), but transistor density in microprocessors has also recently reached the maximum projection based on physical limitations.

The System-on-Chip (SoC) architecture has been revealed as a novel approach to chip design which merges different levels of computational cores (Borel, 1997). In addition, domain-specific heterogeneous SoCs enable different functionalities necessary in certain domains and provide for easy software implementations. Heterogeneous chips have the strength that they can attain the best performance for certain applications if the cores in the chip are systematically scheduled once the task becomes available. Combinations of operations are optimally scheduled to process jobs with different requirements. Heterogeneous processors are expected to break traditional trade-offs such as that between power and performance. Thus, how to optimally schedule operations becomes a main research topic, generally known to be an NP-hard problem. While there are many heuristic or approximate algorithms that can make this problem more tractable, we take the view that optimally distributing ready-to-be-assigned tasks to the available processing elements in the heterogeneous SoC can be formalized as a sequential decision making problem.

In this paper, we use deep reinforcement learning (DRL), which provides a powerful and flexible way to solve complex sequential decision making problems.
Here, we especially consider tasks that have dependencies. Thus, the scheduling agent must learn how to schedule the tasks given that some tasks may require other tasks to have already run. This makes the problem difficult due to long-term dependencies and partial observability. Moreover, without preemption, the former entails that the agent cannot choose scheduling actions at every time step but only when assigned tasks are completed and new tasks ready to be scheduled appear. The following sections describe the details of the simulation environment, which takes a job consisting of a set of tasks and a resource matrix specifying the performance and communication specifications of the processing elements. Next, we formalize the reinforcement learning (RL) setting and describe the policy-based algorithm we use to tackle the sequential decision-making problem. In the experiments, we compare our model, which we interchangeably refer to as the Deep Resource Manager (DRM) or Neural Heterogeneous Scheduler, with baselines to verify that the performance of the deep RL agent we introduce is better than that of different heuristic scheduling algorithms. We also provide saliency map and GANTT chart visualizations during the learning process to reason about the agent's decisions.

2. Background
Deep reinforcement learning (deep RL) has been successfully applied to several domains such as robotics (Levine et al., 2016) and games (Silver et al., 2017; Vinyals et al., 2019). Most successful RL applications stand in the usual RL framework of Markov decision processes. However, in our case, actions can take various amounts of time to complete. Scheduling chip processors in real-world applications involves a continually filling stream of tasks where many activities progress simultaneously. Action decisions are only performed when tasks are ready to be scheduled. Given these properties and limitations of the environment, this process can be defined as a semi-Markov decision process (SMDP) with temporally-extended actions, or options. When an assigned action is not completed, the agent essentially takes the 'no-operation' action.

Mathematically, the MDP setting can be formalized as a 5-tuple ⟨S, A, R, P, γ⟩ (Sutton & Barto, 2018; Puterman, 2014). Here, S denotes the state space; A, the action space; R, the reward signal, which is generally defined over states or state-action pairs; P, a stochastic matrix specifying transition probabilities to next states given a state and action; and γ ∈ [0, 1], the discount factor.

Normally, the SMDP framework would involve an option framework, augmenting the action space, but instead we use simple options here that take no-op actions, and hence leave the option framework with preemption of running tasks as future work.

In addition, the heterogeneous resource management environment is essentially partially observable, because the agent can only observe the tasks ready to be assigned to a processing element. To address this, we augment the state with the other task lists as well (not just the ready list containing the ready-to-be-assigned tasks) and transition to a fully-observable problem.

3. Proposed Approach
3.1. Environment Setting
The heterogeneous SoC chip we consider, to be used in various applications such as WiFi RX/TX or pulse Doppler, is simulated in a realistic discrete-event environment, DASH-Sim, which is described in 3.1.1. It is developed with the SimPy library (Lnsdorf & Scherfke, 2018) to implement running tasks in a continuous-time setting.
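For readers unfamiliar with SimPy, the run-to-completion behavior described above can be sketched in a few lines; the task names and durations below are illustrative, not taken from DASH-Sim:

    import simpy

    def run_task(env, name, exec_time, completed):
        # A scheduled task occupies its processing element until it
        # finishes; there is no preemption.
        yield env.timeout(exec_time)
        completed.append((name, env.now))

    env = simpy.Environment()
    completed = []
    env.process(run_task(env, "scrambler", 12, completed))
    env.process(run_task(env, "FFT", 7, completed))
    env.run()
    print(completed)  # [('FFT', 7), ('scrambler', 12)]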
The specifications of the set of tasks and processing elements are written in job and resource matrix files, which are described in 3.1.2.

3.1.1. Simulation
There are some manufactured embedded boards, such as the Zynq A53, Odroid A7, or Odroid A15, with heterogeneous chip sets whose functionalities include encoder and decoder, scrambler and descrambler, interleaver and deinterleaver, QPSK modulation and demodulation, and FFT and inverse-FFT. We, however, consider simulation, as in many RL applications. The goal of an agent is to achieve a low time to completion, given task combinations.

In the recent past, RL algorithms have commonly been developed using the OpenAI Gym environment interface (Brockman et al., 2016), and many assume the Markov decision process framework, whereas we use a SimPy environment, which simulates sequential discrete events. Each event represents a task. Each task, upon execution, runs till completion, and the scheduler can only choose to schedule tasks in the ready list. The simulator is visualized in Figure 1.

[Figure 1: An overall diagram of the DASH-Sim environment, which runs with the SimPy discrete-event library. The simulator constructs task queues and processing elements based on the descriptions in the job and resource matrix file lists. The scheduler assigns tasks in the ready list to cores in the SoC chip.]

Prior to running a simulation, the information about processing elements (PEs) and tasks is parsed from the resource matrix and job text files described in 3.1.2. Each PE represents a chipset, such as RAM, CPU, GPU, or a memory accelerator, in the heterogeneous SoC, and the PEs have different execution times and energy and power consumption. In this paper, we only consider execution time for the performance.

At the start of a job, all the tasks are fed into the outstanding list and the ones that do not have task dependencies are pushed into the ready list. Then, the agent assigns a processing element, through an ID, PE_ID, to tasks that are in the ready list, and these tasks proceed to the running list when they begin execution. During this time period, the agent chooses a 'no-operation' action for the running tasks. Once a task completes, it is moved to the completed list. If all the tasks are in the completed list, the scheduling episode is finished and the next resource matrix and job files in the list are used for the next episode.

3.1.2. Task and Resource Matrix
Tasks are constantly generated and a scheduler distributes them to different chipsets in the SoC. We assume tasks have dependencies such as those shown in Figure 2. We describe the list of tasks in a job file and the associated processing elements in a resource matrix file. Their structures are described below.

Job list
  job_name <job name>
  add_new_tasks <number of tasks>
  <task name> <task ID> <task predecessors>
  <task name> <earliest start time> <deadline>

Resource matrix list
  add_new_resource <resource ID> <number of tasks>
  <task name> <performance>

In a job file, tasks have HEAD and TAIL flags that indicate the start and end. In this paper, we consider 10 tasks of one job with 3 processing elements. However, we add randomization to the resource matrix to train our agent to be more robust. This results in differing performance: performance here refers to the fact that the execution time taken to process a given task on a given processing element varies.
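To make the file formats concrete, a hypothetical fragment for a small job and one processing element might look as follows (the keywords follow the structure above; every concrete value and the comments are illustrative assumptions, not taken from the paper):

    job_name jobTop
    add_new_tasks 3
    task_0  0  HEAD        # name, ID, predecessors (HEAD: no predecessors)
    task_0  0  150         # name, earliest start time (ms), deadline (ms)
    task_1  1  0           # task_1 depends on task_0
    task_1  5  150
    task_2  2  1  TAIL     # TAIL marks the final task
    task_2  10 150

    add_new_resource P0 3  # resource ID, number of task types supported
    task_0  8.2            # execution time (ms) of task_0 on this PE
    task_1  11.7
    task_2  4.9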
3.2. Algorithm

In this paper, we develop a new agent using deep reinforcement learning to allocate resources on a heterogeneous SoC, a long-term credit assignment task. As described in Section 2, the environment in its most general form can be thought of as a partially observable SMDP.

Figure 3 shows the interaction between DASH-Sim and the DRM scheduler. Task list transitions are controlled by the DASH-Sim environment. The scheduler agent takes the tasks from the ready list as input and assigns each task a PE_ID. In particular, our DRM scheduler receives ready tasks but also generates state representations with all the task lists.

Figure 2. Task dependency visualization of the job_Top file. Each circle represents a task number and arrows show task dependencies.

We convert the task lists and resource matrix to binary vector representations when representing integer values, use multi-binary representations for state features that can take on multiple values, and concatenate the representations to form the final state representation vector. This information from all the state lists not only addresses partial observability in the DASH-Sim environment, but also gives additional information about the relations between tasks through the task list transitions.

Figure 3. A diagram of the interaction of the DRM architecture and the DASH-Sim environment. Tasks in the lists are controlled by the DASH-Sim environment, and DRM assigns a PE_ID to every ready task.

We use an actor-critic algorithm, described in Equation 1. The agent's action is taken for tasks in the ready list. We use a simple reward of -1 per timestep to encourage the agent to complete tasks quickly. At the end of the episode, the agent is updated by looking over the past state representations when action choices were made and the resulting discounted reward for the scheduling decisions made. These updates follow the traditional actor-critic losses and happen on-policy, as the function approximator is not updated during an episode. Additionally, we use a decaying temperature in the softmax action selection to gradually reduce exploration and move from softmax to argmax. This is similar to the addition of entropy; it lets the agent avoid skewed action distributions early in learning and introduces more exploration at the beginning of the simulation while slowly relying on exploitation later in training.

\nabla_\theta J(\theta) = \mathbb{E}[\nabla_\theta \log \pi_\theta(a_t | s_t) A_t(s_t)]    (1)

Above, the objective is to find the \theta that parameterizes the neural network so as to maximize J. A_t is the advantage, which subtracts the state-value of state s_t, V(s_t), from G_t, the empirically observed discounted reward, the baseline V(s_t) serving to reduce potentially high variance. The above specifies the actor loss in the actor-critic framework. The critic is updated to minimize the advantage, i.e., (G_t - V(s_t))^2 is minimized.
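A minimal sketch of the update described above, written by us in PyTorch for concreteness rather than taken from the DRM implementation; the temperature schedule and the policy/value networks are assumed, and only the losses follow Eq. (1) and the critic objective.

```python
import torch

def select_action(logits, temperature):
    # Decaying temperature: near-uniform sampling early in training,
    # approaching argmax as the temperature anneals toward zero.
    dist = torch.distributions.Categorical(logits=logits / temperature)
    action = dist.sample()
    return action, dist.log_prob(action)

def actor_critic_loss(log_probs, values, rewards, gamma=0.99):
    # rewards is a list of -1 per timestep, so shorter episodes score higher.
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted returns G_t
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    advantages = returns - values.detach()          # A_t = G_t - V(s_t)
    actor_loss = -(log_probs * advantages).mean()   # gradient matches Eq. (1)
    critic_loss = (returns - values).pow(2).mean()  # minimize (G_t - V(s_t))^2
    return actor_loss + critic_loss
```

Since the updates happen only at episode end, the saved log-probabilities and values for the whole episode would be stacked into tensors before this loss is computed, matching the on-policy scheme described above.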
The overall algorithm with DASH-Sim is described in Algorithm 1.

Algorithm 1 Deep Resource Management
Input: jobs, resource matrices, and DASH-Sim environment
for each episode do
  initialize environment with next job and resource matrix file
  repeat
    for tasks in ready list do
      Construct state
      Choose action w.r.t. task
      Assign action to PE_ID for this task
      Save state, action
    end for
    Penalize -1 for reward
  until all tasks are in the completed list or max simulation length
  Compute losses using Eq. 1 with saved states and actions
  Update agent by backpropagating with the losses
end for

4. Experiments

To the best of our knowledge, this paper is the first to apply reinforcement learning to heterogeneous resource management where long-term scheduling decisions need to be made. In this section, we show the experimental results of comparing the DRM scheduler with other heuristic schedulers: Earliest Finish Time (EFT), Earliest Time First (ETF), and Minimum Execution Time (MET) (Buttazzo, 2011). Both EFT and ETF pick the resource which is expected to give the earliest finish time for the task. ETF additionally schedules the task with the lowest earliest finish time first. MET assigns the processing element ID with the minimum execution time for the current task.

Figure 4. Execution time versus episode during training for the simpler case where the resource matrix specifying performance and communication characteristics is fixed (top) and where it is randomized before each run (bottom).

As shown in Figure 4, the deep RL scheduler has the best performance overall. We tried different experiment settings: one with fixed data, shown at the top, and another with randomized data, at the bottom. With the fixed input, the DRM agent is trained to about 94 ms performance and saturates starting at about 720 episodes. Because the static data does not have much variation, the agent does not have much variance in performance but eventually overfits to a certain extent. Interestingly, MET has better performance than the DRM agent, because MET picks the resource which has the minimum execution time for the tasks in the ready list. We presume MET corresponds to a locally optimal action at every timestep, whereas DRM could not exceed this optimal value.

When we experimented with the randomized data, our DRM scheduler had the best performance. Despite fluctuating results for all schedulers, the DRM agent is the only one whose performance improved over time as learning progressed. The DRM agent applies an RL algorithm to explore various policies given different jobs and PEs, allowing for better generalization and better adaptivity. To provide convincing results, we performed 30 trials with different random seeds. We expect applying ensembles over the training experiences to provide a much more reliable model.

Figure 5. A visualization of the state representation, which has information about task lists and PEs: the initialized state representation (top) and the result of GradCam (Selvaraju et al., 2017) performed with the DRM feature layers (bottom).
Figure 6. GANTT chart representing when tasks ran on the different processing elements for the first episode (top) and the last episode (bottom) when training DRM.

We provide a visualization of the saliency to reason about the action decisions, shown in Figure 5. The top figure shows the initial state representation formed by the task lists and resource matrix information. After passing the state to the DRM agent, we perform GradCam (Selvaraju et al., 2017) and retrieve the saliency, mapped onto the input, shown at the bottom of Figure 5. Notice that the agent oversees tasks which are not shown in the initial representation. Given the different color intensities, we presume that the DRM agent actually understands which tasks belong to the different status lists and that this more complex decision-making input allows for better policies.

Finally, the GANTT chart showcases how the policy improves over training for the fixed resource matrix case. Initially, DRM gets quite a high execution time of 140 ms, while it produces a better policy of about 100 ms at the end of training. Note that this chart corresponds to the task dependency graph shown earlier in Figure 2, with the only difference being that the tasks are 0-indexed.

Some interesting changes include the choice of processing element 1 for task 9 over processing element 0. It is clear that this task is faster on PE1 compared to PE0. On the other hand, the choice of PE2 over PE1 for task 2 is also interesting, as task 2 takes longer there. However, this might be better due to the task-dependency graph.

5. Related Work

Resource management has been actively researched in many communities. Several works have applied deep RL to optimally allocate resources and distribute tasks. DeepRM uses a standard deep Q-learning algorithm to formalize resource management as a Tetris game; however, it only works with homogeneous settings (Mao et al., 2016). A variant of DeepRM leverages convolutional neural networks as a backbone network to improve performance in scheduling (Chen et al., 2017). Subsequent to DeepRM, Pensieve applies a resource managing algorithm to video streaming to optimally control the bitrate, successfully reducing buffering (Mao et al., 2017). Hopfield networks have been applied to schedule heterogeneous SoCs (Chillet et al., 2011). More recent work combines heuristic and learning algorithms, starting from an existing plan and iteratively improving it, and successfully applies this to a heterogeneous job scheduling task (Chen & Tian, 2018). However, their work follows the general MDP setting where, again, the agent chooses an action at every timestep. From the perspective of hardware, recent work has proposed new accelerator architectures which have potential advances (Chen et al., 2018).

6. Conclusion

Neural schedulers using deep reinforcement learning have been researched in many areas and have greatly improved performance compared to heuristic algorithms. In this paper, we propose a promising approach to resource allocation applied to heterogeneous SoC chips. We use a 'no-operation' action and refer to all task lists, regardless of task status, to address partial observability and the SMDP problem.
To the best of our knowledge, this paper is the first to deal with scheduling different tasks on different hardware chips to discover the optimal combination of functionalities. Furthermore, we expect the general value functions and predictive knowledge approach and the option framework to improve performance, and we leave them as future work.

References

Borel, J. Technologies for multimedia systems on a chip. In 1997 IEEE International Solid-State Circuits Conference, Digest of Technical Papers, pp. 18-21. IEEE, 1997.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Buttazzo, G. C. Hard real-time computing systems: predictable scheduling algorithms and applications, volume 24. Springer Science & Business Media, 2011.

Chen, W., Xu, Y., and Wu, X. Deep reinforcement learning for multi-resource multi-machine job scheduling. arXiv preprint arXiv:1711.07440, 2017.

Chen, X. and Tian, Y. Learning to progressively plan. arXiv preprint arXiv:1810.00337, 2018.

Chen, Y.-H., Emer, J., and Sze, V. Eyeriss v2: A flexible and high-performance accelerator for emerging deep neural networks. arXiv preprint arXiv:1807.07928, 2018.

Chillet, D., Eiche, A., Pillement, S., and Sentieys, O. Real-time scheduling on heterogeneous system-on-chip architectures using an optimised artificial neural network. Journal of Systems Architecture, 57(4):340-353, 2011.

Levine, S., Finn, C., Darrell, T., and Abbeel, P. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.

Lnsdorf, O. and Scherfke, S. SimPy3. https://github.com/cristiklein/simpy, 2018.

Mao, H., Alizadeh, M., Menache, I., and Kandula, S. Resource management with deep reinforcement learning. In Proceedings of the 15th ACM Workshop on Hot Topics in Networks, pp. 50-56. ACM, 2016.

Mao, H., Netravali, R., and Alizadeh, M. Neural adaptive video streaming with Pensieve. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication, pp. 197-210. ACM, 2017.

Puterman, M. L. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.

Schaller, R. R. Moore's law: past, present and future. IEEE Spectrum, 34(6):52-59, 1997.

Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.

Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354, 2017.

Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT Press, 2018.

Vinyals, O., Babuschkin, I., Chung, J., Mathieu, M., Jaderberg, M., Czarnecki, W., Dudzik, A., Huang, A., Georgiev, P., Powell, R., et al. AlphaStar: Mastering the real-time strategy game StarCraft II, 2019.
Hkl_i3qSp4
Borderline. Good problem. Missing technical details and references. Limited empirical evaluation.
3: Borderline
This paper proposes a reinforcement learning scheduler, which is implemented using a neural network. The scheduler decides adaptively which jobs to execute. A job can be executed only if all jobs that it depends on have been executed. The goal is to minimize the total execution time of a batch of jobs. This problem is challenging because of potentially complicated dependencies between the jobs. I like the problem in this paper. It is general and definitely important. This paper suffers from several issues though. My detailed comments are below:

1) Many technical details are missing. For instance, the jobs are scheduled conditioned on the past, which is represented by a feature vector. The construction of the feature vector is described in Section 3.2. This description is vague and it is unclear what the feature vector is. The optimization step in (1) is also unclear. How are A and \pi represented? Can you provide more details on how you optimize them?

2) Missing references. When you discuss topics such as actor-critic in Section 3.2, cite relevant papers, such as https://papers.nips.cc/paper/1786-actor-critic-algorithms.pdf

3) Limited empirical evaluation. The class of problems in Figure 2 is very limiting and does not do justice to the generality of your approach.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
1l8xClLl53
Interspeech.org/2023/Workshop/SSW
2023
Importance of Human Factors in Text-To-Speech Evaluations
["Lev Finkelstein", "Joshua Camp", "Rob Clark"]
Both mean opinion score (MOS) evaluations and preference tests in text-to-speech are often associated with high rating variance. In this paper we investigate two important factors that affect that variance. One factor is that the variance is coming from how raters are picked for a specific test, and another is the dynamic behavior of individual raters across time. This paper increases the awareness of these issues when designing an evaluation experiment, since the standard confidence interval on the test level cannot incorporate the variance associated with these two factors. We show the impact of the two sources of variance and how they can be mitigated. We demonstrate that simple improvements in experiment design such as using a smaller number of rating tasks per rater can significantly improve the experiment confidence intervals / reproducibility with no extra cost.
["Text-to-Speech", "Subjective evaluations", "MOS", "Preference Tests", "Comparative Tests", "Test Reproducibility"]
Importance of Human Factors in Text-To-Speech Evaluations

Lev Finkelstein, Josh Camp, Rob Clark
Google
{finklev, joshcamp, rajclarck}@google.com

Abstract

Both mean opinion score (MOS) evaluations and preference tests in text-to-speech are often associated with high rating variance. In this paper we investigate two important factors that affect that variance. One factor is that the variance comes from how raters are picked for a specific test, and another is the dynamic behavior of individual raters across time. This paper increases awareness of these issues when designing an evaluation experiment, since the standard confidence interval on the test level cannot incorporate the variance associated with these two factors. We show the impact of the two sources of variance and how they can be mitigated. We demonstrate that simple improvements in experiment design, such as using a smaller number of rating tasks per rater, can significantly improve the experiment confidence intervals / reproducibility with no extra cost.

Index Terms: Text-to-Speech, subjective evaluations, MOS, preference tests, comparative tests, test reproducibility

1. Introduction

Both mean opinion score (MOS) evaluations and preference tests in text-to-speech (TTS) often use a standard confidence interval. It is perceived as a sufficient safeguard against rating variance while, as we show in this paper, it lacks the ability to address a few major factors contributing to this variance.

We investigate two important factors that affect the variance. One factor is that the variance comes from how we pick the raters for a specific test. Namely, besides the objective audio sample quality, each rater has their own preferences, so if the rater pool is large enough (e.g., in crowdsourcing projects) and the actual number of raters chosen for a specific test is too small, then test reproducibility becomes a challenge, since averaging over the scoring tasks will not remove the bias associated with raters' personal preferences.

MOS evaluations require scoring an individual audio sample; this is a less constrained task than a two-sided comparison in preference tests, so intuitively rater variance can have more of an effect here. Note that this kind of variance is something that cannot be seen from the test scores alone, so it is not directly reflected in the confidence intervals. More elaborate techniques such as intra-class correlation analysis can help to get better estimations, but they also require a sufficient number of data points, as well as a proper experiment design, to be used effectively.

An additional variance factor is the order-dependence of sequences of ratings. If a rater rates multiple samples sequentially, then their rating for a given item technically depends on its position in the sequence and is not independent of their other ratings. We do not know how a particular rater will behave a priori; this may be a result of some inherent calibration process (learning curve) of each rater, or of fatigue over a large number of rating tasks. In any case, this phenomenon can definitely affect the ratings both in MOS and in preference tests.

In this paper we demonstrate the impact of these two sources of variance. The rater-induced variance impact is evaluated by bootstrap analysis [1], and the impact of the order-dependent ratings is analyzed by showing non-random trends in the score behavior.
Intuitively, the first issue may be mitigated by introducing more raters for a specific test, and the second factor by limiting the number of audio samples per rater, and we show this is indeed the case. Using 60 audio samples per rater instead of 10 may double the variance even if the number of rating tasks remains the same. While limiting the number of samples per rater is a natural decision, it is not always taken into account in the design of this type of experiment.

2. Related work

The issue of score variation arising from rater sampling has been studied previously. In [2], the authors show that per-participant MOS values vary considerably within the same test, and in an analysis of the 2013 Blizzard Challenge results [3], the authors found the number of raters to be a key factor in test reliability and sensitivity. However, neither study attempts to characterize the variance of the sampling distribution directly, as is done in the present work.

The influence of the number of ratings completed by a single rater has, to the best of our knowledge, not been studied in the context of subjective evaluations of TTS systems. The issue has received attention in the context of crowdsourced evaluations of degraded speech, however (e.g., as observed in telephony). In [4], a study was conducted in which crowdsourced workers completed an MOS evaluation of degraded speech samples from [5], with three groups rating 10, 20, and 40 samples respectively. They found that while the groups that rated 10 and 20 samples performed similarly, the group that rated 40 samples reported much higher levels of fatigue and had lower participant retention. For the 40-sample group, they also found rater performance (as measured by correlation with laboratory results) to increase throughout the first half of the samples and decrease in the second half. Contrarily, in a study of crowdsourced spoken word recognition, the authors of [6] found rater performance to improve in the second half of the task, which they attribute to increased familiarity with the task. It seems there may be competing factors at play: as the number of rating tasks increases, performance improves, but so does fatigue. Word recognition, however, is presumably less subjective than TTS evaluations.

In this paper we show that the same calibration and/or fatigue phenomena are present in TTS subjective evaluations as well, and that they cause a clear monotonic trend both in MOS and in preference test outcomes. Unlike [4], where the benchmarks were either self-reported fatigue scores or correlation with laboratory results, our results demonstrate the influence of the number of ratings intrinsically.

Reliability of judgments is a known problem, and some methods, including the intra-class correlation coefficient, may help in the analysis (see, for example, [7, 8]), but these methods require a specific experimental design. An application of cluster-based methods to text-to-speech tasks was done in [9], which used them for evaluating both MOS and preference tests. In particular, it was observed that the number of listeners has a strong impact on the confidence intervals (a fact that is often ignored when using out-of-the-box methods for confidence interval estimation), and that MOS tests are more sensitive to the number of listeners than preference tests.

3. Evaluating rater distribution impact

We have two independent methods for evaluating the variance associated with these human-related factors. First, we use a bootstrap-like methodology to estimate the impact of the rater distribution.
Second, we perform a special time-based analysis to investigate dynamic rater behavior.

3.1. Formal setup

We start with the rater distribution evaluation. To evaluate the impact of rater distributions, we investigate the reproducibility of the test scores at the test level.

Let us define an MOS experiment setup¹ M as a mapping from a set of audio samples S and a rater pool R to the MOS. We may assume that such a mapping depends on the rater distribution, on the instructions presented to the raters, and on the way the samples are assigned to the raters. So, we can assume the existence of some distribution of MOS scores, P_{S,R}(M), that describes applying a setup M to the same set of audio samples S and the same rater pool R.

We may measure the variance of the rater-associated factors by measuring the deviation of the distribution above, given that the rest of the factors, such as the instructions and the specific samples to be tested, are not affected. Note that directly measuring the variance by rerunning the same test many times is very resource-consuming, due to the distribution of the standard deviation. Instead, we use an approach which is a variation of bootstrapping.

More specifically, we created a large test with N audio samples, and required each sample to be evaluated by L different raters. After that, we are able to randomly sample one rating per item under certain constraints (such as a fixed number of samples per rater), thus creating a simulated test². This simulated test can then be used to estimate the per-test score distribution under these constraints, without running a large number of real experiments.

Formally, if the real test T contains ratings R_{ij}, where i is the audio sample number and j is the rating index of this item, a simulated test T_n is a subset of ratings R_{ij'} of R_{ij}, where each i appears exactly once and j' is a single rating among the available ones. The average score (MOS score) of such a simulated test is S_n = Mean(R_{ij'}), and the standard deviation among the S_n can be used as a reliable estimation of the test-level deviation for a test using exactly one rating per item.

Note that in order to simulate a test with up to K samples per rater and 1 rating per sample, we need a special sampling procedure. Ideally, 1000 samples with one rating per sample and a limit of up to 60 stimuli per rater should require 17 raters (e.g., 16 raters with 60 stimuli each, and one rater with 40 stimuli). If sampling for bootstrapping purposes is performed in a random order, however, such dense packing will probably not be achieved, since the same stimuli are rated by a number of raters, which may create scheduling conflicts. A naive sampling could result, for example, in associating 33 raters with 30 stimuli each, and one rater with 10 stimuli. In order to simulate a dense schedule, we implemented a greedy scheduler that minimizes the number of raters given the constraints. In practice, even a greedy scheduler cannot obtain the optimal dense packing, since the data available for bootstrapping is limited, and there will still be slightly more raters participating in each simulated test than we could theoretically get in real life.

¹This set of definitions is for MOS tests, but it can be applied to preference tests as well.
²We experimented with multiple ratings per sample with similar results, so we use this setup for simplicity.
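A sketch of this bootstrap-style procedure (our own illustration; the variable names are ours, and the per-rater constraint is enforced by a simple greedy pass rather than by the authors' rater-minimizing scheduler):

```python
import random
import statistics
from collections import defaultdict

def simulated_test_mos(ratings, max_per_rater):
    """ratings: dict sample_id -> list of (rater_id, score) pairs.
    Draw one rating per sample, keeping each rater under max_per_rater."""
    used = defaultdict(int)  # how many draws each rater has contributed
    picked = []
    for sample_id, pool in ratings.items():
        candidates = [(r, s) for r, s in pool if used[r] < max_per_rater]
        rater, score = random.choice(candidates or pool)  # fall back if the cap binds
        used[rater] += 1
        picked.append(score)
    return statistics.mean(picked)

def bootstrap_std(ratings, max_per_rater, n_tests=1000):
    scores = [simulated_test_mos(ratings, max_per_rater) for _ in range(n_tests)]
    return statistics.stdev(scores)  # estimate of the test-level deviation
```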
3.2. Experimentation setup

We created two large tests of 990 audio samples. The samples, a few seconds each, were generated by a TTS system using a 24K sample rate. The tests were crowdsourced, with 10 ratings per sample, where the samples were given to the raters in batches of 10. In one test each rater was allowed to evaluate up to 60 samples, while in the other test each rater was only allowed to evaluate 10 samples. A histogram of the actual number of samples per rater in the first test is shown in Figure 1. Note that not all raters completed rating 60 samples. The second test had a limit of up to 10 samples per rater, so, given that the samples were presented in batches of 10, each rater rated exactly 10 samples.

Figure 1: Histogram of samples per rater in the first test (up to 60 samples per rater).

3.3. Experiments: Robustness

We analyzed the standard deviation for both tests using the bootstrapping methodology described above. We generated 1,000 simulated tests from our real data, identical to the real test conditions but with a single rating per sample, and calculated their MOS scores. The graph of the standard deviation for these scores as a function of the maximum number of samples per rater is shown in Figure 2 (top).

Figure 2: (Top) the standard deviation as a function of max samples per rater; (bottom) the standard deviation as a function of the number of raters in the simulated tests.

Note that the increase in the standard deviation is almost by a factor of 2, which means that the confidence interval should be doubled. The same plot contains a single green dot corresponding to the (real) test with 10 samples per rater. The deviation for this setup is higher, since the simulated data for that test was sampled from the artificial pool of the second test, with many more raters, vs. the artificial smaller pool of the first test, thus leading to a higher variance. Note that the artificial scheduling in this framework is not really capable of reproducing real cases like 16 raters with 60 samples per rater, since the data available for bootstrapping is limited, so the deviation in our graph is presumably lower than in real life. The dependency of the standard deviation on the number of raters in the simulated tests is shown in Figure 2 (bottom). It is possible to see that simulated sample limits of 40, 50, and 60 samples per rater resulted in very close numbers of raters.

3.4. Experiments: Distribution

From the distribution point of view, the difference may be viewed in Figure 3, showing the histogram of MOS scores of the simulated tests. Each test has a limitation of K samples per rater. The top graph corresponds to K = 10 vs. K = 30, and the bottom graph to K = 10 vs. K = 60. Having 10 samples per rater significantly reduces the deviation of MOS scores, presumably due to a larger number of raters.

Figure 3: Histogram of MOS scores of simulated tests. (Top) K = 10 vs. K = 30 samples per rater; (bottom) K = 10 vs. K = 60 samples per rater.

Another interesting observation is that a larger number of samples per rater leads to an increased MOS score. The experimentation includes many raters, so it should not be a random fluctuation.
We also observed similar behavior in other experiments not mentioned in this paper. We don't know the exact reason for the difference in the average. It is possible that this is an artifact of a rater getting assigned a large number of successive rating tasks, which leads to some kind of bias, as discussed in Section 4.

3.5. Experiments: Different speakers

Intuitively, different voices may have different score variance. To analyze the behavior of the standard deviation as a function of the number of samples for different voices, we compared the behavior of two different speakers (60 samples per rater for both), where the quality of the first speaker is better. Note that the standard deviation depends on the MOS scale, so to present the speakers on the same scale, we multiplied the deviation of the second speaker by a coefficient equal to MOS(first speaker) / MOS(second speaker). The results are shown in Figure 4. We hypothesize that the higher variance of the second speaker is caused by the fact that their voice quality is worse, thus leading to a wider MOS dynamic range.

Figure 4: The standard deviation as a function of the number of raters in the simulated tests for two speakers.

4. Evaluating the impact of order-dependent ratings

In this section we show how to validate the impact of the order-dependent scores of raters that get more than one rating task. It is interesting that not all the experiments are subject to this change in ratings. Also, the impact, if present, is not necessarily positive or negative; we saw experiments with both trends.

Assume that test T (either a MOS or a preference test) contains scores S_{rj}, where r is a rater and j is the serial index of the sample obtained by this rater³; in other words, S_{rj} represents the j-th rating obtained from rater r. Let us select a set of raters R having at least K ratings. Then we can define a special value S_R(k) with k ≤ K as the cumulative average over all the rating tasks with serial number k or less:

S_R(k) = \frac{1}{k|R|} \sum_{r \in R, j \le k} S_{rj}.    (1)

We use the cumulative average since it better reflects the dynamics of the number of samples per rater. Note that we needed to preselect the set of raters R to have at least K ratings in order to have the same rater population in every slicing. If the sample ratings were independent, the behavior of S_R(k) as a function of k would be more or less random and would have no clear monotonic trends. In our experiments, however, we demonstrate that the behavior is often systematic, with relatively long monotonic regions. In the next sections we show the behavior of S_R(k) in different setups. We also present a special sample-based analysis to prove our hypothesis using a different metric.

4.1. Raters' scores in preference tests

In the first experiment we analyze the ratings in two different preference tests. The preference test setup we use is actually a comparative MOS (CMOS) task, where raters score the sample on a whole-number scale of -3 to +3, with -3 a strong preference for one stimulus and +3 a strong preference for the other. The raters were able to rate up to 60 samples. Note that not all raters achieved this. The cumulative average S_R(k) as a function of k for the raters that rated at least 40 and at least 60 samples is shown in Figure 5. Intuitively, a monotonic trend in both graphs after about 10 samples should not be random, but we cannot conclude this from the graph alone, and a more formal analysis is given in Section 4.3. Also, in both these cases the average scores are all positive, reflecting that the experiment voice turned out to be considered better than the baseline.

Figure 5: The average score as a function of the number of samples, for the raters that rated at least 40 samples (top) and for the raters that rated at least 60 samples (bottom).

³The notation here differs from Section 3.
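A small sketch of the cumulative average in Eq. (1) (our own illustration; `scores[r]` is assumed to hold rater r's ratings in the time order they were given):

```python
def cumulative_average(scores, K):
    # scores: dict rater_id -> list of ratings in the order they were given.
    eligible = {r: s for r, s in scores.items() if len(s) >= K}  # the set R
    return [
        # S_R(k): average of the first k ratings of every eligible rater
        sum(s[j] for s in eligible.values() for j in range(k)) / (k * len(eligible))
        for k in range(1, K + 1)
    ]
```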
4.1. Raters' scores in preference tests

In the first experiment we analyze the ratings in two different preference tests. The preference test setup we use is actually a comparative MOS (CMOS) task where raters score the sample on the whole-number scale of -3 to +3, where -3 is a strong preference for one stimulus and +3 a strong preference for the other. The raters were able to rate up to 60 samples. Note that not all raters will achieve this. The cumulative average S_R(k) as a function of k for the raters that rated at least 40 and at least 60 samples is shown in Figure 5. Intuitively, a monotonic trend in both graphs after about 10 samples should not be random, but we cannot conclude it from the graph only, and a more formal analysis is given in Section 4.3. Also, in both these cases the average scores are all positive, reflecting that the experiment voice turned out to be considered better than the baseline.

Figure 5: The average score as a function of the number of samples, for the raters that rated at least 40 samples (top) and for the raters that rated at least 60 samples (bottom).

4.2. Raters' scores in MOS tests

The MOS tests are subject to the same phenomenon. We compared the MOS tests for two speakers (the two tests from Section 3.5). Each test used 10 ratings per sample, with up to 60 samples per rater in batches of 10. The results are shown in Figure 6. We do see a clear trend in the second speaker test but not in the first speaker test. The first speaker has a higher quality, so it is possible that fatigue / calibration plays a lesser role than for the second speaker.

Figure 6: The average score as a function of the number of samples, for the raters that rated at least 60 samples, for two different speakers.

4.3. Sample-level analysis

Since there is still a chance of monotonic trends occurring in random sequences, we performed a different type of analysis to support the existence of the fatigue / calibration trend. In this series of experiments we analyze tests with multiple ratings per audio sample and show that the ratings have a monotonic trend.

Let us have a test with L ratings per audio sample, and let T(r, X) be a rating task of rater r associated with sample X, and S_r(X) be its score. Assume that for each rater r we sort all the rating tasks {T(r, X)} performed by this rater in time-based order, such that each task becomes associated with the corresponding ordinal number from 1 to |{T(r, X)}|, which we denote by N(r, X). For example, N(r, X) = 2 means that sample X was the second audio sample rated by rater r.

Let {S_r(X)} be the set of multiple scores of the same audio sample X, and assume that we define an order on this set based on N(r, X). Namely, we say that S_{r1}(X) ⪯ S_{r2}(X) iff N(r1, X) ≤ N(r2, X). Let V(X) be the vector obtained by sorting {S_r(X)} according to the relation above. Each vector V(X) contains L items (the number of ratings per sample), and due to the nature of the relation, if i < j, then the rating V_i(X) is associated with an "earlier" rating than V_j(X) (not necessarily of the same rater), since the relation promotes early ratings of each rater.

We may, therefore, define L artificial evaluation tests T_i = {V_i(X)}, each one containing the i-th slice across all vectors V(X). The average M_i = Mean[V_i(X)] is an average evaluation score (e.g., MOS or CMOS) of T_i, where smaller indices are associated with earlier ratings. Note that since the order of the elements in V(X) is not uniquely defined for the case N(r1, X) = N(r2, X), the T_i are not uniquely defined either. To avoid random fluctuations, we used the averaged values M'_i = Mean[M_i] over a large number of random iterations.

In the rest of this section we show that the resulting vector (M'_1, M'_2, ..., M'_L) has a clear monotonic trend, at least when the number of rating tasks per rater is large enough. We use the Mann-Kendall test [10, 11] to validate the monotonicity hypothesis. The implementation is based on [12] for a small number of data points. We calculate the Mann-Kendall statistic

$S = \sum_{k=1}^{L-1} \sum_{j=k+1}^{L} \operatorname{sign}(M'_j - M'_k),$

and look up the p-value for the null hypothesis of no trend, given S and L, in the table specified in [12]. The lower the p-value is, the more confident we are that the sequence is monotonic.
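A compact sketch of this procedure is given below. The container layout (tasks[X] holding (N(r, X), score) pairs) is our own assumption, and the final table lookup of the p-value from [12] is left out.

```python
import random

def slice_averages(tasks, L, n_iter=1000):
    """Section 4.3: for each sample X, sort its L scores by the rater's
    ordinal position N(r, X); slice i across all samples forms the
    artificial test T_i. Ties in N(r, X) are broken randomly, so the
    slice means are averaged over `n_iter` random iterations.
    `tasks[X]` is a list of (N(r, X), score) pairs of length L."""
    totals = [0.0] * L
    for _ in range(n_iter):
        for pairs in tasks.values():
            v = sorted(pairs, key=lambda p: (p[0], random.random()))
            for i, (_, score) in enumerate(v):
                totals[i] += score
    n = n_iter * len(tasks)
    return [t / n for t in totals]        # M'_1 ... M'_L

def mann_kendall_S(m):
    """Mann-Kendall statistic over the slice means M'_1..M'_L; the p-value
    is then read from the table in [12] given S and L."""
    sign = lambda x: (x > 0) - (x < 0)
    return sum(sign(m[j] - m[k])
               for k in range(len(m) - 1)
               for j in range(k + 1, len(m)))
```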
Table 1 shows the outcome for different tests and different configurations. The tests were conducted on different sets of audio samples and required 10 ratings per item, except T9 and T10 with 8 ratings per item. It is possible to see that all the tests with many tasks per rater had a clear monotonic behavior (except maybe T1, which had a somewhat high probability threshold). However, tests with a small number of tasks per rater had a less clear behavior: some of them had a monotonic behavior, while some didn't. Other parameters like the nature of the test (MOS / CMOS) seemed to have no impact. Nor were we able to predict whether the sequence of M'_i is increasing or decreasing. We hypothesize that there are two trends, calibration and fatigue, where the calibration trend affects some of the raters even for a small number of rating tasks, while the fatigue trend affects almost all the raters having a large number of tasks.

Table 1: Mann-Kendall p-value and monotonicity trend for different evaluation tests.

Test  Type  Samples per rater  Trend  p-value
T1    MOS   60                 Down   0.108
T2    MOS   60                 Up     0.014
T3    MOS   60                 Up     <0.001
T4    MOS   60                 Up     0.023
T5    CMOS  60                 Down   0.014
T6    CMOS  10                 Down   0.431
T7    MOS   10                 Up     <0.001
T8    MOS   10                 Up     0.431
T9    MOS   6                  Up     0.500
T10   MOS   7                  Down   0.031
T11   CMOS  5                  Up     0.014

5. Discussion

This work has focused on two very significant sources of variability in TTS evaluations. The first one, caused by the variance among raters, may be considered well-proven, but taking it into account in a confidence interval requires a more elaborate setup than is typically used in TTS evaluations. However, it can be addressed by increasing the number of raters, which leads to reducing the variance without affecting the number of rated samples.

In our experiments we observed a substantial improvement by increasing the number of raters to a rather large number. This corresponds to the findings in [3], where the recommendations were to use about 30 paid raters in controlled conditions, and many more raters (the exact number was not specified) for less controlled scenarios like crowdsourcing.

An interesting question is whether this behavior is common for all MOS tasks (text-to-speech synthesis, voice conversion, speech enhancement, etc.). We would expect some difference, since we observed a difference even across the samples produced by the same TTS system for different speakers (see Section 3.5). We believe, though, that the variance associated with the rater choice should be inherent to MOS tests, thus creating a similar type of dependency on the number of raters, even if the absolute numbers differ.

The second factor that is analyzed in this paper is caused by a dynamic trend in the raters' rating process. This factor is more vague. While we observe its existence, we cannot claim exactly what the source of this type of behavior is—fatigue, or some process of rater self-calibration, or something else. It is also unclear how different this factor is for different raters. It is possible that this type of problem may be mitigated by modifying the instructions for raters, in a way that keeps them more alert and calibrated.

Note that the tradeoff between calibration and fatigue is hard to analyze given the lack of ground truth in this type of evaluation.
So, we assume that there should be a minimal number of audio samples for the calibration, but the paper does not set a goal of finding this number (and it is unclear if that is feasible in the current setup). Given that reducing the number of tasks per rater also requires more raters, and thus reduces the impact of per-rater variability (the first factor), we do consider limiting the number of tasks per rater beneficial.

6. Conclusions

In this work we presented the analysis of two important aspects of TTS evaluations that are currently not taken into account by the way confidence intervals are usually calculated.

The first factor is caused by the rater variance, i.e., by picking the raters from the rater pool. We showed the impact of this type of variance using bootstrapping simulations on tests with a large number of raters and with multiple ratings per task. In particular, we showed that using 60 audio samples per rater instead of 10 may double the variance even if the number of rating tasks remains the same.

The second aspect implies that we have a non-random component in our evaluations that depends on the number of tasks performed by the rater, which causes the scores to behave monotonically depending on the order of the rating task. While this does not necessarily increase the variance, this factor leads to quality-unrelated scoring of items, and affects both MOS and preference tests. It is unclear, though, whether all raters are subject to such behavior, or only some of them. Our results demonstrate the presence and the impact of this phenomenon intrinsically. Increasing the number of raters, which is equivalent to reducing the number of rating tasks per rater, helps to partially mitigate the problem.

It is difficult to give recommendations regarding the exact number of raters since the process is affected by many factors. There is a tradeoff between the necessity for a rater to learn the task on the one hand and not to be affected by fatigue on the other. In this paper, we used the minimal number of 10 audio samples per rater, and the number of raters was derived from the number of samples per rater; e.g., a test with 1000 audio samples required 100 raters. For a different type of test, it may be beneficial to fine-tune these numbers by calculating confidence intervals using techniques like [9] that take into account the rater variability.

Increasing awareness of these factors will allow researchers to make more informed decisions when setting up TTS evaluations. A large number of rating tasks per rater may lead to evaluation artifacts that are typically not addressed in the way results are analyzed, and the results of such experiments may not be reproducible. However, very simple changes in experiment design may significantly improve the reproducibility (and potentially provide a more precise score) without changing the number of overall rated items.

7. Acknowledgments

The authors would like to thank Tilman Achberger for his valuable comments on the experiment design, and the reviewers, whose input helped to make the experiment description and the discussion much more accurate.

8. References

[1] B. Efron, "Bootstrap methods: another look at the jackknife," The Annals of Statistics, vol. 7, no. 1, pp. 1–26, 1979.
[2] A. Rosenberg and B. Ramabhadran, "Bias and statistical significance in evaluating speech synthesis with mean opinion scores," in Proc. Interspeech 2017, 2017, pp. 3976–3980.
[3] M. Wester, C. Valentini-Botinhao, and G. E. Henter, "Are we using enough listeners? No! An empirically-supported critique of Interspeech 2014 TTS evaluations," in Proc. Interspeech 2015, 2015, pp. 3476–3480.
[4] R. Z. Jiménez, L. F. Gallardo, and S. Möller, "Influence of number of stimuli for subjective speech quality assessment in crowdsourcing," in 2018 Tenth International Conference on Quality of Multimedia Experience (QoMEX), 2018, pp. 1–6.
[5] ITU-T Rec. P.863, "Perceptual objective listening quality assessment," International Telecommunication Union, Geneva, 2014.
[6] J. Slote and J. F. Strand, "Conducting spoken word recognition research online: Validation and a new timing method," Behavior Research Methods, vol. 48, pp. 553–566, 2016.
[7] P. E. Shrout and J. L. Fleiss, "Intraclass correlations: uses in assessing rater reliability," Psychological Bulletin, vol. 86, no. 2, pp. 420–428, 1979.
[8] K. O. McGraw and S. P. Wong, "Forming inferences about some intraclass correlation coefficients," Psychological Methods, vol. 1, no. 1, pp. 30–46, 1996.
[9] J. Camp, T. Kenter, L. Finkelstein, and R. Clark, "MOS vs. AB: Evaluating text-to-speech systems reliably using clustered standard errors," in Proc. Interspeech 2023, 2023, to appear.
[10] H. B. Mann, "Nonparametric tests against trend," Econometrica: Journal of the Econometric Society, vol. 13, no. 3, pp. 245–259, 1945.
[11] M. G. Kendall, Rank Correlation Methods, 4th ed. Charles Griffin, 1975.
[12] R. O. Gilbert, Statistical Methods for Environmental Pollution Monitoring. John Wiley & Sons, 1987.
mkhe0X-a6m
Interesting research on implications of some design aspects of MOS/preference test.
7: Good paper, accept
This paper analyses how two human factors affect the variance of MOS/preference test evaluations: 1) rater bias (each person may have different views); 2) variation of ratings over time (learning vs. fatigue). This is a very interesting topic, as human evaluation is still dominant in our area and reducing the variance would allow us to learn and progress faster. Based on an analysis of a large evaluation, the authors give a clear recommendation: increase the number of raters while reducing the number of samples per rater. However, it is not clear if this is a general recommendation or specific to their particular evaluation. Furthermore, the clarity of the paper could be improved.

The paper presents some facts and gives a recommendation, but I think the authors could be more precise. The authors recommend increasing the number of raters to reduce variance, but they report different results in reference [3]. The justification is that technology now is different and that people are more exposed to TTS. As a practitioner, I am not sure if the recommendation is valid for my technology, or should I do a similar test? What about using MOS for e.g. voice conversion or speech enhancement? The paper does not explain the technology used to generate the samples, but I think it should be explained. E.g.: to how many systems are the raters exposed? I would say 1 TTS, not sure if one or two voices. Does the number of systems influence the results? E.g. in Blizzard Challenges (ref [3]) many systems participated, and maybe this helped raters do a self-calibration? Another possible explanation: maybe if you only listen to one system, after scoring a few similar ratings (4, 4, 4) raters try to pay attention to small differences to be able to provide valuable feedback. However, if several systems are presented and the ratings have more natural differences, the raters feel a '4' for a given system is all that's needed, as they use different scores for other systems. To summarise: thanks to the authors for pointing out the differences with reference [3], but in my opinion further evidence is required to give such a recommendation. On the other hand, in case the authors justify that the recommendation is valid for a wide range of tasks and technologies, I think it would be useful to make the benefit explicit (e.g. translate score+variance into confidence ranges), and they could give a specific recommendation (e.g. as many raters as possible? or number of ratings per rater < K (K=10?)).

The conclusion for the second factor analysed (temporal evolution of ratings) is less clear. Table 1 shows that sometimes it goes up, sometimes it goes down. The authors give two possible causes: learning vs. fatigue. However, in the conclusions the recommendation is to reduce the number of rating tasks per rater. I am not sure this is a generally good recommendation (it works for some cases, not for others: it helps with fatigue, but not with respect to learning the task).

About the clarity:
- The authors mention in several parts of the paper (e.g. 3rd paragraph of the introduction) that the variance can be analyzed using "intra-class" correlation. I would appreciate a reference to understand what they are referring to. This is again mentioned at the end of section 2 (this time with references) and in the discussion section ("a more elaborated setup than is typically used").
It's clear that the authors consider it a good but more complex solution, and I recommend explaining it a little so that readers can be convinced that it's not worth the effort of the complex setup compared with the simple recommendation in the paper.
- In section 3.1, I am afraid I don't understand what the sampling procedure is, or, more specifically, what "schedule" means in this context.
"the schedule containing 16 raters with 60 stimuli and one rater with 40 stimuli should be preferred over a schedule containing 33 raters with 30 stimuli each and one rater with 10 stimuli. For this purpose, we implemented a simple greedy scheduler that minimizes the number of raters given constraints". Why are 16+1 raters preferred to 33+1? Is "schedule" a good word compared with "sampling criterion"?
- In section 3.2, I would add some information. As I already mentioned, please explain how the samples are generated. But also, indicate for both tests how many raters there are and the mean number of ratings per rater (from the figure, around 240 raters, total ratings 7,400, approx. 7 ratings per sample?). What about the second test?
- Maybe I don't understand the sampling, because I don't understand this sentence: "so, there are actually more raters participating in each simulated test than we could theoretically get in real life"
4: The reviewer is confident but not absolutely certain that the evaluation is correct
-6b4dsHIdW0
NeurIPS.cc/2022/Workshop/LaReL
2022
Language-guided Task Adaptation for Imitation Learning
["Prasoon Goyal", "Ray Mooney", "Scott Niekum"]
We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task with the difference between the tasks communicated in natural language. The proposed setting allows reusing demonstrations from other tasks, by providing low effort language descriptions, and can also be used to provide feedback to correct agent errors, which are both important desiderata for building intelligent agents that assist humans in daily tasks. To enable progress in this proposed setting, we create two benchmarks---Room Rearrangement and Room Navigation---that cover a diverse set of task adaptations. Further, we propose a framework that uses a transformer-based model to reason about the entities in the tasks and their relationships, to learn a policy for the target task.
["language", "imitation learning", "learning from demonstration"]
Language-guided Task Adaptation for Imitation Learning

Prasoon Goyal, Raymond J. Mooney, Scott Niekum
Department of Computer Science
University of Texas at Austin
{pgoyal,mooney,sniekum}@cs.utexas.edu

Abstract

We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task, with the difference between the tasks communicated in natural language. The proposed setting allows reusing demonstrations from other tasks by providing low-effort language descriptions, and can also be used to provide feedback to correct agent errors, which are both important desiderata for building intelligent agents that assist humans in daily tasks. To enable progress in this proposed setting, we create two benchmarks—Room Rearrangement and Room Navigation—that cover a diverse set of task adaptations. Further, we propose a framework that uses a transformer-based model to reason about the entities in the tasks and their relationships, to learn a policy for the target task.

1 Introduction

Imitation learning and instruction-following are two common approaches to communicate a new task to a learning agent, using demonstrations and natural language respectively. However, providing demonstrations for each new task can be burdensome for the user, while providing intricate details using language can also become challenging. This motivates a new paradigm that combines the strengths of both demonstrations and natural language. To this end, we propose a novel setting—given a demonstration of a task (the source task), we want an agent to complete a somewhat different task (the target task) in a zero-shot setting, that is, without access to any demonstrations for the target task. The difference between the source task and the target task is communicated using natural language.

For example, consider an environment consisting of objects in a room, as shown in Figure 1. Suppose we have a robot to which we have already provided the demonstration shown on the left. Now, we want to teach it to go to the opposite side of the table without providing a new demonstration, using a demonstration for the source task and a linguistic description of the difference between the source and the target tasks, such as "Go to the opposite side of the wide table". Note that, to infer the target goal, neither the source demonstration nor the description is sufficient by itself, and the agent must therefore combine information from both modalities.

Figure 1: Example of the setting

This setting (1) allows reusing demonstrations from related tasks, and (2) enables demonstrating complex tasks where both modalities may be essential.
Further, it can be used for correcting the behavior of an agent, where the agent's current behavior can be seen as a demonstration of the source task, and a natural language description can be provided to guide the agent towards the correct behavior.

2 Related Work

Our proposed setting is related to, but distinct from, several prior research directions: (1) instead of getting a demonstration of the desired task as in standard imitation learning [3, 23, 26, 1, 25, 36, 9, 16, 11], and only language in instruction-following [2, 10, 34, 32, 14, 4, 28, 29, 20, 30], in our approach the agent gets a demonstration of a related task, with the difference between the demonstrated task and the desired task communicated using language; (2) our approach can be seen as an instance of transfer learning [31, 35], where the transfer is guided using language; (3) our approach is orthogonal to many other lines of work that use language to aid learning [19, 13, 12, 18, 33, 5, 21].

3 Benchmark Datasets

We create two benchmark environments: Room Rearrangement and Room Navigation. The Room Rearrangement environment consists of a 5×5 grid with 2 distinct objects. The goal is to move each object to a desired goal position. The agent and the objects are spawned randomly in the grid. The action space for the agent consists of 7 actions—Up, Down, Left, Right, Grasp, Release, and Stop. The Room Navigation environment consists of a 2D arena, (x, y) ∈ [−100, 100]², with 4 distinct objects. The agent is spawned at a random location in the arena and needs to navigate to a desired goal position. The action space for the agent is (dx, dy) ∈ [−1, 1]². We use a common set of objects in both environments—Chair, Table, Sofa, Light, Shelf, and Wardrobe. Further, each object can have one of 6 attributes—Large, Wide, Wooden, Metallic, Corner, and Foldable.

For each domain, we create three types of adaptations. For Room Rearrangement, these adaptations involve specifying an absolute change in the goal position of each entity, a relative change in the goal position of one entity with respect to the other, and swapping the goal positions of the entities. For Room Navigation, these adaptations involve moving closer to an entity, moving further away from an entity, and going to the opposite side of an entity. For each adaptation template, 5,000 datapoints were generated for training, 100 for validation of the reward and goal learning, 5 for tuning the RL hyperparameters, and 10 for the RL test set. We generate template-based descriptions for all the datapoints. Further, we use Amazon Mechanical Turk to collect natural language paraphrases for 10% of the training datapoints, and for all the datapoints in the other splits. More details about the dataset are provided in the appendix.

4 RElational Task Adaptation for Imitation with Language (RETAIL)

We propose the RElational Task Adaptation for Imitation with Language (RETAIL) framework that takes in a source demonstration, τ_src, and the difference between the source and target tasks described using natural language, l, to learn a policy for the target task, π_tgt. The framework consists of two independent approaches, as shown in Figure 2. The first approach—Relational Reward Adaptation—involves inferring a reward function for the target task, R_tgt, using the source demonstration τ_src and language l, from which a policy for the target task π_tgt is learned using RL. The second approach—Relational Policy Adaptation—involves learning a policy for the source task, π_src, from the source demonstration τ_src, which is then adapted using language l to obtain a policy for the target task, π_tgt.

Figure 2: The RETAIL framework

For both these approaches, we assume access to a training set D = {(τ_src^i, τ_tgt^i, l^i)}_{i=1}^N, where for the ith datapoint, τ_src^i is a demonstration for the source task, τ_tgt^i is a demonstration for the target task, and l^i is the linguistic description of the difference between the source task and the target task.
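To make the data interfaces concrete, a hypothetical rendering of one such training datapoint is sketched below. The class and field names are ours, not the paper's, but the structure mirrors the stated assumptions: each state is a list of entities, and each entity is an attribute-noun pair with an (x, y) position.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Entity:
    attribute: str                 # one of: Large, Wide, Wooden, Metallic, Corner, Foldable
    noun: str                      # one of: Chair, Table, Sofa, Light, Shelf, Wardrobe
    position: Tuple[float, float]  # (x, y) coordinates of this entity

State = List[Entity]               # N_entities entries; fixed per domain

@dataclass
class AdaptationDatapoint:
    source_demo: List[State]       # tau_src^i: sequence of states
    target_demo: List[State]       # tau_tgt^i: available only at training time
    description: str               # l^i, e.g. "Go to the opposite side of the wide table"
```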
4 RElational Task Adaptation for Imitation with Language (RETAIL)

Figure 2: The RETAIL framework

We propose the RElational Task Adaptation for Imitation with Language (RETAIL) framework, which takes in a source demonstration, τ_src, and the difference between the source and target tasks described using natural language, l, to learn a policy for the target task π_tgt. The framework consists of two independent approaches, as shown in Figure 2. The first approach—Relational Reward Adaptation—involves inferring a reward function for the target task R_tgt using the source demonstration τ_src and language l, from which a policy for the target task π_tgt is learned using RL. The second approach—Relational Policy Adaptation—involves learning a policy for the source task π_src from the source demonstration τ_src, which is then adapted using language l to obtain a policy for the target task π_tgt.

For both these approaches, we assume access to a training set D = {(τ^i_src, τ^i_tgt, l^i)}_{i=1}^N, where for the i-th datapoint, τ^i_src is a demonstration for the source task, τ^i_tgt is a demonstration for the target task, and l^i is the linguistic description of the difference between the source task and the target task.

We propose a relational model since many adaptations require reasoning about the relation between entities (e.g. “Move the big table two units away from the wooden chair”). Since entity extraction is not the focus of this work, we assume access to a set of entities for each task, where each entity is represented using two one-hot vectors, corresponding to an attribute and a noun. Further, each state is represented as a list, where element i corresponds to the (x, y) coordinates of the i-th entity. Finally, we assume that the number of entities, denoted as N_entities, is fixed for a given domain.

We start by describing some common components used in both approaches. To encode an entity, its attribute and noun are first encoded using an embedding layer, and the (x, y) position is encoded using a linear layer. These embeddings are concatenated to get the final vector representation of the entity. To encode language, we experiment with 4 encoders: (1) a pretrained CLIP model [24], (2) a pretrained BERT model [8], (3) a BERT model initialized randomly, and (4) GloVe word embeddings [22] with a two-layer bidirectional LSTM [17].

4.1 Relational Reward Adaptation

We define the reward R(s, s′) using a potential function as R(s, s′) = φ(s′) − φ(s). Thus, the problem of reward learning is reduced to the problem of learning the potential function φ(s). We decompose the potential function learning problem into two subproblems: (1) predicting the goal state for the target task given the source goal and the language, g_tgt = Adapt(g_src, l), and (2) learning a distance function between two states, d(s, s′). The potential function for the target task is then defined as φ_tgt(s | g_src, l) = −d(s, Adapt(g_src, l)).

The goal prediction module is a transformer model that takes in the encoded entities for the source goal state and the token embeddings generated by the language encoder, and outputs the goal state for the target task. The distance function encodes states s and s′ using a multi-layered perceptron and computes an l2-distance between the encoded states. The goal prediction module and the distance function are trained independently, and then combined to obtain a reward function that is used to learn a policy for the target task using PPO [27]. More details are provided in the Appendix.
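To illustrate how the pieces of Section 4.1 fit together, the sketch below wires stand-ins for Adapt(g_src, l) and d(s, s′) into the potential-based reward R(s, s′) = φ(s′) − φ(s). The two placeholder functions are assumptions standing in for the trained transformer and MLP; only the reward plumbing is faithful to the text.

```python
import numpy as np

def adapt_goal(g_src: np.ndarray, language: str) -> np.ndarray:
    """Placeholder for the learned Adapt(g_src, l) transformer; a real model
    would condition on `language` instead of applying a fixed shift."""
    return g_src + 1.0

def distance(s: np.ndarray, s_prime: np.ndarray) -> float:
    """Placeholder for the learned d(s, s'); the paper encodes both states
    with an MLP and takes an l2-distance between the encodings."""
    return float(np.linalg.norm(s - s_prime))

def potential(s: np.ndarray, g_src: np.ndarray, language: str) -> float:
    """phi_tgt(s | g_src, l) = -d(s, Adapt(g_src, l))."""
    return -distance(s, adapt_goal(g_src, language))

def reward(s, s_next, g_src, language) -> float:
    """Potential-based shaping: R(s, s') = phi(s') - phi(s), so a transition
    that moves toward the predicted target goal earns positive reward."""
    return potential(s_next, g_src, language) - potential(s, g_src, language)

if __name__ == "__main__":
    g_src = np.array([3.0, 4.0])
    s, s_next = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    print(reward(s, s_next, g_src, "go closer to the wide table"))
```

A convenient property of this form is that the rewards telescope along a trajectory, so the return depends only on the potentials of the first and last states.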
4.2 Relational Policy Adaptation

Instead of learning a model to infer the reward function for the target task from the source demonstration and language, in this section we describe an alternate approach wherein we learn a model to infer the target task policy from the source task policy.

First, a goal-conditioned policy π(s | g) is learned using all the source and target demonstrations—given the goal state for a task, g (which is assumed to be the last state in the demonstration), and another state, s, we use behavior cloning to learn a policy that predicts the action to be taken at state s. We use a neural network to parameterize this policy, wherein the states g and s are concatenated and then passed through a multi-layer perceptron to predict the action at state s.

The learned model is then used to generate data of the form (state, language, source action, target action). For each datapoint of the form (τ^i_src, τ^i_tgt, l^i) in the original dataset, the states in the source and target demonstrations are passed through the learned goal-conditioned policy, passing in the source task goal and the target task goal to obtain the actions in the source and target tasks respectively. This data is used to train a transformer-based adaptation model that takes in the source action, the entities in the state s, and the language to predict the target action. See Figure 8 in the appendix for a diagram of the approach.

During evaluation, we are given the source demonstration and language, as before. We use the goal-conditioned policy π(s | g) to first predict the action for the current state under the source task, and then pass this predicted action, along with the encoded entities and language, to the adaptation model to obtain the action under the target task. This action is then executed in the environment. The process is repeated until the Stop action is executed or the maximum episode length is reached. Note that this approach does not involve reinforcement learning to learn the policy.
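The evaluation procedure above reduces to the following rollout loop. `ToyEnv`, `source_policy`, and `adaptation_model` are minimal stand-ins for the benchmark environment, the goal-conditioned policy, and the transformer adaptation model, and the horizon constant is an assumption.

```python
MAX_EPISODE_LEN = 200  # assumed horizon, not a value from the paper
STOP = "stop"

def rollout_adapted_policy(env, source_policy, adaptation_model, g_src, language):
    """Roll out the target-task policy implied by Section 4.2: query the
    goal-conditioned policy for the source-task action, adapt it with the
    language-conditioned model, and execute until Stop or the horizon."""
    state = env.reset()
    for _ in range(MAX_EPISODE_LEN):
        src_action = source_policy(state, g_src)
        tgt_action = adaptation_model(state, src_action, language)
        state, done = env.step(tgt_action)
        if tgt_action == STOP or done:
            break
    return state

if __name__ == "__main__":
    # Tiny stubs so the loop runs end to end; trained models replace these.
    class ToyEnv:
        def reset(self):
            return 0
        def step(self, action):
            return 0, False

    source_policy = lambda s, g: "up"
    adaptation_model = lambda s, a, l: STOP  # stops immediately, for illustration
    print(rollout_adapted_policy(ToyEnv(), source_policy,
                                 adaptation_model, g_src=0, language="swap the goals"))
```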
4.3 Combining Reward and Policy Adaptation

Recall that the actor-critic model in the PPO algorithm consists of a policy network and a value network. We use the models trained using the policy adaptation and the reward adaptation approaches to initialize these networks respectively, using knowledge distillation [15]. The details of our full approach are described in the appendix.

5 Experiments

Table 1: Number of successful episodes

Setting                     Rearrangement       Navigation
Reward Adaptation           2996.02 ± 136.21    247.98 ± 20.51
Oracle                      4402.78 ± 410.67    337.22 ± 7.34
Zero reward                 121.02 ± 4.25       0.29 ± 0.04
Reward+Policy Adaptation    8516.78 ± 894.35    430.80 ± 5.08

Evaluation Metrics. In the Room Rearrangement domain, an episode is deemed successful if both entities are in the desired goal locations when the agent executes Stop, while in the Room Navigation domain, an episode is deemed successful if the l2-distance between the agent's final position and the desired goal position is less than 5 units. (Recall that the total arena size is 200×200 units.)
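These two criteria translate directly into code; the sketch below assumes tuple-valued positions and exists only to pin down the definitions.

```python
import math

def rearrangement_success(entity_positions, goal_positions, stopped: bool) -> bool:
    """Room Rearrangement: success iff both entities occupy their goal cells
    at the moment the agent executes Stop."""
    return stopped and all(p == g for p, g in zip(entity_positions, goal_positions))

def navigation_success(final_pos, goal_pos, threshold: float = 5.0) -> bool:
    """Room Navigation: success iff the final l2-distance to the desired goal
    is below 5 units (in the 200 x 200 arena)."""
    return math.dist(final_pos, goal_pos) < threshold

if __name__ == "__main__":
    print(rearrangement_success([(1, 2), (3, 4)], [(1, 2), (3, 4)], stopped=True))  # True
    print(navigation_success((10.0, 10.0), (12.0, 13.0)))  # True, distance ~3.6
```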
Relational Reward Adaptation. We train a policy using the reward function obtained by combining the predicted goal state and distance function for each target task, and report the total number of successful episodes at the end of 500,000 and 100,000 episodes for Rearrangement and Navigation respectively, averaged across 3 RL runs per target task. We compare our approach to two other reward functions: (1) a zero-reward, which gives a zero reward for all actions, serving as a lower bound, and (2) an oracle, which has access to the true goal state for the target task and uses l1-distance for Rearrangement and l2-distance for Navigation. Our results, summarized in Table 1 (rows 1-3), show that the proposed approach is about 70% as good as the oracle, and significantly better than the zero-reward lower bound.

Relational Policy Adaptation. To evaluate this approach, we generate 100 rollouts using the trained models for each test task, and compute the number of successful episodes. We find that the approach completes 15.33% of tasks on the Rearrangement domain and 3.87% of tasks on the Navigation domain. Recall that this approach does not involve RL on the target task.

Combining Reward and Policy Adaptation. We report the number of successes when PPO is initialized using the adapted policy in Table 1 (row 4). We observe that on both domains, initializing the policy network using the Relational Policy Adaptation approach and the value network using the Relational Reward Adaptation approach leads to substantially faster policy learning on the target tasks, compared to randomly initialized PPO networks. Figure 6 in the appendix shows the learning curves for these experiments.

Key Takeaways. To summarize, our experiments demonstrate that: (1) Relational Reward Adaptation leads to successfully learning the target task from the source demonstration and language in many test tasks, but there is room for improvement; (2) Relational Policy Adaptation can be used to complete some target tasks without RL, but there is significant room for improvement; and (3) combining the two approaches followed by finetuning with RL leads to much better performance than using either approach independently.

6 Conclusions

We introduced a new problem setting, wherein an agent needs to learn a policy for a target task, given a demonstration of a source task and a linguistic description of the difference between the source and the target tasks, and created two relational benchmarks – Room Rearrangement and Room Navigation – for this setting. We presented two relational approaches for the problem setting. The first approach – relational reward adaptation – learns a transformer-based model that predicts the goal state for the target task, and learns a distance function between two states. These trained modules are then combined to obtain a reward function for the target task, which is used to learn a policy using RL. The second approach – relational policy adaptation – learns a transformer-based model that takes in a state and the action at this state under the source task, to output the action at this state under the target task, conditioned on the source task goal and language. We show that combining these approaches results in effective policy learning on the target tasks.

References

[1] Pieter Abbeel and Andrew Y Ng. “Apprenticeship learning via inverse reinforcement learning”. In: Proceedings of the Twenty-First International Conference on Machine Learning. 2004, p. 1.
[2] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton Van Den Hengel. “Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments”. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018, pp. 3674-3683.
[3] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. “A survey of robot learning from demonstration”. In: Robotics and Autonomous Systems 57.5 (2009), pp. 469-483.
[4] Jacob Arkin, Matthew R Walter, Adrian Boteanu, Michael E Napoli, Harel Biggie, Hadas Kress-Gazit, and Thomas M Howard. “Contextual awareness: Understanding monologic natural language instructions for autonomous robots”. In: 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE. 2017, pp. 502-509.
[5] SRK Branavan, David Silver, and Regina Barzilay. “Learning to win by reading manuals in a Monte-Carlo framework”. In: Journal of Artificial Intelligence Research 43 (2012).
[6] Daniel Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. “Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations”. In: International Conference on Machine Learning. PMLR. 2019, pp. 783-792.
[7] Paul Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei. “Deep reinforcement learning from human preferences”. In: arXiv preprint arXiv:1706.03741 (2017).
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “BERT: Pre-training of deep bidirectional transformers for language understanding”. In: arXiv preprint arXiv:1810.04805 (2018).
[9] Chelsea Finn, Sergey Levine, and Pieter Abbeel. “Guided cost learning: Deep inverse optimal control via policy optimization”. In: International Conference on Machine Learning. PMLR. 2016, pp. 49-58.
[10] Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. “Speaker-follower models for vision-and-language navigation”. In: arXiv preprint arXiv:1806.02724 (2018).
[11] Justin Fu, Katie Luo, and Sergey Levine. “Learning robust rewards with adversarial inverse reinforcement learning”. In: arXiv preprint arXiv:1710.11248 (2017).
[12] Prasoon Goyal, Scott Niekum, and Raymond J Mooney. “PixL2R: Guiding reinforcement learning using natural language by mapping pixels to rewards”. In: arXiv preprint arXiv:2007.15543 (2020).
[13] Prasoon Goyal, Scott Niekum, and Raymond J Mooney. “Using natural language for reward shaping in reinforcement learning”. In: arXiv preprint arXiv:1903.02020 (2019).
[14] Sachithra Hemachandra, Felix Duvallet, Thomas M Howard, Nicholas Roy, Anthony Stentz, and Matthew R Walter. “Learning models for following natural language directions in unknown environments”. In: 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE. 2015, pp. 5608-5615.
[15] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. “Distilling the knowledge in a neural network”. In: arXiv preprint arXiv:1503.02531 2.7 (2015).
[16] Jonathan Ho and Stefano Ermon. “Generative adversarial imitation learning”. In: arXiv preprint arXiv:1606.03476 (2016).
[17] Sepp Hochreiter and Jürgen Schmidhuber. “Long short-term memory”. In: Neural Computation 9.8 (1997), pp. 1735-1780.
[18] Russell Kaplan, Christopher Sauer, and Alexander Sosa. “Beating Atari with natural language guided reinforcement learning”. In: arXiv preprint arXiv:1704.05539 (2017).
[19] Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rocktaschel. “A survey of reinforcement learning informed by natural language”. In: IJCAI 2019: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Aug. 2019.
[20] Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. “Tell me Dave: Context-sensitive grounding of natural language to manipulation instructions”. In: The International Journal of Robotics Research 35.1-3 (2016), pp. 281-300.
[21] Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. “Grounding language for transfer in deep reinforcement learning”. In: Journal of Artificial Intelligence Research 63 (2018), pp. 849-874.
[22] Jeffrey Pennington, Richard Socher, and Christopher Manning. “GloVe: Global vectors for word representation”. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014, pp. 1532-1543.
[23] Dean A Pomerleau. ALVINN: An autonomous land vehicle in a neural network. Tech. rep. Carnegie Mellon University, 1989.
[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. “Learning transferable visual models from natural language supervision”. In: International Conference on Machine Learning. PMLR. 2021, pp. 8748-8763.
[25] Deepak Ramachandran and Eyal Amir. “Bayesian inverse reinforcement learning”. In: IJCAI. Vol. 7. 2007, pp. 2586-2591.
[26] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. “A reduction of imitation learning and structured prediction to no-regret online learning”. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings. 2011, pp. 627-635.
[27] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. “Proximal policy optimization algorithms”. In: arXiv preprint arXiv:1707.06347 (2017).
[28] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. “ALFRED: A benchmark for interpreting grounded instructions for everyday tasks”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020, pp. 10740-10749.
[29] Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, and Heni Ben Amor. “Language-conditioned imitation learning for robot manipulation tasks”. In: arXiv preprint arXiv:2010.12083 (2020).
[30] Jaeyong Sung, Seok Hyun Jin, and Ashutosh Saxena. “Robobarista: Object part based transfer of manipulation trajectories from crowd-sourcing in 3D pointclouds”. In: Robotics Research. Springer, 2018, pp. 701-720.
[31] Matthew E Taylor and Peter Stone. “Transfer learning for reinforcement learning domains: A survey”. In: Journal of Machine Learning Research 10.7 (2009).
[32] Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew Walter, Ashis Banerjee, Seth Teller, and Nicholas Roy. “Understanding natural language commands for robotic navigation and mobile manipulation”. In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 25. 1. 2011.
[33] HJ Wang and Karthik Narasimhan. “Grounding language to entities and dynamics for generalization in reinforcement learning”. In: arXiv preprint arXiv:2101.07393 (2021).
[34] Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. “Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation”. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, pp. 6629-6638.
[35] Zhuangdi Zhu, Kaixiang Lin, and Jiayu Zhou. “Transfer learning in deep reinforcement learning: A survey”. In: arXiv preprint arXiv:2009.07888 (2020).
[36] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. “Maximum entropy inverse reinforcement learning”. In: AAAI. Vol. 8. Chicago, IL, USA. 2008, pp. 1433-1438.
7 Appendix

7.1 Dataset Details

Figure 3: Adaptations used in the Room Rearrangement (top) and Room Navigation (bottom) domains.

Transition Dynamics for the Room Rearrangement Domain. If the agent is on a cell that contains another object, the Grasp action picks up the object; otherwise it leads to no change. A grasped object moves with the agent until the Release action is executed. The Up, Down, Left, and Right actions move the agent (and the grasped object, if any) by one unit in the corresponding direction, except when the action would result in the agent going outside the grid, or in the two objects being on the same grid cell. In these cases, the action doesn't result in any change. The Stop action terminates the episode.
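A minimal sketch of these dynamics, assuming the two-object setup, (x, y) cell coordinates, and string-valued actions; the state layout and return convention are illustrative choices rather than the benchmark's implementation.

```python
GRID = 5
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(agent, objects, grasped, action):
    """One transition of the Room Rearrangement dynamics. `agent` is an (x, y)
    cell, `objects` a list of two (x, y) cells, and `grasped` the index of the
    held object or None. Returns (agent, objects, grasped, done); moves that
    would leave the grid or put two objects on one cell are no-ops."""
    if action == "stop":
        return agent, objects, grasped, True
    if action == "grasp":
        if grasped is None and agent in objects:
            grasped = objects.index(agent)  # pick up the object under the agent
        return agent, objects, grasped, False
    if action == "release":
        return agent, objects, None, False
    dx, dy = MOVES[action]
    nxt = (agent[0] + dx, agent[1] + dy)
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID):
        return agent, objects, grasped, False  # agent would leave the grid
    new_objects = list(objects)
    if grasped is not None:
        new_objects[grasped] = nxt  # the grasped object moves with the agent
        if new_objects[1 - grasped] == nxt:  # assumes exactly two objects
            return agent, objects, grasped, False  # two objects on one cell
    return nxt, new_objects, grasped, False

if __name__ == "__main__":
    agent, objects, grasped = (0, 0), [(0, 0), (2, 2)], None
    agent, objects, grasped, done = step(agent, objects, grasped, "grasp")
    agent, objects, grasped, done = step(agent, objects, grasped, "right")
    print(agent, objects, grasped, done)  # (1, 0) [(1, 0), (2, 2)] 0 False
```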
Examples of Adaptations. Figure 3 shows examples of the adaptations from both the Rearrangement and the Navigation domains, while Table 2 shows some examples of synthetic and natural language descriptions for these adaptations.

Table 2: Examples of template-generated and natural language descriptions collected using AMT.
1. Template: "go further away from the metallic table" / Paraphrase: "Increase your distance from the metallic table."
2. Template: "go closer to the foldable light" / Paraphrase: "Move in the direction of the light that is foldable"
3. Template: "go to the opposite side of the corner light" / Paraphrase: "Move across from the corner light."
4. Template: "move the large chair one unit farther from the wide couch" / Paraphrase: "Increment the distance of the big chair from the wide couch by one."
5. Template: "move corner table two units further left and metallic shelf one unit further backward" / Paraphrase: "slide the corner table two units left and move the metal shelf a single unit back"
6. Template: "move the large table to where the large sofa was moved, and vice versa" / Paraphrase: "swap the place of the table with the sofa"

Together, these environments cover various types of adaptations, such as specifying modifications to one versus several entities, and providing absolute modifications to an entity's position (e.g., "move the table one unit further left") versus modifications that are relative to other entities (e.g., "move the table one unit away from the sofa"). Further, these domains cover different types of MDPs, with Room Rearrangement being a discrete state and action space environment with a relatively short horizon, and Room Navigation being a continuous state and action space environment with a longer horizon. (On average, an optimal policy completes a task in the Room Rearrangement domain in about 30 steps, and in the Room Navigation domain in about 150 steps.) Finally, the Room Navigation domain has a unique optimal path (i.e., a straight-line path between the initial state and the goal state), while the Room Rearrangement domain admits multiple optimal paths (e.g., if reaching an entity requires taking 2 steps to the right and 1 step upwards, these steps can be performed in any order). Thus, these two domains make a robust testbed for developing techniques for the proposed problem setting.

7.2 Details of the Relational and Policy Adaptation Approaches

Goal Prediction. The goal prediction module is trained by using the final states in the source and target demonstrations as the source and target goals respectively. We minimize the mean absolute error between the gold target goal state, g_tgt, and the predicted target goal state, ĝ_tgt:

L_goal = (1/N) ∑_{i=1}^{N} ‖ g^i_tgt − ĝ^i_tgt ‖_1

Figure 7 shows a diagram of the goal prediction module.
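A sketch of one supervised step under this L1 objective; the transformer is abstracted behind a linear stand-in, so the shapes, names, and learning rate are assumptions for illustration.

```python
import torch
import torch.nn as nn

N_ENTITIES, STATE_DIM = 4, 2  # Navigation: 4 entities with (x, y) each

# Stand-in for the transformer goal predictor, which in the paper also
# attends over the language tokens; here it maps flattened goals to goals.
goal_predictor = nn.Linear(N_ENTITIES * STATE_DIM, N_ENTITIES * STATE_DIM)
optimizer = torch.optim.Adam(goal_predictor.parameters(), lr=1e-3)

def goal_loss_step(g_src: torch.Tensor, g_tgt: torch.Tensor) -> float:
    """One step minimizing L_goal: the mean (over the batch) of the L1 error
    between the gold and predicted target goals."""
    g_hat = goal_predictor(g_src)
    loss = (g_tgt - g_hat).abs().sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    g_src = torch.randn(32, N_ENTITIES * STATE_DIM)
    g_tgt = g_src + 1.0  # toy adaptation: every goal shifts by one unit
    for _ in range(5):
        print(goal_loss_step(g_src, g_tgt))  # loss should decrease
```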
This is achieved using the following loss function:Ldist=−Xsi,sj,glogexp(d(g, si))exp(d(g, si)) + exp( d(g, sj))This loss function has been shown to be effective at learning functions that satisfy pairwise inequalityconstraints [7, 6].The policy adaptation approach is diagrammatically shown in Figure 8.7.3 Details of the combined approachHere, we describe the details of how the Relational Reward Adaptation and Relationsl PolicyAdaptation approaches are combined.1.Train the reward adaptation and policy adaptation models using supervised learning inde-pendently, as detailed in the previous sections.2.Use knowledge distillation to initialize the value network for PPO, updating the PPO valuenetwork towards the potential predicted by the reward adaptation approach.3.Use knowledge distillation to initialize the policy network for PPO, updating the PPO policynetwork towards the action probabilities predicted by the policy adaptation approach for thetarget task.4.Finetune the action and value networks using PPO with the rewards predicted by the rewardadaptation approach.For knowledge distillation, states from the demonstration data are sampled uniformly at random.Figure 9 shows a diagram of the combined approach.Importantly, we found that the action network initialized using knowledge distillation usually has alow entropy, and therefore finetuning it directly does not result in good performance. To amelioratethis issue, the entropy of the action network must be kept sufficiently high for it to still allow someexploration. In the continuous control case, we achieve this by increasing the standard deviationof the action network, tuned using the validation set. In the discrete domain, since there is noexplicit parameter to control the entropy in the action network, the knowledge distillation step has anadditional loss term to penalize low-entropy solutions.7.4 Ablation Experiments for Relational Reward AdaptationWe experimented with several ablations of our reward adaptation model, which we describe below.The results are reported in Table 3, where we include the results from the main paper again in the first3 rows for easier comparison.Since our full model consists of two learned components, the goal prediction module, and the distancefunction, we first study the impact of each of these components independently. We experiment with8Table 3: Success rates for different models on Room Rearrangement and Room Navigation domains.We report both the raw success rates (unnormalized), and success rates normalized by the oraclesetting performance.No. of successesSetting Rearrangement NavigationUnnormalized Normalized Unnormalized NormalizedReward Adaptation 2996.02 ±136.21 68.05 ±3.09 247.98 ±20.51 73.54 ±6.08Oracle 4402.78 ±410.67 100.00 ±9.33 337.22 ±7.34 100.00 ±2.18Zero reward 121.02 ±4.25 2.75 ±0.10 0.29 ±0.04 0.09 ±0.01True goal, predicted distance 4164.80 ±337.83 94.59 ±7.67 362.13 ±12.18 107.39 ±3.61Predicted goal, true distance 3706.80 ±200.46 84.19 ±4.55 196.49 ±12.97 58.27 ±3.85Synthetic language 3827.64 ±141.79 86.94 ±3.22 317.11 ±49.26 94.04 ±14.61Non-relational goal prediction 869.89 ±115.12 19.76 ±2.61 0.38 ±0.17 0.11 ±0.05the following two settings: (1) the true target goal state, with the learned distance function (Row 4),and (2) the learned target goal prediction, with the true distance function (Row 5). As expected, thedistance function is easy to learn in these domains, and using the learned distance function instead ofthe true distance function leads to a small or no drop in performance. 
7.3 Details of the Combined Approach

Here, we describe how the Relational Reward Adaptation and Relational Policy Adaptation approaches are combined:

1. Train the reward adaptation and policy adaptation models using supervised learning independently, as detailed in the previous sections.
2. Use knowledge distillation to initialize the value network for PPO, updating the PPO value network towards the potential predicted by the reward adaptation approach.
3. Use knowledge distillation to initialize the policy network for PPO, updating the PPO policy network towards the action probabilities predicted by the policy adaptation approach for the target task.
4. Finetune the action and value networks using PPO with the rewards predicted by the reward adaptation approach.

For knowledge distillation, states from the demonstration data are sampled uniformly at random. Figure 9 shows a diagram of the combined approach.

Importantly, we found that the action network initialized using knowledge distillation usually has low entropy, and therefore finetuning it directly does not result in good performance. To ameliorate this issue, the entropy of the action network must be kept sufficiently high for it to still allow some exploration. In the continuous-control case, we achieve this by increasing the standard deviation of the action network, tuned using the validation set. In the discrete domain, since there is no explicit parameter to control the entropy of the action network, the knowledge distillation step has an additional loss term that penalizes low-entropy solutions.
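A sketch of distillation steps 2 and 3 for the discrete case; the network sizes, the MSE/KL objectives, and the entropy coefficient are illustrative choices, and the continuous-control variant (raising the action standard deviation) is not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 8, 7  # Rearrangement has 7 discrete actions

value_net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
policy_net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
v_opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
p_opt = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def distill_value(states, teacher_potential):
    """Step 2: regress the PPO value head toward the potential phi(s)
    predicted by the reward adaptation approach."""
    loss = F.mse_loss(value_net(states).squeeze(-1), teacher_potential)
    v_opt.zero_grad()
    loss.backward()
    v_opt.step()
    return loss.item()

def distill_policy(states, teacher_probs, entropy_coef=0.01):
    """Step 3: match the action distribution of the policy adaptation model,
    with an entropy bonus so the distilled policy stays explorative."""
    log_probs = F.log_softmax(policy_net(states), dim=-1)
    kl = F.kl_div(log_probs, teacher_probs, reduction="batchmean")
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    loss = kl - entropy_coef * entropy  # penalizes low-entropy solutions
    p_opt.zero_grad()
    loss.backward()
    p_opt.step()
    return loss.item()

if __name__ == "__main__":
    states = torch.randn(32, OBS_DIM)        # sampled from demonstration data
    teacher_potential = torch.randn(32)      # phi(s) from reward adaptation
    teacher_probs = torch.softmax(torch.randn(32, N_ACTIONS), dim=-1)
    print(distill_value(states, teacher_potential),
          distill_policy(states, teacher_probs))
```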
7.4 Ablation Experiments for Relational Reward Adaptation

We experimented with several ablations of our reward adaptation model, which we describe below. The results are reported in Table 3, where we include the results from the main paper again in the first 3 rows for easier comparison.

Table 3: Number of successful episodes for different models on the Room Rearrangement and Room Navigation domains. We report both the raw counts (unnormalized) and the counts normalized by the oracle performance.

Setting                          Rearrangement                        Navigation
                                 Unnormalized       Normalized        Unnormalized      Normalized
Reward Adaptation                2996.02 ± 136.21   68.05 ± 3.09      247.98 ± 20.51    73.54 ± 6.08
Oracle                           4402.78 ± 410.67   100.00 ± 9.33     337.22 ± 7.34     100.00 ± 2.18
Zero reward                      121.02 ± 4.25      2.75 ± 0.10       0.29 ± 0.04       0.09 ± 0.01
True goal, predicted distance    4164.80 ± 337.83   94.59 ± 7.67      362.13 ± 12.18    107.39 ± 3.61
Predicted goal, true distance    3706.80 ± 200.46   84.19 ± 4.55      196.49 ± 12.97    58.27 ± 3.85
Synthetic language               3827.64 ± 141.79   86.94 ± 3.22      317.11 ± 49.26    94.04 ± 14.61
Non-relational goal prediction   869.89 ± 115.12    19.76 ± 2.61      0.38 ± 0.17       0.11 ± 0.05

Since our full model consists of two learned components, the goal prediction module and the distance function, we first study the impact of each component independently. We experiment with the following two settings: (1) the true target goal state with the learned distance function (row 4), and (2) the learned target goal prediction with the true distance function (row 5). As expected, the distance function is easy to learn in these domains, and using the learned distance function instead of the true distance function leads to a small or no drop in performance. Most of the performance drop comes from the goal prediction module, and therefore future modeling innovations should focus on improving the goal prediction module.

Next, we look at the performance difference between synthetic and natural language. Row 6 in Table 3 shows the number of successful episodes when using synthetic language only, both during training the goal prediction model and for learning the target task policy using RL during testing. In both domains, using synthetic language is significantly better than using natural language, and is comparable to the oracle.

In order to analyze the benefit of using the relational model, we compare our approach against a non-relational model. Row 7 shows the results when using a non-relational model, where we use a multilayered perceptron with three linear layers that takes in the entity vectors, the goal positions of all entities in the source task, and the CLIP embedding of the final token in the description, all concatenated together as a single input vector, and outputs the goal positions of all entities in the target task as a single vector. This model is significantly worse than the relational model on both domains, highlighting the benefit of using a relational approach for these tasks.

7.5 Qualitative Results

In this section, we report some qualitative results on the Navigation domain with the reward and policy adaptation approaches.

In Figure 4, we show two examples of goal prediction using the Relational Reward Adaptation approach. In the first example, the predicted goal state is quite close to the true goal state under the target task, suggesting that the model is able to successfully recover the target task. In the second example, the predicted goal is somewhat farther from the true goal. A plausible explanation is that the model was not able to disambiguate the entity being referred to by the language, and therefore computes the target goal position as a linear combination of distances to multiple entities.

In Figure 5, we show three examples of paths followed by the agent when following the actions predicted by the Relational Policy Adaptation approach (without any finetuning). In the first example, we see that the agent successfully reaches and stops at the true goal position under the target task. In the other two examples, we see that the agent gets somewhat close to the goal position under the target task, but doesn't actually reach it (and is also heading towards the goal position under the source task). The errors seem to get larger as the agent gets closer to the target goal, motivating a modified training algorithm wherein datapoints could be weighted differently based on how close the agent is to the goal position. We leave this investigation for future work.

Figure 4: Visualization of predicted goals for two test datapoints. The yellow X denotes the goal position under the source task, and the red and blue X's denote the predicted and true goal positions under the target task.

Figure 5: Visualization of the paths followed by the agent for test datapoints. The red X denotes the initial position of the agent, the yellow X denotes the true goal position under the source task, and the blue X denotes the true goal position under the target task.

Figure 6: Learning curves comparing policy training on target tasks when using uninitialized PPO networks and PPO networks initialized using policy adaptation, on the Rearrangement (left) and Navigation (right) domains.

Figure 7: Neural network architecture for relational goal prediction.

Figure 8: Relational Policy Adaptation approach.

Figure 9: Initializing the value and policy networks of the actor-critic model using the reward adaptation and policy adaptation approaches.
CkSkFmZvI99
Interesting paper but evaluation procedure and results should be made clearer
7: Good paper, accept
## Summary

This paper proposes a new setting related to imitation learning and instruction following, where the agent must learn a target task from a demonstration of a source task and a natural language description of the difference between the source and desired target tasks. To investigate this task adaptation setting, the authors propose two benchmarks and the corresponding datasets. The paper presents and implements two independent approaches to tackling this problem (one is more related to Inverse RL while the other is closer to Imitation Learning) and shows how these two can be combined.

## Pros

* proposes a novel problem setting that is well motivated and relevant for real world applications
* provides two benchmarks to investigate the proposed problem setting
* proposes two independent approaches to solving the problem. Both are interesting and well motivated (a potential-based goal-conditioned reward, and learning a goal-conditioned policy to generate a new dataset from which to learn a policy)
* proposes an interesting way of combining both methods
* very extensive appendix

## Cons

* the evaluation procedure is unclear to me and it is therefore complicated to assess if the results are encouraging and how much room there is for improvement:
  * Table 1 caption mentions success rates but these are actually No. of successes, and it is unclear how to convert those (how many tasks / episodes per task are tested?). From what I understand from l.152, 500'000 and 100'000 episodes are tested, so these would make 17% and 0.4% success rates.
  * l.152: what are the "500'000 and 100'000"? I think a word is missing. Are these new (source demonstration, language description, target goal) datapoints created for evaluation? How are those generated and how do they relate to the training datapoints?
  * Are Relational Reward Adaptation and Relational Policy Adaptation evaluated differently? If yes, why is that, and why not report it in Table 1?
  * l.159-160: do you consider a task successful if only one of the 100 rollouts achieves it?
  * If I understand correctly, Oracle is training RL with the ground truth reward, yet it is outperformed by the proposed method? How come? Is it because the RL task is already too difficult in itself to be learned, and the reward learning part is actually not that challenging? What about Policy Adaptation + Ground Truth reward? Would this be a better upper bound?
* I understand the problem setting is novel, but isn't there some related work that could be adapted to this setting in order to propose some baselines?

## Questions

* For the policy network distillation, do you only sample states from the source demonstrations? Otherwise, what action do you input to the policy learned by policy adaptation?
* A maybe more straightforward way of combining parts of the reward and policy adaptation would have been to learn a target goal function like in 4.1 (the Adapt(g_src, l) one) and a goal-conditioned policy like in 4.2 (the pi(s|g) one), and then use this goal-conditioned policy with the adapted target goal. Have you tried it? Do you think it would work/fail, and why?

## Typos/Suggestions

* It could be made clearer from the beginning that we have access to target demonstrations for learning (in the introduction and Figure 1, for example)
* Section 3 could be made clearer. What is an adaptation template? Is it swapping the goal positions of the entities, or is it one instance of doing so? What is a datapoint? Is it (tau_src, l, tau_tgt)? And is there always one of each? i.e.
for a given source demo, is there only one description and only one target demo?
* what are the natural language paraphrases used for?
* l.71: shouldn't it be pi_tgt?
* l.122: are the states here only the list of coordinates, or also the one-hot vectors of attributes? From l.91 it seems that the state is only the list of coordinates.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Language-guided Task Adaptation for Imitation Learning ### Paper Abstract We introduce a novel setting, wherein an agent needs to learn a task from a demonstration of a related task with the difference between the tasks communicated in natural language. The proposed setting allows reusing demonstrations from other tasks, by providing low effort language descriptions, and can also be used to provide feedback to correct agent errors, which are both important desiderata for building intelligent agents that assist humans in daily tasks. To enable progress in this proposed setting, we create two benchmarks---Room Rearrangement and Room Navigation---that cover a diverse set of task adaptations. Further, we propose a framework that uses a transformer-based model to reason about the entities in the tasks and their relationships, to learn a policy for the target task. ### Paper Keywords ["language", "imitation learning", "learning from demonstration"] ### Paper Content Language-guided Task Adaptation for ImitationLearningPrasoon Goyal, Raymond J. Mooney, Scott NiekumDepartment of Computer ScienceUniversity of Texas at Austin{pgoyal,mooney,sniekum}@cs.utexas.eduAbstractWe introduce a novel setting, wherein an agent needs to learn a task from ademonstration of a related task with the difference between the tasks communicatedin natural language. The proposed setting allows reusing demonstrations fromother tasks, by providing low effort language descriptions, and can also be usedto provide feedback to correct agent errors, which are both important desideratafor building intelligent agents that assist humans in daily tasks. To enable progressin this proposed setting, we create two benchmarks—Room Rearrangement andRoom Navigation—that cover a diverse set of task adaptations. Further, we proposea framework that uses a transformer-based model to reason about the entities in thetasks and their relationships, to learn a policy for the target task.1 IntroductionImitation learning and instruction-following are two common approaches to communicate a new taskto a learning agent, using demonstrations and natural language respectively. However, providingdemonstrations for each new task can be burdersome for the user, while providing intricate detailsusing language can also become challenging. This motivates a new paradigm that combines thestrengths of both demonstrations and natural language. To this end, we propose a novel setting—givena demonstration of a task (the source task ), we want an agent to complete a somewhat different task(thetarget task ) in a zero-shot setting, that is, without access to anydemonstrations for the target task.The difference between the source task and the target task is communicated using natural language.Figure 1: Example of the settingFor example, consider an environment consist-ing of objects in a room, as shown in Figure 1.Suppose we have a robot, to which we havealready provided the demonstration shown onthe left. Now, we want to teach it to go to theopposite side of the table without providing anew demonstration, using a demonstration forthe source task and a linguistic description ofthe difference between the source and the targettasks, such as “Go to the opposite side of thewide table”. 
Note that, to infer the target goal, neither the source demonstration, nor the description,is sufficient by itself, and the agent must therefore combine information from both the modalities.This setting (1) allows reusing demonstrations from related tasks, and (2) enables demonstratingcomplex tasks where both the modalities may be essential. Further, it can be used for correctingthe behavior of an agent, where the agent’s current behavior can be seen as a demonstration of thesource task and a natural language description can be provided to guide the agent towards the correctbehavior.36th Conference on Neural Information Processing Systems (NeurIPS 2022).2 Related WorkOur proposed setting is related to, but distinct from, several prior research directions: (1) instead ofgetting a demonstration of the desired task as in standard imitation learning [3, 23, 26, 1, 25, 36, 9, 16,11] and only language in instruction-following [2, 10, 34, 32, 14, 4, 28, 29, 20, 30], in our approachthe agent gets a demonstration of a related task, with the difference between the demonstrated taskand the desired task communicated using language; (2) our approach can be seen as an instance oftransfer learning [31, 35], where the transfer is guided using language; (3) our approach is orthogonalto many other lines of work that use language to aid learning [19, 13, 12, 18, 33, 5, 21].3 Benchmark DatasetsWe create two benchmark environments: Room Rearrangement and Room Navigation. The RoomRearrangement Environment consists of a 5×5grid, with 2 distinct objects. The goal is to move eachobject to a desired goal position. The agent and the objects are spawned randomly in the grid. Theaction space for the agent consists of 7 actions— Up,Down ,Left ,Right ,Grasp ,Release , and Stop .The Room Navigation Environment consists of a 2D arena, (x, y)∈[−100,100]2, with 4 distinctobjects. The agent is spawned at a random location in the arena, and needs to navigate to a desiredgoal position. The action space for the agent is (dx, dy )∈[−1,1]2. We use a common set of objectsin both the environments— Chair ,Table ,Sofa ,Light ,Shelf , and Wardrobe . Further, each objectcan have one of 6 attributes— Large ,Wide ,Wooden ,Metallic ,Corner , and Foldable .For each domain, we create three types of adaptations. For Room Rearrangement, these adaptationsinvolve specifying an absolute change in the goal position of each entity, the relative change in thegoal position of one entity with respect to the other, and swapping the goal positions of the entities.For Room Navigation, these adaptations involve moving closer to an entity, moving further awayfrom an entity, and going to the opposite side of an entity. For each adaptation template, 5,000datapoints were generated for training, 100 for validation of the reward and goal learning, 5 for tuningthe RL hyperparameters, and 10 for the RL test set. We generate template-based descriptions for allthe datapoints. Further, we use Amazon Mechanical Turk to collect natural language paraphrasesfor 10% of the training datapoints, and all the datapoints in the other splits. More details about thedataset are provided in the appendix.4 RElational Task Adaptation for Imitation with Language (RETAIL)Figure 2: The RETAIL frameworkWe propose the RElational Task Adaptation forImitation with Language (RETAIL) frameworkthat takes in a source demonstration, τsrc, andthe difference between the source and targettasks described using natural language, l, tolearn a policy for the target task πsrc. 
The frame-work consists of two independent approaches,as shown in Figure 2. The first approach—Relational Reward Adaptation—involves infer-ring a reward function for the target task Rtgtusing the source demonstration τsrcand lan-guage l, from which a policy for the target taskπtgtis learned using RL. The second approach—Relational Policy Adaptation—involves learn-ing a policy for the source task πsrcfromthe source demonstration τsrc, which is thenadapted using language lto obtain a policy forthe target task πtgt.For both these approaches, we assume access to a training set D={(τisrc, τitgt, li)}Ni=1, where fortheithdatapoint, τisrcis a demonstration for the source task, τitgtis a demonstration for the targettask, and liis the linguistic description of the difference between the source task and the target task.2We propose a relational model since many adaptations require reasoning about the relation betweenentities (e.g. “Move the big table two units away from the wooden chair"). Since entity extraction isnot the focus of this work, we assume access to a set of entities for each task, where each entity isrepresented using two one-hot vectors, corresponding to an attribute and a noun. Further, each state isrepresented as a list, where element icorresponds to the (x, y)coordinates of the ithentity. Finally,we assume that the number of entities, denoted as Nentities , is fixed for a given domain.We start by describing some common components used in both the approaches. To encode an entity,its attribute and noun are first encoded using an embedding layer, and the (x, y) position is encodedusing a linear layer. These embeddings are concatenated to get the final vector representation of theentity. To encode language, we experiment with 4 encoders: (1) pretrained CLIP model [24], (2)pretrained BERT model [8], (3) BERT model initialized randomly, and (4) GloVE word embeddings[22], with a two-layer bidirectional LSTM [17].4.1 Relational Reward AdaptationWe define the reward R(s, s′)using a potential function as, R(s, s′) =φ(s′)−φ(s). Thus, theproblem of reward learning is reduced to the problem of learning the potential function φ(s). Wedecompose the potential function learning problem into two subproblems: (1) predicting the goal statefor the target task given the source goal and the language, gtgt=Adapt (gsrc, l), and (2) learninga distance function between two states, d(s, s′). The potential function for the target task is thendefined as φtgt(s|gsrc, l) =−d(s, Adapt (gsrc, l)).The goal prediction module is a transformer model that takes in the encoded entities for the sourcegoal state and the token embeddings generated by the language encoder, to output the goal state forthe target task. The distance function encodes states sands′using a multi-layered perceptron, andcomputes an l2-distance between the encoded states. The goal prediction module and the distancefunction are trained independently, and the combined to obtain a reward function that is used to learna policy for the target task using PPO [27]. 
More details are provided in the Appendix.4.2 Relational Policy AdaptationInstead of learning a model to infer the reward function for the target task from the source demonstra-tion and language, in this section, we describe an alternate approach wherein we learn a model toinfer the target task policy from the source task policy.First, a goal-conditioned policy π(s|g)is learned using all the source and target demonstrations—given the goal state for a task, g, (which is assumed to be the last state in the demonstration), andanother state, s, we use behavior cloning to learn a policy that predicts the action to be taken at states. We use a neural network to parameterize this policy, wherein the states gandsare concatenatedand then passed through a multi-layer perceptron to predict the action at state s.The learned model is then used to generate data of the form (state, language, source action, targetaction). For each datapoint of the form (τisrc, τitgt, li)in the original dataset, the states in the sourceand target demonstrations are passed through the learned goal-conditioned policy, passing in thesource task goal and the target task goal to obtain the actions in the source and target tasks respectively.This data is used to train a transformer-based adaptation model, that takes in the source action, theentities in the state s, and the language to predict the target action. See Figure 8 in the appendix for adiagram of the approach.During evaluation, we are given the source demonstration and language, as before. We use thegoal-conditioned policy π(s|g)to first predict the action for the current state under the source task,and then pass this predicted action, along with the encoded entities and language to the adaptationmodel, to obtain the action under the target task. This action is then executed in the environment.The process is repeated until the STOP action is executed or the maximum episode length is reached.Note that this approach does not involve reinforcement learning to learn the policy.4.3 Combining Reward and Policy AdaptationRecall that the actor-critic model in the PPO algorithm consists of a policy network and a valuenetwork. We use the models trained using the policy adaptation and the reward adaptation approaches3to initialize these networks respectively using knowledge distillation [15]. The details of our fullapproach are described in the appendix.5 ExperimentsTable 1: Success ratesNo. of successesSetting Rearrangement NavigationReward Adaptation 2996.02 ±136.21 247.98 ±20.51Oracle 4402.78 ±410.67 337.22 ±7.34Zero reward 121.02 ±4.25 0.29 ±0.04Reward+Policy Adaptation 8516.78 ±894.35 430.80 ±5.08Evaluation Metrics. In theRoom Rearrangement domain,an episode is deemed success-ful if both the entities are in thedesired goal locations when theagent executes Stop , while forthe Room Navigation domain, anepisode is deemed successful ifthel2-distance between the agent’s final position and the desired goal position is less than 5 units.(Recall that the total arena size is 200×200units.)Relational Reward Adaptation. We train a policy using the reward function obtained by combiningthe predicted goal state and distance function for each target task, and report the total number ofsuccessful episodes at the end of 500,000 and 100,000 for Rearrangement and Navigation respectively,averaged across 3 RL runs per target task. 
We compare our approach to two other reward functions:(1) a zero-reward, that gives a zero reward for all actions, serving as a lower bound, and (2) an oracle,that has access to the true goal state for the target task and uses l1-distance for Rearrangement, andl2-distance for Navigation. Our results, summarized in Table 1 (rows 1-3), show that the proposedapproach is about 70% as good as the oracle, and significantly better than the zero-reward lowerbound.Relational Policy Adaptation. To evaluate this approach, we generate 100 rollouts using thetrained models for each test task, and compute the number of successful episodes. We find that theapproach completes 15.33% tasks on the Rearrangement domain, and 3.87% tasks on the Navigationdomain. Recall that this approach does not involve RL on the target task.Combining Reward and Policy Adaptation. We report the number of successes when PPOis initialized using the adapted policy in Table 1 (row 4). We observe that on both the domains,initializing the policy network using the Relational Policy Adaptation approach and the value networkusing the Relational Reward Adaptation approach leads to a substantially faster policy learning onthe target tasks, compared to randomly initialized PPO networks. Figure 6 in the appendix shows thelearning curves for these experiments.Key Takeaways. To summarize, our experiments demonstrate that: (1) Relational Reward Adap-tation leads to successfully learning the target task from the source demonstration and language inmany test tasks, but there is room for improvement; (2) Relational Policy Adaptation can be usedto complete some target tasks without RL, but there is a significant room for improvement; and (3)combining the two approaches followed by finetuning with RL leads to a much better performancethan using either approach independently.6 ConclusionsWe introduced a new problem setting, wherein an agent needs to learn a policy for a target task,given the demonstration of a source task and a linguistic description of the difference between thesource and the target tasks, and created two relational benchmarks – Room Rearrangement and RoomNavigation – for this setting. We presented two relational approaches for the problem setting. Thefirst approach – relational reward adaptation – learns a transformer-based model that predicts the goalstate for the target task, and learns a distance function between two states. These trained modules arethen combined to obtain a reward function for the target task, which is used to learn a policy usingRL. The second approach – relational policy adaptation – learns a transformer-based model that takesin a state, and the action at this state under the source task, to output the action at this state underthe target task, conditioned on the source task goal and language. We show that combining theseapproaches results in effective policy learning on the target tasks.4References[1] Pieter Abbeel and Andrew Y Ng. “Apprenticeship learning via inverse reinforcement learning”.In:Proceedings of the twenty-first international conference on Machine learning . 2004, p. 1.[2] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid,Stephen Gould, and Anton Van Den Hengel. “Vision-and-language navigation: Interpretingvisually-grounded navigation instructions in real environments”. In: Proceedings of the IEEEConference on Computer Vision and Pattern Recognition . 2018, pp. 3674–3683.[3] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. 
“A survey of robotlearning from demonstration”. In: Robotics and autonomous systems 57.5 (2009), pp. 469–483.[4] Jacob Arkin, Matthew R Walter, Adrian Boteanu, Michael E Napoli, Harel Biggie, Hadas Kress-Gazit, and Thomas M Howard. “Contextual awareness: Understanding monologic naturallanguage instructions for autonomous robots”. In: 2017 26th IEEE International Symposiumon Robot and Human Interactive Communication (RO-MAN) . IEEE. 2017, pp. 502–509.[5] SRK Branavan, David Silver, and Regina Barzilay. “Learning to win by reading manuals in aMonte-Carlo framework”. In: Journal of Artificial Intelligence Research 43 (2012).[6] Daniel Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. “Extrapolating beyondsuboptimal demonstrations via inverse reinforcement learning from observations”. In: Interna-tional Conference on Machine Learning . PMLR. 2019, pp. 783–792.[7] Paul Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei.“Deep reinforcement learning from human preferences”. In: arXiv preprint arXiv:1706.03741(2017).[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “Bert: Pre-trainingof deep bidirectional transformers for language understanding”. In: arXiv preprintarXiv:1810.04805 (2018).[9] Chelsea Finn, Sergey Levine, and Pieter Abbeel. “Guided cost learning: Deep inverse optimalcontrol via policy optimization”. In: International conference on machine learning . PMLR.2016, pp. 49–58.[10] Daniel Fried, Ronghang Hu, V olkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-PhilippeMorency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. “Speaker-follower models for vision-and-language navigation”. In: arXiv preprint arXiv:1806.02724(2018).[11] Justin Fu, Katie Luo, and Sergey Levine. “Learning robust rewards with adversarial inversereinforcement learning”. In: arXiv preprint arXiv:1710.11248 (2017).[12] Prasoon Goyal, Scott Niekum, and Raymond J Mooney. “PixL2R: Guiding Reinforce-ment Learning Using Natural Language by Mapping Pixels to Rewards”. In: arXiv preprintarXiv:2007.15543 (2020).[13] Prasoon Goyal, Scott Niekum, and Raymond J Mooney. “Using natural language for rewardshaping in reinforcement learning”. In: arXiv preprint arXiv:1903.02020 (2019).[14] Sachithra Hemachandra, Felix Duvallet, Thomas M Howard, Nicholas Roy, Anthony Stentz,and Matthew R Walter. “Learning models for following natural language directions in unknownenvironments”. In: 2015 IEEE International Conference on Robotics and Automation (ICRA) .IEEE. 2015, pp. 5608–5615.[15] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al. “Distilling the knowledge in a neural network”.In:arXiv preprint arXiv:1503.02531 2.7 (2015).[16] Jonathan Ho and Stefano Ermon. “Generative adversarial imitation learning”. In: arXiv preprintarXiv:1606.03476 (2016).[17] Sepp Hochreiter and Jürgen Schmidhuber. “Long short-term memory”. In: Neural computation9.8 (1997), pp. 1735–1780.[18] Russell Kaplan, Christopher Sauer, and Alexander Sosa. “Beating atari with natural languageguided reinforcement learning”. In: arXiv preprint arXiv:1704.05539 (2017).[19] Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Ed-ward Grefenstette, Shimon Whiteson, and Tim Rocktaschel. “A Survey of ReinforcementLearning Informed by Natural Language”. In: IJCAI 2019: Proceedings of the Twenty-EighthInternational Joint Conference on Artificial Intelligence . Aug. 2019.5[20] Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 
“Tell me dave: Context-sensitive grounding of natural language to manipulation instructions”. In: The InternationalJournal of Robotics Research 35.1-3 (2016), pp. 281–300.[21] Karthik Narasimhan, Regina Barzilay, and Tommi Jaakkola. “Grounding language for transferin deep reinforcement learning”. In: Journal of Artificial Intelligence Research 63 (2018),pp. 849–874.[22] Jeffrey Pennington, Richard Socher, and Christopher Manning. “Glove: Global vectors forword representation”. In: Proceedings of the 2014 conference on empirical methods in naturallanguage processing (EMNLP) . 2014, pp. 1532–1543.[23] Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network . Tech. rep.CARNEGIE-MELLON UNIV PITTSBURGH PA ARTIFICIAL INTELLIGENCE and PSY-CHOLOGY . . ., 1989.[24] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. “Learning transferable visualmodels from natural language supervision”. In: International Conference on Machine Learning .PMLR. 2021, pp. 8748–8763.[25] Deepak Ramachandran and Eyal Amir. “Bayesian Inverse Reinforcement Learning.” In: IJCAI .V ol. 7. 2007, pp. 2586–2591.[26] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. “A reduction of imitation learning andstructured prediction to no-regret online learning”. In: Proceedings of the fourteenth interna-tional conference on artificial intelligence and statistics . JMLR Workshop and ConferenceProceedings. 2011, pp. 627–635.[27] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. “Proximalpolicy optimization algorithms”. In: arXiv preprint arXiv:1707.06347 (2017).[28] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, RoozbehMottaghi, Luke Zettlemoyer, and Dieter Fox. “Alfred: A benchmark for interpreting groundedinstructions for everyday tasks”. In: Proceedings of the IEEE/CVF conference on computervision and pattern recognition . 2020, pp. 10740–10749.[29] Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, and Heni BenAmor. “Language-Conditioned Imitation Learning for Robot Manipulation Tasks”. In: arXivpreprint arXiv:2010.12083 (2020).[30] Jaeyong Sung, Seok Hyun Jin, and Ashutosh Saxena. “Robobarista: Object part based transferof manipulation trajectories from crowd-sourcing in 3d pointclouds”. In: Robotics Research .Springer, 2018, pp. 701–720.[31] Matthew E Taylor and Peter Stone. “Transfer learning for reinforcement learning domains: Asurvey.” In: Journal of Machine Learning Research 10.7 (2009).[32] Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew Walter, Ashis Banerjee, SethTeller, and Nicholas Roy. “Understanding natural language commands for robotic navigationand mobile manipulation”. In: Proceedings of the AAAI Conference on Artificial Intelligence .V ol. 25. 1. 2011.[33] HJ Wang and Karthik Narasimhan. “Grounding Language to Entities and Dynamics forGeneralization in Reinforcement Learning”. In: arXiv preprint arXiv:2101.07393 (2021).[34] Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang,William Yang Wang, and Lei Zhang. “Reinforced cross-modal matching and self-supervisedimitation learning for vision-language navigation”. In: Proceedings of the IEEE/CVF Confer-ence on Computer Vision and Pattern Recognition . 2019, pp. 6629–6638.[35] Zhuangdi Zhu, Kaixiang Lin, and Jiayu Zhou. “Transfer learning in deep reinforcementlearning: A survey”. 
In: arXiv preprint arXiv:2009.07888 (2020).[36] Brian D Ziebart, Andrew L Maas, J Andrew Bagnell, and Anind K Dey. “Maximum entropyinverse reinforcement learning.” In: Aaai . V ol. 8. Chicago, IL, USA. 2008, pp. 1433–1438.67 Appendix7.1 Dataset DetailsFigure 3: Adaptations used in the Room Rear-rangement (top) and Room Navigation (bottom)domains.Transition Dynamics for the Room Rear-rangement Domain. If the agent is on a cellthat contains another object, the Grasp actionpicks up the object, otherwise it leads to nochange. A grasped object moves with the agent,until the Release action is executed. The Up,Down ,Left , and Right actions move the agent(and the grasped object, if any) by one unit in thecorresponding direction, except when the actionwould result in the agent going outside the grid,or the two objects on the same grid cell. In thesecases, the action doesn’t result in any change.TheStop action terminates the episode.Examples of Adaptations. Figure 3 showsexamples of the adaptations from both the Re-arrangement and the Navigation domain, whileTable 2 shows some examples of synthetic andnatural language descriptions for these adapta-tions.Together, these environments cover varioustypes of adaptations, such as specifying mod-ifications to one versus several entities, provid-ing absolute modifications to an entity’s posi-tion (e.g., “move the table one unit further left")versus modifications that are relative to otherentities (e.g., “move the table one unit awayfrom the sofa"). Further, these domains coverdifferent types of MDPs, with Room Rearrange-ment being a discrete state and action space en-vironment, with a relatively short horizon, whileRoom Navigation being a continuous state andaction space environment, with a longer horizon.(On average, an optimal policy completes a taskin the Room Rearrangement domain in about 30steps, while in the Room Navigation domain inabout 150 steps.) Finally, the Room Navigation domain has a unique optimal path (i.e. a straightline path between the initial state and the goal state), while the Room Rearrangement domain admitsmultiple optimal paths (e.g. if reaching an entity requires taking 2 steps to the right and 1 stepupwards, these steps can be performed in any order). Thus, these two domains make a robust testbedfor developing techniques for the proposed problem setting.7.2 Details of the Relational and Policy Adaptation ApproachesGoal Prediction. The goal prediction module is trained by using the final states in the source andtarget demonstrations, as the source and target goals respectively. We minimize the mean absoluteerror between the gold target goal state, gtgtand the predicted target goal state, ˆgtgt:Lgoal=1NNXi=1∥gtgt−ˆgtgt∥1Figure 7 shows a diagram of the goal prediction module.Distance Function. To train the distance function, two states siandsjare sampled from a demon-stration τ, which can be the source or the target demonstration for the task, such that i < j . The7Table 2: Examples of template-generated and natural language descriptions collected using AMT.Template Natural language paraphrase1. go further away from the metallic table Increase your distance from the metallic table.2. go closer to the foldable light Move in the direction of the light that is foldable3. go to the opposite side of the corner light Move across from the corner light.4. move the large chair one unit farther from thewide couchIncrement the distance of the big chair from thewide couch by one.5. 
5. move corner table two units further left and metallic shelf one unit further backward | slide the corner table two units left and move the metal shelf a single unit back
6. move the large table to where the large sofa was moved, and vice versa | swap the place of the table with the sofa

The model is trained to predict distances such that d(g, s_i) > d(g, s_j), where g is the goal state for the demonstration. This is achieved using the following loss function (a runnable sketch of this loss is given after the ablation results in Section 7.4):

L_{dist} = -\sum_{s_i, s_j, g} \log \frac{\exp(d(g, s_i))}{\exp(d(g, s_i)) + \exp(d(g, s_j))}

This loss function has been shown to be effective at learning functions that satisfy pairwise inequality constraints [7, 6]. The policy adaptation approach is shown diagrammatically in Figure 8.

7.3 Details of the Combined Approach

Here, we describe how the Relational Reward Adaptation and Relational Policy Adaptation approaches are combined:
1. Train the reward adaptation and policy adaptation models using supervised learning independently, as detailed in the previous sections.
2. Use knowledge distillation to initialize the value network for PPO, updating the PPO value network towards the potential predicted by the reward adaptation approach.
3. Use knowledge distillation to initialize the policy network for PPO, updating the PPO policy network towards the action probabilities predicted by the policy adaptation approach for the target task.
4. Finetune the action and value networks using PPO with the rewards predicted by the reward adaptation approach.

For knowledge distillation, states from the demonstration data are sampled uniformly at random. Figure 9 shows a diagram of the combined approach.

Importantly, we found that the action network initialized using knowledge distillation usually has low entropy, and therefore finetuning it directly does not result in good performance. To ameliorate this issue, the entropy of the action network must be kept sufficiently high for it to still allow some exploration. In the continuous control case, we achieve this by increasing the standard deviation of the action network, tuned using the validation set. In the discrete domain, since there is no explicit parameter to control the entropy in the action network, the knowledge distillation step has an additional loss term to penalize low-entropy solutions.

7.4 Ablation Experiments for Relational Reward Adaptation

We experimented with several ablations of our reward adaptation model, which we describe below. The results are reported in Table 3, where we include the results from the main paper again in the first 3 rows for easier comparison.
Table 3: Success rates (number of successes) for different models on Room Rearrangement and Room Navigation domains. We report both the raw success rates (unnormalized), and success rates normalized by the oracle setting performance.

No. | Setting | Rearrangement (Unnormalized / Normalized) | Navigation (Unnormalized / Normalized)
1 | Reward Adaptation | 2996.02 ± 136.21 / 68.05 ± 3.09 | 247.98 ± 20.51 / 73.54 ± 6.08
2 | Oracle | 4402.78 ± 410.67 / 100.00 ± 9.33 | 337.22 ± 7.34 / 100.00 ± 2.18
3 | Zero reward | 121.02 ± 4.25 / 2.75 ± 0.10 | 0.29 ± 0.04 / 0.09 ± 0.01
4 | True goal, predicted distance | 4164.80 ± 337.83 / 94.59 ± 7.67 | 362.13 ± 12.18 / 107.39 ± 3.61
5 | Predicted goal, true distance | 3706.80 ± 200.46 / 84.19 ± 4.55 | 196.49 ± 12.97 / 58.27 ± 3.85
6 | Synthetic language | 3827.64 ± 141.79 / 86.94 ± 3.22 | 317.11 ± 49.26 / 94.04 ± 14.61
7 | Non-relational goal prediction | 869.89 ± 115.12 / 19.76 ± 2.61 | 0.38 ± 0.17 / 0.11 ± 0.05

Since our full model consists of two learned components, the goal prediction module and the distance function, we first study the impact of each of these components independently. We experiment with the following two settings: (1) the true target goal state with the learned distance function (Row 4), and (2) the learned target goal prediction with the true distance function (Row 5). As expected, the distance function is easy to learn in these domains, and using the learned distance function instead of the true distance function leads to a small or no drop in performance. Most of the performance drop comes from the goal prediction module, and therefore future modeling innovations should focus on improving the goal prediction module.

Next, we look at the performance difference between synthetic and natural language. Row 6 in Table 3 shows the number of successful episodes when using synthetic language only, both during training the goal prediction model and for learning the target task policy using RL during testing. In both domains, using synthetic language is significantly better than using natural language, and is comparable to the oracle.

In order to analyze the benefit of using the relational model, we compare our approach against a non-relational model. Row 7 shows the results when using a non-relational model, where we use a multilayered perceptron with three linear layers that takes in the entity vectors, the goal positions of all entities in the source task, and the CLIP embedding of the final token in the description, all concatenated together as a single input vector, and outputs the goal positions of all entities in the target task as a single vector. This model is significantly worse than the relational model on both domains, highlighting the benefit of using a relational approach for these tasks.
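To make the distance-function objective of Section 7.2 concrete, the following is a minimal PyTorch sketch of the pairwise ranking loss (the goal-prediction loss is a plain L1 regression and is omitted). The MLP architecture and all names here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceModel(nn.Module):
    """Illustrative distance function d(g, s): a small MLP over the
    concatenated goal and state vectors (the architecture is an
    assumption; the paper does not specify it at this level)."""
    def __init__(self, state_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, g: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([g, s], dim=-1)).squeeze(-1)

def distance_ranking_loss(model: DistanceModel, g, s_i, s_j):
    """Pairwise loss from Section 7.2: for i < j (s_i earlier in the
    demonstration, hence farther from the goal), push d(g, s_i) above
    d(g, s_j) via a two-way softmax."""
    d_i = model(g, s_i)                      # distances for earlier states
    d_j = model(g, s_j)                      # distances for later states
    logits = torch.stack([d_i, d_j], dim=-1)
    # -log of the softmax probability assigned to d(g, s_i) in each pair
    return -F.log_softmax(logits, dim=-1)[..., 0].mean()
```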
7.5 Qualitative Results

In this section, we report some qualitative results on the Navigation domain with the reward and policy adaptation approaches.

In Figure 4, we show two examples of goal prediction using the Relational Reward Adaptation approach. In the first example, the predicted goal state is quite close to the true goal state under the target task, suggesting that the model is able to successfully recover the target task. In the second example, the predicted goal is somewhat farther from the true goal. A plausible explanation is that the model was not able to disambiguate the entity being referred to by language, and therefore computes the target goal position as a linear combination of distances to multiple entities.

In Figure 5, we show three examples of paths followed by the agent when following the actions predicted by the Relational Policy Adaptation approach (without any finetuning). In the first example, we see that the agent successfully reaches and stops at the true goal position under the target task. In the other two examples, we see that the agent gets somewhat close to the goal position under the target task, but doesn't actually reach it (and is also heading towards the goal position under the source task). The errors seem to get larger as the agent gets closer to the target goal, motivating a modified training algorithm wherein datapoints could be weighted differently based on how close the agent is to the goal position. We leave this investigation for future work.

[Figure 4: Visualization of the predicted goal for two test datapoints. The yellow X denotes the goal position under the source task, and the red and blue X's denote the predicted and true goal positions under the target task.]
[Figure 5: Visualization of agent paths for three test datapoints. The red X denotes the initial position of the agent, the yellow X denotes the true goal position under the source task, and the blue X denotes the true goal position under the target task.]
[Figure 6: Learning curves comparing the policy training on target tasks when using uninitialized PPO networks and PPO networks initialized using policy adaptation, on the Rearrangement (left) and Navigation (right) domains.]
[Figure 7: Neural network architecture for relational goal prediction.]
[Figure 8: Relational Policy Adaptation approach.]
[Figure 9: Initializing the value and policy networks of the actor-critic model using the reward adaptation and policy adaptation approaches.]

<|im_end|> <|im_start|>assistant ### Review Title Interesting paper but evaluation procedure and results should be made clearer ### Review Text ## Summary This paper proposes a new setting related to imitation learning and instruction following where the agent must learn a target task from a demonstration of a source task and a natural language description of the difference between the source and desired target tasks. To investigate this task adaptation setting the authors propose two benchmarks and the corresponding datasets. The paper presents and implements two independent approaches to tackling this problem (one is more related to Inverse RL while the other is closer to Imitation Learning) and shows how these two can be combined. ## Pros * proposes a novel problem setting that is well motivated and relevant for real-world applications * provides two benchmarks to investigate the proposed problem setting * proposes two independent approaches to solving the problem. Both are interesting and well motivated (a potential-based goal-conditioned reward, and learning a goal-conditioned policy to generate a new dataset from which to learn a policy) * proposes an interesting way of combining both methods * very extensive appendix ## Cons * the evaluation procedure is unclear to me and it is therefore complicated to assess whether the results are encouraging and how much room there is for improvement: * Table 1's caption mentions success rates but these are actually numbers of successes, and it is unclear how to convert between the two (how many tasks / episodes per task are tested?). From what I understand from l.152, 500'000 and 100'000 episodes are tested, so these would make 17% and 0.4% success rates. * l.152: what are the "500'000 and 100'000"? I think a word is missing. Are these new (source demonstration, language description, target goal) datapoints created for evaluation? How are those generated and how do they relate to the training datapoints? * Are Relational Reward Adaptation and Relational Policy Adaptation evaluated differently? If yes, why is that, and why not report it in Table 1? * l.159-160: do you consider a task successful if only one of the 100 rollouts achieves it? * If I understand correctly, Oracle is training RL with the ground-truth reward, yet it is outperformed by the proposed method?
How come? Is it because the RL task is already too difficult in itself to be learned, and that actually the reward learning part is not that challenging? What about Policy Adaptation + ground-truth reward? Would this be a better upper bound? * I understand the problem setting is novel, but isn't there some related work that could be adapted to this setting in order to propose some baselines? ## Questions * For the policy network distillation, do you only sample states from the source demonstrations? Otherwise, what action do you input to the policy learned by policy adaptation? * A maybe more straightforward way of combining parts of the reward and policy adaptation would have been to learn a target goal function like in 4.1 (the Adapt(g_src, l) one) and a goal-conditioned policy like in 4.2 (the pi(s|g) one) and then use this goal-conditioned policy with the adapted target goal. Have you tried it? Do you think it would work/fail, and why? ## Typos/Suggestions * It could be made clearer from the beginning that we have access to target demonstrations for learning (in the introduction and Figure 1, for example) * Section 3 could be made clearer. What is an adaptation template? Is it swapping the goal position of the entities, or is it one instance of doing so? What is a datapoint? Is it (tau_src, l, tau_tgt)? And is there always one of each, i.e. for a given source demo is there only one description and only one target demo? * What are the natural language paraphrases used for? * l.71: shouldn't this be pi_tgt? * l.122: are the states here only the list of coordinates, or also the one-hot vectors of attributes? From l.91 it seems that the state is only the list of coordinates. ### Review Rating 7: Good paper, accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
SJ4Z72Rctm
ICLR.cc/2019/Conference
2019
Composing Entropic Policies using Divergence Correction
["Jonathan J Hunt", "Andre Barreto", "Timothy P Lillicrap", "Nicolas Heess"]
Deep reinforcement learning (RL) algorithms have made great strides in recent years. An important remaining challenge is the ability to quickly transfer existing skills to novel tasks, and to combine existing skills with newly acquired ones. In domains where tasks are solved by composing skills this capacity holds the promise of dramatically reducing the data requirements of deep RL algorithms, and hence increasing their applicability. Recent work has studied ways of composing behaviors represented in the form of action-value functions. We analyze these methods to highlight their strengths and weaknesses, and point out situations where each of them is susceptible to poor performance. To perform this analysis we extend generalized policy improvement to the max-entropy framework and introduce a method for the practical implementation of successor features in continuous action spaces. Then we propose a novel approach which, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between policies. We study this approach in the tabular case and propose a scalable variant that is applicable in multi-dimensional continuous action spaces. We compare our approach with existing ones on a range of non-trivial continuous control problems with compositional structure, and demonstrate qualitatively better performance despite not requiring simultaneous observation of all task rewards.
["maximum entropy RL", "policy composition", "deep rl"]
ABSTRACT

Deep reinforcement learning (RL) algorithms have made great strides in recent years. An important remaining challenge is the ability to quickly transfer existing skills to novel tasks, and to combine existing skills with newly acquired ones. In domains where tasks are solved by composing skills this capacity holds the promise of dramatically reducing the data requirements of deep RL algorithms, and hence increasing their applicability. Recent work has studied ways of composing behaviors represented in the form of action-value functions. We analyze these methods to highlight their strengths and weaknesses, and point out situations where each of them is susceptible to poor performance. To perform this analysis we extend generalized policy improvement to the max-entropy framework and introduce a method for the practical implementation of successor features in continuous action spaces. Then we propose a novel approach which, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between policies. We study this approach in the tabular case and propose a scalable variant that is applicable in multi-dimensional continuous action spaces. We compare our approach with existing ones on a range of non-trivial continuous control problems with compositional structure, and demonstrate qualitatively better performance despite not requiring simultaneous observation of all task rewards.

1 INTRODUCTION

Reinforcement learning algorithms coupled with powerful function approximators have recently achieved a series of successes (Mnih et al., 2015; Silver et al., 2016; Lillicrap et al., 2015; Kalashnikov et al., 2018). Unfortunately, while being extremely powerful, deep reinforcement learning (DRL) algorithms often require a large number of interactions with the environment to achieve good results, partially because they are often applied "from scratch" rather than in settings where they can leverage existing experience. This reduces their applicability in domains where generating experience is expensive, or learning from scratch is challenging.

The data efficiency of DRL algorithms is affected by various factors and significant research effort has been directed at achieving improvements (e.g. Popov et al., 2017). At the same time the development of basic locomotor behavior in humans can, in fact, require large amounts of experience and practice (Adolph et al., 2012), and it can take significant effort and training to master complex, high-speed skills (Haith & Krakauer, 2013). Once such skills have been acquired, however, humans rapidly put them to work in new contexts and to solve new tasks, suggesting transfer learning as an important mechanism.

Transfer learning has been explored extensively in multiple fields of the machine learning community (see e.g. Weiss et al., 2016, for a recent review). In RL and robotics the transfer of knowledge from one task to another has been studied from a variety of angles.

For the purpose of this paper we are interested in methods that are suitable for transfer in the context of high-dimensional motor control problems. We further focus on model-free approaches, which are evident in human motor control (Haith & Krakauer, 2013), and have recently been used by a variety of scalable deep RL methods (e.g.
Lillicrap et al., 2015; Mnih et al., 2015; Schulman et al., 2017; Kalashnikov et al., 2018).

Transfer may be especially valuable in domains where a small set of skills can be composed, in different combinations, to solve a variety of tasks. Different notions of compositionality have been considered in the RL and robotics literature. For instance, 'options' are associated with discrete units of behavior that can be sequenced, thus emphasizing composition in time (Precup et al., 1998). In this paper we are concerned with a rather distinct notion of compositionality, namely how to combine and blend potentially concurrent behaviors. This form of composition is particularly relevant in high-dimensional continuous action spaces, where it is possible to achieve more than one task simultaneously (e.g. walking somewhere while juggling).

One approach to this challenge is via the composition of task rewards. Specifically, we are interested in the following question: If we have previously solved a set of tasks with similar transition dynamics but different reward functions, how can we leverage this knowledge to solve new tasks which can be expressed as a convex combination of those reward functions?

This question has recently been studied in two independent lines of work: by Barreto et al. (2017; 2018) in the context of successor feature (SF) representations used for Generalized Policy Improvement (GPI) with deterministic policies, and by Haarnoja et al. (2018a); van Niekerk et al. (2018) in the context of maximum entropy policies. These approaches operate in distinct frameworks but both achieve skill composition by combining the Q-functions associated with previously learned skills. We clarify the relationship between the two approaches and show that both can perform well in some situations but achieve poor results in others, often in complementary ways. We introduce a novel method of behavior composition that can consistently achieve good performance.

Our contributions are as follows:
1. We introduce successor features (SF) in the context of maximum entropy and extend the GPI theorem to this case (max-ent GPI).
2. We provide an analysis of when GPI, and compositional "optimism" (Haarnoja et al., 2018a) of entropy-regularized policies, transfer. We construct both tabular and continuous action tasks where both fail to transfer well.
3. We propose a correction term, which we call Divergence Correction (DC), based on the Rényi divergence between policies, which allows us, in principle, to recover the optimal policy for transfer for any convex combination of rewards.
4. We demonstrate a practical implementation of these methods in continuous action spaces using adaptive importance sampling, and compare the approaches introduced here (max-ent GPI and DC) with optimism (Haarnoja et al., 2018a) and conditional Q functions (Schaul et al., 2015) in a variety of non-trivial continuous action transfer tasks.

2 BACKGROUND

2.1 MULTI-TASK RL

We consider Markov Decision Processes defined by the tuple M containing: a state space S, action space A, a start state distribution p(s_1), a transition function p(s_{t+1}|s_t, a_t), a discount \gamma \in [0, 1) and a reward function r(s_t, a_t, s_{t+1}). The objective of RL is to find a policy \pi(a|s): S \to P(A) which maximises the discounted expected return from any state, J(\pi) = E_{\pi, M}\left[\sum_{\tau=t}^{\infty} \gamma^{\tau-t} r_\tau\right], where the expected reward is dependent on the policy \pi and the MDP M.

We formalize transfer as in Barreto et al. (2017); Haarnoja et al.
(2018a), as the desire to perform well across all tasks in a set M \in T' after having learned policies for tasks M \in T, without additional experience. We assume that T and T' are related in two ways: all tasks share the same state transition function, and tasks in T' can be expressed as convex combinations of the rewards associated with tasks in the set T. So if we write the reward functions for tasks in T as the vector \phi = (r_1, r_2, ...), tasks in T' can be expressed as r_w = \phi \cdot w.

We focus on combinations of two policies, r_b = b r_i + (1-b) r_j, but the methods can be extended to more than two tasks. We refer to a transfer method as optimal if it achieves optimal returns on tasks in T', using only experience on tasks in T.

2.2 SUCCESSOR FEATURES

Successor Features (SF) (Dayan, 1993) and Generalised Policy Improvement (GPI) (Barreto et al., 2017; 2018) provide a principled solution to transfer in the setting defined above. SF make the additional assumption that the reward feature \phi is fully observable, that is, the agent has access to the rewards of all tasks in T, but not T', during training on each individual task.

The key observation of SF representations is that linearity of the reward r_w with respect to the features \phi implies the following decomposition of the value of policy \pi:

Q^\pi_w(s_t, a_t) = E^\pi\left[ \sum_{\tau=t}^{\infty} \gamma^{\tau-t} \phi_\tau \cdot w \,\middle|\, a_t \right] = E^\pi\left[ \sum_{\tau=t}^{\infty} \gamma^{\tau-t} \phi_\tau \,\middle|\, a_t \right] \cdot w \triangleq \psi^\pi(s_t, a_t) \cdot w,   (1)

where \psi^\pi is the expected discounted sum of features induced by policy \pi. This decomposition allows us to compute the action-value for \pi on any task w by learning \psi.

If we have a set of policies \pi_1, \pi_2, ..., \pi_n indexed by i, SF and GPI provide a principled approach to transfer on task w. Namely, we act according to the deterministic GPI policy \pi^{GPI}_w(s_t) \in \arg\max_{a_t} Q^{GPI}_w(s_t, a_t), where

Q^{GPI}_w(s_t, a_t) \triangleq \max_i Q^{\pi_i}_w(s_t, a_t) = \max_i \psi^{\pi_i}(s, a) \cdot w.   (2)

The GPI theorem guarantees the GPI policy has a return at least as good as any component policy, that is, V^{GPI}_w(s) \geq \max_i V^{\pi_i}_w(s) \; \forall s \in S.

2.3 MAXIMUM ENTROPY RL

The maximum entropy (max-ent) RL objective augments the reward to favor entropic solutions:

J(\pi) = E_{\pi, M}\left[ \sum_{\tau=t}^{\infty} \gamma^{\tau-t} \left( r_\tau + \alpha H[\pi(\cdot|s_\tau)] \right) \right]   (3)

where \alpha is a parameter that determines the relative importance of the entropy term. This objective has been considered in a number of works, including Kappen (2005); Todorov (2009); Haarnoja et al. (2017; 2018a); Ziebart et al. (2008); Fox et al. (2015).

We define the action-value Q^\pi associated with eq. 3 as

Q^\pi(s_t, a_t) \triangleq r_t + E_\pi\left[ \sum_{\tau=t+1}^{\infty} \gamma^{\tau-t} \left( r_\tau + \alpha H[\pi(\cdot|s_\tau)] \right) \right]   (4)

(notice Q^\pi(s_t, a_t) does not include any entropy terms for the state s_t). Soft Q iteration,

Q(s_t, a_t) \leftarrow r(s_t, a_t, s_{t+1}) + \gamma E_{p(s_{t+1}|s_t, a_t)}[V(s_{t+1})]   (5)
V(s_t) \leftarrow E_\pi[Q(s_t, a_t)] + \alpha H[\pi(\cdot|s_t)] = \alpha \log \int_A \exp\left(\tfrac{1}{\alpha} Q(s_t, a)\right) da \triangleq \alpha \log Z(s_t)   (6)

where \pi(a_t|s_t) \propto \exp(\tfrac{1}{\alpha} Q(s_t, a_t)), converges to the optimal policy under standard assumptions (Haarnoja et al., 2017).

3 COMPOSING POLICIES IN MAX-ENT REINFORCEMENT LEARNING

In this section we present two novel approaches for max-ent transfer learning. In section 4 we then outline a practical method for making use of these results.

3.1 MAX-ENT SUCCESSOR FEATURES AND GENERALIZED POLICY IMPROVEMENT

We introduce max-ent SF, which provide a practical method for computing the value of a maximum entropy policy under any convex combination of rewards.
We then show the GPI theorem (Barreto et al., 2017) holds for maximum entropy policies.

We define the action-dependent SF to include the entropy of the policy, excluding the current state, analogous to the max-entropy definition of Q in (4):

\psi^\pi(s_t, a_t) \triangleq \phi_t + E_\pi\left[ \sum_{\tau=t+1}^{\infty} \gamma^{\tau-t} \left( \phi_\tau + \alpha \mathbf{1} H[\pi(\cdot|s_\tau)] \right) \right] = \phi_t + \gamma E_{p(s_{t+1}|s_t, a_t)}[\Phi^\pi(s_{t+1})]   (7)

where \mathbf{1} is a vector of ones of the same dimensionality as \phi, and we define the state-dependent successor features \Phi^\pi as the expected \psi^\pi, in analogy with V(s):

\Phi^\pi(s) \triangleq E_{a \sim \pi(\cdot|s)}[\psi^\pi(s, a)] + \alpha \mathbf{1} H[\pi(\cdot|s)].   (8)

The max-entropy action-value of \pi for any convex combination of rewards w is then given by Q^\pi_w(s, a) = \psi^\pi(s, a) \cdot w. Max-ent SF allow us to estimate the action-value of previous policies on a new task. We show that, as in the deterministic case, there is a principled way to combine multiple policies using their action-values on task w.

Theorem 3.1 (Max-Ent Generalized Policy Improvement) Let \pi_1, \pi_2, ..., \pi_n be n policies with \alpha-max-ent action-value functions Q^{\pi_1}, Q^{\pi_2}, ..., Q^{\pi_n} and value functions V^{\pi_1}, V^{\pi_2}, ..., V^{\pi_n}. Define

\pi(a|s) \propto \exp\left( \tfrac{1}{\alpha} \max_i Q^{\pi_i}(s, a) \right).

Then,

Q^\pi(s, a) \geq \max_i Q^{\pi_i}(s, a) for all s \in S and all a \in A,   (9)
V^\pi(s) \geq \max_i V^{\pi_i}(s) for all s \in S,   (10)

where Q^\pi(s, a) and V^\pi(s) are the \alpha-max-ent action-value and value function, respectively, of \pi.

Proof: See appendix A.1. In our setup we learn \psi^{\pi_i}(s, a), the SFs of the policies \pi_i for each task in T, and define the max-ent GPI policy for task w \in T' as \pi^{GPI}_w(a|s) \propto \exp(\tfrac{1}{\alpha} \max_i Q^{\pi_i}_w(s, a)) = \exp(\tfrac{1}{\alpha} \max_i \psi^{\pi_i}(s, a) \cdot w).

3.2 DIVERGENCE CORRECTION (DC)

Haarnoja et al. (2018a) introduced a simple approach to policy composition, by estimating the action-value for the transfer task r_b = b r_i + (1-b) r_j from the optimal action-values of the component tasks, Q^*_i and Q^*_j:

Q^{Opt}_b(s, a) \triangleq b Q^*_i(s, a) + (1-b) Q^*_j(s, a).   (11)

When using Boltzmann policies defined by Q, the resulting policy, \pi^{Opt}_b(a|s) \propto \exp(\tfrac{1}{\alpha} Q^{Opt}_b(s, a)), is the product distribution of the two component policies. We refer to \pi^{Opt}_b as the compositionally "optimistic" (CO) policy, as it acts according to the optimistic assumption that the optimal returns of Q^*_i and Q^*_j will be, simultaneously, achievable.¹

Both max-ent GPI, presented above, and CO can, in different ways, fail to transfer well in some situations (see fig. 1 for some examples in the tabular case). Neither approach consistently performs optimally during transfer, even if all component terms are known exactly. We desire a solution for transfer that, in principle, can perform optimally.

Here we show that, at the cost of learning a function conditional on the task weighting b, it is in principle possible to recover the optimal policy for the transfer tasks, without direct experience on those tasks, by correcting for the compositional optimism bias in Q^{Opt}_b. For simplicity, as in Haarnoja et al. (2018a), we restrict this to the case with only 2 tasks, but it can be extended to multiple tasks.

The correction term for CO uses a property noted, but not exploited, in Haarnoja et al. (2018a). The bias in Q^{Opt} is related to the discounted sum of Rényi divergences of the two component policies. Intuitively, if the two policies result in trajectories with low divergence between the policies in each state, the CO assumption that both policies can achieve good returns is approximately correct. When the divergences are large, the CO assumption is overly optimistic and the correction term will be large.

Theorem 3.2 (DC Optimality) Let \pi_i, \pi_j be \alpha max-ent optimal policies for tasks with rewards r_i and r_j, with max-ent action-value functions Q^*_i, Q^*_j. Define C^\infty_b(s_t, a_t) as the fixed point of

C^{(k+1)}_b(s_t, a_t) = -\gamma\alpha\, E_{p(s_{t+1}|s_t, a_t)}\left[ \log \int_A \pi_i(a_{t+1}|s_{t+1})^b\, \pi_j(a_{t+1}|s_{t+1})^{1-b} \exp\left( -\tfrac{1}{\alpha} C^{(k)}_b(s_{t+1}, a_{t+1}) \right) da_{t+1} \right].

¹ Compositional optimism is not the same as optimism under uncertainty, often used in RL for exploration.
DefineC1b(st;at)as the fixed point ofC(k+1)b(st;at) =Ep(st+1jst;at)hlogRAi(at+1jst+1)bj(at+1jst+1)(1b)exp(1C(k)b(st+1;at+1))dat+1i1Compositional optimism is not the same as optimism under uncertainty, often used in RL for exploration.4Under review as a conference paper at ICLR 2019Given the conditions for Soft Q convergence, the max-ent optimal Qb(s;a)forrb=bri+ (1b)rjisQb(s;a) =bQi(s;a) + (1b)Qj(s;a)C1b(s;a)8s2S;a2A;b2[0;1]:Proof: See appendix A.2. We call this Divergence Correction (DC) as the quantity C1bis related tothe R ́enyi divergence between policies (see appendix A.2 for details). Learning C1bdoes not requireany additional information (in principle) than that required to learn policies iandj. Unlike withSF, it is not necessary to observe other task features while training the policies. On the other hand,unlike with GPI, which can be used to naturally combine any number of tasks with arbitrary weightvectors w, in order to apply DC one must estimate C1b(s;a)for all values of b. so the complexityof learningC1increases significantly if more than 2 tasks are combined.Supplementary Table 1 provides a comparison on the properties of the methods we consider here.We also compare with simply learning a conditional QfunctionQ(s;ajb)(CondQ) (e.g. Schaulet al., 2015; Andrychowicz et al., 2017). As with GPI, this requires observing the full set of taskfeatures, in order to compute rbfor arbitrary b.In this section we have introduced two new theoretical approaches to max-ent transfer composition:max-ent GPI and DC. We have shown how these are related to relevant prior methods. In the nextsection we address the question of how to practically learn and sample with these approaches incontinuous action spaces.4 A DAPTIVE IMPORTANCE SAMPLING FOR BOLTZMAN POLICIESALGORITHMThe control of robotic systems with high-dimensional continuous action spaces is a promising usecase for the ideas presented in this paper. Such control problems may allow for multiple solutions,and can exhibit exploitable compositional structure. Unfortunately, learning and sampling of gen-eral Boltzmann policies defined over continuous action spaces is challenging. While this can bemitigated by learning a parametric sampling distribution, during transfer we want to sample fromthe Boltzmann policy associated with a newly synthesized action-value function without having tolearn such an approximation first. To address this issue we introduce Adaptive Importance Samplingfor Boltzmann Policies (AISBP), a method which provides a practical solution to this challenge.In the following we parametrise all functions with neural nets (denoting parameters by the subscript), including the soft action-value for reward i:QiQ(s;a); the associated soft value function ViV(s)and a proposal distribution qiq(ajs), the role of which we explain below. We use an off-policyalgorithm, so that experience generated by training on policy ican be used to improve policy j. Thisis especially important since our analysis requires the action-value Qi(s;a)to be known in all states.This is less likely to be the case for a on on-policy algorithm, that only updates Qiusing trajectoriesgenerated by policy i. During training experience generated by all tasks are stored in a replay bufferR, and mini-batches are sampled uniformly and used to update all function approximators. Soft Qiteration (see eq. 4) is used to learn QiandVi. 
Sampling from the Boltzmann policy defined by Q^i_{\theta_Q}, \pi_i(a|s) \propto \exp(\tfrac{1}{\alpha} Q^i_{\theta_Q}(s, a)), is challenging, as is estimating the partition function (the log of which is also the value, c.f. eq. 6). One approach is to fit an expressive, tractable sampler, such as a stochastic neural network, to approximate \pi_i (e.g. Haarnoja et al., 2018a). This approach works well when learning a single policy. However, during transfer this may require learning a new sampler for each new value composition. AISBP instead uses importance sampling both to sample and to estimate the partition function. The scalability of this approach is improved by using a learned proposal distribution q_{\theta_q}(a|s), and by observing that modern architectures allow for efficient batch computation of a large number of importance samples. To facilitate transfer we restrict the parametric form of the proposals to mixtures of (truncated) Normal distributions. The well-known result that the product of Normal distributions can be computed in closed form then allows us to construct effective compositional proposals during transfer.

More formally, for each policy in T we learn an action-value network Q^i_{\theta_Q}(s, a), a value network V^i_{\theta_V}(s), and a proposal distribution q^i_{\theta_q}(a|s) (we drop the task index i when writing the losses for a single policy, for notational clarity). The proposal distribution is a mixture of M truncated Normal distributions N_T, truncated to the square a \in [-1, 1]^n, with diagonal covariances:

q_{\theta_q}(a|s) = \frac{1}{M} \sum_{m=1}^{M} N_T(a; \mu^m_{\theta_q}(s), \sigma^m_{\theta_q}(s), -1, 1)   (12)

The proposal distribution is optimized by minimizing the forward KL divergence with the Boltzmann policy \pi(a|s) \propto \exp(\tfrac{1}{\alpha} Q_{\theta_Q}(s, a)). This KL is "zero avoiding" and over-estimates the support of \pi (Murphy, 2012), which is desirable for a proposal distribution (Gu et al., 2015):

L(\theta_q) = E_R\left[ E_{a \sim \pi(\cdot|s)}\left[ \log \pi(a|s_t) - \log q_{\theta_q}(a|s_t) \right] \right]   (13)

where the outer expectation is over the replay buffer state density.

The inner expectation in the proposal loss itself requires sampling from \pi. We approximate this expectation by self-normalized importance sampling, and use a target proposal distribution p(a_t|s_t) which is a mixture consisting of the proposals for all policies along with a uniform distribution. For batch size B and N proposal samples the estimator of the proposal loss is then

L(\theta_q) \approx -\frac{1}{B} \sum_{k=1}^{B} \sum_{l=1}^{N} w_{kl} \log q_{\theta_q}(a_{kl}|s_k), \quad w'_{kl} = \frac{\exp(\tfrac{1}{\alpha} Q_{\theta_Q}(s_k, a_{kl}))}{p(a_{kl}|s_k)}, \quad w_{kl} = \frac{w'_{kl}}{\sum_{m=1}^{N} w'_{km}}.   (14)

The value function loss is defined as the L2 error on the Soft Q estimate of the value:

L(\theta_V) = E_R\left[ \frac{1}{2}\left( V_{\theta_V}(s_t) - \alpha \log \int_A \exp\left(\tfrac{1}{\alpha} Q_{\theta_Q}(s_t, a)\right) da \right)^2 \right]   (15)

which is estimated using importance sampling to compute the integral:

L(\theta_V) \approx \frac{1}{2B} \sum_{l=1}^{B} \left( V_{\theta_V}(s_l) - \alpha \log \hat{Z}_l \right)^2, \quad \hat{Z}_l = \frac{1}{N} \sum_{k=1}^{N} \frac{\exp(\tfrac{1}{\alpha} Q_{\theta_Q}(s_l, a_{lk}))}{q_{\theta_q}(a_{lk}|s_l)}.   (16)

This introduces bias due to the finite-sample approximation of the expectation inside the (concave) log. In practice we found this estimator sufficiently accurate, provided the proposal distribution was close to \pi. We also use importance sampling to sample from \pi while acting.

The action-value loss is just the L2 norm with the Soft Q target:

L(\theta_Q) = E_R\left[ \frac{1}{2}\left( Q_{\theta_Q}(s_t, a_t) - \left( r(s_t, a_t, s_{t+1}) + \gamma V_{\theta'_V}(s_{t+1}) \right) \right)^2 \right].   (17)

To improve stability we employ target networks for the value (V_{\theta'_V}) and proposal (q_{\theta'_q}) networks (Mnih et al., 2015; Lillicrap et al., 2015). We also parameterize Q as an advantage, Q_{\theta_Q}(s, a) = V_{\theta_V}(s) + A_{\theta_A}(s, a) (Baird, 1994; Wang et al., 2015; Harmon et al., 1995), which is more stable when the advantage is small compared with the value.
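As a concrete illustration of the proposal parametrization in eq. (12), the sketch below builds an equal-weight mixture of truncated Normals on [-1, 1] for a single action dimension and evaluates its log-density, as needed by the importance-sampling estimators above. In AISBP the component means and scales would be produced by the proposal network for each state; here they are fixed placeholders, and all names are assumptions:

```python
import numpy as np
from scipy.stats import truncnorm

class TruncNormMixture1D:
    """Equal-weight mixture of M truncated Normals on [low, high] for one
    action dimension, as in eq. (12) (illustrative sketch only)."""
    def __init__(self, means, scales, low=-1.0, high=1.0):
        self.means = np.asarray(means, dtype=float)
        self.scales = np.asarray(scales, dtype=float)
        # scipy's truncnorm takes bounds in units of standard deviations
        self.a = (low - self.means) / self.scales
        self.b = (high - self.means) / self.scales

    def sample(self, n, rng):
        comp = rng.integers(len(self.means), size=n)  # uniform component choice
        return truncnorm.rvs(self.a[comp], self.b[comp],
                             loc=self.means[comp], scale=self.scales[comp],
                             random_state=rng)

    def logpdf(self, x):
        x = np.asarray(x, dtype=float)[:, None]       # (n, 1) against (M,) components
        comp_lp = truncnorm.logpdf(x, self.a, self.b,
                                   loc=self.means, scale=self.scales)
        return np.logaddexp.reduce(comp_lp, axis=1) - np.log(len(self.means))

rng = np.random.default_rng(0)
q = TruncNormMixture1D(means=[-0.5, 0.4], scales=[0.2, 0.3])
samples = q.sample(4, rng)
print(samples, q.logpdf(samples))
```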
The full algorithm is given in Algorithm Box 1, and more details are provided in appendix C.

4.1 IMPORTANCE SAMPLED MAX-ENT GPI

The same importance sampling approach can also be used to estimate max-ent SF. Max-ent GPI requires us to learn the expected (maximum entropy) features \psi^{\pi_i} for each policy \pi_i, in order to estimate its (entropic) value under a new convex combination task w. This requires that each experience tuple in the replay contains the full feature vector \phi, rather than just the reward for the policy which generated the experience, r_i. Given this information, \psi and \Phi can be learned with updates analogous to those for V and Q, which again require importance sampling to estimate \Phi.

As with V_{\theta_V}, we use a target network for \Phi_{\theta'_\Phi} and an advantage parametrization. We found that, because these updates use experience shared between tasks and are therefore far off-policy, it is necessary to have a longer target update period than for V. Full details of the losses and samplers are in appendix C.

Algorithm 1 AISBP training algorithm
  Initialize proposal network parameters \theta_q, value network parameters \theta_V, action-value network parameters \theta_Q, and replay R
  while training do    (in parallel on each actor)
    Obtain parameters from learner
    Sample task i \sim T
    Roll out an episode, using q^i_{\theta_q} to importance sample \pi_i(a|s) \propto \exp(\tfrac{1}{\alpha} Q^i_{\theta_Q}(s, a))
    Add experience to replay R
  end while
  while training do    (in parallel on the learner)
    Sample SARS tuple from R
    Improve L(\theta_q), L(\theta_V), L(\theta_Q)
    Improve additional losses for transfer: L(\theta_\Phi), L(\theta_\psi), L(\theta_C), L(\theta_{V_b}), L(\theta_{Q_b})
    if target update period then
      Update target network parameters \theta'_V \leftarrow \theta_V, \theta'_q \leftarrow \theta_q, \theta'_\Phi \leftarrow \theta_\Phi, \theta'_{V_b} \leftarrow \theta_{V_b}
    end if
  end while

4.2 DIVERGENCE CORRECTION

All that is required for transfer using compositional optimism (eq. 11, Haarnoja et al. (2018a)) is the max-ent action values of each task, so no additional training is required beyond the base policies. In section 3.2 we have shown that if we can learn the fixed point C^\infty_b(s, a), we can correct this compositional optimism and recover the optimal action-value Q^*_b(s, a).

We exploit the recursive relationship in C^\infty_b(s, a) to fit a neural net C_{\theta_C}(s, a, b) with a TD(0) estimator. This requires learning a conditional estimator for any value of b, so as to support arbitrary task combinations. Fortunately, since C^\infty_b depends only on the policies and transition function, it is possible to learn an estimator of C^\infty_b for different values of b by sampling b during each update. As before, we use target networks and an advantage parametrization for C_{\theta_C}(s, a, b).

We learn C^\infty_b as C_{\theta_C}(s, a, b) for each pair of policies i, j, resulting in the loss

L(\theta_C) = E_{s \sim R,\, b \sim U(0,1)}\left[ \frac{1}{2}\left( C_{\theta_C}(s, a, b) + \gamma\alpha\, E_{p(s'|s,a)}\left[ \log \int_A \exp\left( b \log \pi_i(a'|s') + (1-b) \log \pi_j(a'|s') - \tfrac{1}{\alpha} C_{\theta'_C}(s', a', b) \right) da' \right] \right)^2 \right].   (18)

As with other integrals over the action space, we approximate this loss using importance sampling. Note that, unlike GPI and CondQ (next section), learning C^\infty_b does not require observing \phi while training.

We also considered a heuristic approach where we learn C only for b = 1/2 (this is typically approximately the largest divergence). This avoids the complexity of a conditional estimator, and we estimate C^\infty_b as \hat{C}^\infty_b(s, a) \approx 4b(1-b) C^\infty_{1/2}(s, a). This heuristic, which we denote DC-Cheap, can be motivated by considering Gaussian policies with similar variance (see appendix D). The max-ent GPI bound can be used to correct for over-estimates of the heuristic C^\infty_b: Q_{DC-Cheap+GPI}(s, a) = \max(Q_{Opt}(s, a) - \hat{C}^\infty_b(s, a), Q_{GPI}(s, a)).

4.3 CONDQ

As a baseline, we directly learn a conditional Q function Q(s, a, b), using a similar approach to DC of sampling b at each update (Schaul et al., 2015). This, like GPI but unlike DC, requires observing \phi during training so that the reward on task b can be estimated. We provide the full details in appendix C.
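To ground eq. (18), the sketch below computes the TD(0) regression target for C_{\theta_C}(s, a, b) from importance samples of next-state actions. This is a hedged illustration rather than the paper's code; in particular, the choice of proposal used to draw a' is an assumption:

```python
import numpy as np

def dc_td_target(logpi_i, logpi_j, c_next, logq, b, alpha, gamma):
    """TD(0) regression target for C(s, a, b) following eq. (18).

    All arrays are over N next-state actions a' ~ q(.|s'):
    logpi_i, logpi_j: log pi_i(a'|s'), log pi_j(a'|s')
    c_next:           target-network values C'(s', a', b)
    logq:             log q(a'|s') for the proposal used to draw a'
    """
    n = len(logq)
    # importance-sampled estimate of
    # log \int exp(b*log pi_i + (1-b)*log pi_j - C'/alpha) da'
    log_integrand = b * logpi_i + (1.0 - b) * logpi_j - c_next / alpha - logq
    log_integral = np.logaddexp.reduce(log_integrand) - np.log(n)
    # C(s, a, b) regresses towards -gamma * alpha * (estimated log integral)
    return -gamma * alpha * log_integral
```

In training, b would be drawn from U(0, 1) for every update, as described above, so that a single conditional network covers all task weightings.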
4.4 SAMPLING COMPOSITIONAL POLICIES

During transfer we would like to be able to sample from the Boltzmann policy defined by our estimate of the transfer action-value Q_b (the estimate is computed using the methods we enumerated above), without having to learn, offline, a new proposal or sampling distribution first (which is the approach employed by Haarnoja et al. (2018a)).

As outlined earlier, we chose the proposal distributions so that the product of proposals is tractable, meaning we can sample from q^{ij}_b(a|s) \propto (q^i_{\theta_q}(a|s))^b (q^j_{\theta_q}(a|s))^{1-b}. This is a good proposal distribution when the CO bias is low, since Q^{Opt}_b defines a Boltzmann policy which is the product of the base policies.² However, when C^\infty_b(s, a) is large, meaning the CO bias is large, q^{ij} may not be a good proposal, as we show in the experiments. In this case none of the existing proposal distributions may be a good fit. Therefore we sample from a mixture distribution of all policies, all policy products, and the uniform distribution:

p_b(a|s) \triangleq \frac{1}{4}\left( q^i_{\theta_q}(a|s) + q^j_{\theta_q}(a|s) + q^{ij}_b(a|s) + \frac{1}{V_A} \right)   (19)

where V_A is the volume of the action space. Empirically, we find this is sufficient to result in good performance during transfer. The algorithm for transfer is given in supplementary algorithm 2.

5 EXPERIMENTS

5.1 DISCRETE, TABULAR ENVIRONMENT

We first consider some illustrative tabular cases of compositional transfer. These highlight situations in which GPI and CO transfer can perform poorly (Figure 1). As expected, we find that GPI performs well when the optimal transfer policy is close to one of the existing policies; CO performs well when both subtask policies are compatible. The task we refer to as "tricky" is illustrative of a case in which the optimal policy for the transfer task does not resemble either existing policy: in the grid world, non-overlapping rewards for each task are provided in one corner, while lower-value overlapping rewards are provided in the other corner (cf. Fig. 1). As a consequence both GPI and CO perform poorly, while DC performs well in all cases.

5.2 CONTINUOUS ACTION SPACES

We next compare the different approaches in more challenging continuous control tasks. We train max-ent policies to solve individual tasks using the importance sampling approach from section 4, and then assess transfer on convex combinations of the rewards. All approaches use the same experience and proposal distribution.

Figure 2 examines the transfer policies in detail in a simple point-mass task and shows how the estimated C^\infty_b corrects the CO estimate Q^{Opt} and dramatically changes the policy.

We then examine conceptually similar tasks in more difficult domains: a 5 DOF planar manipulator reaching task (figure 3), a 3 DOF jumping ball, and an 8 DOF ant (figure 4). We see that DC recovers a qualitatively better policy in all cases. The performance of GPI depends noticeably on the choice of \alpha. DC-Cheap, which is a simpler heuristic, performs almost as well as DC in the tasks we consider, except for the point-mass task. When bounded by GPI (DC-Cheap+GPI) it performs well on the point-mass task as well, suggesting that simple approximations of C^\infty_b may be sufficient in some cases.³

We focused on "tricky" tasks as they are a challenging form of transfer. In general, we would expect DC to perform well in most situations where CO performs well, since in those cases the correction term C^\infty_b that DC must learn is inconsequential (CO is equivalent to assuming C^\infty_b = 0). Supplementary figure 5 demonstrates that on a task with non-composable solutions (i.e. C^\infty_b is large and potentially challenging to learn), DC continues to perform as well as GPI, slightly better than CondQ, and, as expected, CO performs poorly.

² \pi^{Opt}_b(a|s) \propto \exp(\tfrac{1}{\alpha} Q^{Opt}_b(s, a)) = \exp(\tfrac{1}{\alpha}(b Q^*_1(s, a) + (1-b) Q^*_2(s, a))) = \pi_1(a|s)^b \pi_2(a|s)^{1-b}.
³ We provide videos of the more interesting tasks at https://tinyurl.com/yaplfwaq .
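The tractability of the proposal product q^{ij}_b in Section 4.4 rests on the closed form for powers and products of Normal densities. Below is a minimal sketch for the simplest case of diagonal (untruncated) Gaussians, using the precision-weighted combination rule; the paper's proposals are truncated mixtures, so this is an illustrative simplification with assumed names:

```python
import numpy as np

def gaussian_power_product(mu_i, var_i, mu_j, var_j, b):
    """Parameters of the density proportional to
    N(mu_i, var_i)^b * N(mu_j, var_j)^(1-b), which is again Gaussian.
    Operates elementwise, so diagonal covariances work directly."""
    prec = b / var_i + (1.0 - b) / var_j              # combined precision
    var = 1.0 / prec
    mu = var * (b * mu_i / var_i + (1.0 - b) * mu_j / var_j)
    return mu, var

# Two 2-D component proposals and an even task weighting b = 0.5
mu, var = gaussian_power_product(np.array([0.8, -0.2]), np.array([0.1, 0.1]),
                                 np.array([-0.5, 0.6]), np.array([0.2, 0.05]),
                                 b=0.5)
print(mu, var)  # sampling a ~ N(mu, var) then draws from q^{ij}_b
```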
[Figure 1: Policy composition in the tabular case. All tasks are in an infinite-horizon tabular 8x8 world. The action space is the 4 diagonal movements (actions at the boundary transition back to the same state). (a-c) show 3 reward functions (color indicates reward, dark blue r = +1, light blue r = 0.75). The arrows indicate the action likelihoods for the max-ent optimal policy for each task. (d-f) The log regret of the max-ent returns for 3 qualitatively distinct compositional tasks r_b = b r_i + (1-b) r_j, using different approaches to transfer from the base policies. The compositional tasks we consider are left-right (LR), left-up (LU) and the "tricky" tasks (T). (d) GPI performs well when the subtasks are incompatible, meaning the optimal policy is near one of the component policies. (g) CO performs poorly in these situations, resulting in indecision about which subtask to commit to. (e) Conversely, when the subpolicies are compatible, such as on the LU task, CO transfers well while the GPI policy (h) does not consistently take advantage of the compatibility of the two tasks to simultaneously achieve both subgoals. (f) Neither GPI nor CO policies (i shows the GPI policy, but CO is similar) perform well when the optimal transfer policy is dissimilar to either existing task policy. The two tricky task policies are compatible in many states but have a high divergence in the bottom-left corner, since the rewards are non-overlapping there (k); thus the optimal policy on the composed task is to move to the top-right corner where there are overlapping rewards. By learning, and correcting for, this future divergence between policies, DC results in optimal policies for all task combinations, including tricky (j). Panels: (a) Left task, (b) tricky task 1, (c) tricky task 2, (d) LR regret, (e) LU regret, (f) T regret, (g) Opt LR, (h) GPI LU, (i) GPI T, (j) DC T, (k) D_{1/2}.]

[Figure 2: Tricky point mass. The continuous "tricky" task with a simple 2-D velocity-controlled point mass. (a) Environment and example trajectories. The rewards are (r_1, r_2) = (1, 0), (0, 1) and (0.75, 0.75) for the green, red and yellow squares. Lines show sampled trajectories (starting in the center) for the compositional task r_{1/2} with CO (red), GPI (blue) and DC (black). Only DC, the DC heuristics and CondQ (not shown) find the optimal transfer policy of navigating to the yellow reward area, which is the optimal solution for the compositional task. (b) The returns for each transfer method (Optimistic, GPI, DC, DC-Cheap, DC-Cheap+GPI, CondQ). DC and CondQ recover significantly better performance than GPI, and the CO policy performs poorly. (c) The Rényi divergence of the two base policies as a function of position: the two policies are compatible except near the bottom-left corner, where the rewards are non-overlapping. (d) Q_{Opt} at the center position for the combined task. As both policies prefer moving left and down, most of the energy is on these actions. (e) However, the future divergence C^\infty_{1/2} under these actions is high, which results in (f) the DC action-value Q_{DC} differing significantly from CO.]

6 DISCUSSION

We have presented two approaches to transfer learning via convex combinations of rewards in the maximum entropy framework: max-ent GPI and DC. We have shown that, under standard assumptions, the max-ent GPI policy performs at least as well as its component policies, and that DC recovers the optimal transfer policy. Todorov (2009) and Saxe et al. (2017); van Niekerk et al. (2018) previously considered optimal composition of max-ent policies. However, these approaches require stronger assumptions than max-ent SF or DC, namely that reward states are absorbing and that the joint reward is restricted to the softmax of the component rewards (soft OR). By contrast, DC does not restrict the class of MDPs and learns how compatible policies are, allowing approximate recovery of optimal transfer policies both when the component rewards are jointly achievable (AND), and when only one sub-goal can be achieved (OR).

[Figure 3: "Tricky" task with planar manipulator. The "tricky" tasks with a 5D torque-controlled planar manipulator. The training tasks consist of (mutually exclusive) rewards of (1, 0) and (0, 1) when the finger is at the green and red targets respectively, and reward (0.75, 0.75) at the blue target. (b) Finger position at the end of trajectories (starting from randomly sampled start states) for the transfer task, with circles indicating the rewards. DC and CondQ trajectories reach towards the blue target (the optimal solution), while CO and GPI trajectories primarily reach towards one of the suboptimal partial solutions. (c) The returns on the transfer tasks (shaded bars show SEM, 5 seeds). Panels: (a) Planar manipulator tricky, (b) Finger Position, (c) Returns.]

[Figure 4: "Tricky" task with mobile bodies. The "tricky" task with two bodies: a 3 DOF jumping ball (supplementary figure 6) and (a) an 8 DOF ant (both torque-controlled). The task has rewards (1, 0) and (0, 1) in the green and red boxes respectively, and (0.75, 0.75) in the blue square. (b-c) Returns for both walkers when started in the center position. The CO approach does not recover the optimal policy for the compositional task while the other approaches largely do, although CondQ does not learn a good policy on the ant (shaded bars show SEM, 3 seeds for jumping ball, 5 seeds for ant). (d) Sampled trajectories of the ant on the transfer task starting from a neutral position for b = 1/2. GPI and DC consistently go to the blue square (optimal); CondQ and CO do not. Panels: (a) Ant, (b) JB Returns, (c) Ant returns, (d) Trajectories.]

We have compared our methods with conditional action-value functions (CondQ) (e.g. Schaul et al., 2015) and optimistic policy combination (Haarnoja et al., 2018a). Further, we have presented AISBP, a practical algorithm for training DC and max-ent GPI models in continuous action spaces using adaptive importance sampling. We have compared these approaches, along with heuristic approximations of DC, and demonstrated that DC recovers an approximately optimal policy during transfer across a variety of high-dimensional control tasks. Empirically, we have found CondQ may be harder to learn than DC, and it requires additional observation of \phi during training.
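To make Theorem 3.2 concrete in the tabular setting of Section 5.1, the following sketch iterates the C_b recursion to its fixed point by dynamic programming. The MDP arrays and the max-ent optimal base policies are assumed given (and strictly positive); all names are illustrative:

```python
import numpy as np

def divergence_correction_tabular(P, pi_i, pi_j, b, alpha, gamma, iters=500):
    """Iterate the recursion of Theorem 3.2 to approximate C_b(s, a).

    P:          transitions, shape (S, A, S) with P[s, a, s'] = p(s'|s, a)
    pi_i, pi_j: max-ent optimal policies for the two base tasks, shape (S, A)
    Returns C of shape (S, A). The transfer action-value is then
    Q_b = b * Q_i + (1 - b) * Q_j - C, and the max-ent optimal transfer
    policy is Boltzmann in Q_b.
    """
    S, A = pi_i.shape
    C = np.zeros((S, A))
    for _ in range(iters):
        # inner term over next actions: pi_i^b * pi_j^(1-b) * exp(-C/alpha)
        log_inner = b * np.log(pi_i) + (1.0 - b) * np.log(pi_j) - C / alpha
        next_val = -gamma * alpha * np.logaddexp.reduce(log_inner, axis=1)  # (S,)
        C = P @ next_val  # expectation over s' for every (s, a)
    return C
```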
r1l45Kv92X
Interesting work, but needs further improvement
5: Marginally below acceptance threshold
-- Contribution, Originality, and Quality -- This paper has presented two approaches for transfer learning in the reinforcement learning (RL) setting: max-ent GPI (Section 3.1) and DC (Section 3.2). The authors have also established some theoretical results for these two approaches (Theorems 3.1 and 3.2), and demonstrated some experimental results (Section 5). The two developed approaches are interesting. However, based on the existing literature (Barreto et al. 2017; 2018, Haarnoja et al. 2018a), neither of them seems to contain *significant* novelty. The derivations of the theoretical results (Theorems 3.1 and 3.2) are also relatively straightforward. The experimental results in Section 5 are interesting. -- Clarity -- I have two major complaints about the clarity of this paper. 1) Section 4 of the paper is not well written and is hard to follow. 2) Some notations in the paper are not well defined. For instance: 2a) On page 3, the notation \delta has not been defined. 2b) On page 6, both the notations V_{\theta'_V} and V'_{\theta_V} have been used. I do not think either of them has been defined. -- Pros and Cons -- Pros: 1) The proposed approaches and the experimental results are interesting. Cons: 1) Neither the algorithm design nor the analysis has sufficient novelty, compared to the typical standard of a top-tier conference. 2) The paper is not very well written, especially Section 4. 3) For Theorem 3.2, why not prove a variant of it for the general multi-task case? 4) It would be better to provide the pseudocode of the proposed algorithm in the main body of the paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Composing Entropic Policies using Divergence Correction ### Paper Abstract Deep reinforcement learning (RL) algorithms have made great strides in recent years. An important remaining challenge is the ability to quickly transfer existing skills to novel tasks, and to combine existing skills with newly acquired ones. In domains where tasks are solved by composing skills this capacity holds the promise of dramatically reducing the data requirements of deep RL algorithms, and hence increasing their applicability. Recent work has studied ways of composing behaviors represented in the form of action-value functions. We analyze these methods to highlight their strengths and weaknesses, and point out situations where each of them is susceptible to poor performance. To perform this analysis we extend generalized policy improvement to the max-entropy framework and introduce a method for the practical implementation of successor features in continuous action spaces. Then we propose a novel approach which, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between policies. We study this approach in the tabular case and propose a scalable variant that is applicable in multi-dimensional continuous action spaces. We compare our approach with existing ones on a range of non-trivial continuous control problems with compositional structure, and demonstrate qualitatively better performance despite not requiring simultaneous observation of all task rewards. ### Paper Keywords ["maximum entropy RL", "policy composition", "deep rl"] ### Paper Content ABSTRACTDeep reinforcement learning (RL) algorithms have made great strides in recentyears. An important remaining challenge is the ability to quickly transfer exist-ing skills to novel tasks, and to combine existing skills with newly acquired ones.In domains where tasks are solved by composing skills this capacity holds thepromise of dramatically reducing the data requirements of deep RL algorithms,and hence increasing their applicability. Recent work has studied ways of com-posing behaviors represented in the form of action-value functions. We analyzethese methods to highlight their strengths and weaknesses, and point out situa-tions where each of them is susceptible to poor performance. To perform thisanalysis we extend generalized policy improvement to the max-entropy frame-work and introduce a method for the practical implementation of successor fea-tures in continuous action spaces. Then we propose a novel approach which, inprinciple, recovers the optimal policy during transfer. This method works by ex-plicitly learning the (discounted, future) divergence between policies. We studythis approach in the tabular case and propose a scalable variant that is applicablein multi-dimensional continuous action spaces. We compare our approach withexisting ones on a range of non-trivial continuous control problems with com-positional structure, and demonstrate qualitatively better performance despite notrequiring simultaneous observation of all task rewards.1 I NTRODUCTIONReinforcement learning algorithms coupled with powerful function approximators have recentlyachieved a series of successes (Mnih et al., 2015; Silver et al., 2016; Lillicrap et al., 2015; Kalash-nikov et al., 2018). 
Unfortunately, while being extremely powerful, deep reinforcement learning(DRL) algorithms often require a large number of interactions with the environment to achieve goodresults, partially because they are often applied “from scratch” rather than in settings where they canleverage existing experience. This reduces their applicability in domains where generating experi-ence is expensive, or learning from scratch is challenging.The data efficiency of DRL algorithms is affected by various factors and significant research efforthas been directed at achieving improvements (e.g. Popov et al., 2017). At the same time the de-velopment of basic locomotor behavior in humans can, in fact, require large amounts of experienceand practice (Adolph et al., 2012), and it can take significant effort and training to master complex,high-speed skills (Haith & Krakauer, 2013). Once such skills have been acquired, however, humansrapidly put them to work in new contexts and to solve new tasks, suggesting transfer learning as animportant mechanism.Transfer learning has been explored extensively in multiple fields of the machine learning commu-nity (see e.g. Weiss et al., 2016, for a recent review). In RL and robotics the transfer of knowledgefrom one task to another has been studied from a variety of angles.For the purpose of this paper we are interested in methods that are suitable for transfer in the contextof high-dimensional motor control problems. We further focus on model-free approaches, which areevident in human motor control (Haith & Krakauer, 2013), and have recently been used by a varietyof scalable deep RL methods (e.g. Lillicrap et al., 2015; Mnih et al., 2015; Schulman et al., 2017;Kalashnikov et al., 2018).1Under review as a conference paper at ICLR 2019Transfer may be especially valuable in domains where a small set of skills can be composed, indifferent combinations, to solve a variety of tasks. Different notions of compositionality have beenconsidered in the RL and robotics literature. For instance, ‘options’ are associated with discreteunits of behavior that can be sequenced, thus emphasizing composition in time (Precup et al., 1998).In this paper we are concerned with a rather distinct notion of compositionality, namely how to com-bine and blend potentially concurrent behaviors. This form of composition is particularly relevantin high-dimensional continuous action spaces, where it is possible to achieve more than one tasksimultaneously (e.g. walking somewhere while juggling).One approach to this challenge is via the composition of task rewards. Specifically, we are interestedin the following question: If we have previously solved a set of tasks with similar transition dynamicsbut different reward functions, how can we leverage this knowledge to solve new tasks which can beexpressed as a convex combination of those rewards functions?This question has recently been studied in two independent lines of work: by Barreto et al. (2017;2018) in the context of successor feature (SF) representations used for Generalized Policy Improve-ment (GPI) with deterministic policies, and by Haarnoja et al. (2018a); van Niekerk et al. (2018) inthe context of maximum entropy policies. These approaches operate in distinct frameworks but bothachieve skill composition by combining the Q-functions associated with previously learned skills.We clarify the relationship between the two approaches and show that both can perform well in somesituations but achieve poor results in others, often in complementary ways. 
We introduce a novelmethod of behavior composition that that can consistently achieve good performance.Our contributions are as follows:1. We introduce succcessor features (SF) in the context of maximum entropy and extend theGPI theorem to this case (max-ent GPI).2. We provide an analysis of when GPI, and compositional “optimism” (Haarnoja et al.,2018a) of entropy-regularized policies transfer. We construct both tabular and continuousaction tasks where both fail to transfer well.3. We propose a correction term – which we call Divergence Correction (DC)– based on theR ́enyi divergence between policies which allows us, in principle, to recover the optimalpolicy for transfer for any convex combination of rewards.4. We demonstrate a practical implementation of these methods in continuous action spacesusing adaptive importance sampling and compare the approaches introduced here: max-entGPI and DC with optimism(Haarnoja et al., 2018a) and Conditional Qfunctions (Schaulet al., 2015) in a variety of non-trivial continuous action transfer tasks.2 B ACKGROUND2.1 M ULTI -TASK RLWe consider Markov Decision Processes defined by the tuple Mcontaining: a state space S, actionspaceA, a start state distribution p(s1), a transition function p(st+1jst;at), a discount2[0;1)anda reward function r(st;at;st+1). The objective of RL is to find a policy (ajs) :S!P (A)whichmaximises the discounted expected return from any state J() =E;M[P1=ttr]where theexpected reward is dependent on the policy and the MDPM.We formalize transfer as in Barreto et al. (2017); Haarnoja et al. (2018a), as the desire to performwell across all tasks in a set M2T0after having learned policies for tasks M2T , without addi-tional experience. We assume that TandT0are related in two ways: all tasks share the same statetransition function, and tasks in T0can be expressed as convex combinations of rewards associatedwith tasks in setT. So if we write the reward functions for tasks in Tas the vector= (r1;r2;:::),tasks inT0can be expressed as rw=w.We focus on combinations of two policies rb=bri+ (1b)rjbut the methods can be extended tomore than two tasks. We refer to a transfer method as optimal, if it achieves optimal returns on tasksinT0, using only experience on tasks T.2Under review as a conference paper at ICLR 20192.2 S UCCESSOR FEATURESSuccessor Features (SF) (Dayan, 1993) and Generalised Policy Improvement (GPI) (Barreto et al.,2017; 2018) provide a principled solution to transfer in the setting defined above. SF make theadditional assumption that the reward feature is fully observable, that is, the agent has access tothe rewards of all tasks in Tbut notT0during training on each individual task.The key observation of SF representations is that linearity of the reward rwwith respect to thefeaturesimplies the following decomposition of the value policy of :Qw(st;at) =E"1X=ttwjat#=E"1Xi=ttjat#w (st;at)w;(1)where is the expected discounted sum of features induced by policy . This decompositionallows us to compute the action-value for on any task wby learning .If we have a set of policies 1;2;:::;nindexed byi, SF and GPI provide a principled approachto transfer on task w. 
Specifically, we act according to the deterministic GPI policy

π^{GPI}_w(s_t) ∈ argmax_{a_t} Q^{GPI}_w(s_t, a_t), where Q^{GPI}_w(s_t, a_t) ≡ max_i Q^{π_i}_w(s_t, a_t) = max_i ψ^{π_i}(s_t, a_t)·w.   (2)

The GPI theorem guarantees the GPI policy has a return at least as good as any component policy, that is, V^{π_GPI}_w(s) ≥ max_i V^{π_i}_w(s) for all s ∈ S.

2.3 MAXIMUM ENTROPY RL
The maximum entropy (max-ent) RL objective augments the reward to favor entropic solutions:

J(π) = E_{π,M}[ Σ_{τ=t}^∞ γ^{τ-t} ( r_τ + α H[π(·|s_τ)] ) ],   (3)

where α is a parameter that determines the relative importance of the entropy term. This objective has been considered in a number of works including Kappen (2005); Todorov (2009); Haarnoja et al. (2017; 2018a); Ziebart et al. (2008); Fox et al. (2015).

We define the action-value Q^π associated with eq. 3 as

Q^π(s_t, a_t) ≡ r_t + E_π[ Σ_{τ=t+1}^∞ γ^{τ-t} ( r_τ + α H[π(·|s_τ)] ) ]   (4)

(notice Q^π(s_t, a_t) does not include any entropy terms for the state s_t). Soft Q iteration,

Q(s_t, a_t) ← r(s_t, a_t, s_{t+1}) + γ E_{p(s_{t+1}|s_t, a_t)}[ V(s_{t+1}) ],   (5)
V(s_t) ← E_{a∼π}[ Q(s_t, a) ] + α H[π(·|s_t)] = α log ∫_A exp( (1/α) Q(s_t, a) ) da ≡ α log Z(s_t),   (6)

where π(a_t|s_t) ∝ exp( (1/α) Q(s_t, a_t) ), converges to the optimal policy with standard assumptions (Haarnoja et al., 2017).

3 COMPOSING POLICIES IN MAX-ENT REINFORCEMENT LEARNING
In this section we present two novel approaches for max-ent transfer learning. In section 4 we then outline a practical method for making use of these results.

3.1 MAX-ENT SUCCESSOR FEATURES AND GENERALIZED POLICY IMPROVEMENT
We introduce max-ent SF, which provide a practical method for computing the value of a maximum entropy policy under any convex combination of rewards. We then show the GPI theorem (Barreto et al., 2017) holds for maximum entropy policies.

We define the action-dependent SF to include the entropy of the policy, excluding the current state, analogous to the max-entropy definition of Q^π in (4):

ψ^π(s_t, a_t) ≡ φ_t + E_π[ Σ_{τ=t+1}^∞ γ^{τ-t} ( φ_τ + α 1 H[π(·|s_τ)] ) ] = φ_t + γ E_{p(s_{t+1}|s_t, a_t)}[ Φ^π(s_{t+1}) ],   (7)

where 1 is a vector of ones of the same dimensionality as φ and we define the state-dependent successor features Φ^π as the expected ψ^π in analogy with V^π(s):

Φ^π(s) ≡ E_{a∼π(·|s)}[ ψ^π(s, a) ] + α 1 H[π(·|s)].   (8)

The max-entropy action-value of π for any convex combination of rewards w is then given by Q^π_w(s, a) = ψ^π(s, a)·w. Max-ent SF allow us to estimate the action-value of previous policies on a new task. We show that, as in the deterministic case, there is a principled way to combine multiple policies using their action-values on task w.

Theorem 3.1 (Max-Ent Generalized Policy Improvement) Let π_1, π_2, ..., π_n be n policies with α-max-ent action-value functions Q^{π_1}, Q^{π_2}, ..., Q^{π_n} and value functions V^{π_1}, V^{π_2}, ..., V^{π_n}. Define π(a|s) ∝ exp( (1/α) max_i Q^{π_i}(s, a) ). Then,

Q^π(s, a) ≥ max_i Q^{π_i}(s, a) for all s ∈ S and all a ∈ A,   (9)
V^π(s) ≥ max_i V^{π_i}(s) for all s ∈ S,   (10)

where Q^π(s, a) and V^π(s) are the α-max-ent action-value and value function respectively of π.

Proof: See appendix A.1. In our setup, we learn ψ^{π_i}(s, a), the SFs of policies π_i for each task in T, and we define the max-ent GPI policy for task w ∈ T′ as π^{GPI}_w(a|s) ∝ exp( (1/α) max_i Q^{π_i}_w(s, a) ) = exp( (1/α) max_i ψ^{π_i}(s, a)·w ).

3.2 DIVERGENCE CORRECTION (DC)
Haarnoja et al. (2018a) introduced a simple approach to policy composition by estimating the action-value for the transfer task r_b = b r_i + (1-b) r_j from the optimal action-values of the component tasks Q^*_i and Q^*_j:

Q^{Opt}_b(s, a) ≡ b Q^*_i(s, a) + (1-b) Q^*_j(s, a).   (11)

When using Boltzmann policies defined by Q, the resulting policy, π^{Opt}_b(a|s) ∝ exp( (1/α) Q^{Opt}_b(s, a) ), is the product distribution of the two component policies; this product form is easy to verify numerically, as sketched below.
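The product-distribution claim can be checked in a few lines; the following sketch (ours, with made-up Q tables over a toy discrete action set) confirms that the Boltzmann policy of Eq. (11) normalizes to the same distribution as π_i(a|s)^b π_j(a|s)^{1-b}.

```python
import numpy as np

alpha, b = 0.5, 0.4
Qi, Qj = np.random.randn(5), np.random.randn(5)  # toy Q values over 5 actions

def boltzmann(q, alpha):
    p = np.exp((q - q.max()) / alpha)            # stable softmax with temperature alpha
    return p / p.sum()

pi_opt = boltzmann(b * Qi + (1 - b) * Qj, alpha)            # policy defined by Eq. (11)
prod = boltzmann(Qi, alpha) ** b * boltzmann(Qj, alpha) ** (1 - b)
prod /= prod.sum()                                           # normalized product pi_i^b * pi_j^(1-b)
assert np.allclose(pi_opt, prod)                             # identical distributions
```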
We refer to π^{Opt}_b as the compositionally “optimistic” (CO) policy, as it acts according to the optimistic assumption that the optimal returns of Q^*_i and Q^*_j will be, simultaneously, achievable.¹

¹Compositional optimism is not the same as optimism under uncertainty, often used in RL for exploration.

Both max-ent GPI, presented above, and CO can, in different ways, fail to transfer well in some situations (see fig. 1 for some examples in the tabular case). Neither approach consistently performs optimally during transfer, even if all component terms are known exactly. We desire a solution for transfer that, in principle, can perform optimally.

Here we show, at the cost of learning a function conditional on the task weightings b, it is in principle possible to recover the optimal policy for the transfer tasks, without direct experience on those tasks, by correcting for the compositional optimism bias in Q^{Opt}_b. For simplicity, as in Haarnoja et al. (2018a), we restrict this to the case with only 2 tasks, but it can be extended to multiple tasks.

The correction term for CO uses a property noted, but not exploited, in Haarnoja et al. (2018a). The bias in Q^{Opt} is related to the discounted sum of Rényi divergences of the two component policies. Intuitively, if the two policies result in trajectories with low divergence between the policies in each state, the CO assumption that both policies can achieve good returns is approximately correct. When the divergences are large, the CO assumption is being overly optimistic and the correction term will be large.

Theorem 3.2 (DC Optimality) Let π_i, π_j be α-max-ent optimal policies for tasks with rewards r_i and r_j with max-ent action-value functions Q^*_i, Q^*_j. Define C^∞_b(s_t, a_t) as the fixed point of

C^{(k+1)}_b(s_t, a_t) = -αγ E_{p(s_{t+1}|s_t, a_t)}[ log ∫_A π_i(a_{t+1}|s_{t+1})^b π_j(a_{t+1}|s_{t+1})^{(1-b)} exp( -(1/α) C^{(k)}_b(s_{t+1}, a_{t+1}) ) da_{t+1} ].

Given the conditions for Soft Q convergence, the max-ent optimal Q^*_b(s, a) for r_b = b r_i + (1-b) r_j is

Q^*_b(s, a) = b Q^*_i(s, a) + (1-b) Q^*_j(s, a) - C^∞_b(s, a)  for all s ∈ S, a ∈ A, b ∈ [0, 1].

Proof: See appendix A.2. We call this Divergence Correction (DC) as the quantity C^∞_b is related to the Rényi divergence between policies (see appendix A.2 for details); a tabular sketch of this recursion is given at the end of this section. Learning C^∞_b does not require any additional information (in principle) than that required to learn policies π_i and π_j. Unlike with SF, it is not necessary to observe other task features φ while training the policies. On the other hand, unlike with GPI, which can be used to naturally combine any number of tasks with arbitrary weight vectors w, in order to apply DC one must estimate C^∞_b(s, a) for all values of b, so the complexity of learning C^∞ increases significantly if more than 2 tasks are combined.

Supplementary Table 1 provides a comparison of the properties of the methods we consider here. We also compare with simply learning a conditional Q function Q(s, a|b) (CondQ) (e.g. Schaul et al., 2015; Andrychowicz et al., 2017). As with GPI, this requires observing the full set of task features φ, in order to compute r_b for arbitrary b.

In this section we have introduced two new theoretical approaches to max-ent transfer composition: max-ent GPI and DC. We have shown how these are related to relevant prior methods. In the next section we address the question of how to practically learn and sample with these approaches in continuous action spaces.
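In a small discrete MDP the recursion of Theorem 3.2 can simply be iterated to convergence. The sketch below is our reading of the theorem (with the α and γ factors placed as in the reconstruction above), not the authors' code; `pi_i`, `pi_j` and the transition tensor `P` are hypothetical inputs.

```python
import numpy as np

def dc_fixed_point(pi_i, pi_j, P, b, alpha, gamma, n_iters=200):
    """Iterate C_b to its fixed point. pi_*: (S, A) policies; P: (S, A, S')."""
    C = np.zeros(pi_i.shape)
    for _ in range(n_iters):
        inner = (pi_i ** b) * (pi_j ** (1 - b)) * np.exp(-C / alpha)  # (S, A)
        v = -alpha * np.log(inner.sum(axis=1))                         # log-integral over actions
        C = gamma * np.einsum('sat,t->sa', P, v)                       # expectation over next states
    return C

# Transfer value then follows Theorem 3.2: Q_b = b*Q_i + (1-b)*Q_j - C.
```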
4 ADAPTIVE IMPORTANCE SAMPLING FOR BOLTZMANN POLICIES ALGORITHM
The control of robotic systems with high-dimensional continuous action spaces is a promising use case for the ideas presented in this paper. Such control problems may allow for multiple solutions, and can exhibit exploitable compositional structure. Unfortunately, learning and sampling of general Boltzmann policies defined over continuous action spaces is challenging. While this can be mitigated by learning a parametric sampling distribution, during transfer we want to sample from the Boltzmann policy associated with a newly synthesized action-value function without having to learn such an approximation first. To address this issue we introduce Adaptive Importance Sampling for Boltzmann Policies (AISBP), a method which provides a practical solution to this challenge.

In the following we parametrise all functions with neural nets (denoting parameters by the subscript θ), including the soft action-value for reward i, Q^i_{θ_Q}(s, a); the associated soft value function V^i_{θ_V}(s); and a proposal distribution q^i_{θ_q}(a|s), the role of which we explain below. We use an off-policy algorithm, so that experience generated by training on policy i can be used to improve policy j. This is especially important since our analysis requires the action-value Q^i(s, a) to be known in all states. This is less likely to be the case for an on-policy algorithm that only updates Q^i using trajectories generated by policy i. During training, experience generated by all tasks is stored in a replay buffer R, and mini-batches are sampled uniformly and used to update all function approximators. Soft Q iteration (see eq. 4) is used to learn Q^i and V^i. These updates are, in principle, straightforward using transitions sampled from the replay buffer.

Sampling from the Boltzmann policy defined by Q^i_{θ_Q}, π_i(a|s) ∝ exp( (1/α) Q^i_{θ_Q}(s, a) ), is challenging, as is estimating the partition function (the log of which is also the value, c.f. Eq. 6). One approach is to fit an expressible, tractable sampler, such as a stochastic neural network, to approximate π_i (e.g. Haarnoja et al., 2018a). This approach works well when learning a single policy. However, during transfer this may require learning a new sampler for each new value composition. AISBP instead uses importance sampling to sample and estimate the partition function. The scalability of this approach is improved by using a learned proposal distribution q_{θ_q}(a|s), and by observing that modern architectures allow for efficient batch computation of a large number of importance samples.

To facilitate transfer we restrict the parametric form of the proposals to mixtures of (truncated) Normal distributions. The well-known result that the product of Normal distributions can be computed in closed form then allows us to construct effective compositional proposals during transfer.

More formally, for each policy in T we learn an action-value Q^i_{θ_Q}(s, a) and value V^i_{θ_V}(s) network, and a proposal distribution q^i_{θ_q}(a|s) (we drop the task index i here when writing the losses for notational clarity, and write the losses for a single policy). The proposal distribution is a mixture of M truncated Normal distributions N_T, truncated to the hypercube a ∈ [-1, 1]^n with diagonal covariances:

q_{θ_q}(a|s) = (1/M) Σ_{m=1}^M N_T( a; μ^m_q(s), σ^m_q(s), -1, 1 ).   (12)

The proposal distribution is optimized by minimizing the forward KL divergence with the Boltzmann policy π(a|s) ∝ exp( (1/α) Q_{θ_Q}(s, a) ). This KL is “zero avoiding” and over-estimates the support of π (Murphy, 2012), which is desirable for a proposal distribution (Gu et al., 2015):

L(θ_q) = E_R[ E_{a∼π(·|s)}[ log π(a|s_t) - log q_{θ_q}(a|s_t) ] ],   (13)

where the outer expectation is over the replay buffer state density. The inner expectation in the proposal loss itself requires sampling from π.
We approximate this expectation by self-normalized importance sampling and use a target proposal distribution p(a_t|s_t) which is a mixture distribution consisting of the proposals for all policies along with a uniform distribution. For batch size B and N proposal samples the estimator of the proposal loss is then

L(θ_q) ≈ -(1/B) Σ_{k=1}^B Σ_{l=1}^N w_{kl} log q_{θ_q}(a_{kl}|s_k),  w'_{kl} = exp( (1/α) Q_{θ_Q}(s_k, a_{kl}) ) / p(a_{kl}|s_k),  w_{kl} = w'_{kl} / Σ_{m=1}^N w'_{km}.   (14)

The value function loss is defined as the L2 error on the Soft Q estimate of value,

L(θ_V) = E_R[ (1/2) ( V_{θ_V}(s_t) - α log ∫_A exp( (1/α) Q_{θ_Q}(s_t, a) ) da )² ],   (15)

which is estimated using importance sampling to compute the integral:

L(θ_V) ≈ (1/2B) Σ_{l=1}^B ( V_{θ_V}(s_l) - α log Z_l )²,  Z_l = (1/N) Σ_{k=1}^N exp( (1/α) Q_{θ_Q}(s_l, a_{lk}) ) / q_{θ_q}(a_{lk}|s_l).   (16)

This introduces bias due to the finite-sample approximation of the expectation inside the (concave) log. In practice we found this estimator sufficiently accurate, provided the proposal distribution was close to π (a sketch of this estimator follows Algorithm 1 below). We also use importance sampling to sample from π while acting.

The action-value loss is just the L2 norm with the Soft Q target:

L(θ_Q) = E_R[ (1/2) ( Q_{θ_Q}(s_t, a_t) - ( r(s_t, a_t, s_{t+1}) + γ V_{θ'_V}(s_{t+1}) ) )² ].   (17)

To improve stability we employ target networks for the value V_{θ'_V} and proposal q_{θ'_q} networks (Mnih et al., 2015; Lillicrap et al., 2015). We also parameterize Q as an advantage, Q_{θ_Q}(s, a) = V_{θ_V}(s) + A_{θ_A}(s, a) (Baird, 1994; Wang et al., 2015; Harmon et al., 1995), which is more stable when the advantage is small compared with the value. The full algorithm is given in Algorithm Box 1 and more details are provided in appendix C.

4.1 IMPORTANCE SAMPLED MAX-ENT GPI
The same importance sampling approach can also be used to estimate max-ent SF. Max-ent GPI requires us to learn the expected (maximum entropy) features Φ^{π_i} for each policy i, in order to estimate its (entropic) value under a new convex combination task w. This requires that each experience tuple in the replay contains the full feature vector φ, rather than just the reward r_i for the policy which generated the experience. Given this information, ψ and Φ can be learned with updates analogous to those for Q and V, which again requires importance sampling to estimate Φ.

As with V_{θ_V}, we use a target network for Φ_{θ'_Φ} and an advantage parametrization. We found that, because these updates are far off-policy when using experience shared between tasks, it is necessary to have a longer target update period than for V. Full details of the losses and samplers are in appendix C.

Algorithm 1 AISBP training algorithm
  Initialize proposal network θ_q, value network parameters θ_V, action-value network parameters θ_Q, and replay R
  while training do   ▷ in parallel on each actor
    Obtain parameters from learner
    Sample task i ∼ T
    Roll out episode using q^i_{θ_q} to importance sample π_i(a|s) ∝ exp( (1/α) Q^i_{θ_Q}(s, a) )
    Add experience to replay R
  end while
  while training do   ▷ in parallel on the learner
    Sample SARS tuple from R
    Improve L(θ_q), L(θ_V), L(θ_Q)
    Improve additional losses for transfer: L(θ_Φ), L(θ_ψ), L(θ_C), L(θ_{V_b}), L(θ_{Q_b})
    if target update period then
      Update target network parameters θ'_V ← θ_V, θ'_q ← θ_q, θ'_Φ ← θ_Φ, θ'_{V_b} ← θ_{V_b}
    end if
  end while
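For concreteness, here is a sketch of the importance-sampled soft value estimate of Eq. (16); this is our illustration rather than the paper's implementation, and `Q_fn`, `q_sample` and `q_logpdf` are hypothetical callables standing in for the learned networks.

```python
import numpy as np

def soft_value_estimate(Q_fn, q_sample, q_logpdf, s, alpha, n=512):
    """V(s) ~= alpha * log Z(s), with Z estimated by importance sampling from q."""
    a = q_sample(s, n)                            # n proposal actions for state s
    log_w = Q_fn(s, a) / alpha - q_logpdf(s, a)   # log[ exp(Q/alpha) / q(a|s) ]
    m = log_w.max()
    return alpha * (m + np.log(np.mean(np.exp(log_w - m))))  # stable log-mean-exp
```

The same batched weights, self-normalized as in Eq. (14), drive the proposal-fitting loss.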
4.2 DIVERGENCE CORRECTION
All that is required for transfer using compositional optimism (eq. 11, Haarnoja et al. (2018a)) is the max-ent action values of each task, so no additional training is required beyond the base policies. In section 3.2 we have shown that if we can learn the fixed point of C^∞_b(s, a), we can correct this compositional optimism and recover the optimal action-value Q^*_b(s, a).

We exploit the recursive relationship in C^∞_b(s, a) to fit a neural net C_{θ_C}(s, a, b) with a TD(0) estimator. This requires learning a conditional estimator for any value of b, so as to support arbitrary task combinations. Fortunately, since C^∞_b depends only on the policies and transition function, it is possible to learn an estimator of C^∞_b for different values of b by sampling b during each update. As before, we use target networks and an advantage parametrization for C_{θ_C}(s, a, b).

We learn C^∞_b as C_{θ_C}(s, a, b), for each pair of policies i, j, resulting in the loss

L(θ_C) = E_{s∼R, b∼U(0,1)}[ (1/2) ( C_{θ_C}(s, a, b) + αγ E_{p(s'|s,a)}[ log ∫_A exp( b log π_i(a'|s') + (1-b) log π_j(a'|s') - (1/α) C_{θ'_C}(s', a', b) ) da' ] )² ].   (18)

As with other integrals over the action space, we approximate this loss using importance sampling to estimate the integral. Note that, unlike GPI and CondQ (next section), learning C^∞_b does not require observing φ while training.

We also considered a heuristic approach where we learned C only for b = 1/2 (this is typically approximately the largest divergence). This avoids the complexity of a conditional estimator, and we estimate C^∞_b as Ĉ^∞_b(s, a) ≈ 4b(1-b) C^∞_{1/2}(s, a). This heuristic, which we denote DC-Cheap, can be motivated by considering Gaussian policies with similar variance (see appendix D). The max-ent GPI bound can be used to correct for over-estimates of the heuristic Ĉ^∞_b: Q^{DC-Cheap+GPI}(s, a) = max( Q^{Opt}(s, a) - Ĉ^∞_b(s, a), Q^{GPI}(s, a) ).

4.3 COND Q
As a baseline, we directly learn a conditional Q function Q(s, a; b), using a similar approach to DC of sampling b each update (Schaul et al., 2015). This, like GPI but unlike DC, requires observing φ during training so the reward on task b can be estimated. We provide the full details in appendix C.

4.4 SAMPLING COMPOSITIONAL POLICIES
During transfer we would like to be able to sample from the Boltzmann policy defined by our estimate of the transfer action-value Q_b (the estimate is computed using the methods we enumerated above) without having to, offline, learn a new proposal or sampling distribution first (which is the approach employed by Haarnoja et al. (2018a)).

As outlined earlier, we chose the proposal distributions so that the product of proposals is tractable, meaning we can sample from q^{ij}_b(a|s) ∝ (q^i_{θ_q}(a|s))^b (q^j_{θ_q}(a|s))^{(1-b)}; the closed-form per-component product is sketched below. This is a good proposal distribution when the CO bias is low, since Q^{Opt}_b defines a Boltzmann policy which is the product of the base policies.² However, when C^∞_b(s, a) is large, meaning the CO bias is large, q^{ij} may not be a good proposal, as we show in the experiments. In this case none of the existing proposal distributions may be a good fit. Therefore we sample from a mixture distribution of all policies, all policy products, and the uniform distribution:

p_b(a|s) ≡ (1/4) ( q^i_{θ_q}(a|s) + q^j_{θ_q}(a|s) + q^{ij}_b(a|s) + 1/V_A ),   (19)

where V_A is the volume of the action space. Empirically, we find this is sufficient to result in good performance during transfer. The algorithm for transfer is given in supplementary algorithm 2.
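Since the proposal components are (truncated) Normals, the transfer proposal q^{ij}_b has a closed form per component pair. The sketch below (ours; the truncation to the action box is omitted for brevity) gives the standard Gaussian power-product identity used to build it.

```python
import numpy as np

def gaussian_power_product(mu_i, var_i, mu_j, var_j, b):
    """Mean/variance of N(mu_i, var_i)^b * N(mu_j, var_j)^(1-b), per dimension."""
    prec = b / var_i + (1 - b) / var_j              # precisions combine linearly
    var = 1.0 / prec
    mu = var * (b * mu_i / var_i + (1 - b) * mu_j / var_j)
    return mu, var
```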
5 EXPERIMENTS

5.1 DISCRETE, TABULAR ENVIRONMENT
We first consider some illustrative tabular cases of compositional transfer. These highlight situations in which GPI and CO transfer can perform poorly (Figure 1). As expected, we find that GPI performs well when the optimal transfer policy is close to one of the existing policies; CO performs well when both subtask policies are compatible. The task we refer to as “tricky” is illustrative of situations in which the optimal policy for the transfer task does not resemble either existing policy: in the grid world, non-overlapping rewards for each task are provided in one corner, while lower-value overlapping rewards are provided in the other corner (cf. Fig. 1). As a consequence, both GPI and CO perform poorly while DC performs well in all cases.

5.2 CONTINUOUS ACTION SPACES
We next compare the different approaches in more challenging continuous control tasks. We train max-ent policies to solve individual tasks using the importance sampling approach from section 4 and then assess transfer on convex combinations of the rewards. All approaches use the same experience and proposal distribution.

Figure 2 examines the transfer policies in detail in a simple point-mass task and shows how the estimated C^∞_b corrects the CO Q^{Opt} and dramatically changes the policy.

We then examine conceptually similar tasks in more difficult domains: a 5 DOF planar manipulator reaching task (figure 3), a 3 DOF jumping ball, and an 8 DOF ant (figure 4). We see that DC recovers a qualitatively better policy in all cases. The performance of GPI depends noticeably on the choice of α. DC-Cheap, which is a simpler heuristic, performs almost as well as DC in the tasks we consider, except for the point mass task. When bounded by GPI (DC-Cheap+GPI) it performs well for the point mass task as well, suggesting simple approximations of C^∞_b may be sufficient in some cases.³

We focused on “tricky” tasks as they are a challenging form of transfer. In general, we would expect DC to perform well in most situations where CO performs well, since in this case the correction term C^∞_b that DC must learn is inconsequential (CO is equivalent to assuming C^∞_b = 0). Supplementary figure 5 demonstrates that on a task with non-composable solutions (i.e. C^∞_b is large and potentially challenging to learn), DC continues to perform as well as GPI, slightly better than CondQ, and, as expected, CO performs poorly.

²π^{Opt}_b(a|s) ∝ exp( (1/α) Q^{Opt}_b(s, a) ) = exp( (1/α)( b Q_1(s, a) + (1-b) Q_2(s, a) ) ) ∝ π_1(a|s)^b π_2(a|s)^{(1-b)}.
³We provide videos of the more interesting tasks at https://tinyurl.com/yaplfwaq .

[Figure 1 panels: (a) Left task, (b) Tricky task 1, (c) Tricky task 2, (d) LR regret, (e) LU regret, (f) T regret, (g) Opt LR, (h) GPI LU, (i) GPI T, (j) DC T, (k) D_1/2; curves: CO, GPI, DC]
Figure 1: Policy composition in the tabular case. All tasks are in an infinite-horizon tabular 8x8 world. The action space is the 4 diagonal movements (actions at the boundary transition back to the same state). (a-c) show 3 reward functions (color indicates reward; dark blue r = +1, light blue r = 0.75). The arrows indicate the action likelihoods for the max-ent optimal policy for each task. (d-f) The log regret of the max-ent returns for 3 qualitatively distinct compositional tasks r_b = b r_i + (1-b) r_j, using different approaches to transfer from the base policies. The compositional tasks we consider are left-right (LR), left-up (LU), and the “tricky” tasks (T). (d) GPI performs well when the subtasks are incompatible, meaning the optimal policy is near one of the component policies. (g) CO performs poorly in these situations, resulting in indecision about which subtask to commit to. (e) Conversely, when the subpolicies are compatible, such as on the LU task, CO transfers well while the GPI policy (h) does not consistently take advantage of the compatibility of the two tasks to simultaneously achieve both subgoals. (f) Neither GPI nor CO policies (i shows the GPI policy, but CO is similar) perform well when the optimal transfer policy is dissimilar to either existing task policy.
The two tricky task policies are compatible in many states but have high divergence in the bottom-left corner, since the rewards are non-overlapping there (k); thus the optimal policy on the composed task is to move to the top-right corner where there are overlapping rewards. By learning, and correcting for, this future divergence between policies, DC results in optimal policies for all task combinations including tricky (j).

[Figure 2 panels: (a) Trajectories, (b) Returns (Optimistic, GPI, DC, DC-Cheap, DC-Cheap+GPI, CondQ), (c) D_1/2, (d) Q^Opt, (e) C^∞_1/2, (f) Q^DC]
Figure 2: Tricky point mass. The continuous “tricky” task with a simple 2-D velocity-controlled point mass. (a) Environment and example trajectories. The rewards are (r_1 = 1, r_2 = 0), (0, 1) and (0.75, 0.75) for the green, red and yellow squares. Lines show sampled trajectories (starting in the center) for the compositional task r_{1/2} with CO (red), GPI (blue) and DC (black). Only DC, the DC heuristics and CondQ (not shown) find the optimal transfer policy of navigating to the yellow reward area, which is the optimal solution for the compositional task. (b) The returns for each transfer method. DC and CondQ methods recover significantly better performance than GPI, and the CO policy performs poorly. (c) The Rényi divergence of the two base policies as a function of position: the two policies are compatible except near the bottom-left corner where the rewards are non-overlapping. (d) Q^{Opt} at the center position for the combined task. As both policies prefer moving left and down, most of the energy is on these actions. (e) However, the future divergence C^∞_{1/2} under these actions is high, which results in (f) the DC policy differing significantly from CO.

[Figure 3 panels: (a) Planar manipulator tricky task, (b) Finger position (DC, CO, GPI, CondQ), (c) Returns]
Figure 3: “Tricky” task with planar manipulator. The “tricky” tasks with a 5D torque-controlled planar manipulator. The training tasks consist of (mutually exclusive) rewards of (1, 0), (0, 1) when the finger is at the green and red targets respectively and reward (0.75, 0.75) at the blue target. (b) Finger position at the end of trajectories (starting from randomly sampled start states) for the transfer task, with circles indicating the rewards. DC and CondQ trajectories reach towards the blue target (the optimal solution) while CO and GPI trajectories primarily reach towards one of the suboptimal partial solutions. (c) The returns on the transfer tasks (shaded bars show SEM, 5 seeds).

[Figure 4 panels: (a) Ant, (b) Jumping-ball returns, (c) Ant returns, (d) Trajectories]
Figure 4: “Tricky” task with mobile bodies. The “tricky” task with two bodies: a 3 DOF jumping ball (supplementary figure 6) and (a) an 8 DOF ant (both torque controlled). The task has rewards (1, 0), (0, 1) in the green and red boxes respectively and (0.75, 0.75) in the blue square. (b-c) Returns for both walkers when started in the center position. The CO approach does not recover the optimal policy for the compositional task while the other approaches largely do, although CondQ does not learn a good policy on the ant (shaded bars show SEM; 3 seeds for the jumping ball, 5 seeds for the ant). (d) Sampled trajectories of the ant on the transfer task starting from a neutral position for b = 1/2. GPI and DC consistently go to the blue square (optimal); CondQ and CO do not.
6 DISCUSSION
We have presented two approaches to transfer learning via convex combinations of rewards in the maximum entropy framework: max-ent GPI and DC. We have shown that, under standard assumptions, the max-ent GPI policy performs at least as well as its component policies, and that DC recovers the optimal transfer policy. Todorov (2009) and (Saxe et al., 2017; van Niekerk et al., 2018) previously considered optimal composition of max-ent policies. However, these approaches require stronger assumptions than max-ent SF or DC, namely that reward states are absorbing and that the joint reward is restricted to the softmax of the component rewards (soft OR). By contrast, DC does not restrict the class of MDPs and learns how compatible policies are, allowing approximate recovery of optimal transfer policies both when the component rewards are jointly achievable (AND), and when only one sub-goal can be achieved (OR).

We have compared our methods with conditional action-value functions (CondQ) (e.g. Schaul et al., 2015) and optimistic policy combination (Haarnoja et al., 2018a). Further, we have presented AISBP, a practical algorithm for training DC and max-ent GPI models in continuous action spaces using adaptive importance sampling. We have compared these approaches, along with heuristic approximations of DC, and demonstrated that DC recovers an approximately optimal policy during transfer across a variety of high-dimensional control tasks. Empirically we have found CondQ may be harder to learn than DC, and it requires additional observation of φ during training.<|im_end|> <|im_start|>assistant ### Review Title
Interesting work, but needs further improvement
### Review Text
-- Contribution, Originality, and Quality --
This paper has presented two approaches for transfer learning in the reinforcement learning (RL) setting: max-ent GPI (Section 3.1) and DC (Section 3.2). The authors have also established some theoretical results for these two approaches (Theorems 3.1 and 3.2), and also demonstrated some experiment results (Section 5). The two developed approaches are interesting. However, based on the existing literature (Barreto et al. 2017; 2018, Haarnoja et al. 2018a), neither of them seems to contain *significant* novelty. The derivations of the theoretical results (Theorems 3.1 and 3.2) are also relatively straightforward. The experiment results in Section 5 are interesting.
-- Clarity --
I have two major complaints about the clarity of this paper. 1) Section 4 of the paper is not well written and is hard to follow. 2) Some notations in the paper are not well defined. For instance: 2a) In page 3, the notation \delta has not been defined. 2b) In page 6, both notations V_{\theta'_V} and V'_{\theta_V} have been used. I do not think either of them has been defined.
-- Pros and Cons --
Pros: 1) The proposed approaches and the experiment results are interesting.
Cons: 1) Neither the algorithm design nor the analysis has sufficient novelty, compared to the typical standard of a top-tier conference. 2) The paper is not very well written, especially Section 4. 3) For Theorem 3.2, why not prove a variant of it for the general multi-task case? 4) It would be better to provide the pseudocode of the proposed algorithm in the main body of the paper.
### Review Rating
5: Marginally below acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
<|im_end|>
BVSM0x3EDK6
ICLR.cc/2021/Conference
2021
Robust and Generalizable Visual Representation Learning via Random Convolutions
["Zhenlin Xu", "Deyi Liu", "Junlin Yang", "Colin Raffel", "Marc Niethammer"]
While successful for various computer vision tasks, deep neural networks have shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local texture. Therefore, we explore using outputs of multi-scale random convolutions as new images or mixing them with the original images during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
["domain generalization", "robustness", "representation learning", "data augmentation"]
ABSTRACT
While successful for various computer vision tasks, deep neural networks have shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local texture. Therefore, we explore using outputs of multi-scale random convolutions as new images or mixing them with the original images during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.¹

1 INTRODUCTION
Generalizability and robustness to out-of-distribution samples have been major pain points when applying deep neural networks (DNNs) in real world applications (Volpi et al., 2018). Though DNNs are typically trained on datasets with millions of training samples, they still lack robustness to domain shift, small perturbations, and adversarial examples (Luo et al., 2019). Recent research has shown that neural networks tend to use superficial features rather than global shape information for prediction even when trained on large-scale datasets such as ImageNet (Geirhos et al., 2019). These superficial features can be local textures or even patterns imperceptible to humans but detectable to DNNs, as is the case for adversarial examples (Ilyas et al., 2019). In contrast, image semantics often depend more on object shapes than on local textures. For image data, local texture differences are one of the main sources of domain shift, e.g., between synthetic virtual images and real data (Sun & Saenko, 2014). Our goal is therefore to learn visual representations that are invariant to local texture and that generalize to unseen domains. While texture and color may be treated as different concepts, we follow the convention in Geirhos et al. (2019) and include color when talking about texture.

We address the challenging setting of robust visual representation learning from single domain data. Limited work exists in this setting. Proposed methods include data augmentation (Volpi et al., 2018; Qiao et al., 2020; Geirhos et al., 2019), domain randomization (Tobin et al., 2017; Yue et al., 2019), self-supervised learning (Carlucci et al., 2019), and penalizing the predictive power of low-level network features (Wang et al., 2019a). Following the spirit of adding inductive bias towards global shape information over local textures, we propose using random convolutions to improve the robustness to domain shifts and small perturbations. While recently Lee et al. (2020) proposed a similar technique for improving the generalization of reinforcement learning agents in unseen environments, we focus on visual representation learning and examine our approach on visual domain generalization benchmarks.

¹Code is available at https://github.com/wildphoton/RandConv .

[Figure 1 panels: top rows show the input followed by RandConv outputs with filter sizes k = 1, 3, 5, 7, 11, 15; bottom rows show the input mixed with a RandConv output at mixing coefficients α = 0.9, 0.7, 0.5, 0.3, 0.1, 0]
Figure 1: Top: Illustration that RandConv randomizes local texture but preserves shapes in the image.
Middle: The first column is the input image of size 224×224; the following columns are convolution results using random filters of different sizes k. Bottom: Mixing results between an image and one of its random convolution results with different mixing coefficients α.

Our method also includes the multiscale design and a mixing variant. In addition, considering that many computer vision tasks rely on training deep networks based on ImageNet-pretrained weights (including some domain generalization benchmarks), we ask: “Can a more robust pretrained model make the finetuned model more robust on downstream tasks?” Different from (Kornblith et al., 2019; Salman et al., 2020), who studied the transferability of a pretrained ImageNet representation to new tasks while focusing on in-domain generalization, we explore generalization performance on unseen domains for new tasks.

We make the following contributions:
- We develop RandConv, a data augmentation technique using multi-scale random convolutions to generate images with random texture while maintaining global shapes. We explore using the RandConv output as training images or mixing it with the original images. We show that a consistency loss can further enforce invariance under texture changes.
- We provide insights and justification on why RandConv augments images with different local texture but the same semantics, via the shape-preserving property of random convolutions.
- We validate RandConv and its mixing variant in extensive experiments on synthetic and real-world benchmarks as well as on the large-scale ImageNet dataset. Our methods outperform single domain generalization approaches by a large margin on digit recognition datasets and for the challenging case of generalizing to the Sketch domain in PACS and to ImageNet-Sketch.
- We explore if the robustness/generalizability of a pretrained representation can transfer. We show that transferring a model pretrained with RandConv on ImageNet can further improve domain generalization performance on new downstream tasks on the PACS dataset.

2 RELATED WORK
Domain Generalization (DG) aims at learning representations that perform well when transferred to unseen domains. Modern techniques range between feature fusion (Shen et al., 2019), meta-learning (Li et al., 2018a; Balaji et al., 2018), and adversarial training (Shao et al., 2019; Li et al., 2018b). Note that most current DG work (Ghifary et al., 2016; Li et al., 2018a;b) requires a multi-source training setting to work well. However, in practice, it might be difficult and expensive to collect data from multiple sources, such as collecting data from multiple medical centers (Raghupathi & Raghupathi, 2014). Instead, we consider the stricter single-domain generalization DG setting, where we train the model on source data from a single domain and generalize it to new unseen domains (Carlucci et al., 2019; Wang et al., 2019b).

Domain Randomization (DR) was first introduced as a DG technique by Tobin et al. (2017) to handle the domain gap between simulated and real data. As the training data in (Tobin et al., 2017) is synthesized in a virtual environment, it is possible to generate diverse training samples by randomly selecting background images, colors, lighting, and textures of foreground objects.
When a simulation environment is not accessible, image stylization can be used to generate new domains (Yue et al., 2019; Geirhos et al., 2019). However, this requires extra effort to collect data and to train an additional model; further, the number of randomized domains is limited by the number of predefined styles.

Data Augmentation has been widely used to improve the generalization of machine learning models (Simard et al., 2003). DR approaches can be considered a type of synthetic data augmentation. To improve performance on unseen domains, Volpi et al. (2018) generate adversarial examples to augment the training data; Qiao et al. (2020) extend this approach via meta-learning. As with other adversarial training algorithms, significant extra computation is required to obtain adversarial examples.

Learning Representations Biased towards Global Shape. Geirhos et al. (2019) demonstrated that convolutional neural networks (CNNs) tend to use superficial local features even when trained on large datasets. To counteract this effect, they proposed to train on stylized ImageNet, thereby forcing a network to rely on object shape instead of textures. Wang et al. improved out-of-domain performance by penalizing the correlation between a learned representation and superficial features such as the gray-level co-occurrence matrix (Wang et al., 2019b), or by penalizing the predictive power of local, low-level layer features in a neural network via an adversarial classifier (Wang et al., 2019a). Our approach shares the idea that learning representations invariant to local texture helps generalization to unseen domains. However, RandConv avoids searching over many hyper-parameters, collecting extra data, and training other networks. It also scales to large-scale datasets since it adds minimal computation overhead.

Random Mapping in Machine Learning. Random projections have also been effective for dimensionality reduction, based on the distance-preserving property of the Johnson–Lindenstrauss lemma (Johnson & Lindenstrauss, 1984). Vinh et al. (2016) applied random projections on entire images as data augmentation to make neural networks robust to adversarial examples. Lee et al. (2020) recently used random convolutions to help reinforcement learning (RL) agents generalize to new environments. Neural networks with fixed random weights can encode meaningful representations (Saxe et al., 2011) and are therefore useful for neural architecture search (Gaier & Ha, 2019), generative models (He et al., 2016b), natural language processing (Wieting & Kiela, 2019), and RL (Osband et al., 2018; Burda et al., 2019). In contrast, RandConv uses non-fixed randomly-sampled weights to generate images with different local texture.

3 RANDCONV: RANDOMIZE LOCAL TEXTURE AT DIFFERENT SCALES
We propose using a convolution layer with non-fixed random weights as the first layer of a DNN during training. This strategy generates images with random local texture but consistent shapes, and is beneficial for robust visual representation learning. Sec. 3.1 justifies the shape-preserving property of a random convolution layer. Sec. 3.2 describes RandConv, our data augmentation algorithm using a multi-scale randomized convolution layer and input mixing.

3.1 A RANDOM CONVOLUTION LAYER PRESERVES GLOBAL SHAPES
Convolution is the key building block for deep convolutional neural networks.
Consider a convolution layer with filters Θ ∈ R^{h×w×C_in×C_out} and an input image I ∈ R^{H×W×C_in}, where H and W are the height and width of the input, C_in and C_out are the number of feature channels for the input and output, and h and w are the height and width of the layer's filter. The output (with appropriate input padding) will be g = I ∗ Θ with g ∈ R^{H×W×C_out}.

In images, nearby pixels with similar color or texture can be grouped into primitive shapes that represent parts of objects or the background. A convolution layer linearly projects local image patches to features at corresponding locations on the output map using shared parameters. While a convolution with random filters can project local patches to arbitrary output features, the output of a random linear projection approximately preserves relative similarity between input patches, as proved in Appendix B. In other words, since any two locations within the same shape have similar local textures in the input image, they tend to be similar in the output feature map. Therefore, shapes that emerge in the output feature map are similar to shapes in the input image, provided that the filter size is sufficiently small compared to the size of a typical shape.

In other words, the size of a convolution filter determines the smallest shape it can preserve. For example, 1×1 random convolutions preserve shapes at the single-pixel level and thus work as a random color mapping; large filters perturb shapes smaller than the filter size that are considered local texture of a shape at this larger scale. See Fig. 1 for examples. More discussion and a formal proof are in Appendix A and B.

3.2 MULTI-SCALE IMAGE AUGMENTATION WITH A RANDOMIZED CONVOLUTION LAYER

Algorithm 1 Learning with Data Augmentation by Random Convolutions
1: Input: Model Φ, task loss L_task, training images {I_i}_{i=1}^N and their labels {y_i}_{i=1}^N, pool of filter sizes K = {1, ..., n}, fraction of original data p, whether to mix with original images, consistency loss weight λ
2: function RANDCONV(I, K, mix, p)
3:   Sample p0 ∼ U(0, 1)
4:   if p0 < p and mix is False then
5:     return I    ▷ When not in mix mode, use the original image with probability p
6:   else
7:     Sample scale k ∼ K
8:     Sample convolution weights Θ ∈ R^{k×k×3×3}, Θ ∼ N(0, 1/(3k²))
9:     I_rc = I ∗ Θ    ▷ Apply convolution on I
10:    if mix is True then
11:      Sample α ∼ U(0, 1)
12:      return α I + (1 − α) I_rc    ▷ Mix with original images
13:    else
14:      return I_rc
15: Learning Objective:
16: for i = 1 → N do
17:   for j = 1 → 3 do
18:     ŷ_i^j = Φ(RandConv(I_i))    ▷ Predict labels for three augmented variants of the same image
19:   L_cons = Σ_{j=1}^3 KL(ŷ_i^j || ȳ_i), where ȳ_i = Σ_{j=1}^3 ŷ_i^j / 3    ▷ Consistency loss
20:   L = L_task(ŷ_i^1, y_i) + λ L_cons    ▷ Learning with the task loss and the consistency loss

Sec. 3.1 discussed how outputs of randomized convolution layers approximately maintain shape information at a scale larger than their filter sizes. Here, we develop our RandConv data augmentation technique using a randomized convolution layer with C_out = C_in to generate shape-consistent images with randomized texture (see Alg. 1). Our goal is not to use RandConv to parameterize or represent texture as in previous filter-bank based texture models (Heeger & Bergen, 1995; Portilla & Simoncelli, 2000). Instead, we only use the three-channel outputs of RandConv as new images with the same shape and different “style” (loosely referred to as “texture”); a minimal code sketch of this augmentation follows below.
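The following is a minimal PyTorch rendering of the augmentation step in Alg. 1 (our sketch, not the authors' released code; see their repository for the official version). It covers both the plain and the mixing variant for a batch of three-channel images.

```python
import random
import torch
import torch.nn.functional as F

def rand_conv(img, k_pool=(1, 3, 5, 7), mix=False):
    """img: (B, 3, H, W), assumed roughly whitened. Returns an augmented batch."""
    k = random.choice(k_pool)                        # sample a filter size from the pool
    std = (3 * k * k) ** -0.5                        # sigma = 1/sqrt(C_in * h * w)
    weight = torch.randn(3, 3, k, k, device=img.device) * std
    out = F.conv2d(img, weight, padding=k // 2)      # shape-preserving "same" convolution
    if mix:                                          # RC_mix: blend with the original image
        a = torch.rand((), device=img.device)
        out = a * img + (1 - a) * out
    return out
```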
We also note that a convolution layer is different from a convolution operation in image filtering. Standard image filtering applies the same 2D filter on three color channels separately. In contrast, our convolution layer applies three different 3D filters, and each takes all color channels as input and generates one channel of the output. Our proposed RandConv variants are as follows:

RC_img: Augmenting Images with Random Texture. A simple approach is to use the randomized convolution layer outputs, I ∗ Θ, as new images, where Θ are the randomly sampled weights and I is a training image. If the original training data is in the domain D_0, a sampled weight Θ_k generates images with consistent global shape but random texture, forming the random domain D_k. Thus, by random weight sampling, we obtain an infinite number of random domains D_1, D_2, ..., D_∞. Input image intensities are assumed to follow a standard normal distribution N(0, 1) (which is often true in practice thanks to data whitening). As the outputs of RandConv should follow the same distribution, we sample the convolution weights from N(0, σ²) where σ = 1/√(C_in · h · w), which is commonly applied for network initialization (He et al., 2015). We include the original images for training at a ratio p as a hyperparameter.

RC_mix: Mixing Variant. As shown in Fig. 1, outputs from RC_img can vary significantly from the appearance of the original images. Although generalizing to domains with significantly different local texture distributions is useful, we may not want to sacrifice much performance on domains similar to the training domain. Inspired by the AugMix (Hendrycks et al., 2020b) strategy, we propose to blend the original image with the outputs of the RandConv layer via linear convex combinations α I + (1 − α)(I ∗ Θ), where α is the mixing weight uniformly sampled from [0, 1]. In RC_mix, the RandConv outputs provide shape-consistent perturbations of the original images. Varying α, we continuously interpolate between the training domain and the randomly sampled domains of RC_img.

Multi-scale Texture Corruption. As discussed in Sec. 3.1, image shape information at a scale smaller than a filter's size will be corrupted by RandConv. Therefore, we can use filters of varying sizes to preserve shapes at various scales. We choose to uniformly randomly sample a filter size k from a pool K = {1, 3, ..., n} before sampling convolution weights Θ ∈ R^{k×k×C_in×C_out} from a Gaussian distribution N(0, 1/(k²C_in)). Fig. 1 shows examples of multi-scale RandConv outputs.

Consistency Regularization. To learn representations invariant to texture changes, we use a loss encouraging consistent network predictions for the same RandConv-augmented image for different random filter samples. Approaches for transform-invariant domain randomization (Yue et al., 2019), data augmentation (Hendrycks et al., 2020b), and semi-supervised learning (Berthelot et al., 2019) use similar strategies. We use Kullback-Leibler (KL) divergence to measure consistency. However, enforcing prediction similarity of two augmented variants may be too strong. Instead, following (Hendrycks et al., 2020b), we use RandConv to obtain 3 augmentation samples of image I, G_j = RandConv_j(I) for j = 1, 2, 3, and obtain their predictions with a model Φ: y^j = Φ(G_j). We then compute the relaxed loss as Σ_{j=1}^3 KL(y^j || ȳ), where ȳ = Σ_{j=1}^3 y^j / 3 is the sample average; a compact sketch of this loss follows below.
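For reference, the relaxed consistency term can be written in a few lines; the sketch below (ours) assumes `model` returns class logits and reuses the hypothetical `rand_conv` helper from the sketch above.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, img):
    """KL of each augmented prediction to the mean over three RandConv views."""
    probs = [F.softmax(model(rand_conv(img)), dim=1) for _ in range(3)]
    mean_p = torch.stack(probs).mean(dim=0)
    # F.kl_div(log_q, p) computes KL(p || q); here q is the sample mean.
    return sum(F.kl_div(mean_p.log(), p, reduction='batchmean') for p in probs)
```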
4 EXPERIMENTS
Secs. 4.1 to 4.3 evaluate our methods on the following datasets: multiple digit recognition datasets, PACS, and ImageNet-Sketch. Sec. 4.4 uses PACS to explore the out-of-domain generalization of a pretrained representation in transfer learning, by checking if pretraining on ImageNet with our method improves the domain generalization performance in downstream tasks. All experiments are in the single-domain generalization setting where training and validation sets are drawn from one domain. Additional experiments with ResNet18 as the backbone are given in the Appendix.

4.1 DIGIT RECOGNITION
The five digit recognition datasets (MNIST (LeCun et al., 1998), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), SYNTH (Ganin & Lempitsky, 2014) and USPS (Denker et al., 1989)) have been widely used for domain adaptation and generalization research (Peng et al., 2019a;b; Qiao et al., 2020). Following the setups in (Volpi et al., 2018) and (Qiao et al., 2020), we train a simple CNN with 10,000 MNIST samples and evaluate the accuracy on the test sets of the other four datasets. We also test on MNIST-C (Mu & Gilmer, 2019), a robustness benchmark with 15 common corruptions of MNIST, and report the average accuracy over all corruptions.

Figure 2: Average accuracy and 5-run variance of the MNIST model on MNIST-M, SVHN, SYNTH and USPS. Studies for: (a) original data fraction p for RC_img; (b) multiscale design (1-n refers to using scales 1, 3, ..., n) for RC_img,p=0.5 (orange) and RC_mix (blue); (c) consistency loss weight λ for RC_img1-7,p=0.5 (orange) and RC_mix1-7 (blue).

Selecting Hyperparameters and Ablation Study. Fig. 2(a) shows the effect of the hyperparameter p on RC_img with filter size 1. We see that adding only 10% RandConv data (p = 0.9) immediately improves the average performance (DG-Avg) on MNIST-M, SVHN, SYNTH and USPS from 53.53 to 69.19, outperforming all other approaches (see Tab. 1) for every dataset. We choose p = 0.5, which obtains the best DG-Avg. Fig. 2(b) shows results for a multiscale ablation study. Increasing the pool of filter sizes up to 7 improves DG-Avg performance. Therefore we use multi-scale 1-7 to study the consistency loss weight λ, shown in Fig. 2(c). Adding the consistency loss improves both RandConv variants on DG-Avg: RC_mix1-7 favors λ = 10, while RC_img1-7,p=0.5 performs similarly for λ = 5 and λ = 10. We choose λ = 10 for all subsequent experiments.

Results. Tab. 1 compares the performance of RC_img1-7,p=0.5,λ=10 and RC_mix1-7,λ=10 with other state-of-the-art approaches. We show results of the adversarial-training-based methods GUD (Volpi et al., 2018), M-ADA (Qiao et al., 2020), and PAR (Wang et al., 2019a). The baseline model is trained only on the standard classification loss. To show RandConv is more than a trivial color/contrast adjustment method, we also compare to ColorJitter² data augmentation (which randomly changes image brightness, contrast, and saturation) and GreyScale (where images are transformed to grey-scale for training and testing). We also tested data augmentation with a fixed Laplacian of Gaussian filter (Band-Pass) of size 3 and σ = 1, and the data augmentation pipeline (Multi-Aug) that was used in a recently proposed large-scale study on domain generalization algorithms and datasets (Gulrajani & Lopez-Paz, 2020). RandConv and its mixing variant outperform the best competing method (M-ADA) by 17% on DG-Avg and achieve the best 91.62% accuracy on MNIST-C. While the difference between the two variants of RandConv is marginal, RC_mix1-7,λ=10 performs better on both DG-Avg and MNIST-C. When combined with Multi-Aug, RandConv achieves improved performance except on MNIST-C. Fig. 3 shows t-SNE image feature plots for unseen domains generated by the baseline approach and RC_mix1-7,λ=10.
The RandConv embeddings suggest better generalization to unseen domains.

Table 1: Average accuracy and 5-run standard deviation (in parentheses) of the MNIST10K model on MNIST-M, SVHN, SYNTH, USPS and their average (DG-Avg); and average accuracy over 15 types of corruptions in MNIST-C. Both RandConv variants significantly outperform all other methods.

| Method | MNIST | MNIST-M | SVHN | USPS | SYNTH | DG-Avg | MNIST-C |
| Baseline | 98.40 (0.84) | 58.87 (3.73) | 33.41 (5.28) | 79.27 (2.70) | 42.43 (5.46) | 53.50 (4.23) | 88.20 (2.10) |
| GreyScale | 98.82 (0.02) | 58.41 (0.99) | 36.06 (1.48) | 80.45 (1.00) | 45.00 (0.80) | 54.98 (0.86) | 89.15 (0.44) |
| ColorJitter | 98.72 (0.05) | 62.72 (0.66) | 39.61 (0.88) | 79.18 (0.60) | 46.40 (0.34) | 56.98 (0.39) | 89.48 (0.18) |
| BandPass | 98.65 (0.11) | 70.22 (2.73) | 48.34 (2.56) | 78.60 (0.82) | 57.17 (2.01) | 63.58 (1.89) | 87.89 (0.68) |
| MultiAug | 98.80 (0.05) | 62.32 (0.66) | 39.07 (0.68) | 79.31 (1.02) | 46.48 (0.80) | 56.79 (0.34) | 89.54 (0.11) |
| PAR (our imp.) | 98.79 (0.05) | 61.16 (0.21) | 36.08 (1.27) | 79.95 (1.18) | 45.48 (0.35) | 55.67 (0.33) | 89.34 (0.45) |
| GUD | - | 60.41 | 35.51 | 77.26 | 45.32 | 54.62 | - |
| M-ADA | - | 67.94 | 42.55 | 78.53 | 48.95 | 59.49 | - |
| RC_img1-7,p=0.5,λ=5 | 98.86 (0.05) | 87.67 (0.37) | 54.95 (1.90) | 82.08 (1.46) | 63.37 (1.58) | 72.02 (1.15) | 90.94 (0.51) |
| RC_mix1-7,λ=10 | 98.85 (0.04) | 87.76 (0.83) | 57.52 (2.09) | 83.36 (0.96) | 62.88 (0.78) | 72.88 (0.58) | 91.62 (0.77) |
| RC_mix1-7,λ=10+MultiAug | 98.82 (0.06) | 87.89 (0.29) | 62.07 (0.62) | 84.39 (1.02) | 63.90 (0.63) | 74.56 (0.46) | 91.40 (0.93) |

4.2 PACS EXPERIMENTS
The PACS dataset (Li et al., 2018b) considers 7-class classification on 4 domains: photo, art painting, cartoon, and sketch, with very different texture styles. Most recent domain generalization work studies the multi-source domain setting on PACS and uses domain labels of the training data. Although we follow the convention to train on 3 domains and to test on the fourth, we simply pool the data from the 3 training domains, as in (Wang et al., 2019a), without using domain labels during training.

Baseline and State-of-the-Art. Following (Li et al., 2017), we use Deep-All as the baseline, which finetunes an ImageNet-pretrained AlexNet on 3 domains using only the classification loss and tests on the fourth domain. We test our RandConv variants RC_img1-7,p=0.5 and RC_mix1-7 with and without consistency loss, and ColorJitter/GreyScale/BandPass/MultiAug data augmentation as for the digit datasets. We also implemented PAR (Wang et al., 2019a) using our baseline model. RC_mix1-7 combined with MultiAug is also tested.

²See the PyTorch documentation for implementation details; all parameters are set to 0.5.

[Figure 3 panels: MNIST-M, SVHN, USPS, SYNTH]
Figure 3: t-SNE feature embedding visualization for digit datasets for models trained on MNIST without (top) and with our RC_mix1-7,λ=10 approach (bottom). Different colors denote different classes.

Table 2: Mean and 5-run standard deviation (in parentheses) results for domain generalization on PACS. Best results with our Deep-All baseline are in bold. The domain name in each column represents the target domain. The Base column indicates different baselines; results under different baselines are not directly comparable. MLDG and CIDDG used domain labels for training.

| Base | Method | Photo | Art | Cartoon | Sketch | Average |
| Ours | Deep-All | 86.77 (0.42) | 60.11 (1.33) | 64.12 (0.32) | 55.28 (4.71) | 66.57 (1.36) |
| Ours | GreyScale | 83.93 (1.47) | 61.60 (1.18) | 62.12 (0.61) | 60.07 (2.47) | 66.93 (0.83) |
| Ours | ColorJitter | 84.61 (0.83) | 59.01 (0.24) | 61.43 (0.68) | 62.44 (1.68) | 66.88 (0.33) |
| Ours | BandPass | 87.08 (0.57) | 59.46 (0.27) | 64.39 (0.51) | 55.39 (2.95) | 66.58 (0.73) |
| Ours | MultiAug | 85.21 (0.47) | 59.51 (0.38) | 62.88 (1.01) | 61.67 (0.76) | 67.32 (0.23) |
| Ours | PAR (our imp.) | 87.21 (0.42) | 60.17 (0.95) | 63.63 (0.88) | 55.83 (2.57) | 66.71 (0.58) |
| Ours | RC_img1-7,p=0.5 | 86.50 (0.72) | 61.10 (0.38) | 64.24 (0.62) | 68.50 (1.83) | 70.09 (0.43) |
| Ours | RC_mix1-7 | 86.60 (0.67) | 61.74 (0.90) | 64.05 (0.66) | 69.74 (0.66) | 70.53 (0.25) |
| Ours | RC_mix1-7+MultiAug | 86.23 (0.74) | 61.91 (0.76) | 62.69 (0.76) | 67.74 (1.21) | 69.64 (0.49) |
| Ours | RC_img1-7,p=0.5,λ=10 | 81.15 (0.76) | 59.56 (0.79) | 62.42 (0.59) | 71.74 (0.43) | 68.72 (0.58) |
| Ours | RC_mix1-7,λ=10 | 81.78 (1.11) | 61.14 (0.51) | 63.57 (0.29) | 71.97 (0.38) | 69.62 (0.24) |

Results below are not directly comparable due to different Deep-All implementations.

| Wang et al. (2019a) | Deep-All (our run) | 88.40 | 66.26 | 66.58 | 59.40 | 70.16 |
| Wang et al. (2019a) | PAR (our run) | 88.40 | 65.19 | 68.58 | 61.86 | 71.10 |
| Wang et al. (2019a) | PAR (reported) | 89.6 | 66.3 | 68.3 | 64.1 | 72.08 |
| Carlucci et al. (2019) | Deep-All | 89.98 | 66.68 | 69.41 | 60.02 | 71.52 |
| Carlucci et al. (2019) | Jigen | 89.00 | 67.63 | 71.71 | 65.18 | 73.38 |
| Li et al. (2018a) | Deep-All | 86.67 | 64.91 | 64.28 | 53.08 | 67.24 |
| Li et al. (2018a) | MLDG (uses domain labels) | 88.00 | 66.23 | 66.88 | 58.96 | 70.01 |
| Li et al. (2018c) | Deep-All | 77.98 | 57.55 | 67.04 | 58.52 | 65.27 |
| Li et al. (2018c) | CIDDG (uses domain labels) | 78.65 | 62.70 | 69.73 | 64.45 | 68.88 |

Further, we compare to the following state-of-the-art approaches: Jigen (Carlucci et al., 2019) using self-supervision, MLDG (Li et al., 2018a) using meta-learning, and the conditional invariant deep domain generalization method CIDDG (Li et al., 2018c). Note that previous methods used different Deep-All baselines, which makes the final accuracies not directly comparable, and MLDG and CIDDG use domain labels for training.

Results. Tab. 2 shows significant improvements on Sketch for both RandConv variants. Sketch is the most challenging domain, with no color and much less texture compared to the other 3 domains. The success on Sketch demonstrates that our methods can guide the DNN to learn global representations focusing on shapes that are robust to texture changes. Without using the consistency loss, RC_mix1-7 achieves the best overall result, improving over Deep-All by 4%, but adding MultiAug does not further improve the performance. Adding the consistency loss with λ = 10, RC_mix1-7 and RC_img1-7,p=0.5 perform better on Sketch but degrade performance on the other 3 domains, as do GreyScale and ColorJitter. This observation will be discussed in Sec. 4.4.

4.3 GENERALIZING AN IMAGENET MODEL TO IMAGENET-SKETCH

Table 3: Accuracy of ImageNet-trained AlexNet on ImageNet-Sketch (IN-S) data. Our methods outperform PAR by 5% and are on par with a Stylized-ImageNet (SIN) trained model. Note that PAR was built on top of a stronger baseline than our model, and both PAR and SIN fine-tuned the baseline model, which helped performance, while we train the RandConv model from scratch.

| | Baseline (Wang et al., 2019a) | PAR (Wang et al., 2019a) | Baseline | RC_img1-7,p=0.5,λ=10 | RC_mix1-7,λ=10 | SIN (Geirhos et al., 2019) |
| Top1 | 12.04 | 13.06 | 10.28 | 18.09 | 16.91 | 17.62 |
| Top5 | 25.60 | 26.27 | 21.60 | 35.40 | 33.99 | 36.22 |

ImageNet-Sketch (Wang et al., 2019a) is an out-of-domain test set for models trained on ImageNet. We trained AlexNet from scratch with RC_img1-7,p=0.5,λ=10 and RC_mix1-7,λ=10 and evaluate their performance on ImageNet-Sketch. We use the AlexNet model trained without RandConv as our baseline. Tab. 3 compares PAR and its baseline model and AlexNet trained with Stylized ImageNet (SIN) (Geirhos et al., 2019) on ImageNet-Sketch. Although PAR uses a stronger baseline, RandConv achieves significant improvements over our baseline and outperforms PAR by a large margin.
Our methods achieve more than a 7% accuracy improvement over the baseline and surpass PAR by 5%. SIN is an image stylization approach that can modify image texture in a hierarchical and realistic way. However, despite its complexity, it still performs only on par with RandConv. Note that image stylization techniques require additional data and heavy precomputation. Further, the images for the style source also need to be chosen. In contrast, RandConv is much easier to use: it can be applied to any dataset via a simple convolution layer. We also measure the shape-bias metric proposed by Geirhos et al. (2019) for RandConv-trained AlexNet: RC_img1-7,p=0.5,λ=10 and RC_mix1-7,λ=10 improve the baseline from 25.36% to 48.24% and 54.85% respectively.

4.4 REVISITING PACS WITH MORE ROBUST PRETRAINED REPRESENTATIONS
A common practice for many computer vision tasks (including the PACS benchmark) is transfer learning, i.e. finetuning a backbone model pretrained on ImageNet. Recently, how the accuracy on ImageNet (Kornblith et al., 2019) and the adversarial robustness (Salman et al., 2020) of the pretrained model affect transfer learning has been studied in the context of domain generalization. Instead, we study how out-of-domain generalizability transfers from pretraining to downstream tasks and shed light on how to better use pretrained models.

Impact of ImageNet Pretraining. A model trained on ImageNet may be biased towards textures (Geirhos et al., 2019). Finetuning ImageNet-pretrained models on PACS may inherit this texture bias, thereby benefitting generalization on the Photo domain (which is similar to ImageNet) but hurting performance on the Sketch domain. Therefore, as shown in Sec. 4.2, using RandConv to correct this texture bias improves results on Sketch but degrades them on the Photo domain. Since pretraining has such a strong impact on transfer performance to new tasks, we ask: “Can the generalizability of a pretrained model transfer to downstream tasks? I.e., does a pretrained model with better generalizability improve performance on unseen domains on new tasks?” To answer this, we revisit the PACS tasks based on ImageNet-pretrained weights where our two RandConv variants of Sec. 4.3 are used during ImageNet training. We study if this results in performance changes for the Deep-All baseline and for finetuning with RandConv.

Better Performance via a RandConv-pretrained Model. We start by testing the Deep-All baselines using the two RandConv-trained ImageNet models of Sec. 4.3 as initialization. Tab. 4 shows significant improvements on Sketch. Results are comparable to finetuning with RandConv on a normally pretrained model. Art is also consistently improved. Performance drops slightly on Photo, as expected, since we reduced the texture bias in the pretrained model, which is helpful for the Photo domain. A similar performance improvement is observed when using the SIN-trained AlexNet as initialization. Using RandConv for both ImageNet training and PACS finetuning, we achieve 76.11% accuracy on Sketch. As far as we know, this is the best performance using an AlexNet baseline. This approach even outperforms Jigen (Carlucci et al., 2019) (71.35%) with a stronger ResNet18 baseline model.

Table 4: Generalization results on PACS with RandConv- and SIN-pretrained AlexNet. The ImageNet column shows how the pretrained model is trained on ImageNet (Baseline represents training the ImageNet model using only the classification loss); the PACS column indicates the methods used for finetuning on PACS.
Best and second best accuracy for each target domain are highlighted in bold and underlined.
PACS \ ImageNet              Photo         Art           Cartoon       Sketch        Avg
Deep-All
  Baseline                   86.77 (0.42)  60.11 (1.33)  64.12 (0.32)  55.28 (4.71)  66.57 (1.36)
  RC img1-7, p=0.5, λ=10     84.48 (0.52)  62.61 (1.23)  66.13 (0.80)  69.24 (0.80)  70.61 (0.53)
  RC mix1-7, λ=10            85.59 (0.40)  63.30 (0.99)  63.83 (0.85)  68.29 (1.27)  70.25 (0.45)
  SIN                        85.33 (0.66)  65.85 (0.87)  65.39 (0.62)  65.75 (0.59)  70.58 (0.21)
RC img1-7, p=0.5, λ=10
  Baseline                   81.15 (0.76)  59.56 (0.79)  62.42 (0.59)  71.74 (0.43)  68.72 (0.58)
  RC img1-7, p=0.5, λ=10     84.36 (0.36)  63.73 (0.91)  68.07 (0.55)  75.41 (0.57)  72.89 (0.33)
  RC mix1-7, λ=10            84.63 (0.97)  63.41 (1.22)  66.36 (0.43)  74.59 (0.84)  72.25 (0.54)
RC mix1-7, λ=10
  Baseline                   81.78 (1.11)  61.14 (0.51)  63.57 (0.29)  71.97 (0.38)  69.62 (0.24)
  RC img1-7, p=0.5, λ=10     85.16 (1.03)  63.17 (0.38)  67.68 (0.60)  76.11 (0.43)  73.03 (0.46)
  RC mix1-7, λ=10            86.17 (0.56)  65.33 (1.05)  65.52 (1.13)  73.21 (1.03)  72.56 (0.50)
Cartoon and Art are also improved. The best average domain generalization accuracy is 73.03%, a more than 6% improvement over our initial Deep-All baseline.
This experiment confirms that generalizability may transfer: removing texture bias may not only make a pretrained model more generalizable, but may also help generalization on downstream tasks. For similar target and pretraining domains like Photo and ImageNet, where learning texture bias may actually be beneficial, performance may degrade slightly.
5 CONCLUSION AND DISCUSSION
Randomized convolution (RandConv) is a simple but powerful data augmentation technique for randomizing local image texture. RandConv helps focus visual representations on global shape information rather than local texture. We theoretically justified the approximate shape-preserving property of RandConv and developed RandConv techniques using multi-scale and mixing designs. We also make use of a consistency loss to encourage texture invariance. RandConv outperforms state-of-the-art approaches on the digit recognition benchmark, on the sketch domain of PACS, and on ImageNet-Sketch by a large margin. By finetuning a model pretrained with RandConv on PACS, we showed that the generalizability of a pretrained model may transfer to and benefit a new downstream task. This resulted in a new state-of-the-art performance on PACS in the Sketch domain.
RandConv can help computer vision tasks when a shape-biased model is helpful, e.g., for object detection. RandConv can also provide a shape-biased pretrained model to improve performance on downstream tasks when generalizing to unseen domains. However, local texture features can be useful for many computer vision tasks, especially for fixed-domain fine-grained visual recognition. In such cases, visual representations that are invariant to local texture may hurt in-domain performance. Therefore, important future work includes learning representations that disentangle shape and texture features and building models that use such representations in an explainable way.
Adversarial robustness of deep neural networks has received significant recent attention. Interestingly, Zhang & Zhu (2019) find that adversarially-trained models are more shape-biased; Shi et al. (2020) show that their method for increasing shape bias also helps adversarial robustness, especially when combined with adversarial training. Therefore, exploring how RandConv affects the adversarial robustness of models could be interesting future work.
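Before turning to biological connections, a concrete starting point for such a robustness study could be a single-step FGSM check. This is purely illustrative (the epsilon value is an assumption, `model` and `loader` are placeholders, and inputs are assumed to lie in [0, 1]; with normalized inputs, the perturbation and clamp would have to be applied before normalization):

```python
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, loader, epsilon=4 / 255):
    """Accuracy under single-step FGSM perturbations (Goodfellow et al., 2015)."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images.requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        grad, = torch.autograd.grad(loss, images)
        # Move each pixel by epsilon in the direction that increases the loss.
        adv = (images + epsilon * grad.sign()).clamp(0, 1).detach()
        correct += (model(adv).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return 100 * correct / total
```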
Moreover, recent biologically inspired models for improving adversarial robustness (Dapello et al., 2020) use Gabor filters with fixed random configurations, followed by a stochastic layer that adds Gaussian noise to the network input, which may explain the importance of randomness in RandConv. Exploring connections between RandConv and biological mechanisms in the human visual system would be interesting future work.
Acknowledgments. We thank Zhiding Yu for discussions on initial ideas and the experimental setup. We also thank Nathan Cahill for advice on proving the properties of random convolutions.
4BQoSUinQx_
Interesting use of Random convolutions for Data-Augmentation
6: Marginally above acceptance threshold
This paper proposes a simple way to increase the robustness of the learned representations in a network performing a series of object recognition tasks by adding a random convolution layer as a pre-processing stage, thus “filtering the image” and preserving the global shape but altering the local `texture’ of the newly transformed image. Here, the hope is that -- analogous to Geirhos et al. 2019, which transforms the image distribution into a new one with altered *global* textures that induce a shape bias and increase general robustness to o.o.d. distortions -- the authors go about doing something similar at the local level given the small size of the receptive field of the filter, thus preserving the shape and slightly altering “the texture”.
Pros:
* While the innovation is simple and efficient, this data-augmentation scheme works, and I can see how other future works may use this as a data-augmentation technique for object recognition. I am not sure, however, if no one else has explored the effects of random convolutions for robustness. It sounds too good to be true, but then again -- there is always beauty in simplicity, and it is possible that the authors have hit the nail on the head in finding a somewhat ‘contrived’ filtering process as a bonus rather than a limitation. Simple, yet counter-intuitive findings like these are relevant for ICLR.
* Authors provide lots of experiments that to some degree prove the success of their augmentation strategy (although see Cons).
Cons:
* Biological Inspiration: What is the biological mechanism linked to the success of using random convolutions? One could argue that this point is ‘irrelevant’ to the authors and the readers, but as there is a plethora of different data-augmentation techniques to choose from, why should computer vision and machine learning practitioners choose this one? (See Missing References for a suggestion)
* Insufficient/Incomplete Baseline: The model is inspired loosely by Geirhos et al. 2019; but how does the model compete with Geirhos et al.'s Stylized ImageNet? I would have wanted to see a baseline comparison between the authors' proposed model and other texture-based augmentation strategies. This would elucidate the global vs. local advantages of “texture”/style transfer on learned representations. I think this is something the authors could capitalize on more.
* The word `texture’ in the paper is a misnomer. What is really done here is 1st-order filtering via a convolution operation with a filter that does not happen to have a Gabor-like shape. “Texture” in other contexts, going back to vision science and even computer vision and image processing (style transfer included), is usually computed by a set of cross-correlations between *outputs* of a filtered image (analogous to the Gramian Matrix of Gatys et al. 2015), or the principled Portilla-Simoncelli texture model from 1999.
Missing references:
* Excessive Invariance increases adversarial vulnerability by Jacobsen et al. ICLR 2019. The augmentation procedure proposed by the authors shows robustness to common distortions, but how about adversarial robustness? Is this relevant? Was this tried? I’d love to hear more about the authors' thoughts on this to potentially raise my score.
* Emergent Properties of Foveated Perceptual Systems (link: https://openreview.net/forum?id=2_Z6MECjPEa): An interesting concurrent submission to this year's ICLR has shown that the biological mechanism of visual crowding (which resembles texture computation for humans in the visual periphery) is linked to some of the operations introduced in the paper by the authors. It would be great if the authors cited similar (and/or the aforementioned) works to provide a link to a biological mechanism that may support why their data-augmentation procedure works and/or should be used; otherwise it seems contrived and could be seen as “yet another data-augmentation procedure that increases robustness but we don’t know why”.
* Implementing a Primary Visual Cortex in the retina increases adversarial robustness by Dapello, Marques et al. 2020 (NeurIPS). This recently published paper in a way shows almost the opposite of what the authors are proposing here. Rather than using random convolutions, they actually mimic the gamut of spatial frequency tuning properties of Gabor filters in the first stages of convolution as done in human/monkey V1. The authors should discuss how their results fit with Dapello, Marques et al. 2020 and how they can reconcile their somewhat opposing views.
Final Assessment: I am on the fence about having this paper accepted at ICLR given the limitations expressed above, but I do like its simplicity, which should not take away its merit -- thus my slight lean towards acceptance. I am willing to raise my score, however, if the authors address some of the cons/limitations, and I am also curious to see the opinions of the other reviewers; it is possible that I may have missed a key reference regarding data augmentation that may weaken my assessment.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user
### Paper Title
Robust and Generalizable Visual Representation Learning via Random Convolutions
### Paper Abstract
While successful for various computer vision tasks, deep neural networks have shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local texture. Therefore, we explore using outputs of multi-scale random convolutions as new images or mixing them with the original images during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
### Paper Keywords
["domain generalization", "robustness", "representation learning", "data augmentation"]
### Paper Content
ABSTRACT
While successful for various computer vision tasks, deep neural networks have shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local texture. Therefore, we explore using outputs of multi-scale random convolutions as new images or mixing them with the original images during training. When applying a network trained with our approach to unseen domains, our method consistently improves the performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
¹ Code is available at https://github.com/wildphoton/RandConv
1 INTRODUCTION
Generalizability and robustness to out-of-distribution samples have been major pain points when applying deep neural networks (DNNs) in real-world applications (Volpi et al., 2018). Though DNNs are typically trained on datasets with millions of training samples, they still lack robustness to domain shift, small perturbations, and adversarial examples (Luo et al., 2019). Recent research has shown that neural networks tend to use superficial features rather than global shape information for prediction even when trained on large-scale datasets such as ImageNet (Geirhos et al., 2019). These superficial features can be local textures or even patterns imperceptible to humans but detectable to DNNs, as is the case for adversarial examples (Ilyas et al., 2019). In contrast, image semantics often depend more on object shapes than on local textures. For image data, local texture differences are one of the main sources of domain shift, e.g., between synthetic virtual images and real data (Sun & Saenko, 2014). Our goal is therefore to learn visual representations that are invariant to local texture and that generalize to unseen domains. While texture and color may be treated as different concepts, we follow the convention in Geirhos et al. (2019) and include color when talking about texture.
We address the challenging setting of robust visual representation learning from single-domain data. Limited work exists in this setting. Proposed methods include data augmentation (Volpi et al., 2018; Qiao et al., 2020; Geirhos et al., 2019), domain randomization (Tobin et al., 2017; Yue et al., 2019), self-supervised learning (Carlucci et al., 2019), and penalizing the predictive power of low-level network features (Wang et al., 2019a). Following the spirit of adding inductive bias towards global shape information over local textures, we propose using random convolutions to improve the robustness to domain shifts and small perturbations. While recently Lee et al. (2020) proposed a similar technique for improving the generalization of reinforcement learning agents in
In contrast, image semantics often depend moreon object shapes rather than local textures. For image data, local texture differences are one of themain sources of domain shift, e.g., between synthetic virtual images and real data (Sun & Saenko,2014). Our goal is therefore to learn visual representations that are invariant to local texture andthat generalize to unseen domains. While texture and color may be treated as different concepts, wefollow the convention in Geirhos et al. (2019) and include color when talking about texture.We address the challenging setting of robust visual representation learning from single domaindata. Limited work exists in this setting. Proposed methods include data augmentation (V olpiet al., 2018; Qiao et al., 2020; Geirhos et al., 2019), domain randomization (Tobin et al., 2017; Yueet al., 2019), self-supervised learning (Carlucci et al., 2019), and penalizing the predictive powerof low-level network features (Wang et al., 2019a). Following the spirit of adding inductive biastowards global shape information over local textures, we propose using random convolutions toimprove the robustness to domain shifts and small perturbations. While recently Lee et al. (2020)proposed a similar technique for improving the generalization of reinforcement learning agents in1Code is available at https://github.com/wildphoton/RandConv .1Published as a conference paper at ICLR 2021Input k= 1k= 3k= 5k= 7k= 11k= 15Input = 0:9= 0:7= 0:5= 0:3= 0:1= 0Figure 1: Top: Illustration that RandConv randomize local texture but preserve shapes in the image. Middle:First column is the input image of size 2242; following columns are convolutions results using random filtersof different sizes k.Bottom: Mixing results between an image and one of its random convolution results withdifferent mixing coefficients .unseen environments, we focus on visual representation learning and examine our approach on visualdomain generalization benchmarks. Our method also includes the multiscale design and a mixingvariant. In addition, considering that many computer vision tasks rely on training deep networksbased on ImageNet-pretrained weights (including some domain generalization benchmarks), weask“Can a more robust pretrained model make the finetuned model more robust on downstreamtasks?” Different from (Kornblith et al., 2019; Salman et al., 2020) who studied the transferability ofa pretrained ImageNet representation to new tasks while focusing on in-domain generalization, weexplore generalization performance on unseen domains for new tasks.We make the following contributions:We develop RandConv , a data augmentation technique using multi-scale random-convolutionsto generate images with random texture while maintaining global shapes. We explore using theRandConv output as training images or mixing it with the original images. We show that aconsistency loss can further enforce invariance under texture changes.We provide insights and justification on why RandConv augments images with different localtexture but the same semantics with the shape-preserving property of random convolutions.We validate RandConv and its mixing variant in extensive experiments on synthetic and real-world benchmarks as well as on the large-scale ImageNet dataset. 
Our methods outperform single-domain generalization approaches by a large margin on digit recognition datasets and for the challenging case of generalizing to the Sketch domain in PACS and to ImageNet-Sketch.
– We explore if the robustness/generalizability of a pretrained representation can transfer. We show that transferring a model pretrained with RandConv on ImageNet can further improve domain generalization performance on new downstream tasks on the PACS dataset.
2 RELATED WORK
Domain Generalization (DG) aims at learning representations that perform well when transferred to unseen domains. Modern techniques range between feature fusion (Shen et al., 2019), meta-learning (Li et al., 2018a; Balaji et al., 2018), and adversarial training (Shao et al., 2019; Li et al., 2018b). Note that most current DG work (Ghifary et al., 2016; Li et al., 2018a;b) requires a multi-source training setting to work well. However, in practice, it might be difficult and expensive to collect data from multiple sources, such as collecting data from multiple medical centers (Raghupathi & Raghupathi, 2014). Instead, we consider the stricter single-domain generalization DG setting, where we train the model on source data from a single domain and generalize it to new unseen domains (Carlucci et al., 2019; Wang et al., 2019b).
Domain Randomization (DR) was first introduced as a DG technique by Tobin et al. (2017) to handle the domain gap between simulated and real data. As the training data in (Tobin et al., 2017) is synthesized in a virtual environment, it is possible to generate diverse training samples by randomly selecting background images, colors, lighting, and textures of foreground objects. When a simulation environment is not accessible, image stylization can be used to generate new domains (Yue et al., 2019; Geirhos et al., 2019). However, this requires extra effort to collect data and to train an additional model; further, the number of randomized domains is limited by the number of predefined styles.
Data Augmentation has been widely used to improve the generalization of machine learning models (Simard et al., 2003). DR approaches can be considered a type of synthetic data augmentation. To improve performance on unseen domains, Volpi et al. (2018) generate adversarial examples to augment the training data; Qiao et al. (2020) extend this approach via meta-learning. As with other adversarial training algorithms, significant extra computation is required to obtain adversarial examples.
Learning Representations Biased towards Global Shape. Geirhos et al. (2019) demonstrated that convolutional neural networks (CNNs) tend to use superficial local features even when trained on large datasets. To counteract this effect, they proposed to train on stylized ImageNet, thereby forcing a network to rely on object shape instead of textures. Wang et al. improved out-of-domain performance by penalizing the correlation between a learned representation and superficial features such as the gray-level co-occurrence matrix (Wang et al., 2019b), or by penalizing the predictive power of local, low-level layer features in a neural network via an adversarial classifier (Wang et al., 2019a). Our approach shares the idea that learning representations invariant to local texture helps generalization to unseen domains. However, RandConv avoids searching over many hyper-parameters, collecting extra data, and training other networks.
It also scales to large-scale datasets since it adds minimal computation overhead.
Random Mapping in Machine Learning. Random projections have also been effective for dimensionality reduction based on the distance-preserving property of the Johnson-Lindenstrauss lemma (Johnson & Lindenstrauss, 1984). (Vinh et al., 2016) applied random projections on entire images as data augmentation to make neural networks robust to adversarial examples. Lee et al. (2020) recently used random convolutions to help reinforcement learning (RL) agents generalize to new environments. Neural networks with fixed random weights can encode meaningful representations (Saxe et al., 2011) and are therefore useful for neural architecture search (Gaier & Ha, 2019), generative models (He et al., 2016b), natural language processing (Wieting & Kiela, 2019), and RL (Osband et al., 2018; Burda et al., 2019). In contrast, RandConv uses non-fixed randomly-sampled weights to generate images with different local texture.
3 RANDCONV: RANDOMIZE LOCAL TEXTURE AT DIFFERENT SCALES
We propose using a convolution layer with non-fixed random weights as the first layer of a DNN during training. This strategy generates images with random local texture but consistent shapes, and is beneficial for robust visual representation learning. Sec. 3.1 justifies the shape-preserving property of a random convolution layer. Sec. 3.2 describes RandConv, our data augmentation algorithm using a multi-scale randomized convolution layer and input mixing.
3.1 A RANDOM CONVOLUTION LAYER PRESERVES GLOBAL SHAPES
Convolution is the key building block of deep convolutional neural networks. Consider a convolution layer with filters Θ ∈ R^(h×w×C_in×C_out) and an input image I ∈ R^(H×W×C_in), where H and W are the height and width of the input, C_in and C_out are the number of feature channels for the input and output, and h and w are the height and width of the layer's filter. The output (with appropriate input padding) will be g = I ∗ Θ with g ∈ R^(H×W×C_out).
In images, nearby pixels with similar color or texture can be grouped into primitive shapes that represent parts of objects or the background. A convolution layer linearly projects local image patches to features at corresponding locations on the output map using shared parameters. While a convolution with random filters can project local patches to arbitrary output features, the output of a random linear projection approximately preserves relative similarity between input patches, as proved in Appendix B. In other words, since any two locations within the same shape have similar local textures in the input image, they tend to be similar in the output feature map. Therefore, shapes that emerge in the output feature map are similar to shapes in the input image, provided that the filter size is sufficiently small compared to the size of a typical shape.
Put differently, the size of a convolution filter determines the smallest shape it can preserve. For example, 1x1 random convolutions preserve shapes at the single-pixel level and thus work as a random color mapping; large filters perturb shapes smaller than the filter size, which are considered local texture of a shape at this larger scale. See Fig. 1 for examples.
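This relative-similarity preservation can be checked numerically with a toy example. The following sketch is only an illustration of the general idea (a d-dimensional Gaussian random projection over flattened patches; the paper's formal statement for the actual layer is in its Appendix B):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two similar patches (same "shape region") and one unrelated patch, each a
# flattened k x k x 3 neighbourhood (here k = 3, so d = 27).
d = 3 * 3 * 3
a = rng.normal(size=d)
b = a + 0.1 * rng.normal(size=d)   # small texture perturbation of a
c = rng.normal(size=d)             # unrelated patch

# A random convolution applies the same random linear map to every patch.
W = rng.normal(scale=1 / np.sqrt(d), size=(d, d))

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("before projection:", cos(a, b), cos(a, c))
print("after projection: ", cos(W @ a, W @ b), cos(W @ a, W @ c))
# The a/b pair stays highly similar while a/c stays dissimilar, so relative
# similarity, and hence emergent shape structure, is approximately preserved.
```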
More discussion and a formal proof are given in Appendix A and B.
3.2 MULTI-SCALE IMAGE AUGMENTATION WITH A RANDOMIZED CONVOLUTION LAYER
Algorithm 1 Learning with Data Augmentation by Random Convolutions
1: Input: model Φ, task loss L_task, training images {I_i}, i = 1..N, and their labels {y_i}, pool of filter sizes K = {1, ..., n}, fraction of original data p, whether to mix with original images, consistency loss weight λ
2: function RANDCONV(I, K, mix, p)
3:   Sample p0 ∼ U(0, 1)
4:   if p0 < p and mix is False then
5:     return I                          ▷ when not in mix mode, use the original image with probability p
6:   else
7:     Sample scale k ∼ K
8:     Sample convolution weights Θ ∈ R^(k×k×3×3), Θ ∼ N(0, 1/(3k²))
9:     I_rc = I ∗ Θ                      ▷ apply the convolution to I
10:    if mix is True then
11:      Sample α ∼ U(0, 1)
12:      return α·I + (1 − α)·I_rc       ▷ mix with the original image
13:    else
14:      return I_rc
15: Learning Objective:
16: for i = 1 → N do
17:   for j = 1 → 3 do
18:     ŷ_i^j = Φ(RandConv(I_i))         ▷ predict labels for three augmented variants of the same image
19:   L_cons = Σ_{j=1..3} KL(ŷ_i^j ‖ ȳ_i) where ȳ_i = Σ_{j=1..3} ŷ_i^j / 3   ▷ consistency loss
20:   L = L_task(ŷ_i^1, y_i) + λ·L_cons  ▷ learning with the task loss and the consistency loss
Sec. 3.1 discussed how outputs of randomized convolution layers approximately maintain shape information at a scale larger than their filter sizes. Here, we develop our RandConv data augmentation technique using a randomized convolution layer with C_out = C_in to generate shape-consistent images with randomized texture (see Alg. 1). Our goal is not to use RandConv to parameterize or represent texture as in previous filter-bank-based texture models (Heeger & Bergen, 1995; Portilla & Simoncelli, 2000). Instead, we only use the three-channel outputs of RandConv as new images with the same shape and different "style" (loosely referred to as "texture"). We also note that a convolution layer is different from a convolution operation in image filtering. Standard image filtering applies the same 2D filter on the three color channels separately. In contrast, our convolution layer applies three different 3D filters, each of which takes all color channels as input and generates one channel of the output. Our proposed RandConv variants are as follows:
RC img: Augmenting Images with Random Texture. A simple approach is to use the randomized convolution layer outputs, I ∗ Θ, as new images, where Θ are the randomly sampled weights and I is a training image. If the original training data is in the domain D_0, a sampled weight Θ_k generates images with consistent global shape but random texture, forming the random domain D_Θk. Thus, by random weight sampling, we obtain an infinite number of random domains D_Θ1, D_Θ2, ..., D_Θ∞. Input image intensities are assumed to follow a standard normal distribution N(0, 1) (which is often true in practice thanks to data whitening). As the outputs of RandConv should follow the same distribution, we sample the convolution weights from N(0, σ²) where σ = 1/√(C_in·h·w), which is commonly applied for network initialization (He et al., 2015). We include the original images for training at a ratio p as a hyperparameter.
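As a compact reference, Alg. 1 can be sketched in PyTorch as follows. This is an illustrative reimplementation, not the authors' released code (which is at the repository linked in the paper); function names and the default scale pool are my own choices:

```python
import random
import torch
import torch.nn.functional as F

def rand_conv(img, scales=(1, 3, 5, 7), mix=False, p=0.5):
    """One RandConv sample following Alg. 1; img has shape (B, 3, H, W)."""
    if not mix and random.random() < p:
        return img                                # keep the original image
    k = random.choice(scales)                     # random filter size
    # 3 output channels, 3 input channels, k x k kernel, std = 1/sqrt(3*k*k)
    weight = torch.randn(3, 3, k, k, device=img.device) / (3 * k * k) ** 0.5
    out = F.conv2d(img, weight, padding=k // 2)   # "same" padding keeps H x W
    if mix:
        alpha = random.random()                   # mixing coefficient ~ U(0, 1)
        out = alpha * img + (1 - alpha) * out
    return out

def consistency_loss(model, img, n=3):
    """Sum of KL(p_j || mean) over n RandConv views (Alg. 1, lines 17-19)."""
    probs = [F.softmax(model(rand_conv(img, mix=True)), dim=1) for _ in range(n)]
    mean = torch.stack(probs).mean(dim=0)
    # F.kl_div(log_q, p) returns KL(p || q), so pass log of the mean first.
    return sum(F.kl_div(mean.log(), p_j, reduction="batchmean") for p_j in probs)

# Hypothetical training step: total loss = task loss + lambda * consistency loss.
# loss = F.cross_entropy(model(rand_conv(x, mix=True)), y) + lam * consistency_loss(model, x)
```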
RC mix: Mixing Variant. As shown in Fig. 1, outputs from RC img can vary significantly from the appearance of the original images. Although generalizing to domains with significantly different local texture distributions is useful, we may not want to sacrifice much performance on domains similar to the training domain. Inspired by the AugMix (Hendrycks et al., 2020b) strategy, we propose to blend the original image with the outputs of the RandConv layer via linear convex combinations α·I + (1 − α)·(I ∗ Θ), where α is the mixing weight uniformly sampled from [0, 1]. In RC mix, the RandConv outputs provide shape-consistent perturbations of the original images. Varying α, we continuously interpolate between the training domain and the randomly sampled domains of RC img.
Multi-scale Texture Corruption. As discussed in Sec. 3.1, image shape information at a scale smaller than a filter's size will be corrupted by RandConv. Therefore, we can use filters of varying sizes to preserve shapes at various scales. We choose to uniformly randomly sample a filter size k from a pool K = {1, 3, ..., n} before sampling convolution weights Θ ∈ R^(k×k×C_in×C_out) from a Gaussian distribution N(0, 1/(k²·C_in)). Fig. 1 shows examples of multi-scale RandConv outputs.
Consistency Regularization. To learn representations invariant to texture changes, we use a loss encouraging consistent network predictions for the same RandConv-augmented image under different random filter samples. Approaches for transform-invariant domain randomization (Yue et al., 2019), data augmentation (Hendrycks et al., 2020b), and semi-supervised learning (Berthelot et al., 2019) use similar strategies. We use Kullback-Leibler (KL) divergence to measure consistency. However, enforcing prediction similarity of two augmented variants may be too strong. Instead, following (Hendrycks et al., 2020b), we use RandConv to obtain 3 augmentation samples of image I: G_j = RandConv_j(I) for j = 1, 2, 3, and obtain their predictions with a model Φ: ŷ_j = Φ(G_j). We then compute the relaxed loss as Σ_{j=1..3} KL(ŷ_j ‖ ȳ), where ȳ = Σ_{j=1..3} ŷ_j / 3 is the sample average.
4 EXPERIMENTS
Secs. 4.1 to 4.3 evaluate our methods on the following datasets: multiple digit recognition datasets, PACS, and ImageNet-Sketch. Sec. 4.4 uses PACS to explore the out-of-domain generalization of a pretrained representation in transfer learning by checking whether pretraining on ImageNet with our method improves the domain generalization performance in downstream tasks. All experiments are in the single-domain generalization setting where training and validation sets are drawn from one domain. Additional experiments with ResNet18 as the backbone are given in the Appendix.
4.1 DIGIT RECOGNITION
The five digit recognition datasets (MNIST (LeCun et al., 1998), MNIST-M (Ganin et al., 2016), SVHN (Netzer et al., 2011), SYNTH (Ganin & Lempitsky, 2014) and USPS (Denker et al., 1989)) have been widely used for domain adaptation and generalization research (Peng et al., 2019a;b; Qiao et al., 2020). Following the setups in (Volpi et al., 2018) and (Qiao et al., 2020), we train a simple CNN with 10,000 MNIST samples and evaluate the accuracy on the test sets of the other four datasets. We also test on MNIST-C (Mu & Gilmer, 2019), a robustness benchmark with 15 common corruptions of MNIST, and report the average accuracy over all corruptions.
Figure 2: Average accuracy and 5-run variance of the MNIST model on MNIST-M, SVHN, SYNTH and USPS. Studies for: (a) original data fraction p for RC img; (b) multi-scale design (1-n refers to using scales 1, 3, ..., n) for RC img, p=0.5 (orange) and RC mix (blue); (c) consistency loss weight λ for RC img1-7, p=0.5 (orange) and RC mix1-7 (blue).
Selecting Hyperparameters and Ablation Study. Fig. 2(a) shows the effect of the hyperparameter p on RC img with filter size 1.
We see that adding only 10% RandConv data (p = 0.9) immediately improves the average performance (DG-Avg) on MNIST-M, SVHN, SYNTH and USPS from 53.53 to 69.19, outperforming all other approaches (see Tab. 1) on every dataset. We choose p = 0.5, which obtains the best DG-Avg. Fig. 2(b) shows results for a multi-scale ablation study. Increasing the pool of filter sizes up to 7 improves DG-Avg performance. Therefore, we use the multi-scale pool 1-7 to study the consistency loss weight λ, shown in Fig. 2(c). Adding the consistency loss improves both RandConv variants on DG-Avg: RC mix1-7 favors λ = 10, while RC img1-7, p=0.5 performs similarly for λ = 5 and λ = 10. We choose λ = 10 for all subsequent experiments.
Results. Tab. 1 compares the performance of RC img1-7, p=0.5, λ=10 and RC mix1-7, λ=10 with other state-of-the-art approaches. We show results of the adversarial-training-based methods GUD (Volpi et al., 2018), M-ADA (Qiao et al., 2020), and PAR (Wang et al., 2019a). The baseline model is trained only on the standard classification loss. To show that RandConv is more than a trivial color/contrast adjustment method, we also compare to ColorJitter² data augmentation (which randomly changes image brightness, contrast, and saturation) and GreyScale (where images are transformed to grey-scale for training and testing). We also tested data augmentation with a fixed Laplacian-of-Gaussian filter (Band-Pass) of size 3 and σ = 1, and the data augmentation pipeline (Multi-Aug) that was used in a recently proposed large-scale study on domain generalization algorithms and datasets (Gulrajani & Lopez-Paz, 2020). RandConv and its mixing variant outperform the best competing method (M-ADA) by 17% on DG-Avg and achieve the best 91.62% accuracy on MNIST-C. While the difference between the two variants of RandConv is marginal, RC mix1-7, λ=10 performs better on both DG-Avg and MNIST-C. When combined with Multi-Aug, RandConv achieves improved performance except on MNIST-C. Fig. 3 shows t-SNE image feature plots for unseen domains generated by the baseline approach and RC mix1-7, λ=10. The RandConv embeddings suggest better generalization to unseen domains.
Table 1: Average accuracy and 5-run standard deviation (in parentheses) of the MNIST10K model on MNIST-M, SVHN, SYNTH, USPS and their average (DG-Avg), and average accuracy over the 15 types of corruptions in MNIST-C.
Both RandConv variants significantly outperform all other methods.
                           MNIST         MNIST-M       SVHN          USPS          SYNTH         DG-Avg        MNIST-C
Baseline                   98.40 (0.84)  58.87 (3.73)  33.41 (5.28)  79.27 (2.70)  42.43 (5.46)  53.50 (4.23)  88.20 (2.10)
GreyScale                  98.82 (0.02)  58.41 (0.99)  36.06 (1.48)  80.45 (1.00)  45.00 (0.80)  54.98 (0.86)  89.15 (0.44)
ColorJitter                98.72 (0.05)  62.72 (0.66)  39.61 (0.88)  79.18 (0.60)  46.40 (0.34)  56.98 (0.39)  89.48 (0.18)
BandPass                   98.65 (0.11)  70.22 (2.73)  48.34 (2.56)  78.60 (0.82)  57.17 (2.01)  63.58 (1.89)  87.89 (0.68)
MultiAug                   98.80 (0.05)  62.32 (0.66)  39.07 (0.68)  79.31 (1.02)  46.48 (0.80)  56.79 (0.34)  89.54 (0.11)
PAR (our imp.)             98.79 (0.05)  61.16 (0.21)  36.08 (1.27)  79.95 (1.18)  45.48 (0.35)  55.67 (0.33)  89.34 (0.45)
GUD                        -             60.41         35.51         77.26         45.32         54.62         -
M-ADA                      -             67.94         42.55         78.53         48.95         59.49         -
RC img1-7, p=0.5, λ=5      98.86 (0.05)  87.67 (0.37)  54.95 (1.90)  82.08 (1.46)  63.37 (1.58)  72.02 (1.15)  90.94 (0.51)
RC mix1-7, λ=10            98.85 (0.04)  87.76 (0.83)  57.52 (2.09)  83.36 (0.96)  62.88 (0.78)  72.88 (0.58)  91.62 (0.77)
RC mix1-7, λ=10 + MultiAug 98.82 (0.06)  87.89 (0.29)  62.07 (0.62)  84.39 (1.02)  63.90 (0.63)  74.56 (0.46)  91.40 (0.93)
4.2 PACS EXPERIMENTS
The PACS dataset (Li et al., 2018b) considers 7-class classification on 4 domains: photo, art painting, cartoon, and sketch, with very different texture styles. Most recent domain generalization work studies the multi-source domain setting on PACS and uses domain labels of the training data. Although we follow the convention to train on 3 domains and to test on the fourth, we simply pool the data from the 3 training domains as in (Wang et al., 2019a), without using domain labels during training.
Baseline and State-of-the-Art. Following (Li et al., 2017), we use Deep-All as the baseline, which finetunes an ImageNet-pretrained AlexNet on 3 domains using only the classification loss and tests on the fourth domain. We test our RandConv variants RC img1-7, p=0.5 and RC mix1-7 with and without the consistency loss, and ColorJitter/GreyScale/BandPass/MultiAug data augmentation as in the digit datasets. We also implemented PAR (Wang et al., 2019a) using our baseline model. RC mix1-7 combined with MultiAug is also tested.
² See PyTorch documentation for implementation details; all parameters are set to 0.5.
Figure 3: t-SNE feature embedding visualization for the digit datasets (columns: MNIST-M, SVHN, USPS, SYNTH) for models trained on MNIST without (top) and with (bottom) our RC mix1-7, λ=10 approach. Different colors denote different classes.
Table 2: Mean and 5-run standard deviation (in parentheses) results for domain generalization on PACS. Best results with our Deep-All baseline are in bold. The domain name in each column represents the target domain. The Base column indicates different baselines, and results under different baselines are not directly comparable. MLDG and CIDDG used domain labels for training.
Base   Method        Photo         Art           Cartoon       Sketch        Average
Ours   Deep-All      86.77 (0.42)  60.11 (1.33)  64.12 (0.32)  55.28 (4.71)  66.57 (1.36)
       GreyScale     83.93 (1.47)  61.60 (1.18)  62.12 (0.61)  60.07 (2.47)  66.93 (0.83)
       ColorJitter   84.61 (0.83)  59.01 (0.24)  61.43 (0.68)  62.44 (1.68)  66.88 (0.33)
       BandPass      87.08 (0.57)  59.46 (0.27)  64.39 (0.51)  55.39 (2.95)  66.58 (0.73)
       MultiAug      85.21 (0.47)  59.51 (0.38)  62.88 (1.01)  61.67 (0.76)  67.32 (0.23)
       PAR (our imp.)
87.21 (0.42) 60.17 (0.95) 63.63 (0.88) 55.83 (2.57) 66.71 (0.58)
RC img1-7, p=0.5             86.50 (0.72) 61.10 (0.38) 64.24 (0.62) 68.50 (1.83) 70.09 (0.43)
RC mix1-7                    86.60 (0.67) 61.74 (0.90) 64.05 (0.66) 69.74 (0.66) 70.53 (0.25)
RC mix1-7 + MultiAug         86.23 (0.74) 61.91 (0.76) 62.69 (0.76) 67.74 (1.21) 69.64 (0.49)
RC img1-7, p=0.5, λ=10       81.15 (0.76) 59.56 (0.79) 62.42 (0.59) 71.74 (0.43) 68.72 (0.58)
RC mix1-7, λ=10              81.78 (1.11) 61.14 (0.51) 63.57 (0.29) 71.97 (0.38) 69.62 (0.24)
Results below are not directly comparable due to different Deep-All implementations.
Wang et al. (2019a)     Deep-All (our run)           88.40 66.26 66.58 59.40 70.16
                        PAR (our run)                88.40 65.19 68.58 61.86 71.10
                        PAR (reported)               89.6  66.3  68.3  64.1  72.08
Carlucci et al. (2019)  Deep-All                     89.98 66.68 69.41 60.02 71.52
                        Jigen                        89.00 67.63 71.71 65.18 73.38
Li et al. (2018a)       Deep-All                     86.67 64.91 64.28 53.08 67.24
                        MLDG (uses domain labels)    88.00 66.23 66.88 58.96 70.01
Li et al. (2018c)       Deep-All                     77.98 57.55 67.04 58.52 65.27
                        CIDDG (uses domain labels)   78.65 62.70 69.73 64.45 68.88
Further, we compare to the following state-of-the-art approaches: Jigen (Carlucci et al., 2019) using self-supervision, MLDG (Li et al., 2018a) using meta-learning, and the conditional invariant deep domain generalization method CIDDG (Li et al., 2018c). Note that previous methods used different Deep-All baselines, which makes the final accuracies not directly comparable; moreover, MLDG and CIDDG use domain labels for training.
Results. Tab. 2 shows significant improvements on Sketch for both RandConv variants. Sketch is the most challenging domain, with no color and much less texture compared to the other 3 domains. The success on Sketch demonstrates that our methods can guide the DNN to learn global representations focusing on shapes that are robust to texture changes. Without the consistency loss, RC mix1-7 achieves the best overall result, improving over Deep-All by 4%, but adding MultiAug does not further improve the performance. Adding the consistency loss with λ = 10, RC mix1-7 and RC img1-7, p=0.5 perform better on Sketch but degrade performance on the other 3 domains, as do GreyScale and ColorJitter. This observation is discussed in Sec. 4.4.
4.3 GENERALIZING AN IMAGENET MODEL TO IMAGENET-SKETCH
Table 3: Accuracy of ImageNet-trained AlexNet on ImageNet-Sketch (IN-S) data. Our methods outperform PAR by 5% and are on par with a Stylized-ImageNet (SIN) trained model. Note that PAR was built on top of a stronger baseline than our model, and both PAR and SIN fine-tuned the baseline model, which helped their performance, while we train the RandConv model from scratch.
      Baseline (Wang et al., 2019a)  PAR (Wang et al., 2019a)  Baseline  RC img1-7, p=0.5, λ=10  RC mix1-7, λ=10  SIN (Geirhos et al., 2019)
Top1  12.04                          13.06                     10.28     18.09                   16.91            17.62
Top5  25.60                          26.27                     21.60     35.40                   33.99            36.22
ImageNet-Sketch (Wang et al., 2019a) is an out-of-domain test set for models trained on ImageNet. We trained AlexNet from scratch with RC img1-7, p=0.5, λ=10 and RC mix1-7, λ=10 and evaluate their performance on ImageNet-Sketch. We use the AlexNet model trained without RandConv as our baseline. Tab. 3 compares PAR and its baseline model and AlexNet trained with Stylized ImageNet (SIN) (Geirhos et al., 2019) on ImageNet-Sketch. Although PAR uses a stronger baseline, RandConv achieves significant improvements over our baseline and outperforms PAR by a large margin. Our methods achieve more than a 7% accuracy improvement over the baseline and surpass PAR by 5%.
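The shape-bias metric reported just below (Geirhos et al., 2019) is the fraction of shape-consistent decisions among all shape-or-texture decisions on cue-conflict images. As a rough, self-contained sketch (the input format is my own assumption, not the authors' evaluation code):

```python
def shape_bias(predictions):
    """Shape bias following Geirhos et al. (2019).

    `predictions` is an iterable of (predicted_class, shape_class, texture_class)
    triples for cue-conflict images; decisions matching neither class are ignored.
    """
    shape = sum(p == s for p, s, t in predictions)
    texture = sum(p == t for p, s, t in predictions)
    return shape / (shape + texture)  # e.g., ~0.25 for a texture-biased baseline
```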
SIN is an image stylization approach that can modify image texture in a hierarchical and realistic way. However, despite its complexity, it only performs on par with RandConv. Note that image stylization techniques require additional data and heavy precomputation. Further, the images for the style source also need to be chosen. In contrast, RandConv is much easier to use: it can be applied to any dataset via a simple convolution layer. We also measure the shape-bias metric proposed by Geirhos et al. (2019) for RandConv-trained AlexNet. RC img1-7, p=0.5, λ=10 and RC mix1-7, λ=10 improve the baseline from 25.36% to 48.24% and 54.85%, respectively.
4.4 REVISITING PACS WITH MORE ROBUST PRETRAINED REPRESENTATIONS
A common practice for many computer vision tasks (including the PACS benchmark) is transfer learning, i.e., finetuning a backbone model pretrained on ImageNet. Recently, how the accuracy on ImageNet (Kornblith et al., 2019) and the adversarial robustness (Salman et al., 2020) of the pretrained model affect transfer learning has been studied in the context of domain generalization. Instead, we study how out-of-domain generalizability transfers from pretraining to downstream tasks and shed light on how to better use pretrained models.
Impact of ImageNet Pretraining. A model trained on ImageNet may be biased towards textures (Geirhos et al., 2019). Finetuning ImageNet-pretrained models on PACS may inherit this texture bias, thereby benefiting generalization on the Photo domain (which is similar to ImageNet) but hurting performance on the Sketch domain. Therefore, as shown in Sec. 4.2, using RandConv to correct this texture bias improves results on Sketch but degrades them on the Photo domain. Since pretraining has such a strong impact on transfer performance to new tasks, we ask: "Can the generalizability of a pretrained model transfer to downstream tasks? I.e., does a pretrained model with better generalizability improve performance on unseen domains for new tasks?" To answer this, we revisit the PACS tasks based on ImageNet-pretrained weights where our two RandConv variants of Sec. 4.3 are used during ImageNet training. We study whether this results in performance changes for the Deep-All baseline and for finetuning with RandConv.
Better Performance via RandConv-pretrained Models. We start by testing the Deep-All baselines using the two RandConv-trained ImageNet models of Sec. 4.3 as initialization. Tab. 4 shows significant improvements on Sketch. Results are comparable to finetuning with RandConv on a normally pretrained model. Art is also consistently improved. Performance drops slightly on Photo as expected, since we reduced the texture bias in the pretrained model, which is helpful for the Photo domain. A similar performance improvement is observed when using the SIN-trained AlexNet as initialization. Using RandConv for both ImageNet training and PACS finetuning, we achieve 76.11% accuracy on Sketch. As far as we know, this is the best performance using an AlexNet baseline. It even outperforms Jigen (Carlucci et al., 2019) (71.35%), which uses a stronger ResNet18 baseline model.
Table 4: Generalization results on PACS with RandConv- and SIN-pretrained AlexNet. The ImageNet column shows how the pretrained model is trained on ImageNet (Baseline represents training the ImageNet model using only the classification loss); the PACS column indicates the methods used for finetuning on PACS.
Best and second best accuracy for each target domain are highlighted in bold and underlined.
PACS \ ImageNet              Photo         Art           Cartoon       Sketch        Avg
Deep-All
  Baseline                   86.77 (0.42)  60.11 (1.33)  64.12 (0.32)  55.28 (4.71)  66.57 (1.36)
  RC img1-7, p=0.5, λ=10     84.48 (0.52)  62.61 (1.23)  66.13 (0.80)  69.24 (0.80)  70.61 (0.53)
  RC mix1-7, λ=10            85.59 (0.40)  63.30 (0.99)  63.83 (0.85)  68.29 (1.27)  70.25 (0.45)
  SIN                        85.33 (0.66)  65.85 (0.87)  65.39 (0.62)  65.75 (0.59)  70.58 (0.21)
RC img1-7, p=0.5, λ=10
  Baseline                   81.15 (0.76)  59.56 (0.79)  62.42 (0.59)  71.74 (0.43)  68.72 (0.58)
  RC img1-7, p=0.5, λ=10     84.36 (0.36)  63.73 (0.91)  68.07 (0.55)  75.41 (0.57)  72.89 (0.33)
  RC mix1-7, λ=10            84.63 (0.97)  63.41 (1.22)  66.36 (0.43)  74.59 (0.84)  72.25 (0.54)
RC mix1-7, λ=10
  Baseline                   81.78 (1.11)  61.14 (0.51)  63.57 (0.29)  71.97 (0.38)  69.62 (0.24)
  RC img1-7, p=0.5, λ=10     85.16 (1.03)  63.17 (0.38)  67.68 (0.60)  76.11 (0.43)  73.03 (0.46)
  RC mix1-7, λ=10            86.17 (0.56)  65.33 (1.05)  65.52 (1.13)  73.21 (1.03)  72.56 (0.50)
Cartoon and Art are also improved. The best average domain generalization accuracy is 73.03%, a more than 6% improvement over our initial Deep-All baseline.
This experiment confirms that generalizability may transfer: removing texture bias may not only make a pretrained model more generalizable, but may also help generalization on downstream tasks. For similar target and pretraining domains like Photo and ImageNet, where learning texture bias may actually be beneficial, performance may degrade slightly.
5 CONCLUSION AND DISCUSSION
Randomized convolution (RandConv) is a simple but powerful data augmentation technique for randomizing local image texture. RandConv helps focus visual representations on global shape information rather than local texture. We theoretically justified the approximate shape-preserving property of RandConv and developed RandConv techniques using multi-scale and mixing designs. We also make use of a consistency loss to encourage texture invariance. RandConv outperforms state-of-the-art approaches on the digit recognition benchmark, on the sketch domain of PACS, and on ImageNet-Sketch by a large margin. By finetuning a model pretrained with RandConv on PACS, we showed that the generalizability of a pretrained model may transfer to and benefit a new downstream task. This resulted in a new state-of-the-art performance on PACS in the Sketch domain.
RandConv can help computer vision tasks when a shape-biased model is helpful, e.g., for object detection. RandConv can also provide a shape-biased pretrained model to improve performance on downstream tasks when generalizing to unseen domains. However, local texture features can be useful for many computer vision tasks, especially for fixed-domain fine-grained visual recognition. In such cases, visual representations that are invariant to local texture may hurt in-domain performance. Therefore, important future work includes learning representations that disentangle shape and texture features and building models that use such representations in an explainable way.
Adversarial robustness of deep neural networks has received significant recent attention. Interestingly, Zhang & Zhu (2019) find that adversarially-trained models are more shape-biased; Shi et al. (2020) show that their method for increasing shape bias also helps adversarial robustness, especially when combined with adversarial training. Therefore, exploring how RandConv affects the adversarial robustness of models could be interesting future work.
Moreover, recent biologically inspired models for improving adversarial robustness (Dapello et al., 2020) use Gabor filters with fixed random configurations, followed by a stochastic layer that adds Gaussian noise to the network input, which may explain the importance of randomness in RandConv. Exploring connections between RandConv and biological mechanisms in the human visual system would be interesting future work.
Acknowledgments. We thank Zhiding Yu for discussions on initial ideas and the experimental setup. We also thank Nathan Cahill for advice on proving the properties of random convolutions.<|im_end|> <|im_start|>assistant
### Review Title
Interesting use of Random convolutions for Data-Augmentation
### Review Text
This paper proposes a simple way to increase the robustness of the learned representations in a network performing a series of object recognition tasks by adding a random convolution layer as a pre-processing stage, thus “filtering the image” and preserving the global shape but altering the local `texture’ of the newly transformed image. Here, the hope is that -- analogous to Geirhos et al. 2019, which transforms the image distribution into a new one with altered *global* textures that induce a shape bias and increase general robustness to o.o.d. distortions -- the authors go about doing something similar at the local level given the small size of the receptive field of the filter, thus preserving the shape and slightly altering “the texture”.
Pros:
* While the innovation is simple and efficient, this data-augmentation scheme works, and I can see how other future works may use this as a data-augmentation technique for object recognition. I am not sure, however, if no one else has explored the effects of random convolutions for robustness. It sounds too good to be true, but then again -- there is always beauty in simplicity, and it is possible that the authors have hit the nail on the head in finding a somewhat ‘contrived’ filtering process as a bonus rather than a limitation. Simple, yet counter-intuitive findings like these are relevant for ICLR.
* Authors provide lots of experiments that to some degree prove the success of their augmentation strategy (although see Cons).
Cons:
* Biological Inspiration: What is the biological mechanism linked to the success of using random convolutions? One could argue that this point is ‘irrelevant’ to the authors and the readers, but as there is a plethora of different data-augmentation techniques to choose from, why should computer vision and machine learning practitioners choose this one? (See Missing References for a suggestion)
* Insufficient/Incomplete Baseline: The model is inspired loosely by Geirhos et al. 2019; but how does the model compete with Geirhos et al.'s Stylized ImageNet? I would have wanted to see a baseline comparison between the authors' proposed model and other texture-based augmentation strategies. This would elucidate the global vs. local advantages of “texture”/style transfer on learned representations. I think this is something the authors could capitalize on more.
* The word `texture’ in the paper is a misnomer. What is really done here is 1st-order filtering via a convolution operation with a filter that does not happen to have a Gabor-like shape.
“Texture” in other contexts, going back to vision science and even computer vision and image processing (style transfer included), is usually computed by a set of cross-correlations between *outputs* of a filtered image (analogous to the Gramian Matrix of Gatys et al. 2015), or the principled Portilla-Simoncelli texture model from 1999.
Missing references:
* Excessive Invariance increases adversarial vulnerability by Jacobsen et al. ICLR 2019. The augmentation procedure proposed by the authors shows robustness to common distortions, but how about adversarial robustness? Is this relevant? Was this tried? I’d love to hear more about the authors' thoughts on this to potentially raise my score.
* Emergent Properties of Foveated Perceptual Systems (link: https://openreview.net/forum?id=2_Z6MECjPEa): An interesting concurrent submission to this year's ICLR has shown that the biological mechanism of visual crowding (which resembles texture computation for humans in the visual periphery) is linked to some of the operations introduced in the paper by the authors. It would be great if the authors cited similar (and/or the aforementioned) works to provide a link to a biological mechanism that may support why their data-augmentation procedure works and/or should be used; otherwise it seems contrived and could be seen as “yet another data-augmentation procedure that increases robustness but we don’t know why”.
* Implementing a Primary Visual Cortex in the retina increases adversarial robustness by Dapello, Marques et al. 2020 (NeurIPS). This recently published paper in a way shows almost the opposite of what the authors are proposing here. Rather than using random convolutions, they actually mimic the gamut of spatial frequency tuning properties of Gabor filters in the first stages of convolution as done in human/monkey V1. The authors should discuss how their results fit with Dapello, Marques et al. 2020 and how they can reconcile their somewhat opposing views.
Final Assessment: I am on the fence about having this paper accepted at ICLR given the limitations expressed above, but I do like its simplicity, which should not take away its merit -- thus my slight lean towards acceptance. I am willing to raise my score, however, if the authors address some of the cons/limitations, and I am also curious to see the opinions of the other reviewers; it is possible that I may have missed a key reference regarding data augmentation that may weaken my assessment.
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|>
<|im_end|>
g7j2e-zlKFS
eswc-conferences.org/ESWC/2021/Conference/Research_Track
2021
Data Reliability and Trustworthiness through Digital Transmission Contracts
["Simon Mangel", "Lars Gleim", "Jan Pennekamp", "Klaus Wehrle", "Stefan Decker"]
As decision-making is increasingly data-driven, trustworthiness and reliability of the underlying data, e.g., maintained in knowledge graphs or on the Web, are essential requirements for their usability in the industry. However, neither traditional solutions, such as paper-based data curation processes, nor state-of-the-art approaches, such as distributed ledger technologies, adequately scale to the complex requirements and high throughput of continuously evolving industrial data. Motivated by a practical use case with high demands towards data trustworthiness and reliability, we identify the need for digitally-verifiable data immutability as a still insufficiently addressed dimension of data quality. Based on our discussion of shortcomings in related work, we thus propose ReShare, our novel concept of digital transmission contracts with bilateral signatures, to address this open issue for both RDF knowledge graphs and arbitrary data on the Web. Our quantitative evaluation of ReShare’s performance and scalability reveals only moderate computation and communication overhead, indicating significant potential for cost-reductions compared to today’s approaches. By cleverly integrating digital transmission contracts with existing Web-based information systems, ReShare provides a promising foundation for data sharing and reuse in Industry 4.0 and beyond, enabling digital accountability through easily-adoptable digitally-verifiable data immutability and non-repudiation.
["Digital transmission contracts", "Trust", "Data immutability", "Non-repudiation", "Accountability", "Data dynamics", "Linked Data", "Knowledge graphs"]
Data Reliability and Trustworthiness through Digital Transmission Contracts
Simon Mangel¹, Lars Gleim¹, Jan Pennekamp², Klaus Wehrle², and Stefan Decker¹,³
¹ Databases and Information Systems, RWTH Aachen University, Germany
² Communication and Distributed Systems, RWTH Aachen University, Germany
³ Fraunhofer FIT, Sankt Augustin, Germany
Abstract. As decision-making is increasingly data-driven, trustworthiness and reliability of the underlying data, e.g., maintained in knowledge graphs or on the Web, are essential requirements for their usability in the industry. However, neither traditional solutions, such as paper-based data curation processes, nor state-of-the-art approaches, such as distributed ledger technologies, adequately scale to the complex requirements and high throughput of continuously evolving industrial data. Motivated by a practical use case with high demands towards data trustworthiness and reliability, we identify the need for digitally-verifiable data immutability as a still insufficiently addressed dimension of data quality. Based on our discussion of shortcomings in related work, we thus propose ReShare, our novel concept of digital transmission contracts with bilateral signatures, to address this open issue for both RDF knowledge graphs and arbitrary data on the Web. Our quantitative evaluation of ReShare's performance and scalability reveals only moderate computation and communication overhead, indicating significant potential for cost-reductions compared to today's approaches. By cleverly integrating digital transmission contracts with existing Web-based information systems, ReShare provides a promising foundation for data sharing and reuse in Industry 4.0 and beyond, enabling digital accountability through easily-adoptable digitally-verifiable data immutability and non-repudiation.
Keywords: Digital transmission contracts · Trust · Data immutability · Non-repudiation · Accountability · Data dynamics · Linked Data · Knowledge graphs
1 Introduction
With current trends in the Internet of Things and Industry 4.0, decision-makers nowadays have access to a wide variety of data sources and platforms, thus enabling a data-driven decision-making process. Developments in Open Data underline the trend towards open data sharing paradigms independent of the domain in question. E.g., supply chains already demonstrate these trends in productive use [3, 16], where novel approaches enable multi-hop data sharing [30], effectively forming an Internet of Production (IoP) [29] where multiple stakeholders collaborate. Here, benefits include reductions in costs, increased profit margins, and general improvements in production quality [29].
To realize novel use cases in an IoP, data trustworthiness and reliability are crucial properties to enable sound data-driven decisions [25, 26, 35]. Neglecting these aspects can cause severe damages, such as miscalculations, economic losses, or even harm to humans [2, 10, 24, 28]. Apart from trust and reliability, interoperability is important for any data sharing [13]. In this matter, Linked Data (LD) [1], a paradigm for inter-business data sharing, is a promising candidate. Additionally, LD also facilitates provenance tracing [15] and thus can positively influence trust. Overall, assessing trust and reliability in the context of LD is extremely relevant.
A promising approach to establish objective trustworthiness is to enrich data with digital signatures [40], as automatic signature creation and verification promise scalable solutions.
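To make this concrete, the basic building block is an ordinary asymmetric signature over a canonicalized form of the data. The following is a minimal, illustrative Python sketch using the `cryptography` library with Ed25519 keys (key management and, for RDF, the graph canonicalization step are simplified away; this is not the paper's implementation):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair of the data source (in practice loaded from a managed key store).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# The resource must first be brought into a canonical byte representation;
# for RDF this would be a graph canonicalization step.
payload = "...canonicalized resource...".encode()
digest = hashlib.sha256(payload).digest()

signature = private_key.sign(digest)  # created once by the data source

try:
    # Any consumer can check integrity and authenticity of the payload.
    public_key.verify(signature, digest)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# A Digital Transmission Contract, as proposed below, additionally carries the
# receiver's signature over the same digest, so that both transmission peers
# would have to collude to forge or repudiate the exchanged data.
```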
Several approaches to signing Linked Data have been proposed in related work [7,20,36]; they serve as a foundation for any approach using signatures of Resource Description Framework (RDF) resources. However, the rising needs w.r.t. scalability in the face of increasing data dynamics are rarely considered. Hence, existing approaches towards signing LD nowadays have limited applicability. That is, while coarse signatures often force users to retrieve unnecessary data only to verify signatures, fine-granular signatures, which offer more utility by design, impact scalability through the communication and storage overhead caused by a large number of signatures, resulting in a trade-off between utility and scalability. Moreover, signatures independently generated by the data source cannot provide immutability, as the data source can simply forge signatures for modified data. The reliability guarantees of existing signature-based approaches are thus significantly weaker than those of Distributed Ledger-based approaches, where strong immutability is created by committing the state of data to an immutable ledger [16,34]. However, these systems suffer from limited scalability due to limited throughput and infrastructure overhead [38].

To address the required goals of trustworthiness and reliability through scalable immutability, we propose an on-demand signature scheme, where the sender and receiver actively engage in the transmission process and both sign a so-called Digital Transmission Contract (DTC). In contrast to common signature approaches, DTCs establish the immutability of data reliably, as both peers of a transmission would have to collude to forge a DTC. Together with the non-repudiation provided by the signatures, immutability further implies accountability, which is relevant in the face of liability conflicts [30]. Thus, DTCs enable transmission peers to exchange trustworthy and reliable data by creating immutability based on bilateral signatures, with improved scalability w.r.t. data dynamics through their on-demand methodology. Our mechanism is designed to be flexible towards arbitrary data formats, i.e., conventional, unlinked data as well as Linked Data resources, by employing canonicalization.

Contributions. Our main contributions in this paper are as follows.
– We identify the need for data immutability as an enabler of trustworthiness and reliability, unlocking new data sharing and collaboration use cases based on well-founded data-driven decision making.
– Addressing today's shortcomings, our novel design of digital transmission contracts allows users to verify and prove immutability, besides the integrity and authenticity of exchanged data, at a low overhead in terms of both computation and communication.
– We further provide the research community with a detailed assessment of the applicability and feasibility of our approach ReShare and thereby create the foundation for the novel paradigm of on-demand bilateral signatures for scalable immutability.

2 Design Goals

To address the lack of suitable approaches to reliable and trustworthy data sharing, we identify a set of concise design goals, which will guide us through the review of related work and the presentation and evaluation of our approach.

G1: Integrity, Authenticity & Non-repudiation. The ability to digitally verify the integrity and authenticity of LD resources is essential to establish objective trustworthiness.
Thereby, unauthorized modification is prevented, and the users' trust in data correctness is strengthened. Further, non-repudiation is needed for accountability (cf. G2), as the possibility to repudiate a given action hinders a party's accountability for said action. As all three requirements can be fulfilled using digital signatures, we group these three desired properties under a single goal.

G2: Immutability. To strengthen data reliability and to establish accountability, we postulate that any used system must be made immutable. Third parties should not be able to easily question this immutability. That is, the system should provide proofs of immutability which malicious entities cannot easily forge.

G3: Applicability. To be viable in realistic use cases, the solution should be directly applicable and provide both trustworthiness and reliability. Interoperability and usability further affect the applicability of a system, as they facilitate integration in use cases.

G4: Scalability. Given that increasing data dynamics, especially in the industrial domain, are a significant challenge for data consumers, any proposed signing and verification approach must scale to future needs, i.e., it should be able to react in a timely manner to the frequent creation and modification of data without significantly impairing the design.

G5: Performance. To complement G4, we emphasize that unreasonable overhead for any involved party must be strictly avoided. Otherwise, the proposed solution limits their ability to participate and leads to undesired constraints, restricted throughput, and consequently non-acceptance of the system in real-world settings.

G6: Payload Flexibility. As a scalable approach towards trustworthy and reliable data sharing (cf. G1, G2) unlocks important use cases in industrial environments where the LD paradigm is not yet well-established, we argue that a sole focus on LD significantly limits the applicability. Thus, we demand flexibility concerning the payload of the created signatures. For improved adaptability, any proposed system should support arbitrary data formats, with a specific focus on commonly employed formats on the Web.

Note that the entirety of our design goals exceeds the needs of applications that are exclusive to the Semantic Web. Indeed, any approach that fulfills G1-G5 can satisfy the requirements of data trustworthiness and reliability in the Semantic Web. However, we intend to propose a more flexible and all-encompassing industry-ready approach.

3 Background & Related Work

Now, we outline related work and fundamental concepts for the challenge of data trustworthiness and reliability. First, we relate the general problem of data quality to our goals of trustworthiness and reliability, before we summarize existing approaches to signing LD resources. With the issue of data mutability in mind, we briefly discuss distributed ledger technology, which promises data immutability. Throughout the section, we discuss the shortcomings of approaches based on our design goals described in Sec. 2.

Data Quality and Trust in the Semantic Web. Data quality is commonly conceived as its fitness for use w.r.t. a given application [40]. Thus, it constitutes a heterogeneous concept with dimensions that are partly of subjective or context-dependent nature.
Zaveri et al. [40] extensively identified and categorized sub-dimensions and metrics of data quality in the context of LD.

As one of the six categories of data quality dimensions, trust severely suffers from more dynamic associations of stakeholders, such as those prevalent in modern supply chains with increasing flexibility [3,27]. Zaveri et al. [40] investigate trust in detail and identify reputation, believability, verifiability, and objectivity as dimensions of trust. The metrics of reputation, believability, and objectivity are either only able to indicate trustworthiness, e.g., by checking for the existence of meta-information about the data source, or depend on a sophisticated trust model, which may not exist in real-world use cases.

Consequently, we believe that none of these three dimensions allows for objectively assessing the trustworthiness of data independent of its context when the fourth dimension – verifiability – is not given, as fraudulent modification and forgery are not prevented. Contrarily, verifiability can be objectively assessed by the use of digital signatures. As long as the signature is bound to the data source, e.g., by employing a Public-Key Infrastructure (PKI) [31], signatures grant authenticity, integrity, and non-repudiation of the data. We argue that data trustworthiness may be sufficiently asserted through signature verification if the data source is trusted.

Approaches towards LD Signatures. After discussing the concept of data quality in the Semantic Web, we now survey existing approaches that sign LD resources. Note that the discussed approaches fulfill G1, i.e., the ability to sign LD resources, by design.

To the best of our knowledge, Carroll et al. [7] were the first to propose a sophisticated signature mechanism specifically for usage with Linked Data. The authors especially focus on the canonicalization algorithm and argue that, even though graph canonicalization is GI-complete, graphs can in practice be canonicalized in O(n log(n)). In this regard, Carroll et al. [7] are relevant for any task which needs a canonical RDF representation. However, the authors focus on the signature mechanism itself and do not propose a complete system, as the use of a PKI and the distribution of signatures are not discussed.

Tummarello et al. [36] followed up on Carroll et al. [7] by proposing a more holistic system for LD signatures. The authors argue that graph-level signatures [7] are often too coarse for practical use cases, as users would always have to request the entire RDF graph to verify the signature. To address this issue, the authors proposed to sign the data at a much finer level, i.e., at the level of Minimum Self-contained Graphs (MSGs). Signatures are attached to an arbitrary triple of the signed MSG, thus internalizing them. By directly attaching signatures and certificate metadata to the data itself, the authors explicitly specify how to apply the system to a knowledge graph (G3). However, as also criticized by Kasten et al. [20], the certificate is referenced by a URI, which makes the signature unusable if a certificate can no longer be retrieved. Furthermore, a user has to compute at least a partial partitioning into MSGs to know which statements were signed in a given signature, as such information is not explicitly stated. Moreover, the approach has severe issues concerning scalability (G4) and performance (G5) caused by the fine granularity of signatures.
If no blank nodes are present, each statement is signed separately, resulting in a substantial overhead caused by the signatures, which is even exacerbated if data is modified frequently. Due to the focus on LD, the approach does not support signing other data formats, i.e., it does not provide payload flexibility (G6).

Kasten et al. [20] improved on Tummarello et al. [36] by proposing a framework that allows for signatures at different levels of granularity, i.e., reaching from MSG signatures up to signing multiple graphs at once. The authors discuss and formalize the entire signature process in a framework and only give exemplary solutions for the identified functions. Therefore, the work does not constitute a directly applicable solution but rather aims at building a foundation for applicable solutions through formalization. Due to its improved flexibility, the approach provides better scalability (G4) and reduced overhead (G5) compared to Tummarello et al. [36]. G6 is not met, as the approach specifically focuses on RDF data. Most importantly, all approaches listed above have a common crucial deficiency w.r.t. our design goals, i.e., they do not create immutability (G2). As the data source only signs the data, this entity can easily forge signatures for modified data at any time, thus violating immutability. This aspect is crucial, as we identified immutability as a core requirement for data reliability in Sec. 2.

To the best of our knowledge, none of the existing approaches to signing LD resources can sufficiently fulfill our design goals, especially w.r.t. immutability (G2).

Distributed Ledger-based Immutability. In contrast to signatures, Distributed Ledger (DL) technology is designed to provide strong immutability (G2) by committing the state of data to an irreversible ledger [41]. This property is highly desirable and thus valued in a wide variety of use cases, such as distributed supply chains [3,16]. Such use cases, with little pre-existing trust relations and opportunities to employ LD technology, motivate the combination of LD and DL technology to establish immutability. Consequently, use cases and barriers w.r.t. the combination of LD and DLs have been identified [6,12,33]. Furthermore, researchers proposed first concrete solutions employing DL technology to establish immutability in the context of LD [34].

However, the intersection of the two research fields is still in its infancy, and existing approaches rarely cover the need for immutability holistically. Furthermore, the strong immutability provided by Distributed Ledgers comes with practical disadvantages. Even with new consensus mechanisms such as the Swirlds hashgraph consensus algorithm [4] or the tangle [32], which try to mitigate the negative effects of common, costly consensus mechanisms such as proof-of-work, DL systems always introduce a substantial overhead (G5). This overhead is infeasible in many use cases, as with huge amounts of data, e.g., produced by IoT sensors, the relative cost of conducting the consensus mechanism exceeds the value of the data written to the ledger. Thus, we decided to refrain from involving DL technology in our system.
However, we think that the intersection of Distributed Ledgers and LD in scenarios where LD immutability is crucial and the imposed overhead is acceptable makes for a promising research area for future work.

In conclusion, we found that none of the existing approaches towards LD signatures are able to sufficiently fulfill our design goals, especially due to the missing immutability (G2). Research regarding combinations of DL and LD technology to provide strong immutability (G2) is still in its infancy, resulting in prohibitive scalability (G4) and performance (G5) overhead for many use cases, as detailed in our supplementary material [18]. Therefore, we identify the requirement for an approach that bridges the needs for immutability, scalability, and performance.

4 ReShare: Reliable Data Sharing through DTCs

To address the previously identified shortcomings of related work w.r.t. our design goals, we propose ReShare, a scalable and flexible on-demand resource signature system. ReShare employs the novel concept of Digital Transmission Contracts (DTCs). Whenever the state of data needs to be proven as immutable, the data sender and receiver partake in a contract generation handshake, which results in the creation of a DTC. A DTC is a record comprising the state of the subject data (as checksums), the identities of sender and receiver, as well as a timestamp of contract creation, crafted immutably by adding signatures of both peers. In the DTC generation mechanism, the receiver requests a DTC for a set of resources. The sender compiles said DTC by creating checksums for the resources, adding metadata, and signing the record. Finally, the receiver signs the retrieved DTC and sends its signature back to the sender.

The validity of a contract can be automatically verified at any time in a corresponding contract verification mechanism, where the identities of the parties, the signatures, and the resource checksums are verified. We expect that creating signatures for data exchanges offers improved scalability (G4) compared to existing signature mechanisms in use cases where the amount of produced and modified data outweighs the number of data requests. This expectation is strengthened by ReShare's ability to bundle multiple resources into one DTC, thus reducing the per-resource signature overhead (G5). Furthermore, we argue that ReShare provides immutability in addition to the existing benefits of digital signatures, i.e., integrity and authenticity, as a collusion of both parties is needed to forge a valid contract that violates the data immutability.
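To make this record structure concrete before presenting the mechanisms, the following TypeScript sketch outlines the shape of a DTC. The field names are illustrative placeholders, not the normative JSON-LD structure, which the ReShare ontology defines (cf. Sec. 4.3).

```typescript
// Illustrative shape of a Digital Transmission Contract (DTC).
// Field names are hypothetical; the normative representation is the
// JSON-LD structure defined by the ReShare ontology (cf. Sec. 4.3).
interface Identity {
  name: string;      // human-readable name of the peer
  x509Cert: string;  // base64-encoded X.509 certificate
}

interface Fact {
  rid: string;       // Resource ID (RID), ideally an IRI
  sha256Sum: string; // checksum of the canonicalized resource
}

interface DigitalTransmissionContract {
  contractIri: string;  // unique IRI chosen by the sender
  facts: Fact[];        // state of the covered resources
  sender: Identity;
  receiver: Identity;
  timestamp: string;    // ISO 8601, created by the sender
  senderSig?: string;   // RSASSA-PSS signatures over the contract,
  receiverSig?: string; // excluding any existing signature fields
}
```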
First, we proceed by motivating a use case in the domain of aerospace engineering. Subsequently, we present details about both the contract generation and verification mechanisms. Finally, we discuss realization aspects of ReShare, i.e., the representation and ontology of DTCs and the integration of ReShare with LD technology.

ReShare's Capabilities Illustrated using Aerospace Engineering. For our system, aerospace engineering constitutes an interesting use case, as reliability is highly desirable when designing and producing safety-critical products. This need is even legally justified, as federal US-American laws require manufacturers to securely store the type design, comprising drawings and specifications, information about dimensions, materials, and processes, for as long as the respective type certificate of the aircraft is valid (cf. 14 CFR §21.31, §21.41, §21.49 [37]). Thus, such data must be kept available reliably, at least as long as an aircraft of the given type is operational. As a result, manufacturers usually apply specialized archiving systems [22]. However, if we consider modern IoT-backed supply chains, massive amounts of data can hardly be processed by common archiving pipelines, which may still involve humans in paper-based signature mechanisms [39]. Therefore, aerospace engineering is a prime candidate to integrate ReShare.

To further look into this use case, we illustrate the benefits of employing an LD-based approach for tracking and tracing through a practical example involving the manufacturer Boing, which conducts the final assembly of an aircraft; the independent supplier ACom, which provides the radio unit for Boing's aircraft together with relevant production data; and the regulatory agency FÄA, which ensures legal compliance. We further consider the following scenarios to visualize the system's functionality:

Assembly. Boing assembles an aircraft using a radio unit supplied by ACom, while ACom further grants Boing access to all relevant production data. Enriched with metadata, such as provenance information, this data is stored in an LD platform. Boing and ACom also generate a DTC, which is bound to the state of said data. In this context, both Boing and ACom verify their certificates and signatures, thus mutually establishing authenticity. Due to the dataflow direction, we refer to ACom as the sender and to Boing as the receiver.

Proof of Conformance. After an aircraft was involved in an incident, Boing has to prove to FÄA that certain requirements were met during manufacturing. To this end, Boing presents the relevant data together with the respective DTCs. The contracts include all relevant context to verify the authenticity and state of the data. If an investigation of the incident concludes that, for example, the data regarding the radio is incorrect, Boing is not liable, as it acted to the best of its knowledge. Rather, ACom can be held accountable.

Tracking and Tracing. To deal with the aftermath of a defective radio unit, Boing traces all other aircraft that use the same type of radio using the data's semantic properties. While improved tracking & tracing efficiency is not a contribution of our system, we argue that ReShare provides the needed reliability through LD.

After outlining three distinct and common real-world scenarios, we now present ReShare's DTC generation and verification mechanisms.

4.1 Generation Mechanism

The contract generation mechanism constitutes a relatively simple 3-way handshake, which we visualize in Fig. 1. We use this opportunity to address the different types of identifiers used in the system. Resources are identified by unique Resource IDs (RIDs). While ReShare supports arbitrary RIDs, the use of Internationalized Resource Identifiers (IRIs) [11] improves interoperability with LD technology. For identification of the contract itself, ReShare mandates the use of unique IRIs [11] in order to integrate DTCs with common LD technology. The mechanism works as follows, where R denotes the receiver and S the sender:

R1: R chooses a set of RIDs and sends the RIDs and its identity, including a public key certificate, to S.
S1: S can now assemble the contract, associating checksums with the canonicalized resources identified by the RIDs. Then, S creates a timestamp and a unique contract IRI and adds them to the contract. Finally, S creates its signature with the JSON representation of the contract as input. Then, the signature is added to the contract. S then sends the assembled contract, including its identity with a public key certificate, its signature, the RIDs with associated checksums, the timestamp, and the contract IRI, back to R.
R2: S's signature (i.e., senderSig) is verified, together with S's certificate (included in the sender field). The timestamp recency is checked. R signs the contract similarly to S. The complete contract, including both peers' signatures, is finally sent to S.

Both signatures are created by omitting any existing signatures in the contract as an input to the signature creation. That is, both the sender and receiver sign the DTC, including the identities, resource checksums, and the timestamp.

[Fig. 1: Visualization of the contract generation handshake; S is the sender, R is the receiver. Here, fact is used as a synonym for a persistently identified resource.]
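To illustrate the sender-side step S1, consider the following hedged TypeScript sketch. It reuses the interfaces from the DTC sketch above; loadResource(), canonicalize(), senderKey, and senderIdentity are assumed helpers and state that our actual Node.js module may realize differently.

```typescript
import { createHash, sign, constants, randomUUID, KeyObject } from "node:crypto";

// Hypothetical sender-side handling of handshake message R1 (step S1):
// checksum the requested resources, assemble the contract, attach senderSig.
declare function loadResource(rid: string): Promise<Buffer>;
declare function canonicalize(resource: Buffer): Buffer; // e.g., canonical N-Quads for RDF
declare const senderKey: KeyObject;     // sender's private key
declare const senderIdentity: Identity; // cf. the DTC sketch above

async function handleR1(rids: string[], receiver: Identity): Promise<DigitalTransmissionContract> {
  const facts: Fact[] = [];
  for (const rid of rids) {
    const canonical = canonicalize(await loadResource(rid));
    facts.push({ rid, sha256Sum: createHash("sha256").update(canonical).digest("base64") });
  }
  const contract: DigitalTransmissionContract = {
    contractIri: `http://example.com/contracts/${randomUUID()}`, // illustrative IRI scheme
    facts,
    sender: senderIdentity,
    receiver,
    timestamp: new Date().toISOString(), // ISO 8601
  };
  // Both peers sign the JSON representation with signature fields omitted;
  // senderSig is still undefined here and thus absent from the payload.
  contract.senderSig = sign("sha256", Buffer.from(JSON.stringify(contract)), {
    key: senderKey,
    padding: constants.RSA_PKCS1_PSS_PADDING, // RSASSA-PSS
  }).toString("base64");
  return contract; // sent back to R, who verifies it and adds receiverSig (R2)
}
```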
The computation overhead on the sender's side for creating the contract, which mainly consists of generating checksums and creating the signature, can be reduced using pre-generated checksums in use cases where scalability needs are especially high, or where Denial of Service by R1 flooding is a valid threat model. In the latter case, the problem could otherwise also be mitigated by the use of rate limiting or access control.

The messages contain complete, incremental versions of the contract, which allows the sender to remain stateless. This design fits our server-client model nicely, where the sender, as a server, provides an interface for receivers to request contracts. The handshake is always executed on top of TCP, which guarantees reliable communication. Therefore, an error indicates a faulty or incompatible configuration. A mechanism to deal with errors is not part of ReShare but could easily be implemented in future work.

4.2 Verification Mechanism

Contract verification relies only on access to the contract and the covered data itself and consists of the following steps (in no particular order):
– Verify the public key certificates using the respective PKI
– Verify the signatures using the JSON-formatted contract and the public keys
– Verify the data checksums

Hence, only access to the contract and the data is needed. As ReShare currently only supports a CA-based PKI with X.509 certificates, the necessary PKI context comprises a set of pre-installed root CA certificates to verify the X.509 certificate chains of both parties. Contracts are explicit about all information needed to verify signatures and checksums, e.g., about the canonicalization and serialization of the data. To verify the signatures, all parties use the same input as during the DTC generation, i.e., the full DTC excluding any existing signatures. Through the simplicity of the verification, consisting only of the three steps described above, the system is applicable (G3) in use cases with needs for unambiguous verifiability of the generated DTCs, e.g., in legal conflicts.
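The following TypeScript sketch mirrors these three verification steps. It is a simplified illustration: among other shortcuts, it only checks direct issuance by a trusted root instead of building full certificate chains, omits revocation checking, and reuses the hypothetical helpers and types from the generation sketch.

```typescript
import { createHash, verify, constants, X509Certificate } from "node:crypto";

// Assumed helpers and trust context (as in the generation sketch above).
declare function loadResource(rid: string): Promise<Buffer>;
declare function canonicalize(resource: Buffer): Buffer;
declare const trustedRoots: X509Certificate[]; // pre-installed root CA certificates

async function verifyContract(dtc: DigitalTransmissionContract): Promise<boolean> {
  // 1. Verify the peer certificates against the trusted roots
  //    (direct issuance only; full chain building is omitted in this sketch).
  const senderCert = new X509Certificate(Buffer.from(dtc.sender.x509Cert, "base64"));
  const receiverCert = new X509Certificate(Buffer.from(dtc.receiver.x509Cert, "base64"));
  const trusted = (cert: X509Certificate) =>
    trustedRoots.some((root) => cert.checkIssued(root) && cert.verify(root.publicKey));
  if (!trusted(senderCert) || !trusted(receiverCert)) return false;

  // 2. Verify both signatures over the contract with its signature fields removed.
  const { senderSig, receiverSig, ...unsigned } = dtc;
  const payload = Buffer.from(JSON.stringify(unsigned));
  const validSig = (sig: string | undefined, cert: X509Certificate) =>
    sig !== undefined &&
    verify("sha256", payload,
      { key: cert.publicKey, padding: constants.RSA_PKCS1_PSS_PADDING },
      Buffer.from(sig, "base64"));
  if (!validSig(senderSig, senderCert) || !validSig(receiverSig, receiverCert)) return false;

  // 3. Recompute and compare the per-resource checksums.
  for (const fact of dtc.facts) {
    const digest = createHash("sha256")
      .update(canonicalize(await loadResource(fact.rid)))
      .digest("base64");
    if (digest !== fact.sha256Sum) return false;
  }
  return true;
}
```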
4.3 Realization

We implemented ReShare as a Proof-of-Concept Node.js module [18]. We discuss our decisions concerning the implementation and the used technologies in the following.

[Fig. 2: Structure of an exemplary ReShare contract, visualized as an RDF graph using Turtle notation. http://example.com/contracts/<contractID> symbolizes the contract IRI chosen by the sender (cf. Sec. 4.1).]

Contract Ontology and Representations. As Tummarello et al. [36] discuss, the ability to internalize signatures into the context of the signed data improves the overall usability, because data and signatures are directly associated. Therefore, contracts are represented as JSON-LD [21] by default. For this, we have defined the ReShare ontology [19], which defines the types and properties in a contract. This makes contracts themselves usable in LD platforms. A DTC contains the following:
– A root node identifying the contract itself, transparently and uniquely identified by the contract IRI chosen by the sender
– A set of resource checksums, associated with their resources by RIDs (cf. Sec. 4.1)
– The identities of sender and receiver, including X.509 certificates [8]
– The signatures of sender and receiver (RSASSA-PSS [23])
– A timestamp in the ISO 8601 format (interpretable as xsd:dateTimeStamp [9])

An example contract RDF graph with the default structure can be seen in Fig. 2. Note that the notion of Facts originates from the FactDAG data interoperability model [13] and its implementation FactStack [14], where a revisioning system is used to create persistence. As this paradigm is not mandatory in ReShare, we use fact as a synonym for any persistently identified resource in this paper.

Because we require payload flexibility (G6), DTCs should also be compatible with other common technology stacks outside of the Semantic Web. Therefore, in addition to the JSON-LD representation, the context can be omitted, resulting in a pure JSON representation of DTCs, making contracts usable as structured data.
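For illustration, an abbreviated DTC in its default JSON-LD form might look as follows, written here as a TypeScript object literal. The property names mirror Fig. 2 and the ReShare ontology, but the context mapping, the "rs:timestamp" property name, and all values are illustrative placeholders (the truncated certificate, checksum, and signature strings are taken from Fig. 2).

```typescript
// Abbreviated example DTC in its default JSON-LD representation,
// mirroring the structure shown in Fig. 2. All identifiers, certificates,
// checksums, and the "rs:timestamp" property name are illustrative.
const exampleDtc = {
  "@context": { rs: "http://example.com/reshare/ontology#" },
  "@id": "http://example.com/contracts/<contractID>",
  "rs:hasFact": {
    "@id": "http://example.com/contracts/<contractID>#fact-0",
    "@type": "rs:Fact",
    "rs:factOrigin": { "@id": "http://example.com/persons/John_Doe" },
    "rs:sha256Sum": "kLbb...5A==", // base64; truncated placeholder
  },
  "rs:sender": { "@type": "rs:Identity", "rs:x509Cert": "MIIE...wkk=" },
  "rs:receiver": { "@type": "rs:Identity", "rs:x509Cert": "MIIE...AA==" },
  "rs:senderSig": {
    "@type": "rs:Signature",
    "rs:sigType": "urn:oid:1.2.840.113549.1.1.10", // RSASSA-PSS
    "rs:sigData": "cWjd...2A==",
  },
  "rs:receiverSig": {
    "@type": "rs:Signature",
    "rs:sigType": "urn:oid:1.2.840.113549.1.1.10",
    "rs:sigData": "5fc6...f48e",
  },
  "rs:timestamp": "2020-12-16T12:00:00Z", // ISO 8601 (xsd:dateTimeStamp)
};
```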
Contract Generation. Given that the contracts are represented as JSON-LD or JSON by default, we decided to rely on a JSON-based protocol for contract generation, where the JSON contracts are wrapped into a minimalistic message structure. The JSON-LD context is automatically added when exporting the contract after generation. The most basic generation mode corresponds to an execution of the JSON protocol directly on top of TCP, with optional use of TLS. However, this approach requires opening a dedicated port for ReShare. Therefore, we also provide an HTTP(S) mode to integrate ReShare into existing Web servers. Then, the 3-way handshake is wrapped into two HTTP POST requests. Thus, we end up with four modes, which we denote by TCP (i.e., without TLS or HTTP), TLS (i.e., TCP+TLS without HTTP), HTTP, and HTTPS.

5 Evaluation

To assess the benefits, possible limitations, and applicability of ReShare, we first quantitatively evaluate the storage and communication overhead as well as the effect of latency on the generation mechanism, before qualitatively discussing the fulfillment of our design goals as defined in Sec. 2, whilst giving outlooks to promising use cases.

5.1 Quantitative Evaluation

To quantitatively evaluate the performance of the system, we simulated contract generation with varying (i) modes of operation (TCP/TLS/HTTP/HTTPS), (ii) numbers of facts per contract, and (iii) certificate chain lengths of both peers. We quantitatively evaluate (a) the total duration of the handshake, (b) the number of bytes transferred, as well as (c) the size of the generated contract in its default JSON representation. We employ a TCP proxy to investigate the impact of varying network latency on the protocol's performance and to measure the amount of data transferred. Overhead for TLS and HTTP is included in the results. We split the evaluation into two orthogonal parameter combinations to facilitate visualization and discussion, which we show in Table 1.

Table 1. Overview of the evaluated parameter combinations.
Set | Protocol Mode           | Facts/DTC                                   | Proxy delay [ms]                          | Cert. chain len. | Iterations
1   | TCP, TLS, HTTP, HTTPS   | 1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 | 0                                       | 1, 2, 3, 4, 5    | 20
2   | TCP, TLS, HTTP, HTTPS   | 10                                          | 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 | 1               | 50

Contract Size & Communication Overhead. In Fig. 3, we show how the number of facts in one contract influences contract bytes and communication overhead per fact, split by handshake mode. A per-fact plot brings better comparability to other approaches than a per-contract plot, as contracts are a concept that is specific to our approach. Thus, metrics are plotted on a per-fact (i.e., per-resource) basis.

[Fig. 3: Contract bytes and communication bytes per fact, by the number of facts in one contract; Dataset 1 from Table 1 was used. Communication bytes are split up by handshake mode, i.e., HTTP enabled/disabled and TLS enabled/disabled. The plot marks a theoretical limit of 127 contract bytes per fact. Compared to a per-contract plot, this representation provides better interpretability when comparing the system to other approaches, as contracts are a concept unknown in other work.]

The total contract size and communication overhead per handshake increase linearly with the number of facts, as fact data is of constant size, consisting of a RID and a checksum. Thus, if m models one of the per-contract metrics, this gives m(n) = s·n + c, where s is the slope of the curve, i.e., the number of bytes by which the metric increases if one fact is added to the contract, n is the number of facts per contract, and c is the constant overhead which is not influenced by the number of facts. Then, we can model the per-fact metric as m'(n) = m(n)/n = s + c/n. This model is plotted for each metric. The theoretical limit for contract bytes per fact naturally is given by s, the slope of the linear per-contract fit. It can be interpreted as the number of bytes caused by a fact itself, excluding the static overhead in contracts, which is independent of the number of facts. An analogous interpretation of the other metrics' slopes is possible. The figure also shows that the overhead of HTTP is negligible, whereas enabling TLS causes a constant communication overhead of approximately 200 B per contract.
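To give a feel for this model: the measured theoretical limit for contract bytes is s = 127 B per fact (cf. Fig. 3), while the constant overhead c is dominated by the peer certificates. The snippet below plugs in a purely hypothetical c of 2 kB to show how bundling drives the per-fact cost towards s.

```typescript
// Per-fact overhead model m'(n) = s + c/n from Sec. 5.1. s = 127 B/fact is
// the measured limit for contract bytes (Fig. 3); c = 2000 B is an assumed,
// illustrative constant overhead (mostly certificates), not a measurement.
const s = 127;
const c = 2000;
const perFactBytes = (n: number): number => s + c / n;

console.log(perFactBytes(1));   // 2127 B/fact for a single-fact contract
console.log(perFactBytes(100)); // 147 B/fact when bundling 100 facts
```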
Handshake Duration. To better evaluate the handshake duration in a realistic scenario, we simulate varying degrees of network latency. The results can be seen in Fig. 4. Because the relationship between network latency and total handshake duration expectedly is linear, the slope of a linear fit divided by 2 gives a rough estimate of the number of Round Trip Times (RTTs) a contract generation takes.

[Fig. 4: Effect of communication delay between sender and receiver, simulated with an artificial delay in the TCP proxy; Dataset 2 from Table 1 was used (10 facts per contract, certificate chain length 1; per mode and artificial delay, 50 contracts were generated). Measured slopes: TCP 3.46, TLS 5.56, HTTP 3.48, HTTPS 7.51. The error bars display the interval of ±2σ, thus accounting for approximately 95% of the measurements. The baseline (3× delay) represents a 3-way handshake without any overhead from computation or additional messages.]

As a first observation, our implementation produces a constant overhead of approximately 140 ms in handshake duration. Similar to the previously evaluated communication overhead in bytes, HTTP and TCP add the least latency overhead. As the three handshake messages (cf. Sec. 4.1) can be sent directly on top of TCP or HTTP without additional messages, the slope comes close to the baseline of a 3-way handshake. If TLS is enabled, the latency effect is significantly increased by the TLS handshake, i.e., for each TLS handshake, the delay increases by approximately 1 RTT. In TLS mode without HTTP, one socket is used for all messages. Thus, only one TLS handshake is needed, causing a 1 RTT overhead compared to raw TCP. As our implementation currently does not support socket reuse when using HTTP, we need two TLS handshakes in HTTPS mode (one for each POST request), causing a 2 RTT overhead compared to HTTP without TLS and resulting in a maximum handshake duration of approximately 3.5 RTT. Note that reusing cryptographic material from the first request does not reduce the latency overhead, as it does not change the number of TLS handshake messages. For future work, we plan to add socket reuse support for HTTP to our implementation.
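As a rough planning aid, the measured constants above can be combined into a back-of-the-envelope estimator. The linear model below uses the slopes from Fig. 4 and the approximate 140 ms constant overhead of our implementation; it deliberately ignores second-order effects such as payload size.

```typescript
// Rough handshake-duration estimate from the measurements in Fig. 4:
// total duration ≈ constant implementation overhead + slope * simulated delay.
const SLOPES = { TCP: 3.46, TLS: 5.56, HTTP: 3.48, HTTPS: 7.51 } as const;
const BASE_OVERHEAD_MS = 140; // approximate constant overhead of our PoC

function estimateHandshakeMs(mode: keyof typeof SLOPES, delayMs: number): number {
  return BASE_OVERHEAD_MS + SLOPES[mode] * delayMs;
}

// E.g., at a 50 ms delay, HTTPS takes roughly 140 + 7.51 * 50 ≈ 516 ms,
// whereas plain TCP takes about 140 + 3.46 * 50 ≈ 313 ms.
console.log(estimateHandshakeMs("HTTPS", 50), estimateHandshakeMs("TCP", 50));
```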
5.2 Qualitative Evaluation

After quantitatively evaluating the performance of ReShare, we now classify the quantitative results and discuss the benefits and disadvantages of ReShare w.r.t. the design goals defined in Sec. 2, as well as practical considerations.

Trustworthiness & Reliability. In Sec. 2, we identified signatures as a core enabler of data reliability and trustworthiness (G1), which are extremely relevant aspects of data quality (cf. Sec. 1). For the created signatures to create trustworthiness, the entire trust chain, reaching from the PKI as the trust root to the data checksums, has to be validated as defined in Sec. 4.2. Because the verification algorithms, i.e., X.509 certificate validation, RSA-PSS signature verification, and SHA-2 checksum validation, are generally accepted, our focus is on the availability of the necessary signatures, checksums, and certificates.

First, the material necessary for verifying the signatures themselves is contained within the DTC. Second, for the included certificates to be verifiable, the consumer has to trust the root CA which signed the peer certificates. This assumption is reasonable, as root stores and root programs have long established extensive CA curation [17]. However, the expiry and revocation of X.509 certificates, as well as the retrievability of the root certificate, which is not included in the contract, hinder verifiability, and thus reliability (G1), in application scenarios with long-term storage requirements. One could counteract this with special approaches for long-term signature preservation, such as that of Bralić et al. [5], which have detrimental effects on performance (G5) or scalability (G4) and require additional infrastructure. We leave this issue to be investigated in future work. Third, the data has to be available in order to verify the included checksum. Here, the structure of DTCs has the advantage that not all included data has to be available in order to keep the signature material verifiable, as individual checksums can be verified. Thus, if certain resources are no longer needed, they can simply be deleted without impacting the verifiability of other resources signed by a contract.

Overall, DTCs are easily verifiable, thus providing good reliability and trustworthiness (G1), as long as the certificates are valid and the root certificate is retrievable.

Immutability & Accountability. Usually, data-driven business models with high data reliability needs either depend upon well-trusted business partners or have to resort to both time- and cost-intensive manual data curation [39]. Application of ReShare thus provides promising opportunities to create trustworthiness where trust cannot easily be established otherwise, and it can further be used as a legal binding of the peers to the underlying transmission, useful in legal conflicts. ReShare provides improved immutability compared to usual signature schemes, as illegally forging a valid DTC requires collusion of both peers of the contract. Because benign behavior of the two peers is a severely limiting assumption, ReShare cannot hold up with DL-based immutability, as successful Distributed Ledgers are considered to be irreversible unless a large share of the network colludes. Thus, ReShare provides enhanced immutability (G2) compared to other signature-based approaches but cannot keep up with DL technology regarding this aspect. To improve the immutability guarantees of ReShare, one could employ a digital notary, e.g., by adding additional signatures by impartial third parties or by committing contracts to a DL. However, to retain scalability, one should incorporate measures to reduce the notary overhead, e.g., by only using the notary for interval-based checksums of all created contracts. We deem this idea an interesting direction for future work.

Performance & Scalability. If a sufficiently large number of resources is signed per contract, the per-resource storage and communication overhead falls below 1 kB relatively quickly.
With a handshake latency overhead of less than 200 ms (without delay), fewer than 5 RTTs, and the simple contract verification mechanism, we argue that the overhead imposed by the use of the system is reasonably low, especially in comparison to traditional proof-of-transmission approaches, such as the paper-based receipts commonly used in industry today [39]. Therefore, the performance goal (G5) is met.

One could argue that if individual transmissions only consist of a few resources (e.g., only a few RDF statements), the per-resource overhead both for storage of the contracts and for the handshake increases relatively fast. This issue also exists in related work w.r.t. LD signatures, as the severity of the overhead imposed on the user when using coarse signatures is exaggerated in this scenario (cf. Sec. 3). However, if, on the one hand, the overall frequency of requests is low, this issue becomes less severe, as the throughput requirements are small. If, on the other hand, higher request frequencies are expected, ReShare provides the opportunity of resource bundling, i.e., requested resources can simply be buffered by the client and bundled into a single DTC, thus mitigating the issue. In environments with high request frequencies from many distinct data recipients, DTC bundling may, however, only apply to a lesser degree. Despite this limitation under these specific circumstances, ReShare provides decent scalability (G4).

Payload Flexibility. Because DTCs use checksums of canonicalized data, the data format is arbitrary, as long as a canonical representation is specified, which contributes to payload flexibility (G6) and allows for applicability to generic Semantic Web data and any other type of resource on the Web. Thus, ReShare constitutes a unified solution for arbitrary data on the Web.

Other Practical Considerations. ReShare has the advantage that it is optionally adoptable, both for individual stakeholders and for individual transmissions, as its use is not mandatory. If one installs ReShare but opts out of generating contracts for transmissions (made possible through the optionally adoptable design), the system generates little to no overhead. Suppose that it is used in HTTP(S) mode; then it can be integrated into an already existing web server and thus does not require additional hardware, infrastructure, or specific software. Such a deployment is useful where manual data curation may be more cost- or time-efficient than generating DTCs, or where peers simply do not implement ReShare, making the system fully backward-compatible.

The peer X.509 certificates make up the majority of the contract data. Thus, removing the certificate data from DTCs and instead referencing peer certificates with unique identifiers would drastically reduce contract sizes, improving scalability (G4) and performance (G5). However, as verifiability is the key to the provided trustworthiness and reliability, we argue that making the verifiability of DTCs dependent upon certificate availability would substantially weaken verifiability, and thus our core requirement (G1). To practically reduce the overhead imposed by peer certificates, one could instead assign unique identifiers to the used certificates in the LD context of DTCs, which allows the certificates to be stored only once when using an LD platform such as a triple store. This method could be realized with ReShare as part of future work.

ReShare can also be integrated with existing systems and data, i.e., it is backward-compatible.
Retrospectively generating DTCs even has an advantage w.r.t. performance (G5) and scalability (G4), as all resources from a given data source can be bundled into a single DTC, thus reducing the per-resource communication and storage overhead. However, using retrospectively generated contracts, one cannot prove possession of the data for the time interval before contract generation due to the contract timestamp.

To conclude, with decent scalability in most use cases and stronger immutability than common signature schemes, we see no significant limitations to ReShare's applicability, making it a promising solution for a variety of use cases with requirements of scalability, trustworthiness, and reliability.

6 Conclusion

In this paper, we expressed the need for immutability as an enabler of data trustworthiness and reliability, paving the way for novel use cases employing LD technology for reliable data sharing and collaboration. After identifying the lack of suitable solutions that bridge the need for scalability and immutability, we present ReShare, our design utilizing digital transmission contracts to establish immutability through signatures by both transmission peers, imposing a reasonably low overhead with good scalability. We provide the research community with a discussion of feasibility and applicability, building a foundation for future work w.r.t. scalable immutability for real-world use.

Trustworthiness and reliability are essential requirements for a more open data sharing paradigm in the industry, as economic outcomes depend on data correctness. Digital signatures can provide integrity, authenticity, and non-repudiation, and therefore they can be used to create data trustworthiness. However, we argue that simple signatures cannot reliably establish immutability, as the signing authority can forge arbitrary signatures, thus hindering data reliability. Recently, distributed ledgers have frequently been proposed to achieve the immutability of information. Unfortunately, their scalability is substantially challenged through limited throughput and infrastructure overhead. To address these issues, we propose ReShare, a system for creating on-demand bilateral signatures contained in digital transmission contracts. Given that both transmission peers would have to collude to forge valid digital transmission contracts, we argue that ReShare provides improved immutability compared to common signature systems, combined with proper scalability through moderate overhead and the ability to sign multiple resources at once.

In our evaluation, we demonstrate that our proposed design shows promising applicability, as its immutability is valuable in use cases with high scalability needs, while it is flexible towards the format of the signed data and optionally adoptable by concept, imposing little overhead when peers opt out of usage. Thus, ReShare is a prime candidate to achieve data reliability and trustworthiness for both the Semantic Web and industry. For future work, optionally adoptable notary systems could further strengthen the immutability of our proposed contracts.
However, already in its current state, ReShare allows for novel approaches that profit from and build upon the proposed concept of on-demand bilateral signatures.

Acknowledgments. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC-2023 Internet of Production – 390621612.

References
1. Abramowicz, W., Auer, S., Heath, T.: Linked Data in Business. Bus. Inf. Syst. Eng. 58(5) (2016)
2. Attaran, M., Attaran, S.: Collaborative supply chain management: the most promising practice for building efficient and sustainable supply chains. Bus. Process Manag. J. 13(3) (2007)
3. Bader, L., Pennekamp, J., Matzutt, R., Hedderich, D., et al.: Blockchain-Based Privacy Preservation for Supply Chains Supporting Lightweight Multi-Hop Information Accountability. Inf. Process. Manag. 58(3) (2021)
4. Baird, L.: The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance. Swirlds-tr-2016, Swirlds, Inc. (2016)
5. Bralić, V., Kuleš, M., Stančić, H.: A model for long-term preservation of digital signature validity: TrustChain. In: INFuture (2017)
6. Cano-Benito, J., Cimmino, A., García-Castro, R.: Towards Blockchain and Semantic Web. In: BIS (2019)
7. Carroll, J.J.: Signing RDF Graphs. In: ISWC (2003)
8. Cooper, D., Santesson, S., Farrell, S., Boeyen, S., et al.: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. RFC 5280 (2008)
9. Cyganiak, R., Wood, D., Lanthaler, M.: RDF 1.1 Concepts and Abstract Syntax. W3C Rec. (2014)
10. Dahlmanns, M., Pennekamp, J., Fink, I.B., Schoolmann, B., et al.: Transparent End-to-End Security for Publish/Subscribe Communication in Cyber-Physical Systems. In: ACM SaT-CPS (2021)
11. Duerst, M., Suignard, M.: Internationalized Resource Identifiers (IRIs). RFC 3987 (2005)
12. English, M., Auer, S., Domingue, J.: Block Chain Technologies & The Semantic Web: A Framework for Symbiotic Development. Tech. rep., University of Bonn (2016)
13. Gleim, L., Pennekamp, J., Liebenberg, M., Buchsbaum, M., et al.: FactDAG: Formalizing Data Interoperability in an Internet of Production. IEEE Internet Things J. 7(4) (2020)
14. Gleim, L., Pennekamp, J., Tirpitz, L., Welten, S., et al.: FactStack: Interoperable Data Management and Preservation for the Web and Industry 4.0. In: BTW (2021)
15. Gleim, L., Tirpitz, L., Pennekamp, J., Decker, S.: Expressing FactDAG Provenance with PROV-O. In: MEPDaW (2020)
16. Gonczol, P., Katsikouli, P., Herskind, L., Dragoni, N.: Blockchain Implementations and Use Cases for Supply Chains – A Survey. IEEE Access 8 (2020)
17. Holz, R., Braun, L., Kammenhuber, N., Carle, G.: The SSL Landscape – A Thorough Analysis of the X.509 PKI Using Active and Passive Measurements. In: ACM IMC (2011)
18. i5: factcheck.js. https://git.rwth-aachen.de/i5/factdag/factcheck.js
19. i5: ReShare Ontology v0.1. http://i5.pages.rwth-aachen.de/factdag/reshare-ontology/0.1/
20. Kasten, A., Scherp, A., Schauß, P.: A Framework for Iterative Signing of Graph Data on the Web. In: ESWC (2014)
21. Kellogg, G., Champin, P.A., Longley, D.: JSON-LD 1.1. W3C Rec. (2020)
22. LOTAR International: Legal & Business Motivation. https://lotar-international.org/why-lotar/legal-business-motivation/ (2020, accessed December 16, 2020)
23. Moriarty, K., Kaliski, B., Jonsson, J., Rusch, A.: PKCS #1: RSA Cryptography Specifications Version 2.2. IETF RFC 8017 (2016)
24. Moyaux, T., Chaib-draa, B., D'Amours, S.: Information Sharing as a Coordination Mechanism for Reducing the Bullwhip Effect in a Supply Chain. IEEE Trans. Syst., Man, Cybern. C 37(3) (2007)
25. Özer, Ö., Zheng, Y.: Establishing Trust and Trustworthiness for Supply Chain Information Sharing. Springer (2017)
26. Özer, Ö., Zheng, Y., Ren, Y.: Trust, Trustworthiness, and Information Sharing in Supply Chains Bridging China and the United States. Manag. Sci. 60(10) (2014)
27. Pennekamp, J., Bader, L., Matzutt, R., et al.: Private Multi-Hop Accountability for Supply Chains. In: BIoTCPS (ICC Workshops) (2020)
28. Pennekamp, J., Dahlmanns, M., Gleim, L., Decker, S., et al.: Security Considerations for Collaborations in an Industrial IoT-based Lab of Labs. In: IEEE GCIoT (2019)
29. Pennekamp, J., Glebke, R., Henze, M., et al.: Towards an Infrastructure Enabling the Internet of Production. In: IEEE ICPS (2019)
30. Pennekamp, J., Henze, M., Schmidt, S., Niemietz, P., et al.: Dataflow Challenges in an Internet of Production: A Security & Privacy Perspective. In: ACM CPS-SPC (2019)
31. Perlman, R.: An overview of PKI trust models. IEEE Netw. 13(6) (1999)
32. Popov, S.: The Tangle. White paper (2016)
33. Third, A., Domingue, J.: LinkChains: Exploring the space of decentralised trustworthy Linked Data. In: DeSemWeb (2017)
34. Third, A., Tiddi, I., Bastianelli, E., Valentine, C., et al.: Towards the Temporal Streaming of Graph Data on Distributed Ledgers. In: LD-DL (2017)
35. Tsai, W.T., Wei, X., Chen, Y., Paul, R., et al.: Data provenance in SOA: security, reliability, and integrity. Serv. Oriented Comput. Appl. 1(4) (2007)
36. Tummarello, G., Morbidoni, C., Puliti, P., Piazza, F.: Signing Individual Fragments of an RDF Graph. In: WWW (2005)
37. U.S. Office of the Federal Register: 14 Code of Federal Regulations, Part 21. https://www.ecfr.gov/cgi-bin/text-idx?node=pt14.1.21 (2020, accessed December 16, 2020)
38. Xie, J., Yu, F.R., Huang, T., Xie, R., et al.: A survey on the scalability of blockchain systems. IEEE Netw. 33(5) (2019)
39. Yoon, A.: Data Reuse and Users' Trust Judgments: Toward Trusted Data Curation. Ph.D. thesis, University of North Carolina at Chapel Hill (2015)
40. Zaveri, A., Rula, A., Maurino, A., Pietrobon, R., et al.: Quality assessment for Linked Data: A Survey. Semant. Web 7(1) (2016)
41. Zheng, Z., Xie, S., Dai, H., Chen, X., et al.: An Overview of Blockchain Technology: Architecture, Consensus, and Future Trends. In: IEEE BigData Congress (2017)
OmzRIXpEDby
An interesting solution to a pressing problem in the sharing of trustworthy (linked) data
2: Accept
The authors address the pressing issue of sharing (linked) data in a trustworthy and reliable manner by introducing the concept of Digital Transmission Contracts and demonstrate its practical applicability and scalability with a PoC called ReShare. The paper is well written, well structured, and easy to read. The motivation is well explained, and the evaluation seems appropriate for illustrating the concept's technical feasibility within a real-world setting.
2: The reviewer is willing to defend the evaluation but not sufficiently familiar with the state of the art or the specific topic of the paper
Note thatthe discussed approaches fulfill G1, i.e., the ability to sign LD resources, by design.To the best of our knowledge, Carroll et al. [ 7] were the first to propose a sophisticatedsignature mechanism specifically for usage with Linked Data. The authors especiallyfocus on the canonicalization algorithm and argue that, even if graph canonicalizationis GI-complete, practically graphs can be canonicalized in O(nlog(n)). In this regard,Carroll et al. [ 7] are relevant for any task which needs a canonical RDF representation.However, the authors focus on the signature mechanism itself and do not propose acomplete system, as the use of a PKI and the distribution of signatures are not discussed.Tummarello et al. [ 36] followed up on Carroll et al. [ 7] by proposing a more holisticsystem for LD signatures. The authors argue that graph-level signatures [ 7] are oftentoo coarse for practical use cases, as users would always have to request the entire RDFgraph to verify the signature. To address this issue, the authors proposed to sign thedata at a much finer level, i.e., at the level of Minimum Self-contained Graphs (MSGs).Signatures are attached to an arbitrary triple of the signed MSG, thus internalizing them.By directly attaching signatures and certificate metadata to the data itself, the authorsexplicitly specify how to apply the system to a knowledge graph ( G3). However, as alsocriticized by Kasten et al. [ 20], the certificate is referenced by a URI, which makes thesignature unusable if a certificate can no longer be retrieved. Furthermore, a user hasto compute at least a partial partitioning into MSGs to know which statements weresigned in a given signature, as such information is not explicitly stated. Moreover, theapproach has severe issues concerning scalability ( G4) and performance ( G5) caused bythe fine granularity of signatures. If no blank nodes are present, each statement is signedseparately, resulting in a substantial overhead caused by the signatures, which is evenData Reliability and Trustworthiness through Digital Transmission Contracts 5exaggerated if data is modified frequently. Due to the focus on LD, the approach doesnot support signing other data formats, i.e., it does not provide payload flexibility ( G6).Kasten et al. [ 20] improved on Tummarello et al. [ 36] by proposing a framework thatallows for signatures at different levels of granularity, i.e., reaching from MSG signaturesup to signing multiple graphs at once. The authors discuss and formalize the entiresignature process in a framework and only give exemplary solutions for the identifiedfunctions. Therefore, the work does not constitute a directly applicable solution, butrather aims at building a foundation for applicable solutions through formalization. Dueto its improved flexibility, the approach provides better scalability ( G4) and reducedoverhead ( G5) compared to Tummarello et al. [ 36].G6is not met, as the approachspecifically focuses on RDF data. Most importantly, all approaches listed above have acommon crucial deficiency w.r.t. our design goals, i.e., they do not create immutability(G2). As the data source only signs data, this entity can easily forge signatures formodified data at any time, thus violating immutability. This aspect is crucial, as weidentified immutability as a core requirement for data reliability in Sec. 2.To the best of our knowledge, none of the existing approaches to signing LD resourcescan sufficiently fulfill our design goals, especially w.r.t. 
immutability ( G2).Distributed Ledger-based Immutability. In contrast to signatures, Distributed Ledger(DL) technology is designed to provide strong immutability ( G2) by committing the stateof data to an irreversible ledger [ 41]. This property is highly desirable and thus celebratedin a wide variety of use cases, such as distributed supply chains [ 3,16]. Such use caseswith little pre-existing trust relations and opportunities to employ LD technology motivatethe combination of LD and DL technology to establish immutability. Consequently, usecases and barriers w.r.t. the combination of LD and DLs have been identified [ 6,12,33].Furthermore, researchers proposed the first concrete solutions employing DL technologyto establish immutability in the context of LD [34].However, the intersection of the two research fields still is in its infancy, and existingapproaches rarely cover the need for immutability holistically. Furthermore, the strongimmutability provided by Distributed Ledgers comes with practical disadvantages. Evenwith new consensus mechanisms such as the Swirlds hashgraph consensus algorithm [ 4]or the tangle [ 32], which try to mitigate the negative effect of common, costly consensusmechanisms such as proof-of-work, DL systems always bring a substantial overhead(G5). This overhead is infeasible in many use cases, as with huge amounts of data, e.g.,produced by IoT sensors, the relative cost for conducting the consensus mechanismexceeds the value of the data written to the ledger. Thus, we decided to refrain frominvolving DL technology in our system. However, we think that the intersection ofDistributed Ledgers and LD in scenarios where LD immutability is crucial and theimposed overhead is acceptable makes for a promising research area for future work.In conclusion, we found that none of the existing approaches towards LD signatureswere able to sufficiently fulfill our design goals, especially due to the missing immutabil-ity (G2). Research regarding combinations of DL and LD technology to provide strongimmutability ( G2) is still in its infancy, resulting in prohibitive scalability ( G4) andperformance ( G5) overhead for many use cases as detailed in our supplementary mate-rial [ 18]. Therefore, we identify the requirement for an approach that bridges the needsfor immutability, scalability, and performance.6 Mangel et al.4 ReShare: Reliable Data Sharing through DTCsTo address the previously identified shortcomings of related work w.r.t. our design goals,we propose ReShare , a scalable and flexible on-demand resource signature system. Re-Share employs the novel concept of Digital Transmission Contracts (DTCs). Wheneverthe state of data needs to be proven as immutable, the data sender and receiver partake inacontract generation handshake , which results in the creation of a DTC. A DTC is arecord comprising the state of the subject data (as checksums), the identities of senderand receiver, as well as a timestamp of contract creation, crafted immutably by addingsignatures of both peers. In the DTC generation mechanism, the receiver requests aDTC for a set of resources. The sender compiles said DTC by creating checksums forthe resources, adding metadata, and signing the record. Finally, the receiver signs theretrieved DTC and sends its signature back to the sender.The validity of a contract can be automatically verified at any time in a correspond-ingcontract verification mechanism , where the identities of the parties, the signatures,and the resource checksums are verified. 
We expect that creating signatures for data exchanges offers improved scalability (G4) compared to existing signature mechanisms in use cases where the amount of produced and modified data outweighs the number of data requests. This expectation is strengthened by ReShare's ability to bundle multiple resources into one DTC, thus reducing the per-resource signature overhead (G5). Furthermore, we argue that ReShare provides immutability in addition to the existing benefits of digital signatures, i.e., integrity and authenticity, as a collusion of two parties is needed to forge a valid contract that violates the data immutability.
First, we proceed by motivating a use case in the domain of aerospace engineering. Subsequently, we present details about both the contract generation and verification mechanisms. Finally, we discuss realization aspects of ReShare, i.e., the representation and ontology of DTCs and the integration of ReShare with LD technology.
ReShare's Capabilities Illustrated using Aerospace Engineering. For our system, aerospace engineering constitutes an interesting use case, as reliability is highly desirable when designing and producing safety-critical products. This need is even legally justified, as federal US-American laws require manufacturers to securely store the type design, comprising drawings and specifications, information about dimensions, materials, and processes, for as long as the respective type certificate of the aircraft is valid (cf. 14 CFR §21.31, §21.41, §21.49 [37]). Thus, such data must be kept available reliably, at least as long as an aircraft of the given type is operational. As a result, manufacturers usually apply specialized archiving systems [22]. However, if we consider modern IoT-backed supply chains, massive amounts of data can hardly be processed by common archiving pipelines, which may still involve humans in paper-based signature mechanisms [39]. Therefore, aerospace engineering is a prime candidate to integrate ReShare.
To further look into this use case, we illustrate the benefits of employing an LD-based approach for tracking and tracing through a practical example, involving the manufacturer Boing, which conducts the final assembly of an aircraft, the independent supplier ACom, which provides the radio unit for Boing's aircraft together with relevant production data, and the regulatory agency FÄA that ensures legal compliance. We further consider the following scenarios to visualize the system's functionality:
Assembly. Boing assembles an aircraft using a radio unit supplied by ACom, while ACom further grants Boing access to all relevant production data. Enriched with metadata, such as provenance information, this data is stored in an LD platform. Boing and ACom also generate a DTC, which is bound to the state of said data. In this context, both Boing and ACom verify their certificates and signatures, thus mutually establishing authenticity. Due to the dataflow direction, we refer to ACom as the sender and to Boing as the receiver.
Proof of Conformance. After an aircraft was involved in an incident, Boing has to prove to FÄA that certain requirements were met during manufacturing. To this end, Boing presents the relevant data, together with the respective DTCs. The contracts include all relevant context to verify the authenticity and state of the data.
If an investigation of the incident concludes that, for example, the data regarding the radio is incorrect, Boing is not liable, as it acted to the best of its knowledge. Rather, ACom can be held accountable.
Tracking and Tracing. To deal with the aftermath of a defective radio unit, Boing traces all other aircraft that use the same type of radio using the data's semantic properties. While an improved tracking & tracing efficiency is not a contribution of our system, we argue that ReShare provides the needed reliability through LD.
After outlining three distinct and common real-world scenarios, we now present ReShare's DTC generation and verification mechanisms.
4.1 Generation Mechanism
The contract generation mechanism constitutes a relatively simple 3-way handshake, which we visualize in Fig. 1. We use this opportunity to address the different types of identifiers used in the system. Resources are identified by unique Resource IDs (RIDs). While ReShare supports arbitrary RIDs, the use of Internationalized Resource Identifiers (IRIs) [11] improves interoperability with LD technology. For identification of the contract itself, ReShare mandates the use of unique IRIs [11] in order to integrate DTCs with common LD technology. The mechanism works as follows, where R denotes the receiver and S the sender:
R1: R chooses a set of RIDs and sends the RIDs and its identity, including a public key certificate, to S.
S1: S can now assemble the contract, associating checksums of the canonicalized resources identified by the RIDs. Then, S creates a timestamp and a unique contract IRI and adds them to the contract. Finally, S creates its signature with the JSON representation of the contract as input, and the signature is added to the contract. S then sends the assembled contract, including its identity with a public key certificate, its signature, the RIDs with associated checksums, the timestamp, and the contract IRI, back to R.
R2: S's signature (i.e., senderSig) is verified, together with S's certificate (included in the sender field). The timestamp recency is checked. R signs the contract similarly to S. The complete contract, including both peers' signatures, is finally sent to S.
[Figure: 3-way handshake between sender S and receiver R; R1 carries the receiver identity and the list of RIDs without checksums, S1 returns the sender identity, senderSig, the RIDs with checksums, the timestamp, and the contract IRI, and R2 returns receiverSig.] Fig. 1. Visualization of the contract generation handshake. S is the sender, R is the receiver. Here, Fact is used as a synonym for a persistently identified resource.
Both signatures are created by omitting any existing signatures in the contract as an input to the signature creation. That is, both the sender and receiver sign the DTC, including the identities, resource checksums, and the timestamp. The computation overhead on the sender's side for creating the contract, which mainly consists of generating checksums and creating the signature, can be reduced using pre-generated checksums in use cases where scalability needs are especially high, or where Denial of Service by R1 flooding is a valid threat model. In the latter case, the problem could otherwise also be mitigated by the use of rate limiting or access control.
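To make the S1 step concrete, the following Python sketch assembles and signs a contract over in-memory resources. All names (assemble_contract, the flat field keys, the example RIDs) are illustrative rather than ReShare's actual Node.js API; the RSASSA-PSS signatures and SHA-256 checksums follow the choices stated in Sec. 4.3.

```python
# A minimal sketch of the S1 step (sender side), assuming in-memory resources and
# a freshly generated RSA key; field names are illustrative, not ReShare's schema.
import base64
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def sha256_b64(canonical_bytes):
    # Checksum over the canonicalized resource representation.
    return base64.b64encode(hashlib.sha256(canonical_bytes).digest()).decode()

def assemble_contract(rids, resources, sender_id, receiver_id, contract_iri):
    return {
        "@id": contract_iri,
        "facts": [{"rid": rid, "sha256Sum": sha256_b64(resources[rid])} for rid in rids],
        "sender": sender_id,
        "receiver": receiver_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def sign_contract(contract, private_key):
    # Both signatures cover the JSON contract with any existing signatures omitted.
    unsigned = {k: v for k, v in contract.items() if k not in ("senderSig", "receiverSig")}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    return private_key.sign(
        payload,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
contract = assemble_contract(
    ["urn:example:fact-0"],
    {"urn:example:fact-0": b"canonicalized resource bytes"},
    sender_id="ACom",
    receiver_id="Boing",
    contract_iri="http://example.com/contracts/42",
)
contract["senderSig"] = base64.b64encode(sign_contract(contract, key)).decode()
```

The receiver-side R2 step would verify senderSig over the same payload before attaching its own signature in the same way.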
The messages contain complete, incremental versions of the contract, which allows the sender to remain stateless. This design fits our server-client model nicely, where the sender as a server provides an interface for receivers to request contracts. The handshake is always executed on top of TCP, which guarantees reliable communication. Therefore, an error indicates a faulty or incompatible configuration. A mechanism to deal with errors is not part of ReShare, but could easily be implemented in future work.
4.2 Verification Mechanism
Contract verification relies only on access to the contract and the covered data itself and consists of the following steps (in no particular order):
– Verify the public key certificates using the respective PKI
– Verify the signatures using the JSON-formatted contract and the public keys
– Verify the data checksums
Hence, only access to the contract and the data is needed. As ReShare currently only supports a CA-based PKI with X.509 certificates, the necessary PKI context comprises a set of pre-installed root CA certificates to verify the X.509 certificate chains of both parties. Contracts are explicit about all information needed to verify signatures and checksums, e.g., about the canonicalization and serialization of the data. To verify the signatures, all parties use the same input as during the DTC generation, i.e., the full DTC excluding any existing signatures. Through the simplicity of the verification, only consisting of the three steps described above, the system is applicable (G3) in use cases with needs for unambiguous verifiability of the generated DTCs, e.g., in legal conflicts.
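A minimal sketch of these three checks is shown below, mirroring the simplified field names from the generation sketch above. Certificate-chain validation against the pre-installed root CAs is left as a stub, since it depends on the deployed PKI.

```python
# A minimal sketch of the three verification steps; certificate-chain validation
# is stubbed out via a callable, and field names mirror the generation sketch.
import base64
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def verify_contract(contract, resources, sender_key, receiver_key, validate_chain):
    # Step 1: verify both X.509 certificate chains against pre-installed root CAs.
    if not (validate_chain(contract["sender"]) and validate_chain(contract["receiver"])):
        return False
    # Step 2: verify both signatures over the contract with signatures omitted.
    unsigned = {k: v for k, v in contract.items() if k not in ("senderSig", "receiverSig")}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
    try:
        sender_key.verify(base64.b64decode(contract["senderSig"]), payload, pss, hashes.SHA256())
        receiver_key.verify(base64.b64decode(contract["receiverSig"]), payload, pss, hashes.SHA256())
    except InvalidSignature:
        return False
    # Step 3: verify the per-resource checksums; resources that are no longer
    # available can be skipped without invalidating the remaining facts.
    for fact in contract["facts"]:
        data = resources.get(fact["rid"])
        if data is not None:
            digest = base64.b64encode(hashlib.sha256(data).digest()).decode()
            if digest != fact["sha256Sum"]:
                return False
    return True
```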
4.3 Realization
We implemented ReShare as a proof-of-concept Node.js module [18]. We discuss our decisions concerning the implementation and the used technologies in the following.
[Figure: RDF graph of an exemplary contract, using the namespace prefixes rs: <http://example.com/reshare/ontology#>, c: <http://example.com/contracts/<contractID>#>, and xsd:, with typed nodes rs:Fact, rs:Identity, and rs:Signature connected via properties such as rs:hasFact, rs:sha256Sum, rs:factOrigin, rs:sender, rs:receiver, rs:x509Cert, rs:senderSig, rs:receiverSig, rs:sigType, and rs:sigData.] Fig. 2. Structure of an exemplary ReShare contract, visualized as an RDF graph using Turtle notation. http://example.com/contracts/<contractID> here symbolizes the contract IRI chosen by the sender (cf. Sec. 4.1).
Contract Ontology and Representations. As Tummarello et al. [36] discuss, the ability to internalize signatures into the context of the signed data improves the overall usability, because data and signatures are directly associated. Therefore, contracts are represented as JSON-LD [21] by default. For this, we have defined the ReShare ontology [19], which defines the types and properties in a contract. This makes contracts themselves usable in LD platforms. A DTC contains the following:
– A root node, identifying the contract itself, identified transparently and uniquely by the contract IRI chosen by the sender
– A set of resource checksums, associated with their resources by RIDs (cf. Sec. 4.1)
– The identity of sender and receiver, including X.509 certificates [8]
– The signatures of sender and receiver (RSASSA-PSS [23])
– A timestamp in the ISO 8601 format (interpretable as xsd:dateTimeStamp [9])
An example contract RDF graph with the default structure can be seen in Fig. 2. Note that the notion of Facts originates from the FactDAG data interoperability model [13] and its implementation FactStack [14], where a revisioning system is used to create persistence. As this paradigm is not mandatory in ReShare, we use fact as a synonym for any persistently identified resource in this paper.
Because we require payload flexibility (G6), DTCs should also be compatible with other common technology stacks outside of the Semantic Web. Therefore, in addition to the JSON-LD representation, the context can be omitted, resulting in a pure JSON representation of DTCs, making contracts usable as structured data.
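For illustration, the following hand-written Python dict mimics the JSON-LD contract shape implied by the ReShare ontology and Fig. 2. The truncated values are taken verbatim from the figure, the timestamp field is omitted for brevity, and the exact context and key spellings are assumptions rather than the normative ontology.

```python
# An illustration of the default JSON-LD contract shape, inferred from the
# ReShare ontology and Fig. 2; keys and the @context are simplified assumptions.
contract = {
    "@context": {"rs": "http://example.com/reshare/ontology#"},
    "@id": "http://example.com/contracts/42",
    "rs:hasFact": [{
        "@type": "rs:Fact",
        "@id": "http://example.com/contracts/42#fact-0",
        "rs:factOrigin": "http://example.com/persons/John_Doe",
        "rs:sha256Sum": "kLbb...5A==",          # truncated value from Fig. 2
    }],
    "rs:sender": {"@type": "rs:Identity", "rs:x509Cert": "MIIE...wkk="},
    "rs:receiver": {"@type": "rs:Identity", "rs:x509Cert": "MIIE...AA=="},
    "rs:senderSig": {
        "@type": "rs:Signature",
        "rs:sigType": "urn:oid:1.2.840.113549.1.1.10",  # OID of RSASSA-PSS
        "rs:sigData": "cWjd...2A==",
    },
    "rs:receiverSig": {
        "@type": "rs:Signature",
        "rs:sigType": "urn:oid:1.2.840.113549.1.1.10",
        "rs:sigData": "5fc6...f48e",
    },
    # Dropping "@context" yields the plain-JSON representation described above.
}
```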
Contract Generation. Given that the contracts are represented as JSON-LD or JSON by default, we decided to rely on a JSON-based protocol for contract generation, where the JSON contracts are wrapped into a minimalistic message structure. The JSON-LD context is automatically added when exporting the contract after generation. The most basic generation mode corresponds to an execution of the JSON protocol directly on top of TCP, with optional use of TLS. However, this approach requires opening a dedicated port for ReShare. Therefore, we also provide an HTTP(S) mode to integrate ReShare into existing Web servers. Then, the 3-way handshake is wrapped into two HTTP POST requests. Thus, we end up with four modes, which we denote by TCP (i.e., without TLS or HTTP), TLS (i.e., TCP+TLS without HTTP), HTTP, and HTTPS.
5 Evaluation
To assess the benefits, possible limitations, and applicability of ReShare, we first quantitatively evaluate the storage and communication overhead as well as the effects of latency on the generation mechanism, before qualitatively discussing the fulfillment of our design goals as defined in Sec. 2, whilst giving outlooks to promising use cases.
5.1 Quantitative Evaluation
To quantitatively evaluate the performance of the system, we simulated contract generation with varying (i) modes of operation (TCP/TLS/HTTP/HTTPS), (ii) numbers of facts per contract, and (iii) certificate chain lengths of both peers. We quantitatively evaluate (a) the total duration of the handshake, (b) the number of bytes transferred, as well as (c) the size of the generated contract in its default JSON representation. We employ a TCP proxy to investigate the impact of varying network latency on the protocol's performance and measure the amount of data transferred. Overhead for TLS and HTTP is included in the results. We split the evaluation into two orthogonal parameter combinations to facilitate visualization and discussion, which we show in Table 1.
Table 1. Overview of the evaluated parameter combinations.
Set | Protocol Mode | Facts/DTC | Proxy delay [ms] | Cert. Chain Len. | Iterations
1 | TCP, TLS, HTTP, HTTPS | 1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 | 0 | 1, 2, 3, 4, 5 | 20
2 | TCP, TLS, HTTP, HTTPS | 10 | 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 | 1 | 50
Contract Size & Communication Overhead. In Fig. 3, we show how the number of facts in one contract influences contract bytes and communication overhead per fact, split by handshake mode. A per-fact plot brings better comparability to other approaches than a per-contract plot, as contracts are a concept that is specific to our approach. Thus, metrics are plotted on a per-fact (i.e., per-resource) basis.
[Figure: log-scale plot of contract bytes and communication bytes per fact over the number of facts per contract (1 to 100), for contract bytes, TCP, TCP+TLS, HTTP, and HTTPS, with a theoretical limit of 127 contract bytes per fact.] Fig. 3. Contract bytes and communication bytes per fact, by the number of facts in one contract. Dataset 1 from Table 1 was used. Communication bytes are split up by handshake mode, i.e., HTTP enabled/disabled, and TLS enabled/disabled. Compared to a per-contract plot, this representation provides better interpretability when comparing the system to other approaches, as contracts are an unknown concept in other work.
The total contract size and communication overhead per handshake increase linearly with the number of facts, as fact data is of constant size, consisting of a RID and a checksum. Thus, if $m$ models one of the per-contract metrics, this gives $m(n) = s \cdot n + c$, where $s$ is the slope of the curve, i.e., the bytes by which the metric increases if one fact is added per contract, $n$ is the number of facts per contract, and $c$ is the constant overhead which is not influenced by the number of facts. Then, we can model the per-fact metric as $m'(n) = \frac{m(n)}{n} = s + c \cdot n^{-1}$. This model is plotted for each metric. The theoretical limit for contract bytes per fact naturally is given by $s$, which is the slope of the linear per-contract fit. It can be interpreted as the number of bytes that are caused by a fact itself, excluding the static overhead in contracts, which is independent of the number of facts. An analog interpretation of the other metrics' slopes is possible. The figure also shows that the overhead of HTTP is negligible, whereas enabling TLS causes a constant communication overhead of approximately 200 B per contract.
Handshake Duration. To better evaluate the handshake duration in a realistic scenario, we simulate varying degrees of network latency. The results can be seen in Fig. 4. Because the relationship between network latency and total handshake duration is, as expected, linear, the slope of a linear fit divided by 2 gives a rough estimate of the number of Round Trip Times (RTTs) a contract generation takes.
As a first observation, our implementation produces a constant overhead of approximately 140 ms in handshake duration. Similar to the previously evaluated communication overhead in bytes, HTTP and TCP add the least latency overhead. As the three handshake messages (cf. Sec. 4.1) can directly be sent on top of TCP or HTTP without additional messages, the slope comes close to the baseline of a 3-way handshake. If TLS is enabled, the latency effect is significantly increased by the TLS handshake, i.e., for each TLS handshake, the delay increases by approximately 1 RTT. In TLS mode without HTTP, one socket is used for all messages. Thus, only one TLS handshake is needed, causing a 1 RTT overhead compared to raw TCP. As our implementation currently does not support socket reuse when using HTTP, we need two TLS handshakes in HTTPS mode (one for each POST request), causing a 2 RTT overhead compared to HTTP without TLS, resulting in a maximum handshake duration of approximately 3.5 RTT. Note that reusing cryptographic material from the first request does not reduce the latency overhead, as it does not change the number of TLS handshake messages. For future work, we plan to add socket reuse support for HTTP to our implementation.
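The reported fits can be condensed into a small cost model. The snippet below uses the per-mode slopes from Fig. 4 and the 127 B/fact contract limit from Fig. 3, while the constant per-contract overhead c is an assumed example value rather than a measured one.

```python
# A back-of-the-envelope model of the measured overheads; the fitted constants
# come from the figures above, while the per-contract constant c is assumed.
def bytes_per_fact(n_facts, slope=127.0, constant=1000.0):
    # Per-fact metric m'(n) = s + c/n; c = 1000 B is an illustrative value.
    return slope + constant / n_facts

def handshake_duration_ms(delay_ms, mode="HTTPS", base_ms=140.0):
    # Base overhead ~140 ms plus the fitted per-mode slope times the proxy delay.
    slopes = {"TCP": 3.46, "TLS": 5.56, "HTTP": 3.48, "HTTPS": 7.51}
    return base_ms + slopes[mode] * delay_ms

print(bytes_per_fact(100))        # 137.0, approaching the 127 B/fact limit
print(handshake_duration_ms(50))  # 515.5 ms for HTTPS at 50 ms simulated delay
```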
[Figure: handshake duration over simulated delay (0 to 100 ms) with a baseline of 3x delay; fitted slopes: TCP 3.46, TLS 5.56, HTTP 3.48, HTTPS 7.51.] Fig. 4. Visualization of the effect of communication delay in-between sender and receiver, simulated with an artificial delay in the TCP proxy. Dataset 2 from Table 1 was used. The data was collected using 10 facts per contract and a certificate chain length of 1. Per mode and artificial delay, 50 contracts were generated. The error bars display the interval of ±2σ, thus accounting for approximately 95% of the measurements. The baseline represents a 3-way handshake without any overhead by computation or additional messages.
5.2 Qualitative Evaluation
After quantitatively evaluating the performance of ReShare, we now classify the quantitative results and discuss the benefits and disadvantages of ReShare w.r.t. the design goals defined in Sec. 2, as well as practical considerations.
Trustworthiness & Reliability. In Sec. 2, we identified signatures as a core enabler of data reliability and trustworthiness (G1), which are extremely relevant aspects of data quality (cf. Sec. 1). For the created signatures to create trustworthiness, the entire trust chain, reaching from the PKI as the trust root to the data checksums, has to be validated as defined in Sec. 4.2. Because the verification algorithms, i.e., X.509 certificate validation, RSA-PSS signature verification, and SHA-2 checksum validation, are generally accepted, our focus is on the availability of the necessary signatures, checksums, and certificates.
First, the material necessary for verifying the signatures themselves is contained within the DTC. Second, for the included certificate to be verifiable, the consumer has to trust the root CA which signed the peer certificates. This assumption is reasonable, as root stores and root programs have long established extensive CA curation [17]. However, the expiry and revocation of X.509 certificates, as well as the retrievability of the root certificate, which is not included in the contract, hinder verifiability, and thus, reliability (G1) in application scenarios with long-term storage requirements. One could counteract this with special approaches for long-term signature preservation, such as Bralić et al. [5], which have detrimental effects on performance (G5) or scalability (G4) and require additional infrastructure. We leave this issue to be investigated in future work. Third, the data has to be available in order to verify the included checksum. Here, the structure of DTCs has the advantage that not all included data has to be available in order to keep the signature material verifiable, as individual checksums can be verified. Thus, if certain resources are no longer needed, they can simply be deleted without impacting the verifiability of other resources signed by a contract.
Overall, DTCs are easily verifiable, thus providing good reliability and trustworthiness (G1), as long as the certificates are valid and the root certificate is retrievable.
Immutability & Accountability. Usually, data-driven business models with high data reliability needs either depend upon well-trusted business partners or have to resort to both time- and cost-intensive manual data curation [39].
Application of ReSharethus provides promising opportunities to create trustworthiness where trust cannoteasily be established otherwise and can further be used as a legal binding of the peersto the underlying transmission, useful in legal conflicts. ReShare provides improvedimmutability compared to usual signature schemes, as illegally forging a valid DTCrequires collusion of both peers of the contract. Because benign behavior of the two peersis a severely limiting assumption, ReShare cannot hold up with DL-based immutability,as successful Distributed Ledgers are considered to be irreversible, unless a large share ofthe network colludes. Thus, ReShare provides enhanced immutability ( G2) compared toother signature-based approaches, but cannot keep up with DL technology regarding thisaspect. To improve the immutability guarantees of ReShare, one could employ a digitalnotary, e.g., by adding additional signatures by impartial third parties or committingcontracts to a DL. However, to keep scalability, one should incorporate measures toreduce the notary overhead, e.g., by only using the notary for interval-based checksumsof all created contracts. We deem this idea an interesting direction for future work.Performance & Scalability. If a sufficiently large number of resources are signedper-contract, the per-resource storage communication overhead falls below 1 kB rela-tively quickly. With a handshake latency overhead of less than 200 ms (without delay),less than 5 RTTs , and the simple contract verification mechanism, we argue that theamount of imposed overhead by the use of the system is reasonably low, especially incomparison to traditional proof of transmission approaches, such as paper-based receiptscommonly used in industry today [ 39]. Therefore, the performance goal ( G5) is met.One could argue that if individual transmissions only consist of a few resources (e.g.,only a few RDF statements), the per-resource overhead both for storage of the contractsand the handshake increases relatively fast. This issue also exists in related work w.r.t.LD signatures, as the severity of the overhead imposed to the user when using coarsesignatures is exaggerated in this scenario (cf. Sec. 3). However, if, on the one hand, theoverall frequency of requests is low, this issue becomes less severe, as the throughputrequirements are small. If, on the other hand, higher request frequencies are expected,ReShare provides the opportunity of resource bundling, i.e., requested resources cansimply be buffered by the client and bundled into a single DTC, thus mitigating the issue.In environments with high request frequency by many distinct data recipients, DTCbundling may, however, only apply to a lesser degree. However, besides a limitation w.r.t.these specific circumstances, ReShare provides decent scalability ( G4).Payload Flexibility. Because DTCs use checksums of canonicalized data, the dataformat is arbitrary, as long as a canonical representation is specified, which contributesto payload flexibility ( G6) and allows for applicability to generic Semantic Web dataand any other type of resource on the Web. Thus, ReShare constitutes a unified solutionfor arbitrary data on the Web.Other Practical Considerations. ReShare has the advantage that it is optionallyadaptable both for individual stakeholders and individual transmission, as its use is not14 Mangel et al.mandatory. 
If one installs ReShare, but opts out to generate contracts for transmissions,made possible through the optionally adaptable design, the system generates little to nooverhead. Suppose that it is used in HTTP(S) mode, then, it can be integrated into analready existing web server, and thus does not require additional hardware, infrastructure,or specific software. Such a deployment is useful where manual data curation may bemore cost- or time-efficient than generating DTCs, or peers simply do not implementReShare, making the system fully backward-compatible.The peer X.509 certificates make up for a majority of the contract data. Thus,removing the certificate data from DTCs and instead of referencing peer certificates withunique identifiers would drastically reduce contract sizes, improving scalability ( G4) andperformance ( G5). However, as verifiability is the key to the provided trustworthiness andreliability, we argue that making the verifiability of DTCs dependent upon the certificateavailability would substantially weaken verifiability, and thus, our core requirement ( G1).However, to practically reduce the overhead imposed by peer certificates, one couldassign unique identifiers to the used certificates in the LD context of DTCs, which allowsto only store the certificates once when using an LD platform such as a triple store. Thismethod could be realized with ReShare as part of future work.ReShare can also be integrated with existing systems and data, i.e., it is backward-compatible. Retrospectively generating DTCs even has an advantage w.r.t. performance(G5) and scalability ( G4), as all resources from a given data source can be bundledinto a single DTC, thus reducing the per-resource communication and storage overhead.However, using retrospectively generated contracts, one can not prove possession of thedata for the time interval before contract generation due to the contract timestamp.To conclude, with decent scalability in most use cases and stronger immutability thancommon signature schemes, we see no significant limitations for ReShare’s applicability,making it a promising solution for a variety of use cases with requirements of scalability,trustworthiness, and reliability.6 ConclusionIn this paper, we expressed the need for immutability as an enabler for data trustworthi-ness and reliability, paving the way for novel use cases employing LD technology forreliable data sharing and collaboration. After identifying the lack of suitable solutionsthat bridge the need for scalability and immutability, we present ReShare, our designutilizing digital transmission contracts to establish immutability through signatures byboth transmission peers, imposing a reasonably low overhead with good scalability.We provide the research community with a discussion of feasibility and applicability,building a foundation for future work w.r.t. scalable immutability for real-world use.Trustworthiness and reliability are essential requirements for a more open data shar-ing paradigm in the industry, as economic outcomes depend on data correctness. Digitalsignatures can provide integrity, authenticity, and non-repudiation, and therefore, theycan be used to create data trustworthiness. However, we argue that simple signatures can-not reliably establish immutability, as the signing authority can forge arbitrary signatures,thus hindering data reliability. Recently, distributed ledgers are frequently proposed toachieve the immutability of information. 
Unfortunately, their scalability is substantiallyData Reliability and Trustworthiness through Digital Transmission Contracts 15challenged through limited throughput and infrastructure overhead. To address theseissues, we propose ReShare, a system for creating on-demand bilateral signatures con-tained in digital transmission contracts. Given that both transmission peers would haveto collude to forge valid digital transmission contracts, we argue that ReShare providesimproved immutability compared to common signature systems, combined with properscalability through moderate overhead and the ability to sign multiple resources at once.In our evaluation, we demonstrate that our proposed design shows promising applica-bility, as its immutability is valuable in use cases with high scalability needs, while beingflexible towards the format of signed data and optionally adaptable by concept, imposinglittle overhead when peers opt-out from usage. Thus, ReShare is a prime candidate toachieve data reliability and trustworthiness for both the Semantic Web and industry. Forfuture work, optionally-adaptable notary systems could further strengthen the immutabil-ity of our proposed contracts. However, already in its current state, ReShare allows fornovel approaches that profit from and build upon the proposed concept of on-demandbilateral signatures.Acknowledgments Funded by the Deutsche Forschungsgemeinschaft (DFG, GermanResearch Foundation) under Germany’s Excellence Strategy – EXC-2023 Internet ofProduction – 390621612.References1.Abramowicz, W., Auer, S., Heath, T.: Linked Data in Business. Bus. Inf. Syst. Eng. 58(5)(2016)2.Attaran, M., Attaran, S.: Collaborative supply chain management: the most promising practicefor building efficient and sustainable supply chains. Bus. Process Manag. J. 13(3) (2007)3.Bader, L., Pennekamp, J., Matzutt, R., Hedderich, D., et al.: Blockchain-Based Privacy Preser-vation for Supply Chains Supporting Lightweight Multi-Hop Information Accountability. Inf.Process. Manag. 58(3) (2021)4.Baird, L.: The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance.Swirlds-tr-2016, Swirlds, Inc. (2016)5.Brali ́c, V ., Kule ˇs, M., Stan ˇci ́c, H.: A model for long-term preservation of digital signaturevalidity: TrustChain. In: INFuture (2017)6.Cano-Benito, J., Cimmino, A., Garc ́ıa-Castro, R.: Towards Blockchain and Semantic Web. In:BIS (2019)7. Carroll, J.J.: Signing RDF Graphs. In: ISWC (2003)8.Cooper, D., Santesson, S., Farrell, S., Boeyen, S., et al.: Internet X.509 Public Key Infrastruc-ture Certificate and Certificate Revocation List (CRL) Profile. RFC 5280 (2008)9.Cyganiak, R., Wood, D., Lanthaler, M.: RDF 1.1 Concepts and Abstract Syntax. W3C Rec.(2014)10.Dahlmanns, M., Pennekamp, J., Fink, I.B., Schoolmann, B., et al.: Transparent End-to-EndSecurity for Publish/Subscribe Communication in Cyber-Physical Systems. In: ACM SaT-CPS(2021)11. Duerst, M., Suignard, M.: Internationalized Resource Identifiers (IRIs). RFC 3987 (2005)12.English, M., Auer, S., Domingue, J.: Block Chain Technologies & The Semantic Web: AFramework for Symbiotic Development. Tech. rep., University of Bonn (2016)13.Gleim, L., Pennekamp, J., Liebenberg, M., Buchsbaum, M., et al.: FactDAG: FormalizingData Interoperability in an Internet of Production. IEEE Internet Things J. 7(4) (2020)16 Mangel et al.14.Gleim, L., Pennekamp, J., Tirpitz, L., Welten, S., et al.: FactStack: Interoperable Data Man-agement and Preservation for the Web and Industry 4.0. 
In: BTW (2021)15.Gleim, L., Tirpitz, L., Pennekamp, J., Decker, S.: Expressing FactDAG Provenance withPROV-O. In: MEPDaW (2020)16. Gonczol, P., Katsikouli, P., Herskind, L., Dragoni, N.: Blockchain Implementations and UseCases for Supply Chains-A Survey. IEEE Access 8(2020)17.Holz, R., Braun, L., Kammenhuber, N., Carle, G.: The SSL Landscape – A Thorough Analysisof the X.509 PKI Using Active and Passive Measurements. In: ACM IMC (2011)18. i5: factcheck.js. https://git.rwth-aachen.de/i5/factdag/factcheck.js19. i5: ReShare Ontology v0.1. http://i5.pages.rwth-aachen.de/factdag/reshare-ontology/0.1/20.Kasten, A., Scherp, A., Schauß, P.: A Framework for Iterative Signing of Graph Data on theWeb. In: ESWC (2014)21. Kellogg, G., Champin, P.A., Longley, D.: JSON-LD 1.1. W3C Rec. (2020)22.LOTAR International: Legal & Business Motivation. https://lotar-international.org/why-lotar/legal-business-motivation/ (2020, accessed December 16, 2020)23.Moriarty, K., Kaliski, B., Jonsson, J., Rusch, A.: PKCS #1: RSA Cryptography SpecificationsVersion 2.2. IETF RFC 8017 (2016)24.Moyaux, T., Chaib-draa, B., D’Amours, S.: Information Sharing as a Coordination Mechanismfor Reducing the Bullwhip Effect in a Supply Chain. IEEE Trans. Syst., Man, Cybern. C 37(3)(2007)25. ̈Ozer, ̈O., Zheng, Y .: Establishing Trust and Trustworthiness for Supply Chain InformationSharing. Springer (2017)26. ̈Ozer, ̈O., Zheng, Y ., Ren, Y .: Trust, Trustworthiness, and Information Sharing in SupplyChains Bridging China and the United States. Manag. Sci. 60(10) (2014)27.Pennekamp, J., Bader, L., Matzutt, R., et al.: Private Multi-Hop Accountability for SupplyChains. In: BIoTCPS (ICC Workshops) (2020)28.Pennekamp, J., Dahlmanns, M., Gleim, L., Decker, S., et al.: Security Considerations forCollaborations in an Industrial IoT-based Lab of Labs. In: IEEE GCIoT (2019)29.Pennekamp, J., Glebke, R., Henze, M., et al.: Towards an Infrastructure Enabling the Internetof Production. In: IEEE ICPS (2019)30.Pennekamp, J., Henze, M., Schmidt, S., Niemietz, P., et al.: Dataflow Challenges in an Internetof Production: A Security & Privacy Perspective. In: ACM CPS-SPC (2019)31. Perlman, R.: An overview of PKI trust models. IEEE Netw. 13(6) (1999)32. Popov, S.: The Tangle. White paper (2016)33.Third, A., Domingue, J.: LinkChains: Exploring the space of decentralised trustworthy LinkedData. In: DeSemWeb (2017)34.Third, A., Tiddi, I., Bastianelli, E., Valentine, C., et al.: Towards the Temporal Streaming ofGraph Data on Distributed Ledgers. In: LD-DL (2017)35.Tsai, W.T., Wei, X., Chen, Y ., Paul, R., et al.: Data provenance in SOA: security, reliability,and integrity. Serv. Oriented Comput. Appl. 1(4) (2007)36.Tummarello, G., Morbidoni, C., Puliti, P., Piazza, F.: Signing Individual Fragments of an RDFGraph. In: WWW (2005)37.U.S. Office of the Federal Register: 14 Code of Federal Regulations, Part 21. https://www.ecfr.gov/cgi-bin/text-idx?node=pt14.1.21 (2020, accessed December 16, 2020)38.Xie, J., Yu, F.R., Huang, T., Xie, R., et al.: A survey on the scalability of blockchain systems.IEEE Netw. 33(5) (2019)39.Yoon, A.: Data Reuse and Users’ Trust Judgments: Toward Trusted Data Curation. Ph.D.thesis, University of North Carolina at Chapel Hill (2015)40.Zaveri, A., Rula, A., Maurino, A., Pietrobon, R., et al.: Quality assessment for Linked Data:A Survey. Semant. Web 7(1) (2016)41.Zheng, Z., Xie, S., Dai, H., Chen, X., et al.: An Overview of Blockchain Technology: Archi-tecture, Consensus, and Future Trends. 
In: IEEE BigData Congress (2017)<|im_end|>
<|im_start|>assistant
### Review Title
An interesting solution to a pressing problem in sharing of trustworthy (linked) data
### Review Text
The authors address the pressing issue of sharing (linked) data in a trustworthy and reliable manner by introducing the concept of Digital Transmission Contracts and demonstrate its practical applicability and scalability with a PoC called ReShare. The paper is well written, well structured, and easy to read. The motivation is well explained, and the evaluation seems to be appropriate for illustrating the concept's technical feasibility within a real-world setting.
### Review Rating
2: Accept
### Review Confidence
2: The reviewer is willing to defend the evaluation but not sufficiently familiar with the state of the art or the specific topic of the paper<|im_end|>
<|im_end|>
yuXQOhKRjBr
ICLR.cc/2021/Conference
2021
Towards Powerful Graph Neural Networks: Diversity Matters
["Xu Bingbing", "Huawei Shen", "Qi Cao", "Yuanhao Liu", "Keting Cen", "Xueqi Cheng"]
Graph neural networks (GNNs) offer us an effective framework for graph representation learning via layer-wise neighborhood aggregation. Their success is attributed to their expressive power at learning representations of nodes and graphs. To achieve GNNs with high expressive power, existing methods mainly resort to complex neighborhood aggregation functions, e.g., designing an injective aggregation function or using multiple aggregation functions. Consequently, their expressive power is limited by the capability of the aggregation function, which is tricky to determine in practice. To combat this problem, we propose a novel framework, namely diverse sampling, to improve the expressive power of GNNs. For a target node, diverse sampling offers it diverse neighborhoods, i.e., rooted sub-graphs, and the representation of the target node is finally obtained via aggregating the representations of diverse neighborhoods obtained using any GNN model. High expressive power is guaranteed by the diversity of different neighborhoods. We use classical GNNs (i.e., GCN and GAT) as base models to evaluate the effectiveness of the proposed framework. Experiments are conducted on a multi-class node classification task on three benchmark datasets and a multi-label node classification task on a dataset collected in this paper. Extensive experiments demonstrate that the proposed method consistently improves the performance of base GNN models. The proposed framework is applicable to any GNN model and thus is general for improving the expressive power of GNNs.
["GNNs", "Expressive power", "Diverse sampling", "Injective"]
ABSTRACTGraph neural networks (GNNs) offer us an effective framework for graph repre-sentation learning via layer-wise neighborhood aggregation. Their success is at-tributed to their expressive power at learning representation of nodes and graphs.To achieve GNNs with high expressive power, existing methods mainly resort tocomplex neighborhood aggregation functions, e.g., designing injective aggrega-tion function or using multiple aggregation functions. Consequently, their expres-sive power is limited by the capability of aggregation function, which is tricky todetermine in practice. To combat this problem, we propose a novel framework,namely diverse sampling , to improve the expressive power of GNNs. For a targetnode, diverse sampling offers it diverse neighborhoods, i.e., rooted sub-graphs,and the representation of target node is finally obtained via aggregating the rep-resentation of diverse neighborhoods obtained using anyGNN model. High ex-pressive power is guaranteed by the diversity of different neighborhoods. We useclassical GNNs (i.e., GCN and GAT) as base models to evaluate the effective-ness of the proposed framework. Experiments are conducted at multi-class nodeclassification task on three benchmark datasets and multi-label node classifica-tion task on a dataset collected in this paper. Extensive experiments demonstratethe proposed method consistently improve the performance of base GNN models.The proposed framework is applicable to any GNN models and thus is general forimproving the expressive power of GNNs.1 I NTRODUCTIONGraph neural networks (GNNs) have been shown to be effective at graph representation learningand many predictive tasks on graph-structured data, e.g., node classification and graph classifica-tion (Kipf & Welling, 2016; Xu et al., 2018a). GNNs follow a neighborhood aggregation scheme,where the representation of a node is obtained by recursively aggregating and transforming repre-sentation of its neighboring nodes (Gilmer et al., 2017). The success of GNNs is believed to beattributed to their high expressive power at learning representation of nodes and graphs (Xu et al.,2018a). Therefore, it is an important research problem to analyze and improve the expressive powerof existing GNN models and design new GNNs with high expressive power.Several recent works focus on the expressive power of GNNs. Xu et al. pointed out that the expres-sive power of GNNs depends on the neighborhood aggregation function (Xu et al., 2018a). Theydevelop a simple architecture, i.e., leveraging multi-layer perceptron (MLP) and a sum pooling as auniversal approximator defined on multi-set, to achieve injective neighborhood aggregation function.With injective aggregation function in each layer, the proposed graph isomorphism network (GIN)has the expressive power as high as the Weisfeiler-Lehman (WL) graph isomorphism test (Weisfeiler& Lehman, 1968). Similarly, Sato et al. implement a powerful GNN via consistent port numbering,i.e., mapping edges to port numbering and neighbors are ordered by the port numbering (Sato et al.,2019). However, port ordering of CPNGNNs is not unique, and not all orderings can distinguish thesame set of graphs (Garg et al., 2020). Principal neighborhood aggregation (PNA) defines multipleaggregation functions to improve the expressive power of GNNs (Corso et al., 2020). However, thenumber of required aggregation functions to discriminate multi-sets depends on the size of multi-set,which is prohibitive for real world networks with skewed degree distribution. 
In sum, existing methods focus on designing an injective, often complex, aggregation function in each layer to achieve GNNs with high expressive power. However, injective functions are difficult to obtain and tricky to determine in practice. Indeed, a layer-wise injective function is not always required; what we need is an injective function defined over rooted sub-graphs or graphs as a whole.
In this paper, we propose a novel framework, namely diverse sampling, to improve the expressive power of GNNs. For a target node, diverse sampling offers it diverse neighborhoods, i.e., rooted sub-graphs, and the representation of the target node is finally obtained via aggregating the representations of diverse neighborhoods obtained using any GNN model. High expressive power is guaranteed by the diversity of different neighborhoods. For convenience, we denote with DS-GNN the GNN implemented under the proposed diverse sampling framework.
[Figure: three panels contrasting a GIN layer (an MLP-based injective layer), a PNA layer (multiple aggregations, concatenation, and a fully connected layer), and DS-GNN (multiple sampled graphs fed into a shared GNN).] Figure 1: The motivation of DS-GNN: constructing multiple sampled graphs rather than complex layer-wise aggregation functions (the node with the red circle is the central node and "FC" represents a fully connected layer).
Fig. 1 illustrates the main idea of the proposed DS-GNN and compares it with two representative methods, i.e., GIN and PNA. Fig. 1 (a) depicts the injective layer implemented via MLP or multiple aggregation functions, aggregating first-order neighboring nodes to obtain the representation of the central node. Injective layers are stacked to achieve an overall injective function defined on rooted sub-graphs. On the contrary, DS-GNN does not follow the line of designing complicated aggregation functions in each layer. Instead, DS-GNN improves the expressive power of GNNs via obtaining diverse rooted sub-graphs for each node. Specifically, we sample nodes multiple times on the entire input graph based on diverse sampling, and obtain multiple sampled sub-graphs for each node. After diverse sampling, we leverage a shared GNN model to get the representation for the central node, including its high-order neighbors. In this way, each node is represented by a multi-set, consisting of the representations obtained from different sampled rooted sub-graphs. The final representation of the central node is obtained via aggregating the representations of diverse neighborhoods.
Finally, we use classical GNNs (i.e., GCN (Kipf & Welling, 2016) and GAT (Veličković et al., 2017)) as base models to evaluate the effectiveness of the proposed framework. Experiments are conducted on a node-based multi-class classification task on three benchmark datasets and a node-based multi-label classification task on a dataset collected in this paper. Extensive experiments demonstrate that the proposed method consistently improves the performance of base GNN models. The proposed framework is applicable to any GNN model and thus is general for improving the expressive power of GNNs.
The proposed framework is applicable to any GNN model and is thus a general way to improve the expressive power of GNNs.

2 NOTATIONS AND PRELIMINARIES

We first introduce the general framework of GNNs. $G = \{V, A\}$ denotes an undirected graph, where $V$ is the set of nodes with $|V| = n$, and $A$ is the adjacency matrix with $A_{i,j} = A_{j,i}$ defining the connection between node $i$ and node $j$. $X \in \mathbb{R}^{n \times p}$ denotes the feature matrix, and the $i$-th row of $X$ represents the attributes of the $i$-th node.

Modern GNNs follow a neighborhood aggregation scheme, which iteratively updates each node's representation by aggregating the representations of its neighboring nodes. Formally, the $k$-th layer of a GNN is

$$a^{(k)}_v = \text{AGGREGATE}^{(k)}\big(\{h^{(k-1)}_u : u \in N(v)\}\big), \qquad h^{(k)}_v = \text{COMBINE}^{(k)}\big(h^{(k-1)}_v, a^{(k)}_v\big), \quad (1)$$

where $N(v)$ represents the neighbors of node $v$, $h^{(k)}_v$ is the representation of node $v$ in the $k$-th layer, and $h^{(0)}_v = X_v$. Additionally, we introduce two representative base models, Graph Convolutional Networks (GCN) (Kipf & Welling, 2016) and Graph Attention Networks (GAT) (Veličković et al., 2017), in Appendix A.

3 DIVERSE SAMPLING BASED POWERFUL GRAPH NEURAL NETWORK

Injectivity of each node's mapping with respect to its neighborhood gives a GNN the most powerful ability for node-level tasks, i.e., the representations of two nodes are distinguishable whenever they have dissimilar attributes or different neighbors. We therefore aim to design a powerful GNN that reaches such injectivity and thus increases its expressive power. Specifically, we propose a novel framework, Diverse Sampling (DS-GNN), which increases expressive power by constructing diverse rooted sub-graphs for each node. In this section, we first give the overall framework of DS-GNN. We then introduce its important parts, including the diverse sampling strategy that ensures diversity across samplings, as well as model learning. We also theoretically analyze how the proposed DS-GNN improves expressive power when added to generic GNNs.

3.1 METHOD

[Figure 2: Architecture of DS-GNN: the input graph is sampled K times, each sampled graph is processed by a shared base GNN, and the resulting per-node representations are combined by a shared MLP followed by a sum before computing the loss.]

The framework of DS-GNN is illustrated in Fig. 2. For the input graph, instead of running a GNN on the entire graph, we first perform node sampling K times and obtain K sampled graphs. During sampling, we calculate a sampling probability for each node and randomly retain the node based on that probability. If a node is not sampled, the node and all edges connected to it are removed from the graph. The updated adjacency matrix and feature matrix of each sampling are fed into the base GNN model, and the GNN is shared over all samplings. We then obtain one representation for each sampled node per sampled graph; thus, if a node is sampled more than once, we obtain multiple representations for it.

To get the final representation of each node, we need to integrate these representations. To preserve injectivity when integrating them, we adopt a multi-layer perceptron (MLP) followed by sum aggregation, which yields an injective multi-set aggregation function. Once the final representation of each node is obtained, we calculate the loss function and optimize the model.
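To make this pipeline concrete, the following is a minimal NumPy sketch of the DS-GNN forward pass, written by us rather than taken from the paper: the base GNN is replaced by a toy one-layer mean-aggregation model standing in for GCN/GAT, the shared two-layer MLP is reduced to a single ReLU layer, and the node masks are assumed to be pre-drawn; all function and variable names are ours.

```python
import numpy as np

def sample_graph(A, X, keep_mask):
    """Node sampling: remove a node's features and all edges incident to it."""
    A_s = A * np.outer(keep_mask, keep_mask)   # drop edges touching removed nodes
    X_s = X * keep_mask[:, None]               # drop features of removed nodes
    return A_s, X_s

def base_gnn(A, X, W):
    """Toy one-layer base GNN: mean over neighbor features, then linear map + ReLU."""
    deg = A.sum(axis=1, keepdims=True)
    H = (A @ X) / np.maximum(deg, 1.0)
    return np.maximum(H @ W, 0.0)

def ds_gnn_forward(A, X, masks, W_gnn, W_mlp):
    """Run the shared base GNN on each of the K sampled graphs, push every
    per-sampling representation through a shared MLP, and sum over samplings."""
    K, n = masks.shape
    out = np.zeros((n, W_mlp.shape[1]))
    for k in range(K):
        A_s, X_s = sample_graph(A, X, masks[k])
        H = base_gnn(A_s, X_s, W_gnn)
        H = H * masks[k][:, None]              # nodes absent from this sampling contribute zeros
        out += np.maximum(H @ W_mlp, 0.0)      # shared MLP (one layer here), summed over k
    return out

# Tiny usage example on a random undirected graph.
rng = np.random.default_rng(0)
n, p, d, K = 6, 4, 8, 3
M = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(M, 1) + np.triu(M, 1).T
X = rng.normal(size=(n, p))
masks = (rng.random((K, n)) < 0.8).astype(float)   # pre-drawn node masks, one per sampling
O = ds_gnn_forward(A, X, masks, rng.normal(size=(p, d)), rng.normal(size=(d, d)))
print(O.shape)  # (6, 8)
```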
Sampling Strategy

Intuitively, with a large number of samplings, the sampled graphs are expected to retain the information of the entire graph. Thus, previous methods, e.g., DropEdge (Rong et al., 2019), which re-samples in every training epoch, do not need to carefully design the sampling strategy, and they keep all edges during validation and testing. As a consequence, they fail to produce diverse sampled graphs at test time. Different from previous methods, we need to hold the K sampled graphs at test time and keep training and testing consistent. To achieve this, we sample the entire graph only K times and reuse these K sampled graphs in all training epochs, as well as in validation and testing. To guarantee the performance of the proposed DS-GNN even when a small K is chosen, we carefully design the diverse sampling strategy.

To ensure the reuse of sampling results, we propose node sampling to obtain the K sampled graphs. Within one sampling, the neighbors of each node come from the same sampled graph. For the node sampling probabilities, the most intuitive choice is to assign the same initial probability $p_{init}$ to every node. However, this may require a large number of samplings K. To speed this up and obtain an effective sampling plan, we propose three intuitive guidelines for the sampling strategy: 1) to ensure the coverage of the samples, each node should appear in at least one sampled graph; 2) to achieve diversity across samplings, the sampling probability of a node should decrease when it has already been sampled many times; 3) to retain important nodes of the original network, taking node degree as the importance indicator, the sampling probability of a node should increase with its degree in the original network. At the same time, taking previous sampling results into account, we discount the sampling probability of a node when its degree in the previously sampled graphs is already large. That is to say, the sampling probability of a node increases with its degree in the original network and decreases with its degree in the previously sampled graphs. To realize these three guidelines, we relate the sampling probability of each node to the historical sampling results, as detailed next.

Let K denote the total number of samplings, $p_{v,i}$ the probability of node $v$ in the $i$-th sampling, and $H_{v,i}$ the number of times node $v$ has been sampled before the $i$-th sampling. To satisfy guideline 1), the probability of a node that has never been sampled in the previous K-1 samplings must equal 1 at the K-th sampling; in other words, $p_{v,K} = 1$ when $H_{v,K} = 0$. Additionally, to satisfy the diversity guideline 2), the sampling probability $p_{v,i}$ should decrease as the historical count $H_{v,i}$ increases. Together, the sampling probability of node $v$ for the $i$-th sampling is defined as

$$p_{v,i} = p_{init} + (1 - p_{init})\,\frac{i - H_{v,i}}{K}. \quad (2)$$

To satisfy guideline 3), a further adjustment is designed. Specifically, let $D^{sample}_{v,j}$ be the degree of node $v$ in the $j$-th sampled graph and $D_v$ its degree in the original graph. To make the sampling probability increase with $D_v$ and decrease with $D^{sample}_{v,j}$, as stated in guideline 3), we define the coefficient $D^{gap}_{v,i}$ as

$$D^{gap}_{v,i} = \frac{\sum_{j=1}^{i-1}\big(D_v - D^{sample}_{v,j}\big)}{\sum_{j=1}^{i-1} D_v}. \quad (3)$$

To realize all three guidelines together, we integrate Eq. 2 and Eq. 3 and define the sampling probability of node $v$ in the $i$-th sampling as

$$p_{v,i} = p_{init} + (1 - p_{init})\,\frac{i - H_{v,i}}{K}\, D^{gap}_{v,i}. \quad (4)$$
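As a concrete illustration, here is a minimal NumPy sketch of one run of diverse sampling following Eqs. (2)-(4). The names are ours, and two details not pinned down by the text are our assumptions: the empty sums in Eq. (3) before the first sampling are taken to mean $D^{gap}_{v,1} = 1$ (so the first round reduces to Eq. (2)), and guideline 1 is enforced with an explicit override in the last round; isolated nodes are guarded against division by zero.

```python
import numpy as np

def diverse_sampling(A, K, p_init, rng):
    """Draw K node-keep masks following the diverse sampling probabilities (Eqs. 2-4)."""
    n = A.shape[0]
    D = A.sum(axis=1)                        # degrees in the original graph
    H = np.zeros(n)                          # times each node was sampled so far
    deg_gap_sum = np.zeros(n)                # running sum of (D_v - D_v^sample)
    masks = []
    for i in range(1, K + 1):
        # Eq. (3); empty sums before the first round -> D_gap = 1 (our assumption).
        d_gap = deg_gap_sum / ((i - 1) * np.maximum(D, 1)) if i > 1 else np.ones(n)
        # Eq. (4); with d_gap = 1 this is exactly Eq. (2).
        p = p_init + (1.0 - p_init) * (i - H) / K * d_gap
        # Guideline 1 (our explicit enforcement): never-sampled nodes are kept in round K.
        if i == K:
            p = np.where(H == 0, 1.0, p)
        mask = (rng.random(n) < np.clip(p, 0.0, 1.0)).astype(float)
        A_s = A * np.outer(mask, mask)       # node sampling: drop a node with its edges
        H += mask
        deg_gap_sum += D - A_s.sum(axis=1)   # accumulate degree gaps for Eq. (3)
        masks.append(mask)
    return np.stack(masks)                   # shape (K, n)

rng = np.random.default_rng(0)
M = (rng.random((8, 8)) < 0.4).astype(float)
A = np.triu(M, 1) + np.triu(M, 1).T
print(diverse_sampling(A, K=5, p_init=0.5, rng=rng))
```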
Note that our sampling strategy is different from previous methods, i.e., dropping features and dropping edges. Dropout (Srivastava et al., 2014) was proposed to prevent over-fitting by randomly dropping features; dropping edges can be regarded as a generalization of Dropout from features to edges. However, dropping edges may cause two nodes that are identical, with identical neighbors, to obtain different representations, since edge sampling may leave these two nodes with different neighbor sets. In contrast, our node sampling strategy ensures that nodes with the same input features and the same neighbors still receive the same representation. We give a detailed introduction to some previous methods, including edge dropping, in Appendix B.

Model Learning

In the following, we describe in detail how DS-GNN can be applied to any base GNN to obtain a more powerful model. Specifically, we first perform K samplings on the input graph $A$ via the diverse sampling introduced above, obtaining the sampled graphs $[A_1, A_2, \ldots, A_K]$ in a pre-processing step. We then apply a base GNN model to obtain the representations of the nodes in each sampled graph; each layer is formulated as

$$h^{(k,l)}_v = \text{COMBINE}^{(l)}\big(h^{(k,l-1)}_v, \text{AGGREGATE}^{(l)}(\{h^{(k,l-1)}_u : u \in N_k(v)\})\big), \quad (5)$$

where $h^{(k,l)}_v$ is the representation of node $v$ at the $l$-th layer in the $k$-th sampled graph, and $N_k(v)$ denotes the neighbors of node $v$ in the $k$-th sampled graph $A_k$. For each node, we obtain K representations $\{h^{(1,L)}_v, h^{(2,L)}_v, \ldots, h^{(K,L)}_v\}$ from the $L$-th layer of the base GNN on the K sampled graphs. Note that if a node is not sampled in one sampling, we fill its representation with 0 to ensure that each node has K representations. To integrate them into one representation, we feed them into an MLP with two fully connected layers and then sum the transformed representations; the MLP is shared over all samplings. Let $O_v$ denote the output representation of node $v$; it is calculated as

$$O_v = \sum_{i=1}^{K} \text{MLP}\big(h^{(i,L)}_v\big). \quad (6)$$

To verify the ability of the DS-GNN model, we adopt node-level tasks, i.e., node-based multi-class classification and node-based multi-label classification, as the target tasks. Note that in multi-label classification, each node has multiple labels. Let $V_{Label}$ denote the set of labeled nodes and $m$ the number of labels; $Y_{vc}$ equals 1 if node $v$ has label $c$, and 0 otherwise. For node-based multi-class classification, the output probability vector $\hat{Z}_v$ is calculated by applying the Softmax function to the output representation $O_v$:

$$\hat{Z}_v = \text{Softmax}(O_v). \quad (7)$$

For node-based multi-label classification, $\hat{Z}_v$ is calculated with the Sigmoid function instead:

$$\hat{Z}_v = \text{Sigmoid}(O_v). \quad (8)$$

For both tasks, the loss function is defined as the cross-entropy over all labeled nodes:

$$L = -\sum_{v \in V_{Label}} \sum_{c=1}^{m} Y_{vc} \ln \hat{Z}_{vc}. \quad (9)$$
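For completeness, a small NumPy sketch of the two output heads and the loss of Eq. (9) follows. The names are ours; the sketch implements Eq. (9) exactly as written, applied to either the softmax (Eq. 7) or sigmoid (Eq. 8) probabilities.

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max(axis=1, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)

def sigmoid(o):
    return 1.0 / (1.0 + np.exp(-o))

def ds_gnn_loss(O, Y, labeled, multi_label=False):
    """Eq. (7)/(8) output head plus the cross-entropy of Eq. (9) over labeled nodes."""
    Z = sigmoid(O) if multi_label else softmax(O)
    Z = np.clip(Z, 1e-12, 1.0)                     # guard the logarithm
    return -np.sum(Y[labeled] * np.log(Z[labeled]))

rng = np.random.default_rng(0)
O = rng.normal(size=(6, 4))                        # output representations, m = 4 labels
Y = np.eye(4)[rng.integers(0, 4, size=6)]          # one-hot multi-class targets
print(ds_gnn_loss(O, Y, labeled=np.arange(5)))
```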
3.2 TOWARDS HIGH EXPRESSIVE POWER

Recall that in this work we aim to achieve injectivity on nodes to increase the expressive power of GNNs, i.e., for any two nodes with different attributes or different neighbors, the model outputs different representations. In this section, we theoretically analyze whether the proposed DS-GNN achieves this goal. Note that the MLP applied to the K representations in Fig. 2, followed by the sum operator, is injective when integrating multi-sets (Hornik et al., 1989; Xu et al., 2018a). As a result, to prove the injectivity of the proposed DS-GNN on nodes, we only need to prove that two nodes with different neighborhoods have different multi-sets of K representations.

For simplicity, we denote the representation of node $u$ on the $i$-th sampled graph under an $L$-layer base GNN, i.e., $h^{(i,L)}_u$, as $u_i$. Formally, for two nodes $u$ and $v$ with different attributes or different neighbors in the original graph, the multi-sets of their representations on the K sampled graphs under an $L$-layer base GNN are

$$\text{Multiset}(u) = \{u_1, u_2, \ldots, u_K\}, \qquad \text{Multiset}(v) = \{v_1, v_2, \ldots, v_K\}. \quad (10)$$

We provide the following theorem, proved in Appendix C, to demonstrate that we obtain different multi-sets for these two nodes with high probability.

Theorem. As the number of samplings increases, the probability that the multi-sets of two nodes with different attributes or different neighbors are different approaches 1.

On the basis of this high probability of obtaining different multi-sets and the provably injective multi-set function, i.e., an MLP followed by a sum, injectivity among nodes can be achieved.

Note that for two nodes that are identical, i.e., with the same attributes and the same neighbors, DS-GNN also ensures that they receive the same representation. This is because we adopt node sampling instead of edge sampling: under node sampling, two identical nodes lose a shared neighbor at the same time or keep it at the same time, so the representations obtained from the base GNN model stay the same.
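As an illustration of the theorem (not a proof), the following toy simulation, entirely our own, considers two nodes that share all neighbors except one extra neighbor w of the first node. Their per-sampling neighborhoods, and hence their representation multi-sets under an injective base GNN, can differ exactly in the rounds where w is kept, so we estimate the probability that w is kept at least once, which approaches 1 as K grows.

```python
import numpy as np

rng = np.random.default_rng(0)
p_keep = 0.5                                          # illustrative per-round keep probability
for K in (1, 2, 5, 10, 20):
    kept = rng.random((100_000, K)) < p_keep          # keep/drop decisions for w over K rounds
    print(K, kept.any(axis=1).mean())                 # ~ 1 - (1 - p_keep) ** K
```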
4 EXPERIMENTS

We use GCN and GAT as our base models and implement them under the proposed DS-GNN framework, referred to as DS-GCN and DS-GAT, respectively. The effectiveness of DS-GCN and DS-GAT is evaluated on three benchmark datasets via multi-class classification. A detailed analysis of the number of samplings and of the sampling strategy is also provided. In addition, to enrich the types of node-level tasks, we also offer a multi-label classification task on a newly collected dataset from the DouBan website (https://movie.douban.com/), namely DBMovie.

4.1 EXPERIMENTAL SETTINGS

We use TensorFlow to implement the proposed model and take Adam with an initial learning rate of 0.01 as the optimizer. For the three benchmark datasets, we set a weight decay of 0.0005; for DBMovie, we do not use weight decay. Note that for the $i$-th sampling, we sample 10 times and take the average of these 10 sampled graphs as the result of the $i$-th sampling, preventing the instability of a single sampling. All GNN models have two layers and use ReLU as the activation function of hidden layers. We add a residual connection (He et al., 2016) between the second GNN layer and the output layer. We run 1000 epochs and choose the model that performs best on validation.

4.2 BASELINES

For the three benchmark datasets, GNNs have achieved large improvements over traditional methods. We therefore choose the representative GNN models GCN (Kipf & Welling, 2016) and GAT (Veličković et al., 2017) as the base models and apply DS-GNN to both. Additionally, we compare against state-of-the-art GNNs, including GraphSAGE (Hamilton et al., 2017), GIN (Xu et al., 2018a) and DropEdge (Rong et al., 2019). For the newly collected DBMovie dataset, we first consider traditional node classification methods as baselines, including a Multi-Layer Perceptron (MLP), which uses only node features, and graph embeddings (DeepWalk) (Perozzi et al., 2014), which use only graph structure. Since GNNs have proved effective at graph-based learning, we also compare against GCN and GAT.

Table 1: The statistics of datasets.

Dataset    Nodes    Edges    Classes  Features  Train/Validation/Test  Node-level task
Cora        2,708    5,429        7     1,433       1,208/500/1,000    Multi-class
CiteSeer    3,327    4,732        6     3,703       1,812/500/1,000    Multi-class
PubMed     19,717   44,338        3       500      18,217/500/1,000    Multi-class
DBMovie    21,659  221,138       28     3,000      20,159/500/1,000    Multi-label

4.3 NODE-BASED MULTI-CLASS CLASSIFICATION

To evaluate the proposed method on node-based multi-class classification, we conduct experiments on the three benchmark datasets Cora, CiteSeer and PubMed. The first three rows of Table 1 give an overview of these multi-class datasets. To better verify the ability of each model and eliminate the effect of other factors such as data insufficiency, we follow the data split of DropEdge (Rong et al., 2019).

Performance on Node-based Multi-class Classification

Table 2: Results of node-based multi-class classification.

Method         Cora    CiteSeer  PubMed
GIN            85.7%    76.4%    89.7%
GraphSAGE      87.8%    78.4%    90.1%
DropEdge-GCN   86.5%    78.7%    91.2%
GCN            86.1%    75.9%    90.2%
DS-GCN         88.0%    79.9%    90.5%
DS-GAT         88.2%    80.0%    91.0%
GAT            87.4%    77.8%    87.9%

[Figure 3: The impact of the number of samplings K on accuracy (Cora).]

Experimental results are reported in Table 2; the corresponding standard deviations are provided in Appendix D. Following previous research, we report the mean classification accuracy over test-set nodes for quantitative evaluation. The DS-GNN methods all improve over their base models, e.g., on CiteSeer, DS-GCN (79.9%) achieves a large improvement over GCN (75.9%). Furthermore, applying the proposed method to the base GNN models yields the best performance, outperforming the state-of-the-art models. Compared with DropEdge, which also takes GCN as the base model, our diverse sampling (DS) mechanism achieves better or comparable results.

Analysis of the Number of Samplings K

To analyze the impact of the number of samplings K, we show the accuracy on Cora for different K in Fig. 3. The accuracy increases as K becomes larger at the beginning, owing to the increasing probability of achieving injectivity. As K continues to increase, the injectivity probability stabilizes close to 1, and the accuracy stabilizes accordingly. The results show that good performance can be achieved with a small K, e.g., K = 5 in Fig. 3, which demonstrates the practicality of our method.

Analysis of the Sampling Strategy

[Figure 4: Diverse sampling vs. random sampling on Cora and CiteSeer for varying initial sampling probability p_init.]

To evaluate the effectiveness of the proposed diverse sampling strategy, we compare it with a random sampling strategy. Fig. 4 shows the accuracy on Cora and CiteSeer for different initial node sampling probabilities $p_{init}$ with the number of samplings K = 5. The results demonstrate that: 1) the performance of random sampling is significantly affected by $p_{init}$; the model performs poorly with a relatively small $p_{init}$, while its performance improves markedly as $p_{init}$ increases. Thus, random sampling requires a relatively large $p_{init}$ to maintain the sampling quality and model performance. 2) The performance of diverse sampling is relatively stable, obtaining comparable performance even with a small $p_{init}$.
This is because diverse sampling follows the three guidelines introduced earlier, which demonstrates the effectiveness of the proposed strategy. Furthermore, under diverse sampling, we observe that the best $p_{init}$ differs across datasets. When better results are achieved with a smaller $p_{init}$, diverse sampling improves over random sampling. We also observe that Cora performs well with a larger $p_{init}$, while CiteSeer achieves better performance when $p_{init}$ is smaller, i.e., Cora tends to keep more edges in the sampled graphs. Similarly, PubMed also performs better when $p_{init}$ is smaller. The reason may be related to the label smoothness $\lambda$ proposed in CS-GNN (Hou et al., 2019), which measures the quality of the surrounding information: a larger $\lambda$ implies that nodes with different labels are connected in the graph. In CS-GNN, Cora has a smaller $\lambda = 0.19$, while CiteSeer has a larger $\lambda = 0.26$ and PubMed $\lambda = 0.5$. These values indicate that the surrounding information is of higher quality in Cora, which explains why Cora prefers to keep more edges.

4.4 NODE-BASED MULTI-LABEL CLASSIFICATION

To enrich the types of node-level tasks, we collect a new dataset from the DouBan website and offer a multi-label classification task. Each sample in this dataset has descriptions (features), genres (labels) and similar movies (edges), as illustrated in Fig. 5. The similar-movie edges are provided directly by the DouBan website based on user co-preference. The dataset has 28 labels, and each movie may have more than one label. We define the task as tagging each movie with its labels. The last row of Table 1 gives an overview of the newly collected DBMovie dataset.

[Figure 5: One sample of DBMovie.]

Table 3: Results of node-based multi-label classification.

Method     MAP     F1@3    NDCG@3
MLP        54.9%   39.4%   50.6%
DeepWalk   61.6%   44.7%   59.1%
GCN        83.2%   60.2%   82.1%
DS-GCN     83.6%   60.5%   82.6%
GAT        83.3%   60.5%   82.6%
DS-GAT     84.0%   61.0%   83.0%

Performance on Node-based Multi-label Classification

We now validate the effectiveness of DS-GCN and DS-GAT. For node-based multi-label classification, we use widely adopted ranking metrics, including Mean Average Precision (MAP), F1 and Normalized Discounted Cumulative Gain (NDCG). These metrics encourage the correct labels to be ranked ahead of irrelevant labels, and larger values indicate better performance. Experimental results on DBMovie are reported in Table 3. In the multi-label classification task, the GNN models, i.e., both GCN and GAT, again perform much better than the traditional methods, including MLP (using only attributes) and DeepWalk (using only structure). GAT performs better than GCN, owing to its attention mechanism for learning edge weights. Under all evaluation metrics, the proposed method achieves a consistent improvement over GCN and GAT, showing the increased expressive power of the model.

5 CONCLUSION

We proposed DS-GNN to improve the expressive power of graph neural networks. Unlike previous methods that aim to implement injectivity by designing complicated layer-wise aggregation functions, we focus on constructing diverse rooted sub-graphs for each node. To enhance the diversity of the rooted sub-graphs, we design the diverse sampling strategy. As the number of samplings increases, injectivity is achieved with higher probability. Extensive experiments on node-based multi-class and multi-label classification show that our method achieves improvements over the baselines.
In the future, we plan to extend our method to graph classification scenarios.
eB32BJqEXl
This paper introduces a graph sampling method to increase the chance that a GNN can differentiate node representations. The idea makes sense but is straightforward; the novelty is limited.
4: Ok but not good enough - rejection
In this paper, the authors propose to sample the nodes of a given graph multiple times to form a set of K sub-graphs. GNNs are then applied to each sampled graph to learn node representations, and for each node all K representations are combined for the downstream tasks. The idea behind multiple sampling is to increase the chance that nodes with different neighborhoods become more distinguishable across the set of sampled graphs than they are in the single original graph; in contrast, nodes with identical neighborhoods remain identical over all sampled graphs. This mechanism helps discriminate node representations better. The key technical contribution of the paper is the proposed sampling strategy, which, through several proposed guidelines, is designed to reduce the required number of samples K. The strategy is mostly an empirical design. It is good to see some analysis of how the number of samplings affects the difference between node representations, but it seems very intuitive. As such, the technical contribution of this paper is limited. The idea of using multiple samples to improve performance also seems a straightforward way to improve the coverage of nodes' local structures. In the experiments, it would be interesting to compare with GraphSAGE while sampling nodes' neighborhoods multiple times; perhaps with a small number of sampling times GraphSAGE's performance can be further improved, especially since its performance in Table 2 is close to the proposed method's. In Fig. 3, the decrease in accuracy with increased K remains to be analyzed; intuitively, the performance should be stable as K increases. From Fig. 4, the improvement brought by the proposed sampling strategy is small compared with random sampling, which calls into question the importance of the proposed strategy. In addition, the results do not show how the proposed strategy maintains superior performance with a smaller K compared with random sampling. In Table 3, it is not clear why the three GNN methods from Table 2 (GIN, GraphSAGE and DropEdge-GCN) are excluded; it would be better to make all comparisons consistent.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
HxWEL2zQ3AK
ML_Reproducibility_Challenge/2021/Fall
2021
Comparing Rewinding and Fine-tuning in Neural Network Pruning
["Szymon Jakub Mikler"]
Scope of Reproducibility
We are reproducing Comparing Rewinding and Fine-tuning in Neural Networks, by Renda et al. In this work the authors compare three different approaches to retraining neural networks after pruning: 1) fine-tuning, 2) rewinding weights as in Frankle et al. and 3) a new, original method involving learning rate rewinding, building upon Frankle et al. We reproduce the results of all three approaches, but we focus on verifying their approach, learning rate rewinding, since it is newly proposed and is described as a universal alternative to the other methods. We used CIFAR10 for most reproductions, along with additional experiments on the larger CIFAR100, which extends the results originally provided by the authors. We also extended the list of tested network architectures to include Wide ResNets. The new experiments led us to discover the limitations of learning rate rewinding, which can worsen pruning results on large architectures.

Methodology
We implemented the code ourselves in Python with TensorFlow 2, basing our implementation on the paper alone, without consulting the source code provided by the authors. We ran two sets of experiments. In the reproduction set, we strove to exactly reproduce the experimental conditions of Renda et al. We also conducted additional experiments using other network architectures, effectively showing results previously unreported by the authors. We did not cover all originally reported experiments; we covered as many as needed to assess the validity of the claims. We used Google Cloud resources and a local machine with 2x RTX 3080 GPUs.

Results
We were able to reproduce the exact results reported by the authors in all originally reported scenarios. However, extended results on larger Wide Residual Networks demonstrated the limitations of the newly proposed learning rate rewinding: we observed a previously unreported accuracy degradation for low sparsity ranges. Nevertheless, the general conclusion of the paper still holds and was indeed reproduced.

What was easy
Re-implementation of the pruning and retraining methods was technically easy, as it is based on a popular and simple pruning criterion: magnitude pruning. The original work was descriptive enough to reproduce the results satisfactorily without consulting the code.

What was difficult
Not every design choice was mentioned in the paper, so reproducing the exact results was rather difficult and required a meticulous choice of hyper-parameters. Experiments on the ImageNet and WMT16 datasets were time-consuming and required extensive resources, so we did not verify them.

Communication with original authors
We did not consult the original authors, as there was no need to.
["pruning", "lottery ticket hypothesis", "finetuning"]
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Reproducibility Challenge 2021
Anonymous Author(s), Affiliation, email

Reproduction Summary

Scope of Reproducibility

We are reproducing Comparing Rewinding and Fine-tuning in Neural Network Pruning, by Renda et al. [2020]. In this work the authors compare three different approaches to retraining neural networks after pruning: 1) fine-tuning, 2) rewinding weights as in Frankle and Carbin [2019] and 3) a new, original method involving learning rate rewinding, building upon Frankle and Carbin [2019]. We reproduce the results of all three approaches, but we focus on verifying their approach, learning rate rewinding, since it is newly proposed and is described as a universal alternative to other methods. We used CIFAR10 for most reproductions along with additional experiments on the larger CIFAR100, which extends the results originally provided by the authors. We have also extended the list of tested network architectures to include Wide ResNets (Zagoruyko and Komodakis [2016]). The new experiments led us to discover the limitations of learning rate rewinding, which can worsen pruning results on large architectures.

Methodology

We implemented the code ourselves in Python with TensorFlow 2, basing our implementation on the paper alone and without consulting the source code provided by the authors. We ran two sets of experiments. In the reproduction set, we have striven to exactly reproduce the experimental conditions of Renda et al. [2020]. We have also conducted additional experiments, which use other network architectures, effectively showing results previously unreported by the authors. We did not cover all originally reported experiments – we covered as many as needed to state the validity of the claims. We used Google Cloud resources and a local machine with 2x RTX 3080 GPUs.

Results

We were able to reproduce the exact results reported by the authors in all originally reported scenarios. However, extended results on larger Wide Residual Networks have demonstrated the limitations of the newly proposed learning rate rewinding – we observed a previously unreported accuracy degradation for low sparsity ranges. Nevertheless, the general conclusion of the paper still holds and was indeed reproduced.

What was easy

Re-implementation of the pruning and retraining methods was technically easy, as it is based on a popular and simple pruning criterion – magnitude pruning. The original work was descriptive enough to reproduce the results satisfactorily without consulting the code.

What was difficult

Not every design choice was mentioned in the paper, thus reproducing the exact results was rather difficult and required a meticulous choice of hyper-parameters. Experiments on ImageNet and WMT16 datasets were time consuming and required extensive resources, thus we did not verify them.

Communication with original authors

We did not consult the original authors, as there was no need to.

1 Introduction

Neural network pruning is an algorithm leading to a decrease in the size of a network, usually by removing its connections or setting their weights to 0. This procedure generally allows obtaining smaller and more efficient models. It often turns out that these smaller networks are as accurate as their bigger counterparts, or the accuracy loss is negligible.
A common way to obtain such a high-quality sparse network is to prune it after the training has finished (Liu et al. [2019], Frankle and Carbin [2019]). Networks that have already converged are easier to prune than randomly initialized networks (Liu et al. [2019], Lee et al. [2018]). After pruning, more training is usually required to restore the lost accuracy. Although there are a few ways to retrain the network, finetuning might be the easiest and the one most often chosen by researchers and practitioners (Liu et al. [2019], Renda et al. [2020]).

The Lottery Ticket Hypothesis from Frankle and Carbin [2019] states that for every dense neural network, there exists a smaller subnetwork that matches or exceeds the results of the original. The algorithm originally used to obtain examples of such networks is iterative magnitude pruning with weight rewinding, and it is one of the methods of retraining after pruning compared in this work.

2 Scope of reproducibility

Renda et al. [2020] formulated the following claims:

Claim 1: The widely used method of training after pruning, finetuning, yields worse results than rewinding-based methods (supported by figures 1, 2, 3, 4 and table 5).

Claim 2: The newly introduced learning rate rewinding works as well as or better than weight rewinding in all scenarios (supported by figures 1, 2, 3, 4 and table 5, but not supported by figure 5).

Claim 3: Iterative pruning with learning rate rewinding matches state-of-the-art pruning methods (supported by figures 1, 2, 3, 4 and table 5, but not supported by figure 5).

3 Methodology

We aimed to compare three retraining approaches: 1) finetuning, 2) weight rewinding and 3) learning rate rewinding. Our general strategy, repeated across all experiments, was as follows:

1. train a dense network to convergence,
2. prune the network using the magnitude criterion: remove the weights with the smallest L1 norm,
3. retrain the network using the selected retraining approach.

In the case of structured pruning, in step 2 we removed structures (rows or convolutional channels) with the smallest average L1 norm (Crowley et al. [2018]), rather than removing separate connections (a code sketch of both criteria is given at the end of this section).

In the case of iterative pruning, the network in step 1 was not randomly initialized; instead, the weights of the model from the previous iterative pruning step were loaded as the starting point.

We trained all our networks using Stochastic Gradient Descent with Nesterov momentum. The learning rate was decreased in a piecewise manner during the training, but the momentum coefficient was constant and equal to 0.9.
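To make step 2 concrete, below is a minimal NumPy sketch of the magnitude criterion, covering both the unstructured case and the structured (per-channel) variant of Crowley et al. [2018]. This is our own illustration, not code from the repository linked in section 3.3, and the function names are ours.

import numpy as np

def unstructured_mask(weights, sparsity):
    # Keep the (1 - sparsity) fraction of weights with the largest |w|;
    # pruned positions become False and are later forced to zero.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.abs(weights) > threshold

def structured_mask(kernel, sparsity):
    # Remove whole output channels with the smallest average |w|.
    # `kernel` has shape (height, width, in_channels, out_channels).
    scores = np.abs(kernel).mean(axis=(0, 1, 2))  # one score per channel
    k = int(sparsity * scores.size)
    if k == 0:
        return np.ones(kernel.shape, dtype=bool)
    keep = scores > np.sort(scores)[k - 1]
    return np.broadcast_to(keep, kernel.shape)

During retraining the mask stays fixed: after every optimizer step the surviving weights are updated and the pruned ones are reset to zero (e.g. weights *= mask).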
3.1 Model descriptions

In this report, we focus on an image recognition task using convolutional neural networks (LeCun [1988]). For most of our experiments, we chose to use identical architectures to Renda et al. [2020] to better validate their claims and double-check their results, rather than provide additional ones. Therefore, most of the used networks are residual networks, which were originally proposed in He et al. [2016a]. Additionally, to verify the general usefulness of the pruning and retraining methods proposed in Renda et al. [2020], we extend the list of tested network architectures to the much larger wide residual networks from Zagoruyko and Komodakis [2016].

3.1.1 Residual networks (ResNet)

Just as Renda et al. [2020], we chose to use the original version of ResNet as described in He et al. [2016a] rather than the more widely used, improved version (with preactivated blocks) from He et al. [2016b]. We created the models ourselves, using TensorFlow (Abadi et al. [2015]) and Keras. We strove to replicate the exact architectures used by Renda et al. [2020] and He et al. [2016a] and train them from scratch.

Model | Trainable parameters | Kernel parameters | CIFAR-10 | CIFAR-100
ResNet-20 | 272 282 | 270 896 | 92.46% | –
ResNet-56 | 855 578 | 851 504 | 93.71% | 71.90%
ResNet-110 | 1 730 522 | 1 722 416 | 94.29% | 72.21%
Table 1: ResNets architecture description, including baseline accuracy across datasets.

Hyper-parameters

The learning rate started at 0.1 and was multiplied by 0.1 twice, after 36 000 and 54 000 iterations. One training cycle was 72 000 iterations in total. For all batch normalization layers, we set the batch norm decay to 0.997, following Renda et al. [2020]; this value was also used in the original TensorFlow implementation [1]. We initialize the network's weights with what is known as He uniform initialization from He et al. [2015]. We regularize ResNets, during both training and finetuning, using an L2 penalty with coefficient 10^{-4}. In other words, the loss function (from which we calculate the gradients) looks as follows:

FinalLoss = CategoricalCrossentropy(GroundTruth, Prediction) + 10^{-4} \sum_{w_i \in W} w_i^2

3.1.2 Wide Residual Networks (Wide ResNet, WRN)

WRN networks were introduced in Zagoruyko and Komodakis [2016]. They are networks created by simply increasing the number of filters in preactivated ResNet networks (He et al. [2016b]).

Model | Trainable parameters | Kernel parameters | CIFAR-10
WRN-16-8 | 10 961 370 | 10 954 160 | 95.72%
Table 2: Wide ResNet architecture description.

Hyper-parameters

As Wide ResNets are newer and much larger than ResNets, the hyper-parameters are slightly different. To choose them, we follow Zagoruyko and Komodakis [2016]. The learning rate starts at 0.1 and is multiplied by 0.2 thrice: after 32 000, 48 000 and 64 000 iterations. Training lasts for 80 000 iterations. For all batch normalization layers, we use the hyper-parameters from the newer TensorFlow implementation [2], with batch norm decay set to 0.9. Following Zagoruyko and Komodakis [2016], we use a larger L2 penalty for this network: 2 * 10^{-4}. Finally, the loss function is as follows:

FinalLoss = CategoricalCrossentropy(GroundTruth, Prediction) + 2 * 10^{-4} \sum_{w_i \in W} w_i^2

3.2 Datasets

CIFAR-10 and CIFAR-100 are image classification datasets introduced in Krizhevsky et al. Following Renda et al. [2020], we use all (50 000) training examples to train the model.

[1] https://github.com/tensorflow/models/blob/r1.13.0/official/resnet/resnet_model.py
[2] https://github.com/tensorflow/models/blob/r2.5.0/official/vision/image_classification/resnet/resnet_model.py

Dataset | Training examples | Validation examples | Classes | Resolution
CIFAR-10 | 50 000 | 10 000 | 10 | 32x32
CIFAR-100 | 50 000 | 10 000 | 100 | 32x32
Table 3: CIFAR datasets description.

3.2.1 Postprocessing

We used standard postprocessing for both the CIFAR-10 and CIFAR-100 datasets (Renda et al. [2020], Frankle and Carbin [2019], Zagoruyko and Komodakis [2016]). During training, just before passing data to the model, we:

1. standardized the input by subtracting the mean and dividing by the std of the RGB channels (calculated on the training dataset),
2. randomly flipped it in the horizontal axis,
3. added a four-pixel reflection padding,
4. randomly cropped the image to its original size.

During validation, we performed only the first step of the above. A sketch of this pipeline is given below.
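The following is a minimal TensorFlow sketch of the pipeline above. It is our own reconstruction, not code from the repository; the per-channel mean/std constants are the commonly used CIFAR-10 statistics and are assumptions that should be recomputed from the training set at hand.

import tensorflow as tf

MEAN = tf.constant([0.4914, 0.4822, 0.4465])  # assumed CIFAR-10 channel means
STD = tf.constant([0.2470, 0.2435, 0.2616])   # assumed CIFAR-10 channel stds

def train_transform(image):
    # `image` is a (32, 32, 3) float tensor scaled to [0, 1].
    image = (image - MEAN) / STD                                     # 1. standardize
    image = tf.image.random_flip_left_right(image)                   # 2. random horizontal flip
    image = tf.pad(image, [[4, 4], [4, 4], [0, 0]], mode="REFLECT")  # 3. 4-pixel reflection padding
    return tf.image.random_crop(image, size=[32, 32, 3])             # 4. crop back to 32x32

def eval_transform(image):
    return (image - MEAN) / STD  # validation: standardization only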
It’s written using TensorFlow 98(Abadi et al. [2015]) version 2.4.2 in Python. More details are included in the repository. 993.4 Computational requirements 100Recreating the experiments required a modern GPU, training all models on CPU was virtually impossible. Training 101time varies depending on a lot of factors: network version and size, exact version of the deep learning library, and even 102the operating system. In our case, using TensorFlow 2.4.2 on Ubuntu and a single RTX 3080 GPU, the smallest of the 103used models, ResNet-20, takes about 20 minutes to train on CIFAR-10 dataset. To replicate our experiments, training at 104least a single baseline network and then, separately, a single pruned network, is required. To reduce computational 105requirements, we reused one dense baseline for multiple compression ratios. Approximated training time requirements 106can be seen in the table below. 107Model Dataset Number of iterations Iterations per second Time for training cycleResNet-20 CIFAR-10 72 000 59.0 22 minResNet-56 CIFAR-10 72 000 28.6 43 minResNet-110 CIFAR-10 72 000 15.9 77 minWRN-16-8 CIFAR-10 80 000 17.4 78 minTable 4: Time requirements for replicating or running experiments from this report. Reported times are obtained using asingle RTX 3080 GPU in Linux environment, using TensorFlow in version 2.4.2.For all our experiments in total, we used around 536 GPU hours. 1084 Method description 109We compare three methods of retraining after pruning. For all of them, the starting point is a network that was already 110trained to convergence, then pruned to a desired sparsity. The difference between the three retraining methods is what 111follows after it. 11244.1 Fine-tuning 113Fine-tuning is retraining with a small, constant learning rate – in our case, whenever fine-tuning was used, the learning 114rate was set to 0.001 as in Renda et al. [2020]. We finetune the network for the same number of iterations as the baseline 115– 72 000 iterations in the case of the original ResNet architecture. In this method, such long retraining would not be 116necessary in practical applications, since the network converges much faster. 1174.2 Weight rewinding 118Weight rewinding restores the network’s weights from a previous point (possibly beginning) in the training history and 119then continues training from this point using the original training schedule – in our case a piecewise constant decaying 120learning rate schedule. When rewinding a network to iteration Kthat originally trained for Niterations: first prune 121the dense network that was trained for Niterations. Then, for connections that survived, restore their values to K-th 122iteration from the training history. Then train to the convergence for the remaining NKiterations. 1234.3 Learning rate rewinding 124Learning rate rewinding continues training with weights that have already converged, but restores the learning rate 125schedule to the beginning, just as if we were training from scratch, and then trains to the convergence once again. This 126reminds the cyclical learning rates from Smith [2017]. Learning rate rewinding really is weight rewinding for K=N, 127but the final retraining is always for Niterations. 1285 Results 129In most of our experiment, just as Renda et al. [2020], we investigate how does the trade-off between prediction accuracy 130and compression ratio look like. In one of the experiments (table 5) we verify only one compression ratio, but for the 131rest, we verify multiple. 
5 Results

In most of our experiments, just as Renda et al. [2020], we investigate what the trade-off between prediction accuracy and compression ratio looks like. In one of the experiments (table 5) we verify only one compression ratio, but for the rest, we verify multiple. We report the median result out of 2 to 12 trials for each compression ratio. To better utilize our compute capabilities, we decided to spend more training cycles in situations where there was no clear winner between the compared methods. On each plot, we include error bars showing 80% confidence intervals.

5.1 Results reproducing the original paper

In this section, we include experiments that we successfully reproduced. They match the original ones within a 1% error margin.

Across all scenarios where finetuning was tested, it was by far the worst of the three methods, which directly supports claim 1 (section 2). Weight rewinding and learning rate rewinding are most often equally matched, but in some cases learning rate rewinding works a little better.

ResNets on the CIFAR-10 dataset

[Figure 1: two panels (ResNet-20 one-shot, ResNet-20 iterative) plotting accuracy change vs. compression ratio for weight rewinding, LR rewinding and finetuning.]
Figure 1: Results of ResNet-20 (table 1) on CIFAR-10 (table 3) with unstructured magnitude pruning in two versions: one-shot and iterative. Results show varying compression ratios. The maximal compression ratio (9.35x) means that only 29 000 non-zero kernel parameters are left. This experiment supports claims 1, 2, 3 (section 2).

[Figure 2: two panels (ResNet-56 one-shot, ResNet-56 iterative) plotting accuracy change vs. compression ratio for weight rewinding, LR rewinding and finetuning.]
Figure 2: Results of ResNet-56 (table 1) on CIFAR-10 (table 3) with unstructured magnitude pruning in two versions: one-shot and iterative. Results with varying compression ratios. The maximal compression ratio (22.73x) means that only 37 600 non-zero kernel parameters are left. This experiment supports claims 1, 2, 3 (section 2).

Network | Dataset | Retraining | Sparsity | Test Accuracy
ResNet-110 | CIFAR-10 | None | 0% | 94.29%
ResNet-110 | CIFAR-10 | LR rewinding | 89.3% | 93.74%
ResNet-110 | CIFAR-10 | weight rewinding | 89.3% | 93.73%
ResNet-110 | CIFAR-10 | finetuning | 89.3% | 93.32%
Table 5: Results of ResNet-110 (table 1) trained on CIFAR-10 (table 3) with unstructured, one-shot magnitude pruning. Sparsity 89.3% corresponds to a 9.35x compression ratio. This experiment supports claims 1, 2, 3 (section 2).
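As a side note, the sparsities and compression ratios quoted throughout (both counted over kernel parameters) are related by ratio = 1 / (1 - sparsity), e.g.:

sparsity = 0.893
compression_ratio = 1.0 / (1.0 - sparsity)
print(round(compression_ratio, 2))  # 9.35, matching Table 5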
[Figure 3: one panel (ResNet-20 structured) plotting accuracy change vs. compression ratio for weight rewinding, LR rewinding and finetuning.]
Figure 3: Results of ResNet-20 (table 1) on CIFAR-10 (table 3) with structured, one-shot magnitude pruning. Results show varying compression ratios. The maximal compression ratio (9.35x) means that only 29 000 non-zero kernel parameters are left in ResNet-20.

5.2 Results beyond the original paper

ResNets on the CIFAR-100 dataset

[Figure 4: one panel (ResNet-56 unstructured) plotting accuracy change vs. compression ratio for weight rewinding, LR rewinding and finetuning.]
Figure 4: Results of ResNet-56 (table 1) on CIFAR-100 (table 3) with unstructured, one-shot magnitude pruning. Results with varying compression ratios. The maximal compression ratio (9.35x) means that only 91 500 non-zero kernel parameters are left. This experiment supports claims 1, 2, 3 (section 2), even though this scenario wasn't originally tested in Renda et al. [2020].

WRN-16-8 on the CIFAR-10 dataset

WRN-16-8 shows consistent behaviour – accuracy in the low sparsity regime is reduced in comparison to the baseline. In the case of iterative pruning, where each step is another pruning in the low sparsity regime, this leads to a large difference between the two retraining methods. Since one-shot, low sparsity pruning of WRN-16-8 shows a small regression in comparison to the baseline, this regression accumulates when pruning multiple times, as we do in iterative pruning. This can be seen in figure 5.

[Figure 5: two panels (WRN-16-8 one-shot, WRN-16-8 iterative) plotting accuracy change vs. compression ratio for weight rewinding and LR rewinding (0.3 step).]
Figure 5: Results of WRN-16-8 (table 2) on CIFAR-10 (table 3) with unstructured magnitude pruning in two versions: one-shot and iterative. Results with varying compression ratios. The maximal compression ratio (100x) leaves 109 500 non-zero kernel parameters while achieving around 94% accuracy, or around 95% when leaving 153 400 non-zero parameters. One can see the catastrophic effects of low-sparsity pruning when using the learning rate rewinding procedure.

For iterative pruning (figures 1, 2) we used a nonstandard step size of 30% per iterative pruning iteration, which was a way to reduce the computational requirements. We provide a comparison of our step size to the more commonly used 20%. We show that there is virtually no difference between the two versions, and the aforementioned catastrophic degradation occurs in both cases, as long as the step size is in the low sparsity regime.

[Figure 6: one panel (WRN-16-8 iterative) comparing LR rewinding with a 0.3 step vs. a 0.2 step.]
Figure 6: Results of WRN-16-8 (table 2) on CIFAR-10 (table 3) with unstructured, iterative magnitude pruning with two different step sizes. Results show varying compression ratios and accuracy.

6 Discussion

We were able to confirm the general conclusion of Renda et al. [2020]. Fine-tuning can mostly be replaced by other retraining techniques, e.g., by weight rewinding as done by Frankle and Carbin [2019]. However, we have also shown in figure 5 that the newly proposed learning rate rewinding was a poor choice when we were pruning larger networks – in our case that was WRN-16-8. We believe this should be further examined, as there might exist a simple workaround to this problem – a retraining procedure in between weight rewinding and learning rate rewinding which would work in all cases. Furthermore, it would be interesting to see where exactly learning rate rewinding starts losing accuracy in comparison to weight rewinding and why this catastrophic accuracy degradation occurs. Perhaps the reason for it not occurring with the original ResNet architecture is the degree to which the larger networks overtrain – larger networks tend to overtrain more. Such an overtrained network might not be a good starting point for the retraining.

Acknowledgements

The authors thank the Polish National Science Center for funding under the OPUS-18 2019/35/B/ST6/04379 grant and the PLGrid consortium for computational resources.

References

Alex Renda, Jonathan Frankle, and Michael Carbin. Comparing rewinding and fine-tuning in neural network pruning. 2020. URL http://arxiv.org/abs/2003.02389. arXiv:2003.02389.

Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. 7th International Conference on Learning Representations, ICLR 2019, pages 1–42, 2019. arXiv:1803.03635.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Edwin R. Hancock, Richard C. Wilson and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 87.1–87.12. BMVA Press, September 2016. ISBN 1-901725-59-6. doi: 10.5244/C.30.87. URL https://dx.doi.org/10.5244/C.30.87.

Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. 7th International Conference on Learning Representations, ICLR 2019, pages 1–21, 2019. arXiv:1810.05270.

Namhoon Lee, Thalaiyasingam Ajanthan, and Philip H. S. Torr. SNIP: Single-shot network pruning based on connection sensitivity. CoRR, abs/1810.02340, 2018. URL http://arxiv.org/abs/1810.02340.

Elliot J. Crowley, Jack Turner, Amos Storkey, and Michael O'Boyle. A closer look at structured pruning for neural network compression. 10:1–12, 2018. URL http://arxiv.org/abs/1810.04622. arXiv:1810.04622.

Yann LeCun. Handwritten digit recognition with a back-propagation network. In Neural Information Processing Systems. American Institute of Physics, 1988. URL https://proceedings.neurips.cc/paper/1987/file/a684eceee76fc522773286a895bc8436-Paper.pdf.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016a. doi: 10.1109/CVPR.2016.90. arXiv:1512.03385.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. Lecture Notes in Computer Science, 9908 LNCS:630–645, 2016b. ISSN 1611-3349. doi: 10.1007/978-3-319-46493-0_38. arXiv:1603.05027.

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), December 2015.

Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html.

Leslie N. Smith. Cyclical learning rates for training neural networks, 2017.
HMfgNETCilc
Review
8: Top 50% of accepted papers, clear accept
This report does an excellent job at reproducing the original paper. I want to commend the authors for their outstanding tenacity in reproducing the results from the paper description alone. Only a few comments/suggestions for future reproduction efforts, but overall I think this is an excellent report and should definitely be accepted.
Suggestions:
- I was hoping there would be inline comments in the code you submitted, but couldn't find any. I think you do a good job making the code base easy to use and play with, but would suggest you comment the important parts of the code. This would help with the extensibility of the ideas, and give better insight into the parts that weren't reported and that you worked hard to figure out.
- I appreciate the addition of CIFAR-100 as an added dataset, but I do wonder if a dataset of a different type (i.e. not image classification) would have been better to address the robustness and generalizability of the original paper's methods.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
9wHe4F-lpp
ICLR.cc/2021/Conference
2021
FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond
["Zhuo Su", "Linpu Fang", "Deke Guo", "Dewen Hu", "Matti Pietik\u00e4inen", "Li Liu"]
Binary neural networks (BNNs), where both weights and activations are binarized into 1 bit, have been widely studied in recent years due to their great benefits of highly accelerated computation and substantially reduced memory footprint, which appeal to the development of resource-constrained devices. In contrast to previous methods tending to reduce the quantization error for training BNN structures, we argue that the binarized convolution process owns an increasing linearity towards the target of minimizing such error, which in turn hampers BNN's discriminative ability. In this paper, we re-investigate and tune proper non-linear modules to fix that contradiction, leading to a strong baseline which achieves state-of-the-art performance on the large-scale ImageNet dataset in terms of accuracy and training efficiency. To go further, we find that the proposed BNN model still has much potential to be compressed by making better use of the efficient binary operations, without losing accuracy. In addition, the limited capacity of the BNN model can also be increased with the help of group execution. Based on these insights, we are able to improve the baseline with an additional 4$\sim$5% top-1 accuracy gain even with less computational cost. Our code and all trained models will be made public.
["Binary neural networks", "network quantization", "network compression"]
ABSTRACT

Binary neural networks (BNNs), where both weights and activations are binarized into 1 bit, have been widely studied in recent years due to their great benefits of highly accelerated computation and substantially reduced memory footprint, which appeal to the development of resource-constrained devices. In contrast to previous methods tending to reduce the quantization error for training BNN structures, we argue that the binarized convolution process owns an increasing linearity towards the target of minimizing such error, which in turn hampers BNN's discriminative ability. In this paper, we re-investigate and tune proper non-linear modules to fix that contradiction, leading to a strong baseline which achieves state-of-the-art performance on the large-scale ImageNet dataset in terms of accuracy and training efficiency. To go further, we find that the proposed BNN model still has much potential to be compressed by making better use of the efficient binary operations, without losing accuracy. In addition, the limited capacity of the BNN model can also be increased with the help of group execution. Based on these insights, we are able to improve the baseline with an additional 4~5% top-1 accuracy gain even with less computational cost. Our code and all trained models will be made public.

1 INTRODUCTION

In the past decade, Deep Neural Networks (DNNs), in particular Deep Convolutional Neural Networks (DCNNs), have revolutionized computer vision and been ubiquitously applied in various computer vision tasks including image classification (Krizhevsky et al., 2012), object detection (Liu et al., 2020a) and semantic segmentation (Minaee et al., 2020). The top performing DCNNs (He et al., 2016; Huang et al., 2017) are data and energy hungry, relying on cloud centers with clusters of energy hungry processors to speed up processing, which greatly impedes their deployment in ubiquitous edge devices such as smartphones, automobiles, wearable devices and IoTs which have very limited computing resources. Therefore, in the past few years, numerous research effort has been devoted to developing DNN compression techniques to pursue a satisfactory tradeoff between computational efficiency and prediction accuracy (Deng et al., 2020).

Among various DNN compression techniques, Binary Neural Networks (BNNs), which first appeared in the pioneering work by Hubara et al. (2016), have attracted increasing attention due to their favorable properties such as fast inference, low power consumption and memory saving. In a BNN, the weights and activations during inference are aggressively quantized into 1 bit (namely two values), which can lead to a 32x saving in memory footprint and up to a 64x speedup on CPUs (Rastegari et al., 2016). However, the main drawback of BNNs is that, despite recent progress (Liu et al., 2018; Gu et al., 2019; Kim et al., 2020b), BNNs have trailed the accuracy of their full-precision counterparts. This is because the binarization inevitably causes serious information loss due to the limited representational capacity with extreme discreteness. Additionally, the discontinuity nature of the binarization operation brings difficulty to the optimization of the deep network (Alizadeh et al., 2018).
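To see where those savings come from, note that a dot product between two {-1, +1} vectors packed into machine words reduces to an XNOR followed by a popcount. The toy sketch below is our own illustration of this well-known trick, not code from the paper:

def binary_dot(a_bits, b_bits, n):
    # a_bits/b_bits pack a {-1, +1} vector of length n into an integer,
    # with bit i = 1 encoding +1 and bit i = 0 encoding -1.
    mask = (1 << n) - 1
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")  # positions where signs agree
    return 2 * matches - n  # agreements contribute +1, disagreements -1

# a = [+1, -1, +1, +1] -> 0b1101, b = [+1, +1, -1, +1] -> 0b1011 (bit 0 first)
print(binary_dot(0b1101, 0b1011, 4))  # -> 0, same as 1 - 1 - 1 + 1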
A popular direction for enhancing the predictive performance of a BNN is to make the binary operation mimic the behavior of its full-precision counterpart by reducing the quantization error caused by the binarization function. For example, XNOR-Net (Rastegari et al., 2016) first introduced scaling factors for both the binary weight and activation such that the output of the binary convolution can be rescaled to closely match the result of the real-valued convolution, just as if the original full-precision weight and activation were used. The method outperforms its vanilla counterpart BNN (Hubara et al., 2016) by a large margin (44.2% vs. 27.9% in Top-1 accuracy on ImageNet using the AlexNet architecture (Krizhevsky et al., 2012)). Because of the remarkable success of XNOR-Net, a series of approaches emerged subsequently with the effort of either finding better scaling factors or proposing novel optimization strategies to further reduce the quantization error. Specifically, XNOR-Net++ (Bulat & Tzimiropoulos, 2019) improved the way of calculating the scaling factors by regarding them as model parameters which can be learnt end-to-end from the target loss, while Real-to-Bin (Martinez et al., 2020) proposed to compute the scaling factors on the fly according to individual input samples, which is more flexible. From another perspective, IR-Net (Qin et al., 2020) progressively evolved the backward function for binarization from an identity map into the original Sign function during training, which can avoid a big quantization error in the early stage of training. BONN (Gu et al., 2019) added a Bayesian loss to encourage the weight kernel to follow a Gaussian mixture model with each Gaussian centered at a quantization value, leading to higher accuracy. Other works aiming to reduce the quantization error include ABC-Net (Lin et al., 2017), Bi-Real Net (Liu et al., 2018), ProxyBNN (He et al., 2020), etc.

Figure 1: Left: The basic block in the original Bi-Real Net vs. the simplified basic block in FTBNN, where we directly absorb the explicit scaling factors into the BN layer by leveraging BN's scaling factors. Right: The non-linear modules (ReLU or FPReLU) are explicitly added after each basic block in FTBNN. To maximize the model's discriminative power while keeping its training stability, the number of ReLUs is controlled and the proposed FPReLU is connected to most blocks. Training curves on ImageNet of the two 18-layer networks are depicted to show the training efficiency: solid lines denote Top-1 accuracy on the validation set (y-axis on the right), dashed lines denote training loss (y-axis on the left). Both models are trained from scratch.

However, another problem arises as the quantization error is optimized towards 0, especially for a structure like Bi-Real Net (Fig. 1), where the only non-linear function is the binarization function. The non-linearity of the binarization function will be eliminated if the binary convolution with scaling factors can perfectly mimic the real-valued convolution in the extreme case (quantization error equal to 0), thus hindering the discriminative ability of BNNs. Therefore, it is necessary to re-investigate the non-linear property of BNNs when inheriting existing advanced structures.
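To make the scaling-factor idea concrete, here is a hedged sketch of a binary convolution rescaled by a learnable factor, in the spirit of XNOR-Net++; the per-output-channel granularity and the initialization are our assumptions, not the cited papers' exact layers. It reuses the BinarySign sketch above, and the XNOR-Count kernel is emulated with a dense convolution over +/-1 tensors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledBinaryConv2d(nn.Module):
    """Binary convolution whose output is rescaled by a learnable alpha so
    that it can approach the real-valued convolution (a sketch, not the
    authors' layer)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.alpha = nn.Parameter(torch.ones(1, out_ch, 1, 1))  # scaling factors
        self.stride, self.padding = stride, padding

    def forward(self, x):
        out = F.conv2d(BinarySign.apply(x), BinarySign.apply(self.weight),
                       stride=self.stride, padding=self.padding)
        return out * self.alpha  # rescale toward the real-valued result
```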
Based on this consideration, we conduct an experiment on the MNIST dataset (LeCun & Cortes, 2010) using a 2-layer Bi-Real Net-like structure (which begins with an initial real-valued convolution layer and two basic blocks as illustrated in Fig. 1 (b), optionally followed by a non-linear module, and ends with a fully connected (FC) layer), and some interesting phenomena can be observed, as shown in Fig. 2, where we visualize the feature space before the FC layer and calculate the feature discrepancy caused by the binarization process as well as the corresponding classification accuracy (ACC, in %) for each model.

Figure 2: Feature visualization for 4 different models experimenting on MNIST, denoted as Linear: real-valued convolution layers without a non-linear module (ACC: 92.64±0.04, discrepancy: 0); Binary: binary convolution layers without a non-linear module (ACC: 98.45±0.16, discrepancy: 1.04±0.03); PReLU: binary convolution layers with a PReLU module (ACC: 98.70±0.07, discrepancy: 1.66±0.04); ReLU: binary convolution layers with a ReLU module (ACC: 98.83±0.07, discrepancy: 2.41±0.13). Testing accuracies are recorded over 5 runs for each model. The average discrepancy between the output of the binary convolutions and that of the corresponding real-valued convolutions (using the proxy weights and original real-valued activations to conduct the convolution) is recorded as well (best viewed in color).

Table 1: Top-1 accuracy on ImageNet of different BNN models with commonly used training strategies, based on the ResNet-18 architecture (He et al., 2016). ✓/✗ denotes the corresponding strategy is / is not utilized. KD denotes knowledge distillation, MS means using multiple training steps, GA indicates applying gradient approximation in the backward pass. Note that BinaryDuo (Kim et al., 2020b) improved GA with a coupled ternary model. Scaling represents using explicit scaling factors to reweight the output of binary convolutions. N/A+n means only the number of epochs n in the final step of multi-step training is given. Here Bi-Real Net was trained using two different implementations. † indicates the double skip connections proposed by Liu et al. (2018) were used. It can be seen that FTBNN achieves promising accuracy even with a naive training scheme and can be further improved when combining the training strategies.

Method | KD | GA | MS | Scaling | Epoch | Top-1
BNN | ✗ | ✗ | ✗ | ✗ | N/A | 42.2%
XNOR-Net | ✗ | ✗ | ✓ | ✓ | N/A+58 | 51.2%
XNOR-Net++ | ✗ | ✗ | ✗ | ✓ | 80 | 57.1%
Bi-Real Net (B)† | ✗ | ✓ | ✗ | ✓ | 256 | 56.4%
Bi-Real Net (A)† | ✗ | ✓ | ✓ | ✓ | N/A+20 | 56.4%
IR-Net† | ✗ | ✓ | ✗ | ✓ | N/A | 58.1%
BinaryDuo† | ✗ | ✓ | ✓ | N/A | 120+40 | 60.9%
Real-to-Bin baseline† | ✓ | ✗ | ✓ | ✓ | 75+75 | 60.9%
FTBNN† | ✗ | ✗ | ✗ | ✗ | 60 | 60.2%
FTBNN† | ✓ | ✗ | ✓ | ✗ | 75+75 | 63.2%

Firstly, comparing the first two figures: despite the big quantization error introduced by binarization, the binary model achieves much higher accuracy than the real-valued model, which has no quantization error at all. This indicates that the binarization function has a potential ability to enhance the model's discriminative power, and also explains why Bi-Real Net can possibly achieve high accuracy even without any non-linear modules. Such ability should be respected when we design BNN structures, rather than minimizing the quantization error towards 0. Second, when explicitly introducing a non-linear module, either PReLU (He et al., 2015) or ReLU, the discriminative power can be further enhanced, even though the feature discrepancy is enlarged. This again shows that there is no strict correlation between the quantization discrepancy and the final predictive performance. Meanwhile, the ReLU model brings the most discriminative power, with a clearer class separation in the feature space. Our following experiments also show the superiority of the traditional ReLU function due to its stronger non-linearity.
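The discrepancy reported in Figure 2 could be measured along the following lines, assuming an nn.Conv2d-like module exposing .weight, .stride and .padding; the exact norm and the averaging are our assumptions about what the figure plots.

```python
def feature_discrepancy(conv, x):
    """Mean distance between the binary convolution output and the
    real-valued convolution on the proxy weights and original activations
    (our reading of the 'discrepancy' in Figure 2)."""
    with torch.no_grad():
        binary_out = F.conv2d(BinarySign.apply(x),
                              BinarySign.apply(conv.weight),
                              stride=conv.stride, padding=conv.padding)
        real_out = F.conv2d(x, conv.weight,
                            stride=conv.stride, padding=conv.padding)
    return (binary_out - real_out).norm(p=2, dim=1).mean().item()
```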
Motivated by the above observation, we propose FTBNN (meaning fast training; see Fig. 1) by first eliminating the scaling factors after the binary convolutional operation, which are instead fused into the Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers by leveraging BN's scaling factors, and then explicitly configuring appropriate non-linear modules after each block. By doing so, we can release the optimization burden of extra scaling layers as well as keep the non-linear property of the Sign function via a weaker scaling mechanism. At the same time, the configuration of explicit non-linear modules (we also propose a more flexible non-linear module, namely Fully Parametric ReLU (FPReLU), shown in Fig. 3) is able to further boost the discriminative ability of the BNN model as well as ensure training stability. The resulting FTBNN is highly competitive among the state-of-the-art BNN frameworks, with enhanced performance in terms of both accuracy and training efficiency, as shown in Table 1 and Fig. 1. Note that very recently Liu et al. (2020b) also demonstrated the importance of non-linearity by redistributing activations in BNNs.

To go beyond the baseline structure we have already obtained, we also notice two aspects that were rarely investigated in the literature. One is to ask how far we can go in making use of the fast and light binary operations while avoiding the expensive real-valued counterparts, to make our BNN model more compact and computationally efficient. This is motivated by the fact that the real-valued operations in a BNN structure often take a considerable proportion of the computation cost (e.g., 85% in an 18-layer Bi-Real Net, and likewise in the proposed FTBNN; see Fig. 3). Second, inspired by Binary MobileNet (Phan et al., 2020), which leveraged group convolution (Xie et al., 2017) to automatically configure an optimal binary network architecture in its search space, we consider how group convolution affects the performance of BNN structures (in representational capacity, discriminative power, etc.) under different configurations (varying in depth and width). To the best of our knowledge, this paper is also the first attempt at such a specific investigation, and it can lead to useful insights on designing better BNN structures with the group mechanism.

As for our experimental outcomes, firstly, we are able to build an easily trainable strong baseline, FTBNN, by improving the popular Bi-Real Net structure, already achieving state-of-the-art performance (Table 1, Table 3 and Table 4). Secondly, we enhance FTBNN by circumventing the usage of real-valued operations and incorporating the group mechanism, leading to a series of derived models that not only surpass the baseline in accuracy but also have less computational overhead (Table 4). It is also hoped that our exploration can give the community a better understanding of BNN design and lead to more efficient binary structures.
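Reading Fig. 1 (b) together with the description above, the FTBNN basic block could be sketched as follows; the layer sizes and the way the non-linearity is passed in are our assumptions, and FPReLU is sketched later in Section 2.1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FTBNNBlock(nn.Module):
    """Our reading of the basic block in Fig. 1(b): Sign -> binary 3x3
    conv -> BN -> addition with the real-valued shortcut, followed by an
    explicit non-linearity (ReLU after every fourth block, FPReLU
    elsewhere, per Section 3). No explicit scaling factors: BN's affine
    scale plays that role."""

    def __init__(self, channels, nonlinear):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)
        self.bn = nn.BatchNorm2d(channels)
        self.nonlinear = nonlinear  # e.g. nn.ReLU() or FPReLU(channels)

    def forward(self, x):
        out = F.conv2d(BinarySign.apply(x), BinarySign.apply(self.weight),
                       padding=1)
        return self.nonlinear(self.bn(out) + x)  # Bi-Real-style shortcut
```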
2 METHODS

2.1 BACKGROUND AND FTBNN

In order to reduce the quantization error (denoted as $E$) and protect the information flow in BNNs, which has been an important focus of recent years' studies, a very basic idea is to introduce scaling factors that reweight the output of the binary convolution, such that the rescaled output closely approaches the result of the real-valued operation:

$E = \left\| (W_b \circledast A_b) \odot \alpha - W \otimes A \right\| \rightarrow 0, \quad (1)$

where $W$, $A$, $W_b$ and $A_b$ indicate the real-valued weight and activation and the binarized weight and activation respectively, $\otimes$ represents a multiplicative convolution, $\circledast$ represents a convolution using the XNOR-Count operations (binary convolution), $\alpha$ is the scaling factors and $\odot$ is element-wise multiplication.

In XNOR-Net (Rastegari et al., 2016), $\alpha$ actually consisted of two separate parts, for the weights and activations respectively, and both were calculated analytically during the forward pass by solving an optimization problem, which is therefore time-consuming. XNOR-Net++ (Bulat & Tzimiropoulos, 2019) further argued that $\alpha$ can be a single part learnt during training, showing better performance in terms of both accuracy and efficiency. More recently, Real-to-Bin (Martinez et al., 2020) proposed a data-driven version of $\alpha$, computed on the fly according to individual input samples.

In fact, with the purpose of finding a satisfactory initialization for $\alpha$, a pretrained model with full-precision weights is usually needed (Rastegari et al., 2016; Lin et al., 2017). However, it has been shown that the scaling factors can be fused with the parameters in BN (Liu et al., 2018), and directly training the BN layers avoids the above trouble without degenerating the whole structure (Bethge et al., 2019). This also matches our assumption in Section 1 that the BN scaling factors can be a good alternative to an explicit scaling layer.

On the other hand, to maintain the information flow and maximize a BNN's expressive ability, another piece of experience in the literature is the usage of shortcut connections (He et al., 2016). One milestone here is the Bi-Real Net structure (Liu et al., 2018), where the authors emphasized using the shortcut that connects the real-valued activations before the Sign function to the output of the binary convolution, and showed that this almost computation-free shortcut can significantly preserve a BNN's representational capability. It was then repeatedly adopted in other works (Liu et al., 2020b; Bulat et al., 2020; Kim et al., 2020a) to improve the quality of information propagation.

Standing on the shoulders of giants, and following our discussion in Section 1, we propose a simple block-based structure dubbed FTBNN¹ (Fig. 1) that both enables training efficiency and encourages non-linearity, by discarding the scaling factors (which are implicitly fused into BN) and adding specific activation functions.

¹For the reduction block, we adopt the real-valued 1x1 convolutional downsampling shortcut.

Figure 3: Left: The proposed FPReLU; slopes on both sides start from 1 (equal to an identity map) and are adaptively learned channel-wise. Middle: Reduction block with group binary convolution and a cheap downsampling layer (for the normal block, the original binary convolution layer in Fig. 1 is also implemented with groups in the derived models). Right: Computational budget distributions for the original and binary-purified FTBNN (before compression: 172M FLOPs budget, 60.2% Top-1; after compression: 90M, 61.9%). The computational budget of an original floating-point multiply operation (FLOP) in a convolution is reduced by 64 times if the operation is binarized (using XNOR-Count instead). Based on that, when measuring the computational budget, the budget of binary operations (BOPs) is converted to an equivalent number of FLOPs with a factor of 1/64. 15% and 77% denote the corresponding budget percentages for BOPs.
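The 1/64 convention stated in the Figure 3 caption reduces to a one-line conversion, sketched here with a sanity check against Table 4 (the function name is ours; the arithmetic follows the caption).

```python
def effective_budget_mflops(flops_m, bops_m):
    """Equivalent budget in MFLOPs, counting one binary operation (BOP)
    as 1/64 of a FLOP, per the convention in the Figure 3 caption."""
    return flops_m + bops_m / 64.0

# Sanity check against the first row of Table 4 (our arithmetic):
# effective_budget_mflops(19.0, 4516) ~= 89.6, i.e. the stated ~90M budget.
```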
Contrary to Tang et al. (2017) and Bulat et al. (2019), we find that the traditional ReLU is still more effective than PReLU thanks to its stronger non-linear property. At the same time, immoderately inserting ReLUs into the network can also cause divergence and deterioration, as the information collapse phenomenon (Sandler et al., 2018) becomes serious. To alleviate that, we extend PReLU to a Fully Parametric ReLU (FPReLU), where the slopes for both negative and positive activations are learnable (Fig. 3), to stably maximize the BNN's flexibility.
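A minimal sketch of FPReLU following this description: channel-wise slopes on both sides of zero, initialized to 1 so training starts from an identity map. The parameter shapes are our assumptions.

```python
import torch
import torch.nn as nn

class FPReLU(nn.Module):
    """Fully Parametric ReLU: learnable channel-wise slopes for BOTH the
    positive and negative halves, starting as an identity map."""

    def __init__(self, num_channels):
        super().__init__()
        self.pos = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.neg = nn.Parameter(torch.ones(1, num_channels, 1, 1))

    def forward(self, x):
        return torch.where(x > 0, x * self.pos, x * self.neg)
```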
2.2 TOWARDS A MORE COMPACT BNN WITH BINARY PURIFICATION

Real-valued operations in a BNN are often needed to drive and preserve the information flow (e.g., binarizing the 1x1 convolutional shortcuts in the reduction block of Bi-Real Net would interrupt the continuous full-precision information flow, damage that is hard to recover in the following layers (Bethge et al., 2019)). From Fig. 3, we find that the most computation-consuming parts among the real-valued operations in FTBNN are the initial convolutional layer and the real-valued 1x1 convolutional shortcuts. Thereby we make the following two changes to relieve the computation burden of the baseline.

Initial Conv: Inspired by Bethge et al. (2020), in the full-precision initial convolutional layer we set the kernel size to 3x3 and constrain the number of output channels (only half of that in the baseline in our experiments). To facilitate building BNNs with different width configurations, as we will discuss later, we add an extra binary convolutional layer after the initial convolutional layer as a bridge that can arbitrarily expand the initial width (see the appendix for more detail).

Downsampling shortcuts: Similar to Liu et al. (2020b), we propose a copy-and-paste strategy that avoids a convolutional downsampling shortcut, saving computation while preserving the information flow. In a reduction block, where the resolution of the feature maps is halved and the number of filters is doubled, we first concatenate the duplicated input volume to extend the input, then employ a 3x3 average pooling with stride 2 to construct the shortcut (Fig. 3).

2.3 FEATURE AGGREGATION WITH GROUP EXECUTION

As discussed by Phan et al. (2020), the binary operation restricts a BNN's feature representation to limited discrete values. Simply enlarging the network by increasing the number of channels can be very expensive, since an Nx widening leads to an N²x growth in computational budget. We exploit group convolution (Xie et al., 2017) to enable a wider network (Fig. 3), which we believe can strengthen the BNN's representational ability while keeping the total overhead unchanged. The differences to other similar works are as follows, and a sketch of the resulting reduction block is given at the end of this subsection.

Compared with methods using multiple bases or branches: Lin et al. (2017) and Zhuang et al. (2019) gathered results from multiple branches or bases to enrich the feature representation of binary convolutions. Their methods can be equally viewed as running several independent networks in parallel followed by a summation, while ours uses a single pipeline without introducing additional computational overhead. In the form of information aggregation, concatenation is used in group convolution, which differs from the summation adopted in these methods.

Compared with methods leveraging group convolutions: Group convolution is also incorporated in recent approaches to facilitate binary network architecture search (NAS) (Phan et al., 2020; Bulat et al., 2020), but unfortunately a proper ablation study is missing, making the benefit of group convolution unclear under a framework where the resulting advantage may be biased towards the NAS technology itself. Our study can be regarded as exactly such a specific investigation, giving a better understanding of group convolution when designing BNNs.
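Putting the copy-and-paste shortcut of Section 2.2 together with the grouped binary convolution of Section 2.3, the reduction block of Fig. 3 (middle) might look as follows; the pooling padding and other details are assumptions, and `groups` must divide the input channel count.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedReductionBlock(nn.Module):
    """Sketch of the reduction block: a grouped binary 3x3 conv with
    stride 2 doubles the channels, while the shortcut concatenates the
    input with itself and downsamples via 3x3 average pooling, stride 2."""

    def __init__(self, in_ch, groups, nonlinear):
        super().__init__()
        out_ch = 2 * in_ch
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch // groups, 3, 3) * 0.01)
        self.groups = groups
        self.bn = nn.BatchNorm2d(out_ch)
        self.nonlinear = nonlinear

    def forward(self, x):
        out = F.conv2d(BinarySign.apply(x), BinarySign.apply(self.weight),
                       stride=2, padding=1, groups=self.groups)
        # Copy-and-paste shortcut: duplicate channels, then average-pool.
        shortcut = F.avg_pool2d(torch.cat([x, x], dim=1),
                                kernel_size=3, stride=2, padding=1)
        return self.nonlinear(self.bn(out) + shortcut)
```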
3 EXPERIMENTS AND DISCUSSIONS

FTBNN baseline: As shown in Fig. 1, to give the model enough non-linearity while maintaining training stability, we insert a ReLU after every four blocks; for the other blocks, FPReLU is adopted.

Compact and efficient derived models: Based on our baseline and the proposed binary purification pipeline, we set the number of output channels in the initial convolutional layer to 32. A bridge binary convolutional layer is then connected, followed by the blocks with group convolution as shown in Fig. 3 (also see the appendix) and the non-linear modules. Finally, an FC layer followed by a soft-max layer is applied to do the classification. In order to observe the effect of group convolution, we elaborate different computational budgets for BNNs (calculated in FLOPs as stated in Fig. 3, following Liu et al. (2018) and Rastegari et al. (2016)) by varying width and depth.

Replacing XNOR-Count when ReLU is used: In Fig. 1 (b), the activation values before Sign are either 0 or positive if ReLU is applied at the end of the previous block. Then, all the 0s would be assigned to the lower binary value by the Sign function, eliminating the effect of ReLU in the residual branch. For this case, we have designed particular bit operations, illustrated in the appendix.

All the models are simply trained using the Adam optimizer (Kingma & Ba, 2014) for 60 epochs from scratch with a starting learning rate of 0.01, which is decayed in a multi-step way (at epochs 45 and 55, with decay rate 0.1) for the baseline, and with a cosine schedule (towards 0 at the end of training) for the derived models; we note there is no big difference between these two schemes. The weight decay is set to 0 for all the binary convolutional layers. We use the STE (Hubara et al., 2016) for backward propagation, with value clipping in the range (-1.2, 1.2) for both weights and activations. The PyTorch library (Paszke et al., 2019) is used for implementation. Finally, following Bi-Real Net, we use the large-scale challenging ImageNet benchmark (ILSVRC 2012, Deng et al. (2009)) in our experiments. A more detailed description of the implementation can be found in the appendix.
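In code, the baseline recipe just described could be set up roughly as follows; how parameters are grouped and the weight decay for the non-binary parameters (not stated in the paper) are our assumptions.

```python
import torch.optim as optim

def build_optimizer(model, binary_weight_names):
    """Adam, initial lr 0.01, decayed by 0.1 at epochs 45 and 55 of 60,
    with weight decay disabled for binary convolution weights, per the
    training details above (a sketch, not the authors' script)."""
    binary, other = [], []
    for name, p in model.named_parameters():
        (binary if name in binary_weight_names else other).append(p)
    opt = optim.Adam(
        [{'params': binary, 'weight_decay': 0.0},
         {'params': other, 'weight_decay': 1e-5}],  # 1e-5 is a placeholder
        lr=0.01)
    sched = optim.lr_scheduler.MultiStepLR(opt, milestones=[45, 55], gamma=0.1)
    return opt, sched
```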
3.1 ABLATION ON FTBNN BASELINE

The structure most similar to the proposed baseline is Bi-Real Net, where the authors used a 2-stage training scheme to get a good initialization for the BNN and also considered a magnitude-aware gradient approximation during weight updating; finally, the trained scaling factors are absorbed into the BN layers. The whole training procedure was carefully tuned in order to fully exploit the model's capacity. It can be seen in Table 3 that when applying our naive training scheme, the Bi-Real Net structure (Fig. 1 (a)) degrades markedly (from 56.4% to 52.3% Top-1 accuracy). The main deficiency of this structure is the lack of a proper introduction of non-linearity, which is demonstrated to be indispensable for BNNs. In addition, our structure without any non-linear module performs slightly better than the Bi-Real Net structure (52.5% vs. 52.3%), demonstrating the benefit of the weaker scaling mechanism from the BN layers.

Table 3: Ablation study for the FTBNN baseline based on the ResNet-18 architecture.

Model | Top-1 (%) | Top-5 (%)
Bi-Real Net reported by Liu et al. (2018) | 56.4 | 79.5
Bi-Real Net with our naive training scheme | 52.3 | 75.9
No explicit non-linear modules | 52.5 | 76.0
All configured with PReLU | 58.9 | 81.1
All configured with FPReLU | 59.3 | 81.3
All configured with ReLU (lr=0.00015; diverges if lr is too big) | 45.9 | 70.5
Partially configured with ReLU | 59.6 | 81.4
Partially configured with ReLU & PReLU in others | 59.7 | 81.6
Baseline (partially configured with ReLU & FPReLU in others) | 60.2 | 82.0
Full-precision ResNet-18 (our implementation) | 69.8 | 89.2

The following experiments then focus on the effectiveness of different non-linear modules. From Table 3, we find that the ReLU function is still preferred thanks to its stronger non-linear property, which brings in more discriminative power. For example, inserting only 4 ReLUs can outperform the variant with all blocks connected to PReLU by 0.7% in Top-1 accuracy. Note that ReLU is a special case of PReLU, indicating that ReLU is already a decent local optimum for PReLU that provides strong non-linearity and makes the optimization easier. At the same time, if ReLU is used without limit, the BNN becomes prone to divergence, showing that, unlike their full-precision counterparts, binarized architectures are essentially vulnerable, and an abuse of ReLU that causes severe information collapse in the feature space should be avoided.

A straightforward but effective way to control the non-linearity, so that the model's discriminative power is maximized and stability is also guaranteed, is to limit the number of ReLUs and additionally introduce weaker non-linear functions. Here, we show that, compared with PReLU, the proposed FPReLU is an ideal solution to meet this requirement. As shown in Fig. 3, FPReLU starts from an identity function and gradually evolves during training to compensate the non-linearity on both sides of the activations. Table 2 lists the ultimately learnt coefficients of the FPReLU functions. These values on both sides of the functions vary around 1 with a fairly wide range, some even approaching 0 or a big value (around 5), meaning that FPReLU is competent to tune the network to a balanced point automatically, since the slopes for both negative and positive activations are considered.

Table 2: Binary convolutions in the FTBNN baseline based on the ResNet-18 structure. BConv x-y indicates the yth block in the xth stage; the size and number of filters are also given. The learned coefficients of FPReLU after each block are listed in the form mean [minimum, maximum].

Layer | Negative | Positive
BConv1-1 3x3, 64 | 1.82 [0.48, 4.63] | 1.16 [0.63, 2.19]
BConv1-2 3x3, 64 | 0.95 [0.00, 2.69] | 0.72 [0.34, 1.72]
BConv1-3 3x3, 64 | 0.87 [0.33, 4.49] | 0.75 [0.26, 1.40]
BConv1-4 3x3, 64 | ReLU
BConv2-1 3x3, 128 | 0.91 [0.54, 2.80] | 0.93 [0.56, 2.04]
BConv2-2 3x3, 128 | 0.93 [0.45, 3.16] | 0.83 [0.54, 1.70]
BConv2-3 3x3, 128 | 0.81 [0.00, 1.82] | 0.86 [0.46, 1.69]
BConv2-4 3x3, 128 | ReLU
BConv3-1 3x3, 256 | 0.84 [0.47, 2.07] | 0.95 [0.61, 1.84]
BConv3-2 3x3, 256 | 0.86 [0.44, 2.85] | 0.79 [0.46, 1.51]
BConv3-3 3x3, 256 | 0.76 [0.04, 1.97] | 0.78 [0.28, 2.50]
BConv3-4 3x3, 256 | ReLU
BConv4-1 3x3, 512 | 0.96 [0.54, 2.86] | 0.90 [0.14, 2.05]
BConv4-2 3x3, 512 | 1.04 [0.44, 4.66] | 0.67 [0.00, 1.44]
BConv4-3 3x3, 512 | 0.68 [0.23, 2.80] | 0.95 [0.23, 3.51]
BConv4-4 3x3, 512 | ReLU

3.2 THE DERIVED MODELS

In this section, we evaluate the effectiveness of the derived models with binary purification by adjusting the number of groups. Table 4 (left) shows the influence of group convolutions on binary-purified BNNs with varying settings of depth and group numbers, while constraining the computation costs to be similar. It can be found that the simplest derived models, without group convolutions (i.e., G = 1), surpass the proposed FTBNN baseline by a large margin (a 1.7% improvement in Top-1 accuracy with the computational budget almost halved, based on ResNet-18), which strongly demonstrates the effectiveness of the proposed binary purification strategy. It also shows that group execution has no influence on relatively shallow binary networks under different budget constraints. Going deeper, group convolution begins to differentiate the corresponding models with notable characteristics. On one hand, looking at the training accuracy, increasing the number of groups clearly yields a better fit, indicating an increasing representational capacity of the BNN; the phenomenon can be observed for the 34-layer models regardless of the model complexity.
Table 4: Left: Experimental results of the derived structures with different group and budget configurations, with Top-1 accuracy (in %) recorded, where the widths of the networks are adjusted to meet the budget requirements. For a clearer illustration, we also give the number of parameters for each model: Params = number of floating-point parameters + number of binary parameters / 32. Right: Comparison with the state of the art. (W/A) means how many bits are used in the weights (W) and activations (A); (1/1)xn indicates that n branches, bases or individual models were adopted to expand the network capacity of the BNN. Our method with * indicates knowledge distillation and multi-step training were used (refer to the appendix for detail).

Left:
Budget (M) | Layer | Group | FLOPs (M) | BOPs (M) | Params (M) | Top-1 (Train) | Top-1 (Val)
90 | 18 | 1 | 19.0 | 4516 | 1.66 | 61.0 | 61.9
90 | 18 | 3 | 23.1 | 4580 | 2.06 | 61.6 | 61.9
90 | 18 | 5 | 25.8 | 4301 | 2.28 | 62.2 | 61.9
90 | 18 | 8 | 28.7 | 3988 | 2.55 | 61.5 | 61.1
90 | 34 | 1 | 19.3 | 4176 | 1.32 | 58.8 | 61.0
90 | 34 | 3 | 23.8 | 4005 | 1.59 | 60.6 | 62.0
90 | 34 | 5 | 27.0 | 3829 | 1.79 | 61.8 | 62.5
90 | 34 | 8 | 30.1 | 3577 | 1.97 | 61.9 | 62.0
135 | 18 | 1 | 21.1 | 7399 | 2.42 | 66.1 | 64.8
135 | 18 | 3 | 26.2 | 7285 | 2.88 | 66.6 | 64.5
135 | 18 | 5 | 29.9 | 7031 | 3.22 | 67.2 | 64.2
135 | 18 | 8 | 34.0 | 6717 | 3.60 | 66.4 | 63.4
135 | 34 | 1 | 21.6 | 6994 | 1.99 | 63.5 | 64.5
135 | 34 | 3 | 27.4 | 6639 | 2.31 | 65.8 | 65.3
135 | 34 | 5 | 31.5 | 6412 | 2.56 | 66.8 | 65.5
135 | 34 | 8 | 36.0 | 6090 | 2.83 | 67.3 | 65.0
135 | 44 | 1 | 22.1 | 6847 | 2.07 | 62.4 | 63.3
135 | 44 | 3 | 28.5 | 6512 | 2.33 | 65.3 | 64.8
135 | 44 | 5 | 32.9 | 6251 | 2.55 | 66.4 | 65.3
135 | 44 | 8 | 37.5 | 5947 | 2.76 | 66.9 | 65.3

Right:
Method | Bitwidth (W/A) | Budget (M) | Top-1 (%)
TTQ (Zhu et al., 2017) | 2/32 | N/A | 66.6
BWN (Rastegari et al., 2016) | 1/32 | N/A | 60.8
RQ ST (Louizos et al., 2019) | 4/4 | N/A | 62.5
ALQ (Qu et al., 2020) | 2/2 | N/A | 66.4
SYQ (Faraone et al., 2018) | 1/8 | N/A | 62.9
HWGQ (Cai et al., 2017) | 1/2 | N/A | 59.6
LQ-Net (Zhang et al., 2018) | 1/2 | N/A | 62.6
ABC-Net (Lin et al., 2017) | (1/1)x5 | N/A | 65.0
CBCN (Liu et al., 2019) | (1/1)x4 | N/A | 61.4
Group-Net (Zhuang et al., 2019) | (1/1)x4 | N/A | 66.3
BNN Ensemble (Zhu et al., 2019) | (1/1)x6 | N/A | 61.0
XNOR-Net (Rastegari et al., 2016) | 1/1 | 163 | 51.2
Bi-Real Net-18 (Liu et al., 2018) | 1/1 | 163 | 56.4
XNOR-Net++ | 1/1 | 163 | 57.1
CI-BCNN (Wang et al., 2019) | 1/1 | 163 | 59.9
BONN (Gu et al., 2019) | 1/1 | 163 | 59.3
IR-Net (Qin et al., 2020) | 1/1 | 163 | 58.1
FTBNN Baseline (ours) | 1/1 | 163 | 60.2
FTBNN Baseline* (ours) | 1/1 | 163 | 63.2
Bi-Real Net-34 (Liu et al., 2018) | 1/1 | 193 | 62.2
Binary MobileNet (Phan et al., 2020) | 1/1 | 154 | 60.9
Real-to-Bin (Martinez et al., 2020) | 1/1 | 183 | 65.4
FTBNN derived (ours) | 1/1 | 132 | 65.5

On the other hand, the positive effect on deeper networks also translates into better accuracy on the validation set, meaning that the increase in representational capacity is accompanied by an increase in discriminative power: all the experiments on deeper BNNs with Group = 5 show a clear accuracy boost over those without group execution (Group = 1), e.g., from 61.0% to 62.5% for Layer = 34, and from 63.3% to 65.3% for Layer = 44.

However, an increasing representational capacity may also lead to a higher risk of losing generalizability. In some cases, the accuracy on the validation set is higher than that on the training set, due to the regularization nature of binary operations. With group execution, this effect of binarization is gradually weakened, leading to overfitting issues.

3.3 COMPARISON WITH THE STATE OF THE ART

Here we compare the proposed FTBNN with existing state-of-the-art methods, as illustrated on the right side of Table 4, including low-bit quantization methods, multiple-binarization methods, and other binary frameworks based on the ResNet-18 architecture. Interestingly, we find that some of our derived models achieve superior or comparable performance compared with the methods using multiple bases or branches, while requiring far less computational overhead, again showing the effectiveness and efficiency of the proposed method.

4 PERSPECTIVES

Here, we raise some possible insights from this work: a) To further enhance the model's discriminative ability through non-linearity, is combining ReLU and FPReLU the best way? Is it affected by other elements, such as depth, width and block order? b) To what extent can we push binary purification, given that a considerable proportion of real-valued operations remains in the derived models? c) As shown in our experiments on BNNs, group convolution is accompanied by overfitting.
How to alleviate overfitting in BNNs should also be considered in future work.
rXEN3-UyRhI
Official Blind Review #2
4: Ok but not good enough - rejection
---paper summary---: This paper proposes to improve the BNN's discriminative ability by introducing additional non-linearities. In addition, the paper exploits group convolution to enable a wider network, which can strengthen the BNN's representational ability while keeping the total overhead unchanged.

---Pros---: This paper introduces some practical methods to improve the performance of BNNs. In particular, the additional FPReLU is convincing. Moreover, the paper shows that grouped convolutions can be applied to wider BNNs to increase the representational capability while keeping the same complexity.

---Cons---:

1: This paper is incremental with limited new technical insights. a) That adding non-linearity can improve the representational capability of BNNs has been extensively explored in the literature. Specifically, ReActNet [Liu et al. 2020b] inserts an additional RPReLU after each binary convolution to increase the non-linearity; [A1; Martinez et al., 2020; Tang et al. 2017] add PReLU as the non-linearity; Group-Net [Zhuang et al. 2019] and XNOR-Net [Rastegari et al., 2016] argue that adding an additional ReLU is important to BNN performance. b) Varying width and/or depth has been studied in the previous fixed-point quantization/BNN literature, especially in NAS-based methods [Bulat et al. 2020; A2]; the original idea comes from EfficientNet. c) Some other tricks, such as replacing the 1x1 downsampling shortcut with pooling, have been widely used in the community.

2: Some arguments need theoretical proofs. For example, the authors argue that "despite the big quantization error made by quantization, the binary model can achieve much higher accuracy than the real-valued model with no quantization error". In other words, minimizing the quantization error can hinder the discriminative ability of BNNs, which is the main point of this paper. This observation is interesting, but further theorems are needed to explore whether the quantization error has some relation to the predicted accuracy under some constraints. If zero quantization error cannot lead to a good result, then what should be the best trade-off? I encourage the authors to further explore this issue. At the current stage, it is far from enough.

3: The experimental results in Table 4 may have mistakes. The paper claims that BOPs are converted to equivalent FLOPs with a factor of $1/64$. However, why do smaller BOPs correspond to larger FLOPs?

4: The necessary "AND-Count" operations may have technical issues. The AND-Count with values binarized to {0,1} should be equivalent to XNOR-Count with the values binarized to {-1, 1}, up to a scalar difference. The authors can derive this themselves to verify it. However, the formulations in Eq. (3) and Eq. (4) are not equivalent if both values are binarized to {-1, 1}.

5: More experimental results on deeper networks (e.g., ResNet-50, -101) on ImageNet are needed to justify the effectiveness of the method. In addition, comparisons with ReLU [Zhuang et al. 2019; Rastegari et al., 2016], PReLU [A1; Martinez et al., 2020; Tang et al. 2017], and RPReLU [Liu et al. 2020b] (optional) should be included.

6: Some typos. For example, "inheriting exiting advanced structures" → "inheriting existing advanced structures"; "Base on this consideration" → "Based on this consideration".

References:
[A1]: Bayesian Optimized 1-Bit CNNs, in ICCV 2019
[A2]: Joint Neural Architecture Search and Quantization, arXiv:1811.09426v1
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond ### Paper Abstract Binary neural networks (BNNs), where both weights and activations are binarized into 1 bit, have been widely studied in recent years due to its great benefit of highly accelerated computation and substantially reduced memory footprint that appeal to the development of resource constrained devices. In contrast to previous methods tending to reduce the quantization error for training BNN structures, we argue that the binarized convolution process owns an increasing linearity towards the target of minimizing such error, which in turn hampers BNN's discriminative ability. In this paper, we re-investigate and tune proper non-linear modules to fix that contradiction, leading to a strong baseline which achieves state-of-the-art performance on the large-scale ImageNet dataset in terms of accuracy and training efficiency. To go further, we find that the proposed BNN model still has much potential to be compressed by making a better use of the efficient binary operations, without losing accuracy. In addition, the limited capacity of the BNN model can also be increased with the help of group execution. Based on these insights, we are able to improve the baseline with an additional 4$\sim$5% top-1 accuracy gain even with less computational cost. Our code and all trained models will be made public. ### Paper Keywords ["Binary neural networks", "network quantization", "network compression"] ### Paper Content ABSTRACTBinary neural networks (BNNs), where both weights and activations are binarizedinto 1 bit, have been widely studied in recent years due to its great benefit ofhighly accelerated computation and substantially reduced memory footprint thatappeal to the development of resource constrained devices. In contrast to previousmethods tending to reduce the quantization error for training BNN structures, weargue that the binarized convolution process owns an increasing linearity towardsthe target of minimizing such error, which in turn hampers BNN’s discriminativeability. In this paper, we re-investigate and tune proper non-linear modules tofix that contradiction, leading to a strong baseline which achieves state-of-the-art performance on the large-scale ImageNet dataset in terms of accuracy andtraining efficiency. To go further, we find that the proposed BNN model still hasmuch potential to be compressed by making a better use of the efficient binaryoperations, without losing accuracy. In addition, the limited capacity of the BNNmodel can also be increased with the help of group execution. Based on theseinsights, we are able to improve the baseline with an additional 4 5% top-1accuracy gain even with less computational cost. Our code and all trained modelswill be made public.1 I NTRODUCTIONIn the past decade, Deep Neural Networks (DNNs), in particular Deep Convolutional NeuralNetworks (DCNNs), has revolutionized computer vision and been ubiquitously applied in variouscomputer vision tasks including image classification (Krizhevsky et al., 2012), object detection (Liuet al., 2020a) and semantic segmentation (Minaee et al., 2020). 
The top performing DCNNs (Heet al., 2016; Huang et al., 2017) are data and energy hungry, relying on cloud centers with clustersof energy hungry processors to speed up processing, which greatly impedes their deployment inubiquitous edge devices such as smartphones, automobiles, wearable devices and IoTs which havevery limited computing resources. Therefore, in the past few years, numerous research effort hasbeen devoted to developing DNN compression techniques to pursue a satisfactory tradeoff betweencomputational efficiency and prediction accuracy (Deng et al., 2020).Among various DNN compression techniques, Binary Neural Networks (BNNs), firstly appearedin the pioneering work by Hubara et al. (2016), have attracted increasing attention due to theirfavorable properties such as fast inference, low power consumption and memory saving. In aBNN, the weights and activations during inference are aggressively quantized into 1-bit (namelytwo values), which can lead to 32saving in memory footprint and up to 64speedup on CPUs(Rastegari et al., 2016). However, the main drawback of BNNs is that despite recent progress (Liuet al., 2018; Gu et al., 2019; Kim et al., 2020b), BNNs have trailed the accuracy of their full-precisioncounterparts. This is because the binarization inevitably causes serious information loss due to thelimited representational capacity with extreme discreteness. Additionally, the discontinuity natureof the binarization operation brings difficulty to the optimization of the deep network (Alizadehet al., 2018).A popular direction on enhancing the predictive performance of a BNN is to make the binaryoperation mimic the behavior of its full-precision counterpart by reducing the quantization errorcuased by the binarization function. For example, XNOR-Net (Rastegari et al., 2016) firstlyintroduced scaling factors for both the binary weight and activation such that the output of the1Under review as a conference paper at ICLR 2021SignBinary weightScaling factorsBNAdditionxlxl+1SignBinary weightxlxl+1BNAdditionBlock chain in Bi-Real Net: Block chain in FTBNN: (a): Basic block in Bi-Real Net(b): Basic block in FTBNN/: Basic block shown in left : FPReLU: ReLU(a) (b)EpochTop-1 AccuracyTraining LossFigure 1: Left: The basic block in original Bi-Real Net vs.the simplified basic block in FTBNN,where we directly absorb the explicit scaling factors into the BN layer by leveraging BN’s scalingfactors. Right : The non-linear modules (ReLU or FPReLU) are explicitly added after each basicblock in FTBNN. To maximize the model’s discriminative power while keeping its training stability,the number of ReLU is controlled and the proposed FPReLU is connected with most blocks.Training curves on ImageNet of the two 18-layer networks are depicted to show the trainingefficiency. Solid lines denote Top-1 accuracy on the validation set (y-axis on the right), dashedlines denote training loss (y-axis on the left). Both models are trained from scratch.binary convolution can be rescaled to closely match the result of the real-valued convolution justlike the original full-precision weight and activation are used. The method outperforms its vanillacounterpart BNN (Hubara et al., 2016) by a large margin (44.2% vs.27.9% in Top-1 accuracy onImageNet using the AlexNet architecture (Krizhevsky et al., 2012)). 
Because of the remarkablesuccess of XNOR-Net, a series of approaches emerged subsequently with the effort of either findingbetter scaling factors or proposing novel optimization strategies to further reduce the quantizationerror. Specifically, XNOR-Net++ (Bulat & Tzimiropoulos, 2019) improved the way of calculatingthe scaling factors by regarding them as model parameters which can be learnt end-to-end from thetarget loss. While Real-to-Bin (Martinez et al., 2020) proposed to compute the scaling factors onthe fly according to individual input samples, which is more flexible. From another perspective, IR-Net (Qin et al., 2020) progressively evolved the backward function for binarization from an identitymap into the original Sign function during training, which can avoid big quantization error in theearly stage of training. BONN (Gu et al., 2019) added a Bayesian loss to encourage the weight kernelfollowing a Gaussian mixture model with each Gaussian centered at each quantization value, leadingto higher accuracy. Other works aiming to reduce the quantization error also include ABC-Net (Linet al., 2017), Bi-Real Net (Liu et al., 2018), ProxyBNN (He et al., 2020), etc.However, another problem arises with the quantization error optimized towards 0, especially for thestructure like Bi-Real Net ( Fig. 1), where the only non-linear function is the binarization function.The non-linearity of the binarization function will be eliminated if the binary convolution withscaling factors can perfectly mimic the real-valued convolution in the extreme case (quantizationerror equals to 0), thus hindering the discriminative ability of BNNs. Therefore, it is necessary tore-investigate the non-linear property of BNNs when inheriting existing advanced structures.Based on this consideration, we conduct the experiment on MNIST dataset (LeCun & Cortes, 2010)using a 2-layer Bi-Real Net like structure (which begins with an initial real-valued convolutionlayer and two basic blocks illustrated in Fig. 1 (b), optionally followed by a non-linear module, andends with a fully connected (FC) layer) and some interesting phenomenons can be found as shownin Fig. 2, where we visualized the feature space before the FC layer and calculated the featurediscrepancy caused by the binarization process as well as the corresponding classification accuracy(ACC, in %) for each model. Firstly, comparing the first two figures, despite the big quantizationerror made by binarization, the binary model achieves much higher accuracy than the real-valuedmodel, which does not have quantization error. This indicates that the binarization function ownsa potential ability to enhance the model’s discriminative power, and also explains why Bi-Real Net2Under review as a conference paper at ICLR 2021Linear Binary PReLU ReLU ACC: 92.64 0.04Discrepancy: 0 ACC: 98.45 0.16 Discrepancy: 1.04 0.03 ACC: 98.70 0.07 Discrepancy: 1.66 0.04 ACC: 98.83 0.07 Discrepancy: 2.41 0.13Figure 2: Feature visualization for 4 different models experimenting on MNIST, which are denotedasLinear : using real-valued convolution layers without non-linear module; Binary : using binaryconvolution layers without non-linear module; PReLU : using binary convolution layers with PReLUmodule; ReLU : using binary convolution layers with ReLU module. Testing accuracies are recordedbased on 5 runs for each model. 
The average discrepancy between output from binary convolutionsand that from its corresponding real-valued convolutions (using the proxy weights and original real-valued activations to conduct the convolution) are recorded as well (best viewed in color).Table 1: Top-1 accuracy on ImageNet of different BNN models with commonly used trainingstrategies, based on the ResNet-18 architecture (He et al., 2016). X/7denotes the correspondingstrategy is / is not utilized. KD denotes knowledge distillation, MS means using multiple trainingsteps, GA indicates applying gradient approximation in the backward pass. Note that BinaryDuo(Kim et al., 2020b) improved GA with a coupled ternary model. Scaling represents using explicitscaling factors to reweight the output of binary convolutions. N/A+ nmeans only the number ofepochs nin the final step for multi-step training is given. Here Bi-Real Net was trained using twodifferent implementations. yindicates the double skip connections proposed by Liu et al. (2018)were used. It can be seen that FTBNN achieves promising accuracy even with a naive trainingscheme and can be further improved when combining the training strategies.Method KD GA MS Scaling Epoch Top-1 Method KD GA MS Scaling Epoch Top-1BNN 7 7 7 7 N/A 42.2% IR-Nety 7X 7X N/A 58.1%XNOR-Net 7 7 X X N/A+58 51.2% BinaryDuoy 7X X N/A 120+40 60.9%XNOR-Net++ 7 7 7 X 80 57.1% Real-to-Bin baseline yX 7X X 75+75 60.9%Bi-Real Net (B)y7X 7X 256 56.4% FTBNNy 7 7 7 7 60 60.2%Bi-Real Net (A)y7X X X N/A+20 56.4% FTBNNy X 7X 7 75+75 63.2%can possibly achieve high accuracy even without any non-linear modules. Such ability should berespected when we design the BNN structures rather than minimizing the quantization error to 0.Second, when explicitly introducing a non-linear module, either PReLU (He et al., 2015) or ReLU,the discriminative power can be further enhanced, even though the feature discrepancy is enlarged.This again shows that there is no strict correlation between the quantization discrepancy and the finalpredictive performance. Meanwhile, the ReLU model brings the most discriminative power with aclearer class separateness in the feature space. Our following experiments also show the superiorityof the traditional ReLU function due to a stronger non-linearity .Motivated by the above observation, we propose FTBNN (meaning fast training, see Fig. 1), byfirstly eliminating the scaling factors after the binary convolutional operation, which are insteadfused into the Batch Normalization (BN) (Ioffe & Szegedy, 2015) layers by leveraging BN’s scalingfactors, and explicitly configuring appropriate non-linear modules after each block. By doing so, wecan release the optimization burden for extra scaling layers as well as keep the non-linear propertyof the Sign function via a weaker scaling mechanism. At the same time, the configuration of explicitnon-linear modules (we also proposed a more flexible non-linear module, namely Fully ParametricReLU (FPReLU) shown in Fig. 3) is able to further boost the discriminative ability of the BNNmodel as well as ensure the training stability. The resulted FTBNN is highly competitive amongthe state-of-the-art BNN frameworks, with its enhanced performance in terms of both accuracy andtraining efficiency, as shown in Table 1 and Fig. 1. Note that very recently Liu et al. 
(2020b) alsodemonstrated the importance of non-linearity by redistributing activations in BNNs.To go beyond the baseline structure we have already obtained, we also notice that two aspects inthe literature were rarely investigated. One is to think how far we can go to make use of the fastand light binary operations and avoid the use of expensive real-valued counterparts, to make our3Under review as a conference paper at ICLR 2021BNN model more compact and computationally efficient. This is motivated by the fact that the real-valued operations in a BNN structure often take a considerable proportion of computation cost ( e.g.,85% in an 18-layer Bi-Real Net, and so does the proposed FTBNN, see Fig. 3). Second, inspiredby Binary MobileNet (Phan et al., 2020) which leveraged group convolution (Xie et al., 2017) toautomatically configure an optimal binary network architecture in their network search space, weconsider how group convolution can affect the performance of BNN structures (in representationalcapacity, discriminative power, etc.) with different configurations (vary in depth and width). To thebest of our knowledge this paper is also the first attempt to give such a specific investigation and canlead to useful insights on designing better BNN structures with the group mechanism.As for our experimental outcomes, firstly, we are able to build an easily trainable strong baselineFTBNN by improving the popular Bi-Real Net structure, already achieving the state-of-the-artperformance (Table 1, Table 3 and Table 4). Secondly, we enhanced the FTBNN by circumventingthe usage of real-valued operations and incorporating the group mechanism, leading to a series ofderived models that not only surpass the baseline in higher accuracy but also have less computationaloverhead (Table 4). It is also hoped that our exploration can give a better understanding of BNNdesign in the community and derive more efficient binary structures.2 M ETHODS2.1 B ACKGROUND AND FTBNNIn order to reduce the quantization error (denote as E) and protect the information flow in BNNs,which has been an important focus of recent years’ studies, a very basic idea is introducing severalscaling factors to reweight the output of the binary convolution, such that the rescaled output canapproach closely to the result as if using the real-valued operation:E= (WbA b)WA 0; (1)whereW,A,WbandAbindicate the real-valued weight and activation, the binarized weight andactivation respectively, represents a multiplicative convolution and represents a convolutionusing the XNOR-Count operations (or binary convolution), is the scaling factors and is element-wise multiplication.In XNOR-Net (Rastegari et al., 2016), actually consisted of two separate parts, for the weightsand activations respectively and both were calculated in an analytical way during the forward pass,by solving an optimization problem, which is therefore time-consuming. XNOR-Net++ (Bulat &Tzimiropoulos, 2019) further argued that can be a single part and learnt during training, showingbetter performance in terms of both accuracy and efficiency. More recently, Real-to-Bin (Martinezet al., 2020) proposed a data-driven version of , which was computed on the fly according toindividual input samples.In fact, with the purpose of finding a satisfactory initialization for , usually a pretrained model withfull-precision weights is needed (Rastegari et al., 2016; Lin et al., 2017). 
However, it is shown thatthe scaling factors can be fusable with the parameters in BN (Liu et al., 2018), directly training theBN layers can avoid the above troublesome, without degenerating the whole structure (Bethge et al.,2019). This also matches our assumption on Section 1 that the BN scaling factors can be a goodalternative for an explicit scaling layer.On the other hand, to maintain the information flow and maximize BNN’s expressive ability, anotheraspect of experience in the literature is the usage of shortcut connections (He et al., 2016). Onemilestone about that is the Bi-Real Net structure (Liu et al., 2018), where the authors emphasizedusing the shortcut which connects the real-valued activations before the Sign function to the outputof binary convolution, and proved the almost computation-free shortcut can significantly preserveBNN’s representational capability. It was then repeatedly adopted in other works (Liu et al., 2020b;Bulat et al., 2020; Kim et al., 2020a) to improve the quality of information propagation.Standing on the shoulders of giants and according to our discuss in Section 1, we propose asimple block based structure, dubbed as FTBNN1(Fig. 1) that both enables training efficiency andencourages non-linearity by discarding the scaling factors which are implicitly fused into BN, and1For reduction block, we adopt the real-valued 1x1 convolutional downsampling shortcut.4Under review as a conference paper at ICLR 2021Before compressionFlops budget: 172MTOP-1: 60.2%After compressionFlops budget: 90MTOP-1: 61.9%15%77%020408060120100Flops budget (in M)yf(y)f(y)=byf(y)=ayf(y)=y(starting status)SignBinary weightxlxl+1BNAdditionCopy(group convolution)AvgPool stride=2Figure 3: Left: The proposed FPReLU, slops in both sides are started from 1 (equal to anidentity map) and adaptively learned in channel-wise. Middle : Reduction block with group binaryconvolution and cheap downsampling layer (for normal block, the original binary convolutionlayer in Fig. 1 is also implemented with groups in the derived models). Right : Computationalbudget distributions for original and binary-purified FTBNN. The computational budget of a originalfloating point multiply operation (FLOP) in a convolution is reduced by 64 times if the operation isbinarized (using XNOR-Count instead). Based on that, when measuring the computational budget,the budget of binary operations (BOPs) are converted to equivalent of FLOPs with a factor of 1=64.15% and 77% denote the corresponding budget percentages for BOPs.adding specific activation functions. Contrary to Tang et al. (2017) and Bulat et al. (2019), wefind that traditional ReLU is still more effective than PReLU with a stronger non-linear property.While at the same time, immoderately inserting ReLUs in the network can also cause divergenceand deterioration, as the information collapse phenomenon (Sandler et al., 2018) become serious.To alleviate that, we extend PReLU to Fully Parametric ReLU (FPReLU) where the slops for bothnegative and positive activations are learnable (Fig. 3), to stably maximize BNN’s flexibility.2.2 T OWARDS A MORE COMPACT BNN WITH BINARY PURIFICATIONReal-valued operations in BNN are often needed to drive and preserve the information flow ( e.g.,binarization for the 1x1 convolutional shortcuts in the reduction block of Bi-Real Net will interruptthe continuous full-precision information flow, the damage of which is hard to recover in thefollowing layers (Bethge et al., 2019)).From Fig. 
3, we find that the most computation-consuming parts among the real-valued operations inFTBNN are the initial convolutional layer and the real-valued 1x1 convolutional shortcuts. Therebywe make the following two changes to relieve the computation burden for the baseline.Initial Conv: Inspired by Bethge et al. (2020), in the full-precision initial convolutional layer, we setthe kernel size to 3x3 and constrain the number of output channels (only half of that in the baselinein our experiments). To facilitate building BNNs with different width configurations, as we willdiscuss later, we add an extra binary convolutional layer after the initial convolutional layer as abridge that can arbitrarily expand the initial width (see appendix for more detail).Downsampling shortcuts: Similar to Liu et al. (2020b), we propose a copy-and-paste strategy toavoid a convolutional downsampling shortcut to save computation while preserving the informationflow. In a reduction block where the resolutions of feature maps are halved and the number of filtersis doubled, we firstly concatenate the duplicated input volume to extend the input, then employ a3x3 average pooling with stride=2 to construct the shortcut (Fig. 3).2.3 F EATURE AGGREGATION WITH GROUP EXECUTIONAs discussed by Phan et al. (2020), binary operation restricts BNN’s feature representation onlyusing limited discrete values. Simply enlarging the network by increasing the number of channelscan be very expensive since a Nwidening leads to a N2growth in computational budget. Weexploit the group convolution (Xie et al., 2017) to enable a wider network (Fig. 3), which we believe5Under review as a conference paper at ICLR 2021can strengthen BNN’s representational ability, while keeping the total overhead unchanged. Thedifferences to other similar works are as follows.Compared with methods using multiple bases or branches: Lin et al. (2017) and Zhuang et al.(2019) gathered results from multiple branches or bases to enrich the feature representation forbinary convolutions. Their methods can be equally viewed as running several independent networksparallelly followed by a summation, while ours only uses a single pipeline without introducingadditional computational overhead. In the form of information aggregation, concatenation is used ingroup convolution, which is different to summation adopted in these methods.Compared with methods leveraging group convolutions: Group convolution is also incorporatedin recent approaches to facilitate binary network architecture search (NAS) (Phan et al., 2020; Bulatet al., 2020), while unfortunately a proper ablation study is missing, making the benefits from thegroup convolution unclear under the whole framework where the resulted advantage may be biasedtowards the NAS technology itself. Our study can be regarded as such an specific investigation tomake a better understanding of group convolution when designing BNNs.3 E XPERIMENTS AND DISCUSSESFTBNN Baseline: As shown in Fig. 1, to give the model enough non-linearity while maintainingtraining stability, we insert a ReLU after every four blocks and for other blocks, FPReLU is adopted.Compact and efficient derived models: Based on our baseline and the proposed binary purificationpipeline, we set the number of output channels in the initial convolutional layer as 32. A bridgebinary convolutional layer is then connected, followed by the blocks with group convolution asshown in Fig. 3 (also see the appendix) and the non-linear modules. 
3 EXPERIMENTS AND DISCUSSIONS

FTBNN Baseline: As shown in Fig. 1, to give the model enough non-linearity while maintaining training stability, we insert a ReLU after every four blocks; for the other blocks, FPReLU is adopted.

Compact and efficient derived models: Based on our baseline and the proposed binary purification pipeline, we set the number of output channels in the initial convolutional layer to 32. A bridge binary convolutional layer is then connected, followed by the blocks with group convolution as shown in Fig. 3 (also see the appendix) and the non-linear modules. Finally, an FC layer followed by a soft-max layer is applied to perform the classification. In order to observe the effect of group convolution, we elaborate different computational budgets for the BNNs (calculated in FLOPs as stated in Fig. 3, following Liu et al. (2018) and Rastegari et al. (2016)) by varying width and depth.

Replacing XNOR-Count when ReLU is used: In Fig. 1 (b), the activation values before Sign are either 0 or positive if ReLU is applied at the end of the previous block. All the 0s would then be assigned to the lower binary value by the Sign function, eliminating the effect of ReLU in the residual branch. For this case, we have designed particular bit operations, which are illustrated in the appendix.

All the models are simply trained with the Adam optimizer (Kingma & Ba, 2014) for 60 epochs from scratch with a starting learning rate of 0.01, decayed in a multi-step fashion (at epochs 45 and 55, with decay rate 0.1) for the baseline, and with a cosine schedule (towards 0 at the end of training) for the derived models; we note there is no big difference between these two schemes. The weight decay is set to 0 for all the binary convolutional layers. We use STE (Hubara et al., 2016) for backward propagation, with value clipping in the range (-1.2, 1.2) for both weights and activations; a sketch of this binarization step is given below. The PyTorch library (Paszke et al., 2019) is used for implementation. Finally, following Bi-Real Net, we use the large-scale, challenging ImageNet benchmark (ILSVRC 2012, Deng et al. (2009)) in our experiments. A more detailed description of the implementation can be found in the appendix.
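For concreteness, the following is a minimal PyTorch sketch of a sign binarizer with a straight-through estimator and the clipping range stated above; this is a standard STE formulation written by us, not the authors' released code.

import torch

class BinarizeSTE(torch.autograd.Function):
    # Forward: binarize to {-1, +1}; zeros map to the lower value (-1),
    # matching the Sign behavior described in the text.
    # Backward: straight-through estimator, passing gradients only where
    # the input lies inside the clip range (-1.2, 1.2).
    @staticmethod
    def forward(ctx, x, clip=1.2):
        ctx.save_for_backward(x)
        ctx.clip = clip
        return torch.where(x > 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        mask = (x.abs() <= ctx.clip).to(grad_out.dtype)
        return grad_out * mask, None  # no gradient for the clip argument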
3.1 ABLATION ON FTBNN BASELINE

The structure most similar to the proposed baseline is Bi-Real Net, where the authors used a 2-stage training scheme to get a good initialization for the BNN, and a magnitude-aware gradient approximation was also considered during weight updating. Finally, the trained scaling factors are absorbed into the BN layers. The whole training procedure was carefully tuned in order to fully exploit the model's capacity. As seen in Table 3, when our naive training scheme is applied, the Bi-Real Net structure (Fig. 1 (a)) degrades substantially (from 56.4% to 52.3% Top-1 accuracy). The main deficiency of this structure is the lack of a proper non-linearity, which is demonstrated to be indispensable for BNNs. In addition, our structure without any non-linear module performs slightly better than the Bi-Real Net structure (52.5% vs. 52.3%), demonstrating the benefit of the weaker scaling mechanism provided by the BN layers.

Table 2: Binary convolutions in the FTBNN baseline based on the ResNet-18 structure. BConv x-y indicates the y-th block in the x-th stage; the size and number of filters are also given. The learned coefficients of the FPReLU after each block are listed in the form mean [minimum, maximum].

Layer (filters)       Negative slope       Positive slope
BConv1-1 (3x3, 64)    1.82 [0.48, 4.63]    1.16 [0.63, 2.19]
BConv1-2 (3x3, 64)    0.95 [0.00, 2.69]    0.72 [0.34, 1.72]
BConv1-3 (3x3, 64)    0.87 [0.33, 4.49]    0.75 [0.26, 1.40]
BConv1-4 (3x3, 64)    ReLU
BConv2-1 (3x3, 128)   0.91 [0.54, 2.80]    0.93 [0.56, 2.04]
BConv2-2 (3x3, 128)   0.93 [0.45, 3.16]    0.83 [0.54, 1.70]
BConv2-3 (3x3, 128)   0.81 [0.00, 1.82]    0.86 [0.46, 1.69]
BConv2-4 (3x3, 128)   ReLU
BConv3-1 (3x3, 256)   0.84 [0.47, 2.07]    0.95 [0.61, 1.84]
BConv3-2 (3x3, 256)   0.86 [0.44, 2.85]    0.79 [0.46, 1.51]
BConv3-3 (3x3, 256)   0.76 [0.04, 1.97]    0.78 [0.28, 2.50]
BConv3-4 (3x3, 256)   ReLU
BConv4-1 (3x3, 512)   0.96 [0.54, 2.86]    0.90 [0.14, 2.05]
BConv4-2 (3x3, 512)   1.04 [0.44, 4.66]    0.67 [0.00, 1.44]
BConv4-3 (3x3, 512)   0.68 [0.23, 2.80]    0.95 [0.23, 3.51]
BConv4-4 (3x3, 512)   ReLU

Table 3: Ablation study for the FTBNN baseline based on the ResNet-18 architecture.

Model                                                            Top-1 (%)  Top-5 (%)
Bi-Real Net reported by Liu et al. (2018)                        56.4       79.5
Bi-Real Net with our naive training scheme                       52.3       75.9
No explicit non-linear modules                                   52.5       76.0
All configured with PReLU                                        58.9       81.1
All configured with FPReLU                                       59.3       81.3
All configured with ReLU (lr=0.00015, diverges if lr is too big) 45.9       70.5
Partially configured with ReLU                                   59.6       81.4
Partially configured with ReLU & PReLU in others                 59.7       81.6
Baseline (partially configured with ReLU & FPReLU in others)     60.2       82.0
Full-precision ResNet-18 (our implementation)                    69.8       89.2

The following experiments then focus on the effectiveness of different non-linear modules. From Table 3, we find that the ReLU function, with its stronger non-linearity, is still preferred and brings in more discriminative power. For example, inserting only 4 ReLUs can outperform the variant with all blocks connected to PReLU by 0.7% in Top-1 accuracy. Note that ReLU is a special case of PReLU, indicating that ReLU is already a decent local optimum for PReLU: by providing strong non-linearity, it makes the optimization easier. At the same time, if the usage of ReLU is unrestrained, the BNN is also prone to divergence, showing that, unlike their full-precision counterparts, binarized architectures are essentially vulnerable, and an abuse of ReLU, which causes severe information collapse in the feature space, should be avoided.

A straightforward but effective way to control the non-linearity, so that the model's discriminative power is maximized while stability is also guaranteed, is to limit the number of ReLUs and additionally introduce weaker non-linear functions. Here, we show that, compared with PReLU, the proposed FPReLU is an ideal solution to meet this requirement. As shown in Fig. 3, FPReLU starts from an identity function and gradually evolves during training to compensate the non-linearity on both sides of the activations. Table 2 lists the ultimately learned coefficients of the FPReLU functions. The values for both sides vary around 1 within a fairly wide range, some even approaching 0 or a large value (e.g., 5), meaning that FPReLU is competent to tune the network into a balanced point automatically, since the slopes for both negative and positive activations are considered.

3.2 THE DERIVED MODELS

In this section, we evaluate the effectiveness of the derived models with binary purification, adjusting the number of groups. Table 4 (left) shows the influence of group convolutions on binary-purified BNNs under varying settings of depth and group number, with the computational costs constrained to be similar; the budget accounting used throughout is sketched below.
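To make the budget accounting concrete, here is a small Python sketch of the FLOPs/BOPs convention stated in Fig. 3 (binary operations counted at 1/64 of a FLOP), applied to a single convolution; it is our own illustration of the convention, not the authors' measurement code.

def conv_cost(c_in, c_out, k, h_out, w_out, groups=1, binary=False):
    # Multiply-accumulate count of a (possibly grouped) k x k convolution.
    ops = (c_in // groups) * c_out * k * k * h_out * w_out
    # Binary ops (XNOR-Count) are counted as 1/64 of a floating-point op.
    return ops / 64.0 if binary else float(ops)

# Example: widening a binary 3x3 layer while adding groups can keep the
# equivalent-FLOPs budget unchanged:
base = conv_cost(64, 64, 3, 56, 56, groups=1, binary=True)
wide = conv_cost(128, 128, 3, 56, 56, groups=4, binary=True)
print(base, wide)  # identical cost: the grouped layer is 2x wider for free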
It can be found that the simplest derived models, without using group convolutions (i.e., G = 1), surpass the proposed FTBNN baseline by a large margin (a 1.7% improvement in Top-1 accuracy with the computational budget almost halved, based on ResNet-18), which strongly demonstrates the effectiveness of the proposed binary purification strategy. It also shows that group execution has no influence on relatively shallow binary networks under different budget constraints. Going deeper, group convolution begins to differentiate the corresponding models with notable characteristics. On the one hand, when observing the training accuracy, increasing the number of groups obviously yields a better fit, indicating an increasing representational capacity of the BNN; this phenomenon can be found for the Layer = 34 models regardless of the model complexity.

Table 4 (left): Experimental results of the derived structures with different group and budget configurations, with Top-1 accuracy (in %) recorded; the widths of the networks are adjusted to meet the budget requirements. For a clearer illustration, we also give the number of parameters for each model: Params = number of floating-point parameters + number of binary parameters / 32.

Budget  Layer  Group  FLOPs (M)  BOPs (M)  Params (M)  Top-1 (Train)  Top-1 (Val)
90      18     1      19.0       4516      1.66        61.0           61.9
90      18     3      23.1       4580      2.06        61.6           61.9
90      18     5      25.8       4301      2.28        62.2           61.9
90      18     8      28.7       3988      2.55        61.5           61.1
90      34     1      19.3       4176      1.32        58.8           61.0
90      34     3      23.8       4005      1.59        60.6           62.0
90      34     5      27.0       3829      1.79        61.8           62.5
90      34     8      30.1       3577      1.97        61.9           62.0
135     18     1      21.1       7399      2.42        66.1           64.8
135     18     3      26.2       7285      2.88        66.6           64.5
135     18     5      29.9       7031      3.22        67.2           64.2
135     18     8      34.0       6717      3.60        66.4           63.4
135     34     1      21.6       6994      1.99        63.5           64.5
135     34     3      27.4       6639      2.31        65.8           65.3
135     34     5      31.5       6412      2.56        66.8           65.5
135     34     8      36.0       6090      2.83        67.3           65.0
135     44     1      22.1       6847      2.07        62.4           63.3
135     44     3      28.5       6512      2.33        65.3           64.8
135     44     5      32.9       6251      2.55        66.4           65.3
135     44     8      37.5       5947      2.76        66.9           65.3

Table 4 (right): Comparison with the state of the art. (W/A) gives how many bits are used for the weights (W) and activations (A); (1/1)xn indicates that n branches, bases, or individual models were adopted to expand the network capacity of the BNN. Our method with * indicates that knowledge distillation and multi-step training were used (refer to the appendix for detail).

Method                                Bits (W/A)  Budget (M)  Top-1 (%)
TTQ (Zhu et al., 2017)                2/32        N/A         66.6
BWN (Rastegari et al., 2016)          1/32        N/A         60.8
RQ ST (Louizos et al., 2019)          4/4         N/A         62.5
ALQ (Qu et al., 2020)                 2/2         N/A         66.4
SYQ (Faraone et al., 2018)            1/8         N/A         62.9
HWGQ (Cai et al., 2017)               1/2         N/A         59.6
LQ-Net (Zhang et al., 2018)           1/2         N/A         62.6
ABC-Net (Lin et al., 2017)            (1/1)x5     N/A         65.0
CBCN (Liu et al., 2019)               (1/1)x4     N/A         61.4
Group-Net (Zhuang et al., 2019)       (1/1)x4     N/A         66.3
BNN Ensemble (Zhu et al., 2019)       (1/1)x6     N/A         61.0
XNOR-Net (Rastegari et al., 2016)     1/1         163         51.2
Bi-Real Net-18 (Liu et al., 2018)     1/1         163         56.4
XNOR-Net++                            1/1         163         57.1
CI-BCNN (Wang et al., 2019)           1/1         163         59.9
BONN (Gu et al., 2019)                1/1         163         59.3
IR-Net (Qin et al., 2020)             1/1         163         58.1
FTBNN Baseline (ours)                 1/1         163         60.2
FTBNN Baseline* (ours)                1/1         163         63.2
Bi-Real Net-34 (Liu et al., 2018)     1/1         193         62.2
Binary MobileNet (Phan et al., 2020)  1/1         154         60.9
Real-to-Bin (Martinez et al., 2020)   1/1         183         65.4
FTBNN derived (ours)                  1/1         132         65.5

On the other hand, the positive effect for deeper networks also translates into better accuracy on the validation set, meaning that an increase in discriminative power accompanies it. All the experiments on deeper BNNs with Group = 5 show a clear accuracy boost over those without group execution (Group = 1), e.g., from 61.0% to 62.5% for Layer = 34, and from 63.3% to 65.3% for Layer = 44.
However, an increasing representational capacity may also bring a higher risk of losing generalizability. In some cases, the accuracy on the validation set is higher than that on the training set, owing to the regularizing nature of binary operations. With group execution, this effect of binarization is gradually weakened, leading to overfitting issues.

3.3 COMPARISON WITH THE STATE OF THE ART

Here we compare the proposed FTBNN with existing state-of-the-art methods, as illustrated on the right side of Table 4, including low-bit quantization methods, multiple-binarization methods, other binary frameworks based on the ResNet-18 architecture, etc. Interestingly, we find that some of our derived models achieve superior or comparable performance when compared with the methods using multiple bases or branches, while incurring far less computational overhead, again showing the effectiveness and efficiency of the proposed method.

4 PERSPECTIVES

Here, we raise some possible insights from this work: a) To further enhance the model's discriminative ability through non-linearity, is combining ReLU and FPReLU the best way? Is it affected by other elements, such as depth, width, and block order? b) To what extent can we carry out the binary purification, given that a large proportion of real-valued operations remains in the derived models? c) As shown in our experiments, group convolution in BNNs is accompanied by overfitting; how to alleviate overfitting for BNNs should also be considered in future work.

<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text ---paper summary---: This paper proposes to improve the BNN's discriminative ability by introducing additional non-linearities. In addition, the paper exploits group convolution to enable a wider network, which can strengthen the BNN's representational ability while keeping the total overhead unchanged. ---Pros---: This paper introduces some practical methods to improve the performance of BNNs. In particular, the additional FPReLU is convincing. Moreover, the paper shows that grouped convolutions can be applied to widen BNNs, increasing the representational capability while keeping the same complexity. ---Cons---: 1: This paper is incremental, with limited new technical insights. a) That adding non-linearity can improve the representational capability of BNNs has been extensively explored in the literature. Specifically, ReActNet [Liu et al. 2020b] inserts an additional RPReLU after each binary convolution to increase the non-linearity; [A1; Martinez et al., 2020; Tang et al. 2017] add PReLU as the non-linearity; Group-Net [Zhuang et al. 2019] and XNOR-Net [Rastegari et al., 2016] argue that adding an additional ReLU is important to BNN performance. b) Varying width and/or depth has been studied in the previous fixed-point quantization/BNN literature, especially in NAS-based methods [Bulat et al. 2020; A2]. The original idea comes from EfficientNet. c) Some other tricks, such as replacing the 1x1 downsampling shortcut with pooling, have been widely used in the community. 2: Some arguments need theoretical proofs. For example, the authors argue that "despite the big quantization error made by quantization, the binary model can achieve much higher accuracy than the real-valued model with no quantization error".
In other words, minimizing the quantization error can hinder the discriminative ability of BNNs, which is the main point of this paper. This observation is interesting, but it needs further theorems to explore whether the quantization error has some relation to the predicted accuracy under certain constraints. If zero quantization error cannot lead to a good result, then what should be the best trade-off? I encourage the authors to further explore this issue; at the current stage, it is far from enough. 3: The experimental results in Table 4 may have mistakes. The paper claims that BOPs are converted to equivalent FLOPs with a factor of $1/64$. However, why do smaller BOPs correspond to larger FLOPs? 4: The necessary "AND-Count" operations may have technical issues. The AND-Count with values binarized to {0,1} should be equivalent to XNOR-Count with values binarized to {-1, 1}, up to a scalar difference. The authors can derive this themselves to verify it. However, the formulations in Eq. (3) and Eq. (4) are not equivalent if both values are binarized to {-1, 1}. 5: More experimental results on deeper networks (e.g., ResNet-50, -101) on ImageNet are needed to justify the effectiveness of the method. In addition, comparisons with ReLU [Zhuang et al. 2019; Rastegari et al., 2016], PReLU [A1; Martinez et al., 2020; Tang et al. 2017], and RPReLU [Liu et al. 2020b] (optional) should be included. 6: Some typos. For example, "inheriting exiting advanced structures" → "inheriting existing advanced structures"; "Base on this consideration" → "Based on this consideration". References: [A1]: Bayesian Optimized 1-Bit CNNs, in ICCV 2019 [A2]: Joint Neural Architecture Search and Quantization, arXiv:1811.09426v1 ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
-2FCwDKRREu
ICLR.cc/2021/Conference
2021
Learning Invariant Representations for Reinforcement Learning without Reconstruction
["Amy Zhang", "Rowan Thomas McAllister", "Roberto Calandra", "Yarin Gal", "Sergey Levine"]
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction. Our goal is to learn representations that provide for effective downstream control and invariance to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous MDPs, which we propose using to learn robust latent representations which encode only the task-relevant information from observations. Our method trains encoders such that distances in latent space equal bisimulation distances in state space. We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks, where the background is replaced with moving distractors and natural videos, while achieving SOTA performance. We also test a first-person highway driving task where our method learns invariance to clouds, weather, and time of day. Finally, we provide generalization results drawn from properties of bisimulation metrics, and links to causal inference.
["rich observations", "bisimulation metrics", "representation learning", "state abstractions"]
ABSTRACT

We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction. Our goal is to learn representations that provide for effective downstream control and invariance to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous MDPs, which we propose using to learn robust latent representations which encode only the task-relevant information from observations. Our method trains encoders such that distances in latent space equal bisimulation distances in state space. We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks, where the background is replaced with moving distractors and natural videos, while achieving SOTA performance. We also test a first-person highway driving task where our method learns invariance to clouds, weather, and time of day. Finally, we provide generalization results drawn from properties of bisimulation metrics, and links to causal inference.

1 Introduction

[Figure 1: Robust representations of the visual scene should be insensitive to irrelevant objects (e.g., clouds) or details (e.g., car types), and encode two observations equivalently if their relevant details are equal (e.g., road direction and locations of other cars).]

Learning control from images is important for many real-world applications. While deep reinforcement learning (RL) has enjoyed many successes in simulated tasks, learning control from real vision is more complex, especially outdoors, where images reveal detailed scenes of a complex and unstructured world. Furthermore, while many RL algorithms can eventually learn control from real images given unlimited data, data-efficiency is often a necessity in real trials, which are expensive and constrained to real time. Prior methods for data-efficient learning of simulated visual tasks typically use representation learning. Representation learning summarizes images by encoding them into smaller vectored representations better suited for RL. For example, sequential autoencoders aim to learn lossless representations of streaming observations—sufficient to reconstruct current observations and predict future observations—from which various RL algorithms can be trained (Hafner et al., 2018; Lee et al., 2019; Yarats et al., 2019). However, such methods are task-agnostic: the models represent all dynamic elements they observe in the world, whether they are relevant to the task or not. We argue such representations can easily "distract" RL algorithms with irrelevant information in the case of real images. The issue of distraction is less evident in popular simulated MuJoCo and Atari tasks, since any change in observation space is likely task-relevant, and thus worth representing. By contrast, visual images that autonomous cars observe contain predominately task-irrelevant information, like cloud shapes and architectural details, illustrated in Figure 1.

[*Equal contribution. Corresponding author: amy.x.zhang@mail.mcgill.ca]

Rather than learning control-agnostic representations that focus on accurate reconstruction of clouds and buildings, we would rather achieve a more compressed representation from a lossy encoder, which only retains state information relevant to our task.
If we would like to learn representations that capture only task-relevant elements of the state and are invariant to task-irrelevant information, intuitively we can utilize the reward signal to help determine task-relevance, as shown by Jonschkowski & Brock (2015). As cumulative rewards are our objective, state elements are relevant not only if they influence the current reward, but also if they influence state elements in the future that in turn influence future rewards. This recursive relationship can be distilled into a recursive, task-aware notion of state abstraction: an ideal representation is one that is predictive of reward, and also predictive of itself in the future.

We propose learning such an invariant representation using the bisimulation metric, where the distance between two observation encodings corresponds to how "behaviourally different" (Ferns & Precup, 2014) the two observations are. Our main contribution is a practical representation learning method based on the bisimulation metric suitable for downstream control, which we call deep bisimulation for control (DBC). We additionally provide theoretical analysis that proves value bounds between the optimal value function of the true MDP and the optimal value function of the MDP constructed from the learned representation. Empirical evaluations demonstrate that our non-reconstructive approach using bisimulation is substantially more robust to task-irrelevant distractors when compared to prior approaches that use reconstruction losses or contrastive losses. Our initial experiments insert natural videos into the background of MuJoCo control tasks as complex distraction. Our second setup is a high-fidelity highway driving task using CARLA (Dosovitskiy et al., 2017), showing that our representations can be trained effectively even on highly realistic images with many distractions, such as trees, clouds, buildings, and shadows. For example videos see https://sites.google.com/view/deepbisim4control.

2 Related Work

Our work builds on the extensive prior research on bisimulation in MDP state aggregation.

Reconstruction-based Representations. Early works on deep reinforcement learning from images (Lange & Riedmiller, 2010; Lange et al., 2012) used a two-step learning process: first an auto-encoder was trained using a reconstruction loss to learn a low-dimensional representation, and subsequently a controller was learned using this representation. This allows effective leveraging of large, unlabeled datasets for learning representations for control. In practice, there is no guarantee that the learned representation will capture useful information for the control task, and significant expert knowledge and tricks are often necessary for these approaches to work. In model-based RL, one solution to this problem has been to jointly train the encoder and the dynamics model end-to-end (Watter et al., 2015; Wahlström et al., 2015); this proved effective in learning useful task-oriented representations. Hafner et al. (2018) and Lee et al. (2019) learn latent state models using a reconstruction loss, but these approaches suffer from the difficulty of learning accurate long-term predictions and often still require significant manual tuning. Gelada et al. (2019) also propose a latent dynamics model-based method and connect their approach to bisimulation metrics, using a reconstruction loss in Atari.
They show that the ℓ2 distance in the DeepMDP representation upper-bounds the bisimulation distance, whereas our objective directly learns a representation in which distance in latent space is the bisimulation metric. Further, their results rely on the assumption that the learned representation is Lipschitz, whereas we show that, by directly learning a bisimilarity-based representation, we guarantee a representation that generates a Lipschitz MDP. We show experimentally that our non-reconstructive DBC method is substantially more robust to complex distractors.

Contrastive-based Representations. Contrastive losses are a self-supervised approach to learn useful representations by enforcing similarity constraints between data (van den Oord et al., 2018; Chen et al., 2020). Similarity functions can be provided as domain knowledge in the form of heuristic data augmentation, where we maximize similarity between augmentations of the same data point (Laskin et al., 2020) or nearby image patches (Hénaff et al., 2019), and minimize similarity between different data points. In the absence of this domain knowledge, contrastive representations can be trained by predicting the future (van den Oord et al., 2018). We compare to such an approach in our experiments, and show that DBC is substantially more robust. While contrastive losses do not require reconstruction, they do not inherently have a mechanism to determine downstream task relevance without manual engineering, and when trained only for prediction, they aim to capture all predictable features in the observation, which performs poorly on real images for the same reasons world models do. A better method would be to incorporate knowledge of the downstream task into the similarity function in a data-driven way, so that images that are very different pixel-wise (e.g., under lighting or texture changes) can also be grouped as similar w.r.t. downstream objectives.

Bisimulation. Various forms of state abstraction have been defined in Markov decision processes (MDPs) to group states into clusters whilst preserving some property (e.g., the optimal value, or all values, or all action values from each state) (Li et al., 2006). The strictest form, which generally preserves the most properties, is bisimulation (Larsen & Skou, 1989). Bisimulation only groups states that are indistinguishable w.r.t. reward sequences output given any action sequence tested. A related concept is bisimulation metrics (Ferns & Precup, 2014), which measure how "behaviorally similar" states are. Ferns et al. (2011) define the bisimulation metric with respect to continuous MDPs, and propose a Monte Carlo algorithm for learning it using an exact computation of the Wasserstein distance between empirically measured transition distributions. However, this method does not scale well to large state spaces. Taylor et al. (2009) relate MDP homomorphisms to lax probabilistic bisimulation, and define a lax bisimulation metric. They then compute a value bound based on this metric for MDP homomorphisms, where approximately equivalent state-action pairs are aggregated. Most recently, Castro (2020) proposes an algorithm for computing on-policy bisimulation metrics, but does so directly, without learning a representation, focusing on deterministic settings and the policy evaluation problem.
We believe our work is the first to propose a gradient-based method for directly learning a representation space with the properties of bisimulation metrics, and to show that it works in the policy optimization setting.

3 Preliminaries

We start by introducing notation and outlining realistic assumptions about the underlying structure in the environment. Then, we review state abstractions and metrics for state similarity.

We assume the underlying environment is a Markov decision process (MDP), described by the tuple $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ the action space, $\mathcal{P}(s' \mid s, a)$ the probability of transitioning from state $s \in \mathcal{S}$ to state $s' \in \mathcal{S}$, and $\gamma \in [0, 1)$ a discount factor. An "agent" chooses actions $a \in \mathcal{A}$ according to a policy function $a \sim \pi(s)$, which updates the system state $s' \sim \mathcal{P}(s, a)$, yielding a reward $r = \mathcal{R}(s) \in \mathbb{R}$. The agent's goal is to maximize the expected cumulative discounted rewards by learning a good policy: $\max_\pi \mathbb{E}_{\mathcal{P}}[\sum_{t=0}^{\infty} \gamma^t \mathcal{R}(s_t)]$. While our primary concern is learning from images, we do not address the partial-observability problem explicitly: we instead approximate stacked pixel observations as the fully-observed system state $s$ (explained further in Appendix B).

Bisimulation is a form of state abstraction that groups states $s_i$ and $s_j$ that are "behaviorally equivalent" (Li et al., 2006): for any action sequence $a_{0:\infty}$, the probabilistic sequences of rewards from $s_i$ and $s_j$ are identical. A more compact definition has a recursive form: two states are bisimilar if they share both the same immediate reward and equivalent distributions over the next bisimilar states (Larsen & Skou, 1989; Givan et al., 2003).

Definition 1 (Bisimulation Relations (Givan et al., 2003)). Given an MDP $\mathcal{M}$, an equivalence relation $B$ between states is a bisimulation relation if, for all states $s_i, s_j \in \mathcal{S}$ that are equivalent under $B$ (denoted $s_i B s_j$), the following conditions hold:
$$\mathcal{R}(s_i, a) = \mathcal{R}(s_j, a) \quad \forall a \in \mathcal{A}, \qquad (1)$$
$$\mathcal{P}(G \mid s_i, a) = \mathcal{P}(G \mid s_j, a) \quad \forall a \in \mathcal{A},\ \forall G \in \mathcal{S}_B, \qquad (2)$$
where $\mathcal{S}_B$ is the partition of $\mathcal{S}$ under the relation $B$ (the set of all groups $G$ of equivalent states), and $\mathcal{P}(G \mid s, a) = \sum_{s' \in G} \mathcal{P}(s' \mid s, a)$.

Exact partitioning with bisimulation relations is generally impractical in continuous state spaces, as the relation is highly sensitive to infinitesimal changes in the reward function or dynamics. For this reason, Bisimulation Metrics (Ferns et al., 2011; Ferns & Precup, 2014; Castro, 2020) soften the concept of state partitions, and instead define a pseudometric space $(\mathcal{S}, d)$, where a distance function $d: \mathcal{S} \times \mathcal{S} \mapsto \mathbb{R}_{\geq 0}$ measures the "behavioral similarity" between two states.¹

[¹ Note that $d$ is a pseudometric, meaning the distance between two different states can be zero, corresponding to behavioral equivalence.]

Defining a distance $d$ between states requires defining both a distance between rewards (to soften Equation (1)) and a distance between state distributions (to soften Equation (2)). Prior works use the Wasserstein metric for the latter, originally used in the context of bisimulation metrics by van Breugel & Worrell (2001). The $p$-th Wasserstein metric between two probability distributions $P_i$ and $P_j$ is defined as $W_p(P_i, P_j; d) = \big( \inf_{\gamma' \in \Gamma(P_i, P_j)} \int_{\mathcal{S} \times \mathcal{S}} d(s_i, s_j)^p \, d\gamma'(s_i, s_j) \big)^{1/p}$, where $\Gamma(P_i, P_j)$ is the set of all couplings of $P_i$ and $P_j$. This is known as the "earth mover" distance, denoting the cost of transporting mass from one distribution to another (Villani, 2003). Finally, the bisimulation metric is the reward difference added to the Wasserstein distance between transition distributions:
Definition 2 (Bisimulation Metric). From Theorem 2.6 in Ferns et al. (2011), with $c \in [0, 1)$:
$$d(s_i, s_j) = \max_{a \in \mathcal{A}} \, (1 - c) \, |\mathcal{R}^a_{s_i} - \mathcal{R}^a_{s_j}| + c \, W_1(\mathcal{P}^a_{s_i}, \mathcal{P}^a_{s_j}; d). \qquad (3)$$

4 Learning Representations for Control with Bisimulation Metrics

[Figure 2: Learning a bisimulation metric representation: shaded in blue is the main model architecture; it is reused for both states, like a Siamese network. The loss is the sum of the reward distance and the discounted transition-distribution distance (using the Wasserstein metric $W$).]

Algorithm 1: Deep Bisimulation for Control (DBC)
1: for time $t = 0$ to $\infty$ do
2:   Encode state $z_t = \phi(s_t)$
3:   Execute action $a_t \sim \pi(z_t)$
4:   Record data: $\mathcal{D} \leftarrow \mathcal{D} \cup \{s_t, a_t, s_{t+1}, r_{t+1}\}$
5:   Sample batch $B_i \sim \mathcal{D}$
6:   Permute batch: $B_j = \mathrm{permute}(B_i)$
7:   Train policy: $\mathbb{E}_{B_i}[J(\pi)]$  (Algorithm 2)
8:   Train encoder: $\mathbb{E}_{B_i, B_j}[J(\phi)]$  (Equation (4))
9:   Train dynamics: $J(\hat{\mathcal{P}}, \phi) = (\hat{\mathcal{P}}(\phi(s_t), a_t) - \bar{z}_{t+1})^2$

We propose Deep Bisimulation for Control (DBC), a data-efficient approach to learn control policies from unstructured, high-dimensional states. In contrast to prior work on bisimulation, which typically aims to learn a distance function of the form $d: \mathcal{S} \times \mathcal{S} \mapsto \mathbb{R}_{\geq 0}$ between states, our aim is instead to learn representations $\mathcal{Z}$ under which $\ell_1$ distances correspond to bisimulation metrics, and then use these representations to improve reinforcement learning. Our goal is to learn encoders $\phi: \mathcal{S} \mapsto \mathcal{Z}$ that capture representations of states that are suitable for control, while discarding any information that is irrelevant for control. Any representation that relies on reconstruction of the state cannot do this, as the irrelevant details are still important for reconstruction. We hypothesize that bisimulation metrics can acquire this type of representation, without any reconstruction.

Bisimulation metrics are a useful form of state abstraction, but prior methods to train distance functions either do not scale to pixel observations (Ferns et al., 2011) (due to the max operator in Equation (3)), or were only designed for the (fixed) policy evaluation setting (Castro, 2020). By contrast, we learn improved representations for policy inputs as the policy improves online. Our $\pi$-bisimulation metric is learned with gradient descent, and we prove it converges to a fixed point in Theorem 1 under some assumptions. To train our encoder towards our desired relation $d(s_i, s_j) := \|\phi(s_i) - \phi(s_j)\|_1$, we draw batches of state pairs and minimize the mean squared error between the on-policy bisimulation metric and the $\ell_1$ distance in latent space:
$$J(\phi) = \Big( \|z_i - z_j\|_1 - |r_i - r_j| - \gamma \, W_2\big(\hat{\mathcal{P}}(\cdot \mid \bar{z}_i, a_i), \hat{\mathcal{P}}(\cdot \mid \bar{z}_j, a_j)\big) \Big)^2, \qquad (4)$$
where $z_i = \phi(s_i)$, $z_j = \phi(s_j)$, the $r$ are rewards, and $\bar{z}$ denotes $\phi(s)$ with stop gradients. Equation (4) also uses a probabilistic dynamics model $\hat{\mathcal{P}}$ which outputs a Gaussian distribution. For this reason, we use the 2-Wasserstein metric $W_2$ in Equation (4), as opposed to the 1-Wasserstein in Equation (3), since the $W_2$ metric has a convenient closed form: $W_2(\mathcal{N}(\mu_i, \Sigma_i), \mathcal{N}(\mu_j, \Sigma_j))^2 = \|\mu_i - \mu_j\|_2^2 + \|\Sigma_i^{1/2} - \Sigma_j^{1/2}\|_F^2$, where $\|\cdot\|_F$ is the Frobenius norm. For all other distances we continue using the $\ell_1$ norm. Our model architecture and training are illustrated by Figure 2 and Algorithm 1; a minimal sketch of the encoder loss follows Algorithm 2 below.

Algorithm 2: Train Policy (changes to SAC in blue)
1: Get value: $V = \min_{i=1,2} \hat{Q}_i(\hat{\phi}(s)) - \alpha \log \pi(a \mid \phi(s))$
2: Train critics: $J(Q_i, \phi) = (Q_i(\phi(s)) - r - \gamma V)^2$
3: Train actor: $J(\pi) = \alpha \log \pi(a \mid \phi(s)) - \min_{i=1,2} Q_i(\phi(s))$
4: Train alpha: $J(\alpha) = -\alpha \log \pi(a \mid \phi(s))$
5: Update target critics: $\hat{Q}_i \leftarrow \tau_Q Q_i + (1 - \tau_Q) \hat{Q}_i$
6: Update target encoder: $\hat{\phi} \leftarrow \tau_\phi \phi + (1 - \tau_\phi) \hat{\phi}$
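To make Equation (4) concrete, here is a minimal PyTorch sketch of the encoder loss with the closed-form $W_2$ between diagonal Gaussians; the tensor shapes, the diagonal-covariance assumption, and the `dynamics` callable are our own illustration, while the permutation pairing follows Algorithm 1.

import torch
import torch.nn.functional as F

def bisim_loss(phi, dynamics, obs, actions, rewards, gamma=0.99):
    # obs: (B, ...), actions: (B, A), rewards: (B, 1).
    z = phi(obs)                      # (B, Z) latent codes
    perm = torch.randperm(z.size(0))  # pair each sample with a permuted partner

    # Predicted next-latent Gaussians from the (detached) current latents.
    mu, sigma = dynamics(z.detach(), actions)  # each (B, Z), diagonal std
    # Closed-form 2-Wasserstein between diagonal Gaussians:
    # W2^2 = ||mu_i - mu_j||_2^2 + ||sigma_i - sigma_j||_2^2.
    w2 = torch.sqrt(
        (mu - mu[perm]).pow(2).sum(-1) + (sigma - sigma[perm]).pow(2).sum(-1)
    )

    z_dist = (z - z[perm]).abs().sum(-1)             # l1 distance in latent space
    r_dist = (rewards - rewards[perm]).abs().squeeze(-1)
    target = r_dist + gamma * w2                     # on-policy bisimulation target
    # Only the l1 term carries gradients into the encoder; the target is fixed.
    return F.mse_loss(z_dist, target.detach())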
Incorporating control. We combine our representation learning approach (Algorithm 1) with the soft actor-critic (SAC) algorithm (Haarnoja et al., 2018) to devise a practical reinforcement learning method. We modified SAC slightly in Algorithm 2 to allow the value function to backprop to our encoder, which can improve performance further (Yarats et al., 2019; Rakelly et al., 2019). In principle, though, our method could be combined with any RL algorithm, including the model-free DQN (Mnih et al., 2015) or the model-based PETS (Chua et al., 2018). Implementation details and hyperparameter values of DBC are summarized in the appendix, Table 2. We train DBC by iteratively updating three components in turn: a policy (in this case SAC), an encoder $\phi$, and a dynamics model $\hat{\mathcal{P}}$ (lines 7–9, Algorithm 1); we found that a single joint loss function was less stable to train. The inputs of each loss function $J(\cdot)$ in Algorithm 1 indicate which components are updated. After each training step, the policy is used to step in the environment, the data is collected in a replay buffer $\mathcal{D}$, and a batch is randomly selected to repeat training.

5 Generalization Bounds and Links to Causal Inference

While DBC enables representation learning without pixel reconstruction, it leaves open the question of how good the resulting representations really are. In this section, we present theoretical analysis that bounds the suboptimality of a value function trained on the representation learned via DBC. First, we show that our $\pi$-bisimulation metric converges to a fixed point, starting from the initialized policy $\pi_0$ and converging to an optimal policy $\pi^*$.

Theorem 1. Let $\mathfrak{met}$ be the space of bounded pseudometrics on $\mathcal{S}$ and $\pi$ a policy that is continuously improving. Define $\mathcal{F}: \mathfrak{met} \mapsto \mathfrak{met}$ by
$$\mathcal{F}(d, \pi)(s_i, s_j) = (1 - c) \, |r^\pi_{s_i} - r^\pi_{s_j}| + c \, W(d)(\mathcal{P}^\pi_{s_i}, \mathcal{P}^\pi_{s_j}). \qquad (5)$$
Then $\mathcal{F}$ has a least fixed point $\tilde{d}$ which is a $\pi^*$-bisimulation metric.

Proof in appendix. As evidenced by Definition 2, the bisimulation metric has no direct dependence on the state space. Pixels can change, but bisimilarity will stay the same. Instead, bisimilarity is grounded in a recursion of future transition probabilities and rewards, which is closely related to the optimal value function. In fact, the bisimulation metric gives tight bounds on the optimal value function with discount factor $\gamma$. We show this using the property that the optimal value function is Lipschitz with respect to the bisimulation metric; see Theorem 5 in the appendix (Ferns et al., 2004). This result also implies that the closer two states are in terms of $\tilde{d}$, the more likely they are to share the same optimal actions. This leads us to a generalization bound on the optimal value function of an MDP constructed from a representation space using bisimulation metrics, $\|\phi(s_i) - \phi(s_j)\|_1 := \tilde{d}(s_i, s_j)$. We can construct a partition of this space for some $\epsilon > 0$, giving us $n$ partitions where $\frac{1}{n} < \epsilon (1 - c)$. We denote $\phi$ as the encoder that maps from the original state space $\mathcal{S}$ to each $\epsilon$-cluster. Here $\epsilon$ denotes the amount of approximation allowed: a larger $\epsilon$ leads to a more compact bisimulation partition at the expense of a looser bound on the optimal value function.

Theorem 2 (Value bound based on bisimulation metrics). Given an MDP $\bar{\mathcal{M}}$ constructed by aggregating states in an $\epsilon$-neighborhood, and an encoder $\phi$ that maps from states in the original MDP $\mathcal{M}$ to these clusters, the optimal value functions for the two MDPs are bounded as
$$|V^*(s) - V^*(\phi(s))| \leq \frac{2\epsilon}{(1 - \gamma)(1 - c)}. \qquad (6)$$

Proof in appendix. As $\epsilon \to 0$, the optimal value function of the aggregated MDP converges to the original; a small numeric illustration of the bound follows.
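As a quick numerical illustration of Equation (6), with values chosen by us purely for illustration, the suboptimality grows quickly as $\gamma$ or $c$ approach 1:

def value_bound(eps, gamma, c):
    # Theorem 2: |V*(s) - V*(phi(s))| <= 2*eps / ((1 - gamma) * (1 - c))
    return 2.0 * eps / ((1.0 - gamma) * (1.0 - c))

print(value_bound(0.10, 0.99, 0.5))  # 40.0: coarse clusters are costly at gamma = 0.99
print(value_bound(0.01, 0.99, 0.5))  # 4.0: shrinking eps tightens the bound linearly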
Further, by defining a learning error for $\phi$, $L := \sup_{s_i, s_j \in \mathcal{S}} \big| \|\phi(s_i) - \phi(s_j)\|_1 - \tilde{d}(s_i, s_j) \big|$, we can update the bound in Theorem 2 to incorporate $L$: $|V^*(s) - V^*(\phi(s))| \leq \frac{2\epsilon + 2L}{(1 - \gamma)(1 - c)}$.

MDP dynamics have a strong connection to causal inference and causal graphs, which are directed acyclic graphs (Jonsson & Barto, 2006; Schölkopf, 2019; Zhang et al., 2020). Specifically, the state and action at time $t$ causally affect the next state at time $t + 1$. In this work, we care about the components of the state space that causally affect current and future reward. Deep bisimulation for control representations connect to causal feature sets, i.e., the minimal feature set needed to predict a target variable (Zhang et al., 2020).

Theorem 3 (Connections to causal feature sets (Thm 1 in Zhang et al. (2020))). If we partition observations using the bisimulation metric, those clusters (a bisimulation partition) correspond to the causal feature set of the observation space with respect to current and future reward.

This connection tells us that these features are the minimal sufficient statistic of the current and future reward, and therefore consist of (and only consist of) the causal ancestors of the reward variable $r$.

Definition 3 (Causal Ancestors). In a causal graph where nodes correspond to variables and directed edges between a parent node $P$ and child node $C$ are causal relationships, the causal ancestors $AN(C)$ of a node are all nodes on the path from $C$ to a root node.

If there are interventions on distractor variables, i.e., variables that control the rendering function $q$ and therefore the rendered observation but do not affect the reward, the causal feature set will be robust to these interventions, and will correctly predict current and future reward in the linear function approximation setting (Zhang et al., 2020). As an example, in autonomous driving, an intervention can be a change from day to night, which affects the observation space but not the dynamics or reward. Finally, we show that a representation based on the bisimulation metric generalizes to other reward functions with the same causal ancestors.

Theorem 4 (Task Generalization). Given an encoder $\phi: \mathcal{S} \mapsto \mathcal{Z}$ that maps observations to a latent bisimulation metric representation where $\|\phi(s_i) - \phi(s_j)\|_1 := \tilde{d}(s_i, s_j)$, $\mathcal{Z}$ encodes information about all the causal ancestors of the reward $AN(R)$.

Proof in appendix. This result shows that the learned representation will generalize to unseen reward functions, as long as the new reward function has a subset of the same causal ancestors. As an example, a representation learned for a robot to walk will likely generalize to learning to run, because the reward function depends on forward velocity and all the factors that contribute to forward velocity. However, that representation will not generalize to picking up objects, as those objects will be ignored by the learned representation, since they are not likely to be causal ancestors of a reward function designed for walking. Theorem 4 shows that the learned representation will be robust to spurious correlations, i.e., changes in factors that are not in $AN(R)$. This complements Theorem 5, which states that the representation is a minimal sufficient statistic of the optimal value function, improving generalization over non-minimal representations.
Theorem 5 ($V^*$ is Lipschitz with respect to $\tilde{d}$). Let $V^*$ be the optimal value function for a given discount factor $\gamma$. If $c \geq \gamma$, then $V^*$ is Lipschitz continuous with respect to $\tilde{d}$, with Lipschitz constant $\frac{1}{1 - c}$, where $\tilde{d}$ is a $\pi^*$-bisimulation metric:
$$|V^*(s_i) - V^*(s_j)| \leq \frac{1}{1 - c} \, \tilde{d}(s_i, s_j). \qquad (7)$$

See Theorem 5.1 in Ferns et al. (2004) for proof. We show empirical validation of these findings in Section 6.2.

6 Experiments

Our central hypothesis is that our non-reconstructive, bisimulation-based representation learning approach should be substantially more robust to task-irrelevant distractors. To that end, we evaluate our method in a clean setting without distractors, as well as in a much more difficult setting with distractors. We compare against several baselines. The first is Stochastic Latent Actor-Critic (SLAC, Lee et al. (2019)), a state-of-the-art method for pixel observations on DeepMind Control that learns a dynamics model with a reconstruction loss. The second is DeepMDP (Gelada et al., 2019), a recent method that also learns a latent representation space using a latent dynamics model, reward model, and distributional Q-learning, but which needed a reconstruction loss to scale up to Atari. Finally, we compare against two methods using the same architecture as ours but exchanging our bisimulation loss for (1) a reconstruction loss ("Reconstruction") and (2) contrastive predictive coding (Oord et al., 2018) ("Contrastive") to ground the dynamics model and learn a latent representation.

6.1 Control with Background Distraction

In this section, we benchmark DBC and the previously described baselines on the DeepMind Control (DMC) suite (Tassa et al., 2018) in two settings and nine environments (Figure 3): finger_spin, cheetah_run, and walker_walk are shown here, with additional environments in the appendix.

Default Setting. Here, the pixel observations have simple backgrounds, as shown in Figure 3 (top row), with training curves for our DBC and the baselines. We see that SLAC, a recent state-of-the-art model-based representation learning method that uses reconstruction, generally performs best.

Simple Distractors Setting. Next, we include simple background distractors, shown in Figure 3 (middle row), with easy-to-predict motions. We use a fixed number of colored circles that obey the dynamics of an ideal gas (no attraction or repulsion between objects) with no collisions; a sketch of such a distractor generator is given below. Note that the performance of DBC remains consistent, while the other methods start decreasing.
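For reproducibility, a minimal NumPy sketch of an ideal-gas-style distractor layer is given below: circles move with constant random velocities and ignore each other (no attraction, repulsion, or collisions). The bounce-off-frame-edges behavior and all constants are our own assumptions, not taken from the paper.

import numpy as np

def ideal_gas_distractors(n=5, size=84, steps=1000, seed=0):
    # Yields (steps, n, 2) positions of n non-interacting circles.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, size, (n, 2))
    vel = rng.uniform(-2, 2, (n, 2))  # constant velocities, no forces
    out = np.empty((steps, n, 2))
    for t in range(steps):
        pos += vel
        # Reflect at the frame boundary so circles stay in view.
        hit = (pos < 0) | (pos > size)
        vel[hit] *= -1
        pos = np.clip(pos, 0, size)
        out[t] = pos
    return out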
Natural Video Setting. Then, we incorporate natural video from the Kinetics dataset (Kay et al., 2017) as background (Zhang et al., 2018), shown in Figure 3 (bottom row). The results confirm our hypothesis: although a number of prior methods can learn effectively in the absence of distractors, when complex distractions are introduced, our non-reconstructive, bisimulation-based method attains substantially better results.

[Figure 3: Left, observations: pixel observations in DMC in the default setting (top row) for finger spin (left column), cheetah (middle column), and walker (right column); with simple distractors (middle row); and with natural video distractors (bottom row). Right, training curves: average return vs. environment steps (up to 8e5), comparing our DBC method to the baselines on 10 seeds with 1 standard error shaded. The grid location of each plot corresponds to the grid location of each observation.]

[Figure 4: t-SNE of latent spaces learned with a bisimulation metric (left) and a VAE (right) after training has completed, color-coded with predicted state values (higher value yellow, lower value purple). Neighboring points in the embedding space learned with a bisimulation metric have similar states and correspond to observations with the same task-related information (depicted as pairs of images with their corresponding embeddings), whereas no such structure is seen in the embedding space learned by the VAE, where the same image pairs are mapped far away from each other.]

To visualize the representation learned with our bisimulation metric loss function in Equation (4), we use a t-SNE plot (Figure 4). We see that even when the backgrounds look drastically different, our encoder learns to ignore irrelevant information and maps observations with similar robot configurations near each other. See Appendix D for another visualization.

6.2 Generalization Experiments

We test generalization of our learned representation in two ways. First, we show that the learned representation space can generalize to different types of distractors, by training with simple distractors and testing in the natural video setting. Second, we show that our learned representation can be useful for reward functions other than those it was trained for.

Generalizing over backgrounds. We first train in the simple distractors setting and evaluate on natural video. Figure 5 shows an example of the simple distractors setting and the performance during training of two experiments: blue is the zero-shot transfer to the natural video setting, and orange is the baseline trained directly on natural video. This result empirically validates that the representations learned by DBC effectively learn to ignore the background, regardless of what the background contains or how dynamic it is.
Generalizing over reward functions. We evaluate (Figure 5) the generalization capabilities of the learned representation by training SAC with the new reward functions walker_stand and walker_run using the fixed representation learned from walker_walk. This is empirical evidence confirming Theorem 4: if the new reward functions are causally dependent on a subset of the same factors that determine the original reward function, then our representation is sufficient.

[Figure 5: Generalization of a model trained on the simple distractors environment and evaluated on kinetics (left; episode reward on walker_walk for DBC transferred from ideal gas to kinetics, DBC trained on kinetics, and DeepMDP transferred from ideal gas to kinetics). Generalization of an encoder trained on the walker_walk environment and evaluated on walker_stand (center) and walker_run (right), comparing SAC trained on observations, SAC with a frozen DBC encoder, and SAC with a frozen DeepMDP encoder, all in the simple distractors setting. 10 seeds, 1 standard error shaded.]

6.3 Comparison with other Bisimulation Encoders

Even though the purpose of the bisimulation metrics of Castro (2020) is learning distances $d$, not representation spaces $\mathcal{Z}$, it nevertheless implements $d$ with function approximation: $d(s_i, s_j) = \psi(\phi(s_i), \phi(s_j))$, encoding observations with $\phi$ before computing distances with $\psi$, trained as:
$$J(\phi, \psi) = \Big( \psi\big(\phi(s_i), \phi(s_j)\big) - |r_i - r_j| - \gamma \, \hat{\psi}\big(\hat{\phi}(\mathcal{P}(s_i, \pi(s_i))), \hat{\phi}(\mathcal{P}(s_j, \pi(s_j)))\big) \Big)^2, \qquad (8)$$
where $\hat{\phi}$ and $\hat{\psi}$ are target networks. A natural question is: how does the encoder $\phi$ above perform in control tasks? We combine $\phi$ above with our policy in Algorithm 2 and use the same network for $\psi$ (a single hidden layer, 729 wide).

[Figure 6: Bisimulation results on walker_walk with natural video. Blue is DBC and orange is Castro (2020).]

Figure 6 shows that representations from Castro (2020) can learn control (surprisingly well, given that they were not designed to), but our method learns faster. Further, our method is simpler: comparing Equation (8) to Equation (4), our method uses the $\ell_1$ distance between the encodings instead of introducing an additional network $\psi$.

6.4 Autonomous Driving with Visual Redundancy

Real-world control systems such as robotics and autonomous vehicles must contend with a huge variety of task-irrelevant information, such as irrelevant objects (e.g., clouds) and irrelevant details (e.g., obstacle color). To evaluate DBC on tasks with more realistic observations, we construct a highway driving scenario with photo-realistic visual observations using the CARLA simulator (Dosovitskiy et al., 2017), shown in Figure 7.

[Figure 7: The driving task is to drive the red ego car (left) safely in traffic (middle) along a highway (right).]

The agent's goal is to drive as far as possible along the figure-8 highway of CARLA's Town04 in 1000 time-steps without colliding with the 20 other moving vehicles or barriers. Our objective function rewards highway progression and penalizes collisions:
$$r_t = \mathbf{v}_{ego}^{\top} \hat{\mathbf{u}}_{highway} \cdot \Delta t - \lambda_i \cdot \mathrm{impulse} - \lambda_s \cdot |\mathrm{steer}|,$$
where $\mathbf{v}_{ego}$ is the velocity vector of the ego vehicle, projected onto the highway's unit vector $\hat{\mathbf{u}}_{highway}$ and multiplied by the time discretization $\Delta t = 0.05$ to measure highway progression in meters. Collisions result in impulses $\in \mathbb{R}_+$, measured in Newton-seconds. We found that a steering penalty $\mathrm{steer} \in [-1, 1]$ helped, and used weights $\lambda_i = 10^{-4}$ and $\lambda_s = 1$; a sketch of this reward is given below.
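A minimal Python sketch of this reward, written by us from the description above (the variable names and sign conventions are our own):

import numpy as np

def driving_reward(v_ego, u_highway, impulse_ns, steer,
                   dt=0.05, lam_i=1e-4, lam_s=1.0):
    # v_ego: ego velocity vector (m/s); u_highway: unit vector along the highway.
    # impulse_ns: collision impulse in Newton-seconds; steer in [-1, 1].
    progression = float(np.dot(v_ego, u_highway)) * dt  # meters of highway progress
    return progression - lam_i * impulse_ns - lam_s * abs(steer)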
While more specialized objectives exist, like lane-keeping, this experiment's purpose is only to compare representations with observations more characteristic of real robotic tasks. We use five cameras on the vehicle's roof, each with a 60 degree view. By concatenating the images together, our vehicle has a 300 degree view, observed as 84 x 420 pixels. Code and install instructions are in the appendix. Results in Figure 9 compare the same baselines as before, except for SLAC, which is easily distracted (Figure 3). Instead we used SAC, which does not explicitly learn a representation but performs surprisingly well from raw images. DeepMDP performs well too, perhaps given its similarity to bisimulation. But the Reconstruction and Contrastive methods again perform poorly with complex images. More intuitive metrics are in Table 1, and Figure 8 depicts the representation space as a t-SNE with corresponding observations. Each run took 12 hours on a GTX 1080 GPU.

[Figure 8: A t-SNE diagram of encoded first-person driving observations after 10k training steps of Algorithm 1, color-coded by value ($V$ in Algorithm 2). Top: the learned representation identifies an obstacle on the right side. Whether that obstacle is a dark wall, bright car, or truck is task-irrelevant: these states are behaviourally equivalent. Left: the ego vehicle has flipped onto its left side. The different wall colors, due to a setting sun, are irrelevant: all states are equally stuck and low-value (purple t-SNE color). Right: clear highway driving. Clouds and sun position are irrelevant.]

[Figure 9: Performance comparison with 3 seeds on the driving task (episode reward over 100k steps; legend: Contrastive, Reconstruction, SAC, DeepMDP, DBC (ours)). Our DBC method (red) performs better than DeepMDP (purple) or learning directly from pixels without a representation (SAC, green), and much better than contrastive methods (blue). Our method's final performance is 46.8% better than the next best baseline.]

Table 1: Driving metrics, averaged over 100 episodes after 100k training steps, with standard error. The arrow direction indicates whether a larger or smaller value is desired.

Metric                SAC               DeepMDP           DBC (ours)
successes (100m) (up) 12%               17%               24%
distance (m) (up)     123.2 +/- 7.43    106.7 +/- 11.1    179.0 +/- 11.4
crash intensity (dn)  4604 +/- 30.7     1958 +/- 15.6     2673 +/- 38.5
average steer (dn)    16.6% +/- 0.019%  10.4% +/- 0.015%  7.3% +/- 0.012%
average brake (dn)    1.3% +/- 0.006%   4.3% +/- 0.033%   1.6% +/- 0.022%

7 Discussion

This paper presents Deep Bisimulation for Control: a new representation learning method that considers downstream control. Observations are encoded into representations that are invariant to different task-irrelevant details in the observation. We show this is important when learning control from outdoor images, or otherwise images with background "distractions". In contrast to other bisimulation methods, we show performance gains when distances in representation space match the bisimulation distance between observations.

Future work: Several options exist for future work. First, our latent dynamics model $\hat{\mathcal{P}}$ was only used for training our encoder in Equation (4), but could also be used for multi-step planning in latent space. Second, estimating uncertainty could also be important to produce agents that can work in the real world, perhaps via an ensemble of models $\{\hat{\mathcal{P}}_k\}_{k=1}^K$, to detect, and adapt to, distributional shifts between training and test observations; a sketch of such a disagreement signal is given below.
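As a rough sketch of how such ensemble-based shift detection could look (entirely our own illustration; the paper does not implement this), disagreement among the ensemble members' predicted next-latent means can serve as an epistemic-uncertainty signal:

import torch

def ensemble_disagreement(models, z, a):
    # models: list of K dynamics nets mapping (latent, action) -> next-latent mean.
    # High variance across members suggests (z, a) is far from the training data.
    preds = torch.stack([m(z, a) for m in models])  # (K, B, Z)
    return preds.var(dim=0).mean(dim=-1)            # (B,) disagreement per sample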
Third, an unaddressed issue is that of partially observed settings (we assumed approximate full observability by using stacked images), possibly handled with explicit memory or implicit memory such as an LSTM. Finally, investigating which metrics (L1 or L2) and which dynamics distributions (Gaussian or not) would be most beneficial remains open.

References

Pablo Samuel Castro. Scalable methods for computing state similarity in deterministic Markov decision processes. In Association for the Advancement of Artificial Intelligence (AAAI), 2020.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, February 2020. URL http://arxiv.org/abs/2002.05709.

Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Neural Information Processing Systems (NeurIPS), pp. 4754–4765, 2018.

Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. arXiv preprint arXiv:1711.03938, 2017.

Simon S. Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, and John Langford. Provably efficient RL with rich observations via latent state decoding. Computing Research Repository (CoRR), abs/1901.09018, 2019. URL http://arxiv.org/abs/1901.09018.

Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite Markov decision processes. In Uncertainty in Artificial Intelligence (UAI), pp. 162–169, 2004. URL http://dl.acm.org/citation.cfm?id=1036843.1036863.

Norm Ferns, Prakash Panangaden, and Doina Precup. Bisimulation metrics for continuous Markov decision processes. Society for Industrial and Applied Mathematics, 40(6):1662–1714, December 2011. doi: 10.1137/10080484X. URL https://doi.org/10.1137/10080484X.

Norman Ferns and Doina Precup. Bisimulation metrics are optimal value functions. In Uncertainty in Artificial Intelligence (UAI), pp. 210–219, 2014.

Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G. Bellemare. DeepMDP: Learning continuous latent space models for representation learning. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), International Conference on Machine Learning (ICML), volume 97, pp. 2170–2179, June 2019.

Robert Givan, Thomas L. Dean, and Matthew Greig. Equivalence notions and model minimization in Markov decision processes. Artificial Intelligence, 147:163–223, 2003.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.

Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551, 2018.

Olivier J. Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.

Rico Jonschkowski and Oliver Brock. Learning state representations with robotic priors. Autonomous Robots, 39(3):407–428, 2015.

Anders Jonsson and Andrew Barto. Causal graph based decomposition of factored MDPs. Journal of Machine Learning Research, 7:2259–2301, December 2006.
Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics human action video dataset. Computing Research Repository (CoRR), 2017. URL http://arxiv.org/abs/1705.06950.

Sascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement learning. In International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2010.

Sascha Lange, Martin Riedmiller, and Arne Voigtländer. Autonomous reinforcement learning on raw visual input data in a real world application. In International Joint Conference on Neural Networks (IJCNN), pp. 1–8, 2012. doi: 10.1109/IJCNN.2012.6252823.

K. G. Larsen and A. Skou. Bisimulation through probabilistic testing (preliminary report). In Symposium on Principles of Programming Languages, pp. 344–352. Association for Computing Machinery, 1989. doi: 10.1145/75277.75307. URL https://doi.org/10.1145/75277.75307.

Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representations for reinforcement learning. arXiv preprint arXiv:2003.06417, 2020.

Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. arXiv preprint arXiv:1907.00953, 2019.

Lihong Li, Thomas J. Walsh, and Michael L. Littman. Towards a unified theory of state abstraction for MDPs. In ISAIM, 2006.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, February 2015. URL http://dx.doi.org/10.1038/nature14236.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.

Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policy meta-reinforcement learning via probabilistic context variables. arXiv preprint arXiv:1903.08254, 2019.

Bernhard Schölkopf. Causality for machine learning, 2019.

Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and Martin Riedmiller. DeepMind control suite. Technical report, DeepMind, January 2018. URL https://arxiv.org/abs/1801.00690.

Jonathan Taylor, Doina Precup, and Prakash Panangaden. Bounding performance loss in approximate MDP homomorphisms. In Neural Information Processing (NeurIPS), pp. 1649–1656, 2009.

Franck van Breugel and James Worrell. Towards quantitative verification of probabilistic transition systems. In Fernando Orejas, Paul G. Spirakis, and Jan van Leeuwen (eds.), Automata, Languages and Programming, pp. 421–432. Springer, 2001. doi: 10.1007/3-540-48224-5_35.

Cédric Villani. Topics in Optimal Transportation. American Mathematical Society, 2003.

Niklas Wahlström, Thomas Schön, and Marc Deisenroth. From pixels to torques: Policy learning with deep dynamical models.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Neural Information Processing Systems (NeurIPS), pp. 2728–2736, 2015.
Denis Yarats and Ilya Kostrikov. Soft actor-critic (SAC) implementation in PyTorch. https://github.com/denisyarats/pytorch_sac, 2020.
Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, and Rob Fergus. Improving sample efficiency in model-free reinforcement learning from images. arXiv preprint arXiv:1910.01741, 2019.
Amy Zhang, Yuxin Wu, and Joelle Pineau. Natural environment benchmarks for reinforcement learning. Computing Research Repository (CoRR), abs/1811.06032, 2018. URL http://arxiv.org/abs/1811.06032.
Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, and Doina Precup. Invariant causal prediction for block MDPs. In International Conference on Machine Learning (ICML), 2020.

A Additional Theorems and Proofs

Theorem 1. Let $\mathbf{met}$ be the space of bounded pseudometrics on $\mathcal{S}$ and $\pi \in \Pi$ a policy that is continuously improving in the space of policies $\Pi$. Define $\mathcal{F}: \mathbf{met} \mapsto \mathbf{met}$ by
$\mathcal{F}(d, \pi)(s_i, s_j) = (1 - c)\,|r^{\pi}_{s_i} - r^{\pi}_{s_j}| + c\,W(d)(\mathcal{P}^{\pi}_{s_i}, \mathcal{P}^{\pi}_{s_j})$.    (9)
Then $\mathcal{F}$ has a least fixed point $\tilde{d}$ which is a $\pi^*$-bisimulation metric.

Proof. Ideally, to prove this theorem we would show that $\mathcal{F}$ is monotonically increasing and continuous, and apply the Fixed Point Theorem to show the existence of a fixed point that $\mathcal{F}$ converges to. Unfortunately, we can show that $\mathcal{F}$, under a $\pi$ that monotonically converges to $\pi^*$, is not itself monotonic, unlike the original bisimulation metric setting (Ferns et al., 2004) and the policy evaluation setting (Castro, 2020). We start the iterates $\mathcal{F}^n$ from bottom $\bot$, denoted as $\mathcal{F}^n(\bot)$. In Ferns et al. (2004) the $\max_{a \in \mathcal{A}}$ can be thought of as learning a policy between every two pairs of states to maximize their distance, and therefore this distance can only stay the same or grow over iterations of $\mathcal{F}$. In Castro (2020), $\pi$ is fixed, and under a deterministic MDP it can also be shown that the distance between states $d_n(s_i, s_j)$ will only expand, not contract, as $n$ increases. In the policy iteration setting, however, with $\pi$ starting from initialization $\pi_0$ and getting updated:
$\pi_k(s) = \arg\max_{a \in \mathcal{A}} \sum_{s' \in \mathcal{S}} [\, r^a_{ss'} + \gamma V^{\pi_{k-1}}(s') \,]$,    (10)
there is no guarantee that the distance between two states satisfies $d^{\pi_{k-1}}_{n-1}(s_i, s_j) < d^{\pi_k}_{n}(s_i, s_j)$ under policy iterations $\pi_{k-1}, \pi_k$ and distance metric iterations $d_{n-1}, d_n$ for $k, n \in \mathbb{N}$, which is required for monotonicity.

Instead, we show that using the policy improvement theorem, which gives us
$V^{\pi_k}(s) \geq V^{\pi_{k-1}}(s), \; \forall s \in \mathcal{S}$,    (11)
$\pi$ will converge to a fixed point using the Fixed Point Theorem, and taking the result by Castro (2020) that $\mathcal{F}^{\pi}$ has a fixed point for every $\pi \in \Pi$, we can show that a fixed point bisimulation metric will be found with policy iteration.

Theorem 2. Given a new aggregated MDP $\bar{\mathcal{M}}$ constructed by aggregating states in an $\epsilon$-neighborhood, and an encoder $\phi$ that maps from states in the original MDP $\mathcal{M}$ to these clusters, the optimal value functions for the two MDPs are bounded as
$|V^*(s) - V^*(\phi(s))| \leq \frac{2\epsilon}{(1 - \gamma)(1 - c)}$.    (12)

Proof. From Theorem 5.1 in Ferns et al. (2004) we have:
$(1 - c)\,|V^*(s) - V^*(\phi(s))| \leq g(s, \tilde{d}) + \frac{\gamma}{1 - \gamma} \max_{u \in \mathcal{S}} g(u, \tilde{d})$,
where $g$ is the average distance between a state and all other states in its equivalence class under the bisimulation metric $\tilde{d}$. By specifying an $\epsilon$-neighborhood for each cluster of states we can replace $g$:
$(1 - c)\,|V^*(s) - V^*(\phi(s))| \leq 2\epsilon + \frac{\gamma}{1 - \gamma} 2\epsilon$,
$|V^*(s) - V^*(\phi(s))| \leq \frac{1}{1 - c}\big(2\epsilon + \frac{\gamma}{1 - \gamma} 2\epsilon\big) = \frac{2\epsilon}{(1 - \gamma)(1 - c)}$.
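To make the fixed-point iteration of Theorem 1 concrete, the following is a minimal sketch (not the authors' code) that iterates $\mathcal{F}$ for a fixed policy on a toy finite MDP; the rewards, transition matrix, and the constant c = 0.9 below are illustrative assumptions, and the Wasserstein term is solved exactly as an optimal-transport linear program.

```python
# Fixed-point iteration of F(d)(i, j) = (1 - c)|r_i - r_j| + c W_1(P_i, P_j; d)
# on a small finite MDP under a fixed policy (a sketch; MDP values assumed).
import numpy as np
from scipy.optimize import linprog

def w1(p, q, d):
    """1-Wasserstein distance between discrete distributions p and q under
    ground metric d, solved as an optimal-transport linear program."""
    n = len(p)
    cost = d.reshape(-1)                      # coupling gamma[i, j], row-major
    A_eq, b_eq = [], []
    for i in range(n):                        # row marginals must equal p
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(p[i])
    for j in range(n):                        # column marginals must equal q
        col = np.zeros(n * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(q[j])
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun

def bisim_metric(r, P, c=0.9, iters=100, tol=1e-6):
    """Iterate F starting from d = 0 until (approximate) convergence."""
    n = len(r)
    d = np.zeros((n, n))
    for _ in range(iters):
        d_new = np.zeros_like(d)
        for i in range(n):
            for j in range(i + 1, n):
                d_new[i, j] = d_new[j, i] = (
                    (1 - c) * abs(r[i] - r[j]) + c * w1(P[i], P[j], d))
        if np.abs(d_new - d).max() < tol:
            break
        d = d_new
    return d

# Toy 3-state MDP under a fixed policy: states 0 and 1 are behaviorally
# identical; state 2 is absorbing and rewarding.
r = np.array([0.0, 0.0, 1.0])
P = np.array([[0.9, 0.1, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 0.0, 1.0]])
print(bisim_metric(r, P))  # d(0, 1) ~ 0; both states far from state 2
```

Because states 0 and 1 share rewards and transition distributions, the iteration drives their distance to zero while both remain far from the rewarding state; this is the behavioral equivalence that the DBC encoder is trained to mirror as an $\ell_1$ distance in latent space.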
Theorem 4. Given an encoder $\phi: \mathcal{S} \mapsto \mathcal{Z}$ that maps observations to a latent bisimulation metric representation where $\|\phi(s_i) - \phi(s_j)\|_1 := \tilde{d}(s_i, s_j)$, $\mathcal{Z}$ encodes information about all the causal ancestors of the reward $AN(R)$.

Proof. We assume an MDP with a state space $\mathcal{S} := \{S_1, \ldots, S_K\}$ that can be factorized into $K$ variables with 1-step causal transition dynamics described by a causal graph $\mathcal{G}$ (example in Figure 10). We break the proof up into two parts: 1) show that if a factor $S_i \notin AN(R)$ changes, the bisimulation distance between the original state $s$ and the new state $s'$ is 0, and 2) show that if a factor $S_j \in AN(R)$ changes, the bisimulation distance can be $> 0$.

Figure 10: Causal graph of transition dynamics. Reward depends only on $s_1$ as a causal parent, but $s_1$ causally depends on $s_2$, so $AN(R)$ is the set $\{s_1, s_2\}$.

1) If $S_i \notin AN(R)$, an intervention on that factor does not affect current or future reward.
$\tilde{d}(s_i, s_j) = \max_{a \in \mathcal{A}} (1 - c)\,|r^a_{s_i} - r^a_{s_j}| + c\,W(\tilde{d})(\mathcal{P}^a_{s_i}, \mathcal{P}^a_{s_j}) = \max_{a \in \mathcal{A}} c\,W(\tilde{d})(\mathcal{P}^a_{s_i}, \mathcal{P}^a_{s_j})$,
since $s_i$ and $s_j$ have the same reward. If $S_i$ does not affect future reward, then states $s_i$ and $s_j$ will have the same future reward conditioned on all future actions. This gives us $\tilde{d}(s, s') = 0$.

2) If there is an intervention on $S_j \in AN(R)$ then current and/or future reward can change. If current reward changes, then we already have $\max_{a \in \mathcal{A}} (1 - c)\,|r^a_{s_i} - r^a_{s_j}| > 0$, giving us $\tilde{d}(s_i, s_j) > 0$. If only future reward changes, then those future states will have nonzero bisimilarity, and $\max_{a \in \mathcal{A}} W(\tilde{d})(\mathcal{P}^a_{s_i}, \mathcal{P}^a_{s_j}) > 0$, giving us $\tilde{d}(s_i, s_j) > 0$.

B Definition of State

Since we are concerned primarily with learning from image observations, we could explicitly distinguish the image observation space $\mathcal{O}$ from an unknown state space $\mathcal{S}$. However, since we are not tackling the general POMDP problem, we consider the Block MDP (Du et al., 2019), which assumes the state space is latent, and that we are instead given access to an observation space $\mathcal{O}$ and rendering function $q: \mathcal{S} \mapsto \mathcal{O}$. The crucial assumption that distinguishes the Block MDP from partially observable MDPs is the following:

Assumption 1 (Block structure (Du et al., 2019)). Each observation $o$ uniquely determines its generating state $s$. That is, the observation space $\mathcal{O}$ can be partitioned into disjoint blocks $\mathcal{O}_s$, each containing the support of the conditional distribution $q(o|s)$.

This assumption gives us the Markov property in the observation space $o \in \mathcal{O}$; a toy construction illustrating this block structure is sketched below.
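The sketch below (an illustrative assumption, not from the paper) shows a rendering function q satisfying Assumption 1: every observation carries a task-irrelevant distractor, but its generating state can always be recovered, so the blocks O_s are disjoint.

```python
# Toy block structure: observations o = (s, distractor), where the distractor
# is task-irrelevant noise and s is always recoverable (a sketch, assumed).
import random

def q(s):
    """Render latent state s into an observation with an irrelevant distractor."""
    distractor = random.random()   # e.g., background clutter
    return (s, distractor)

def block_of(o):
    """Each observation lies in exactly one block O_s of Assumption 1."""
    s, _ = o
    return s

o1, o2 = q(3), q(3)
assert block_of(o1) == block_of(o2) == 3   # same block despite distractors
```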
As an example, one can think of the proprioceptive state consisting of positions and velocities of actuators as the underlying state, and stacked pixel observations from a specific camera angle as a particular rendering function and corresponding observation space.

C Additional DMC Results

In Figure 11 we show performance on the default setting on 9 different environments from DMC. Figures 12 and 13 give performance on the simple distractors and natural video settings for all 9 environments.

[Figure 11: training curves (average return vs. environment steps) for cartpole/swingup, cheetah/run, finger/spin, hopper/hop, hopper/stand, reacher/easy, walker/run, walker/stand, and walker/walk, comparing Contrastive, Reconstruction, Bisim, DeepMDP, and SLAC.]

Figure 11: Results for DBC in the default setting, in comparison to baselines with reconstruction loss, contrastive loss, and SLAC on 10 seeds with 1 standard error shaded.

[Figure 12: training curves for the same 9 environments in the simple distractors setting.]

Figure 12: Results for DBC in the simple distractors setting, in comparison to baselines with reconstruction loss, contrastive loss, DeepMDP, and SLAC on 10 seeds with 1 standard error shaded.
[Figure 13: training curves for the same 9 environments in the natural video setting.]

Figure 13: Results for our bisimulation metric method in the natural video setting, in comparison to baselines with reconstruction loss, contrastive loss, DeepMDP, and SLAC on 10 seeds with 1 standard error shaded.

D Additional Visualizations

In addition to Figure 4, we also took 10 nearby points in the t-SNE plot and averaged the observations, shown on the far left of Figure 14. Note that the robot agent is quite crisp, which means neighboring points encode the agent in similar positions, but the backgrounds are very different, and so are blurry when averaged.

Figure 14: t-SNE of latent spaces learned with a bisimulation metric after training has completed, color-coded with predicted state values (higher value yellow, lower value purple). Neighboring points (right) in the embedding space learned with a bisimulation metric have similar encodings (middle). When we sample from the same latent point and average the images, we see the robot configuration is crisp, meaning neighboring points encode the agent in similar positions, but the backgrounds are very different, and so are blurry when averaged.

E Implementation Details

We use the same encoder architecture as in Yarats et al. (2019), which is almost identical to the encoder architecture in Tassa et al. (2018), with two more convolutional layers added to the convnet trunk. The encoder has kernels of size 3×3 with 32 channels for all convolutional layers, with stride 1 everywhere except the first convolutional layer, which has stride 2, and ReLU activations in between. Finally, we add a tanh nonlinearity to the 50-dimensional output of the fully-connected layer.

For the reconstruction method, the decoder consists of a fully-connected layer followed by four deconvolutional layers. We use ReLU activations after each layer, except the final deconvolutional layer that produces the pixel representation. Each deconvolutional layer has kernels of size 3×3 with 32 channels and stride 1, except for the last layer, where the stride is 2.

The dynamics and reward models are both MLPs with two hidden layers with 200 neurons each and ReLU activations.

Soft Actor-Critic (SAC) (Haarnoja et al., 2018) is an off-policy actor-critic method that uses the maximum entropy framework for soft policy iteration. At each iteration, SAC performs soft policy evaluation and improvement steps. The policy evaluation step fits a parametric soft Q-function $Q(s_t, a_t)$ using transitions sampled from the replay buffer $\mathcal{D}$ by minimizing the soft Bellman residual,
$J(Q) = \mathbb{E}_{(s_t, a_t, r_t, s_{t+1}) \sim \mathcal{D}} \big[ \big( Q(s_t, a_t) - r_t - \gamma \bar{V}(s_{t+1}) \big)^2 \big]$.    (13)
The target value function $\bar{V}$ is approximated via a Monte-Carlo estimate of the following expectation,
$\bar{V}(s_{t+1}) = \mathbb{E}_{a_{t+1} \sim \pi} \big[ \bar{Q}(s_{t+1}, a_{t+1}) - \alpha \log \pi(a_{t+1} | s_{t+1}) \big]$,    (14)
where $\bar{Q}$ is the target soft Q-function parameterized by a weight vector obtained from an exponentially moving average of the Q-function weights to stabilize training.
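As an illustrative sketch of this policy evaluation step (interface names such as policy.sample are assumptions, not the released implementation), Eqs. (13)–(14) can be written as:

```python
# Soft policy evaluation: regress Q(s, a) onto the soft Bellman target built
# from the target network Q_bar and the entropy term (a sketch, names assumed).
import torch
import torch.nn.functional as F

def critic_loss(Q, Q_bar, policy, batch, gamma=0.99, alpha=0.1):
    s, a, r, s_next = batch                     # sampled from replay buffer D
    with torch.no_grad():
        a_next, log_pi = policy.sample(s_next)  # a' ~ pi(.|s') and log pi(a'|s')
        v_next = Q_bar(s_next, a_next) - alpha * log_pi  # Monte-Carlo Eq. (14)
        target = r + gamma * v_next             # soft Bellman target
    return F.mse_loss(Q(s, a), target)          # Eq. (13)
```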
The policy improvement step then attempts to project a parametric policy $\pi(a_t | s_t)$ by minimizing the KL divergence between the policy and a Boltzmann distribution induced by the Q-function, producing the following objective,
$J(\pi) = \mathbb{E}_{s_t \sim \mathcal{D}} \big[ \mathbb{E}_{a_t \sim \pi} [\, \alpha \log(\pi(a_t | s_t)) - Q(s_t, a_t) \,] \big]$.    (15)

We modify the Soft Actor-Critic PyTorch implementation by Yarats & Kostrikov (2020) and augment it with a shared encoder between the actor and critic, the general model $f_s$ and task-specific models $f_e$. The forward models are multi-layer perceptrons with ReLU nonlinearities and two hidden layers of 200 neurons each. The encoder is a linear layer that maps to a 50-dimensional hidden representation. The hyperparameters used for the RL experiments are in Table 2.

Parameter name                            Value
Replay buffer capacity                    10^6
Batch size                                128
Discount γ                                0.99
Optimizer                                 Adam
Critic learning rate                      10^-5
Critic target update frequency            2
Critic Q-function soft-update rate τ_Q    0.005
Critic encoder soft-update rate τ_φ       0.005
Actor learning rate                       10^-5
Actor update frequency                    2
Actor log stddev bounds                   [-5, 2]
Encoder learning rate                     10^-5
Decoder learning rate                     10^-5
Decoder weight decay                      10^-7
Temperature learning rate                 10^-4
Temperature Adam's β_1                    0.9
Init temperature                          0.1

Table 2: A complete overview of used hyperparameters.
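For concreteness, the pixel encoder described at the start of this appendix corresponds roughly to the following PyTorch sketch; the input shape (3 stacked RGB frames at 84×84) and the total of four convolutional layers are assumptions, and this is not the released code.

```python
# Encoder sketch: 3x3 kernels, 32 channels, stride 2 in the first conv and 1
# elsewhere, ReLU activations, and a tanh over a 50-dim fully-connected output.
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    def __init__(self, obs_shape=(9, 84, 84), feature_dim=50, num_layers=4):
        super().__init__()
        layers = [nn.Conv2d(obs_shape[0], 32, 3, stride=2)]
        for _ in range(num_layers - 1):
            layers += [nn.ReLU(), nn.Conv2d(32, 32, 3, stride=1)]
        self.convs = nn.Sequential(*layers, nn.ReLU())
        with torch.no_grad():  # infer the flattened conv output size
            n_flat = self.convs(torch.zeros(1, *obs_shape)).numel()
        self.fc = nn.Linear(n_flat, feature_dim)

    def forward(self, obs):
        h = self.convs(obs / 255.0)                # normalize pixel values
        return torch.tanh(self.fc(h.flatten(1)))   # 50-dim latent in [-1, 1]

enc = PixelEncoder()
print(enc(torch.zeros(8, 9, 84, 84)).shape)        # torch.Size([8, 50])
```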
RbVptpSUFSy
The paper is well written and presents a clear approach to a well motivated problem with strong evaluation results.
7: Good paper, accept
The authors propose an approach to robust representation learning of observations for reinforcement learning by training a model to align the euclidean distance between two observations with bisimulation metrics that quantify how similar the states that generated the observations are in terms of the control problem. This reduces the effect of irrelevant features in the observations on the representations.

The paper is well written, the problem is clearly motivated, and the approach and technical contribution are easy to follow. The approach of using the state bisimulation metric to supervise the observation representation is intuitive and clearly motivated. Theoretical analysis is provided with generalization guarantees.

"As an example, in the context of autonomous driving, an intervention can be a change in weather, or a change from day to night which affects the observation space but not the dynamics or reward." I do not agree with the example, as weather can directly alter the dynamics and desired behavior of an AV system. The point in this paragraph is still clear, but I would suggest a different example.

The evaluations are strong and run on a number of different experiment settings against multiple strong SOTA models. In Figure 4 the proposed approach is outperformed by contrastive learning in the default setting. I understand that the goal is to learn robust representations for the natural setting, but can the authors comment on why it fails to beat the contrastive approach here and provide insight on how this may be addressed. Some problems have only a few distractors and may fall between the natural and the default setting.

Recommendation and reasoning: The paper is well written and presents a clear approach to a well motivated problem with strong evaluation results. I recommend acceptance.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning Invariant Representations for Reinforcement Learning without Reconstruction ### Paper Abstract We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction. Our goal is to learn representations that provide for effective downstream control and invariance to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous MDPs, which we propose using to learn robust latent representations which encode only the task-relevant information from observations. Our method trains encoders such that distances in latent space equal bisimulation distances in state space. We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks, where the background is replaced with moving distractors and natural videos, while achieving SOTA performance. We also test a first-person highway driving task where our method learns invariance to clouds, weather, and time of day. Finally, we provide generalization results drawn from properties of bisimulation metrics, and links to causal inference. ### Paper Keywords ["rich observations", "bisimulation metrics", "representation learning", "state abstractions"] ### Paper Content ABSTRACTWe study how representation learning can accelerate reinforcement learning fromrich observations, such as images, without relying either on domain knowledge orpixel-reconstruction. Our goal is to learn representations that provide for effectivedownstream control and invariance to task-irrelevant details. Bisimulation metricsquantify behavioral similarity between states in continuous MDPs, which we pro-pose using to learn robust latent representations which encode only the task-relevantinformation from observations. Our method trains encoders such that distances inlatent space equal bisimulation distances in state space. We demonstrate the effec-tiveness of our method at disregarding task-irrelevant information using modifiedvisual MuJoCo tasks, where the background is replaced with moving distractorsand natural videos, while achieving SOTA performance. We also test a first-personhighway driving task where our method learns invariance to clouds, weather, andtime of day. Finally, we provide generalization results drawn from properties ofbisimulation metrics, and links to causal inference.1 IntroductionFigure 1: Robust representa-tions of the visual scene shouldbe insensitive to irrelevant objects(e.g., clouds) or details (e.g., cartypes), and encode two observa-tions equivalently if their relevantdetails are equal (e.g., road direc-tion and locations of other cars).Learning control from images is important for many real worldapplications. While deep reinforcement learning (RL) has enjoyedmany successes in simulated tasks, learning control from real visionis more complex, especially outdoors, where images reveal detailedscenes of a complex and unstructured world. Furthermore, whilemany RL algorithms can eventually learn control from real imagesgiven unlimited data, data-efficiency is often a necessity in real trialswhich are expensive and constrained to real-time. Prior methodsfor data-efficient learning of simulated visual tasks typically userepresentation learning. 
Representation learning summarizes imagesby encoding them into smaller vectored representations better suitedfor RL. For example, sequential autoencoders aim to learn losslessrepresentations of streaming observations—sufficient to reconstructcurrent observations and predict future observations—from whichvarious RL algorithms can be trained (Hafner et al., 2018; Leeet al., 2019; Yarats et al., 2019). However, such methods are task-agnostic : the models represent all dynamic elements they observe inthe world, whether they are relevant to the task or not. We argue suchrepresentations can easily “distract” RL algorithms with irrelevantinformation in the case of real images. The issues of distraction isless evident in popular simulation MuJoCo and Atari tasks, since any change in observation space islikely task-relevant, and thus, worth representing. By contrast, visual images that autonomous carsobserve contain predominately task-irrelevant information, like cloud shapes and architectural details,illustrated in Figure 1.Equal contribution. Corresponding author: amy.x.zhang@mail.mcgill.ca1Published as a conference paper at ICLR 2021Rather than learning control-agnostic representations that focus on accurate reconstruction of cloudsand buildings, we would rather achieve a more compressed representation from a lossy encoder, whichonly retains state information relevant to our task. If we would like to learn representations that captureonly task-relevant elements of the state and are invariant to task-irrelevant information, intuitively wecan utilize the reward signal to help determine task-relevance, as shown by Jonschkowski & Brock(2015). As cumulative rewards are our objective, state elements are relevant not only if they influencethe current reward, but also if they influence state elements in the future that in turn influence futurerewards. This recursive relationship can be distilled into a recursive task-aware notion of stateabstraction: an ideal representation is one that is predictive of reward, and also predictive of itself inthe future.We propose learning such an invariant representation using the bisimulation metric, where the dis-tance between two observation encodings correspond to how “behaviourally different” (Ferns &Precup, 2014) both observations are. Our main contribution is a practical representation learningmethod based on the bisimulation metric suitable for downstream control, which we call deepbisimulation for control (DBC). We additionally provide theoretical analysis that proves valuebounds between the optimal value function of the true MDP and the optimal value function ofthe MDP constructed by the learned representation. Empirical evaluations demonstrate our non-reconstructive approach using bisimulation is substantially more robust to task-irrelevant distractorswhen compared to prior approaches that use reconstruction losses or contrastive losses. Our initialexperiments insert natural videos into the background of MoJoCo control task as complex distrac-tion. Our second setup is a high-fidelity highway driving task using CARLA (Dosovitskiy et al.,2017), showing that our representations can be trained effectively even on highly realistic imageswith many distractions, such as trees, clouds, buildings, and shadows. For example videos seehttps://sites.google.com/view/deepbisim4control .2 Related WorkOur work builds on the extensive prior research on bisimulation in MDP state aggregation.Reconstruction-based Representations. 
Early works on deep reinforcement learning from im-ages (Lange & Riedmiller, 2010; Lange et al., 2012) used a two-step learning process where first anauto-encoder was trained using reconstruction loss to learn a low-dimensional representation, andsubsequently a controller was learned using this representation. This allows effective leveraging oflarge, unlabeled datasets for learning representations for control. In practice, there is no guaranteethat the learned representation will capture useful information for the control task, and significantexpert knowledge and tricks are often necessary for these approaches to work. In model-basedRL, one solution to this problem has been to jointly train the encoder and the dynamics modelend-to-end (Watter et al., 2015; Wahlström et al., 2015) – this proved effective in learning usefultask-oriented representations. Hafner et al. (2018) and Lee et al. (2019) learn latent state models usinga reconstruction loss, but these approaches suffer from the difficulty of learning accurate long-termpredictions and often still require significant manual tuning. Gelada et al. (2019) also propose alatent dynamics model-based method and connect their approach to bisimulation metrics, using areconstruction loss in Atari. They show that `2distance in the DeepMDP representation upper boundsthe bisimulation distance, whereas our objective directly learns a representation where distance inlatent space isthe bisimulation metric. Further, their results rely on the assumption that the learnedrepresentation is Lipschitz, whereas we show that, by directly learning a bisimilarity-based represen-tation, we guarantee a representation that generates a Lipschitz MDP. We show experimentally thatournon-reconstructive DBC method is substantially more robust to complex distractors.Contrastive-based Representations. Contrastive losses are a self-supervised approach to learnuseful representations by enforcing similarity constraints between data (van den Oord et al., 2018;Chen et al., 2020). Similarity functions can be provided as domain knowledge in the form ofheuristic data augmentation, where we maximize similarity between augmentations of the same datapoint (Laskin et al., 2020) or nearby image patches (Hénaff et al., 2019), and minimize similaritybetween different data points. In the absence of this domain knowledge, contrastive representationscan be trained by predicting the future (van den Oord et al., 2018). We compare to such an approachin our experiments, and show that DBC is substantially more robust. While contrastive losses donot require reconstruction, they do not inherently have a mechanism to determine downstream taskrelevance without manual engineering, and when trained only for prediction, they aim to capture all2Published as a conference paper at ICLR 2021predictable features in the observation, which performs poorly on real images for the same reasonsworld models do. A better method would be to incorporate knowledge of the downstream task intothe similarity function in a data-driven way, so that images that are very different pixel-wise (e.g.lighting or texture changes), can also be grouped as similar w.r.t. downstream objectives.Bisimulation. Various forms of state abstractions have been defined in Markov decision processes(MDPs) to group states into clusters whilst preserving some property (e.g. the optimal value, or allvalues, or all action values from each state) (Li et al., 2006). 
The strictest form, which generallypreserves the most properties, is bisimulation (Larsen & Skou, 1989). Bisimulation only groups statesthat are indistinguishable w.r.t. reward sequences output given any action sequence tested. A relatedconcept is bisimulation metrics (Ferns & Precup, 2014), which measure how “behaviorally similar”states are. Ferns et al. (2011) defines the bisimulation metric with respect to continuous MDPs,and propose a Monte Carlo algorithm for learning it using an exact computation of the Wassersteindistance between empirically measured transition distributions. However, this method does not scalewell to large state spaces. Taylor et al. (2009) relate MDP homomorphisms to lax probabilisticbisimulation, and define a lax bisimulation metric. They then compute a value bound based on thismetric for MDP homomorphisms, where approximately equivalent state-action pairs are aggregated.Most recently, Castro (2020) propose an algorithm for computing on-policy bisimulation metrics,but does so directly, without learning a representation. They focus on deterministic settings and thepolicy evaluation problem. We believe our work is the first to propose a gradient-based method fordirectly learning a representation space with the properties of bisimulation metrics and show that itworks in the policy optimization setting.3 PreliminariesWe start by introducing notation and outlining realistic assumptions about underlying structure in theenvironment. Then, we review state abstractions and metrics for state similarity.We assume the underlying environment is a Markov decision process (MDP), described by the tupleM= (S;A;P;R;), whereSis the state space,Athe action space,P(s0js;a)the probability oftransitioning from state s2Sto state s02S, and2[0;1)a discount factor. An “agent” choosesactions a2A according to a policy function a(s), which updates the system state s0P(s;a),yielding a reward r=R(s)2R. The agent’s goal is to maximize the expected cumulative discountedrewards by learning a good policy: maxEP[P1t=0[tR(st)]. While our primary concern is learningfrom images, we do not address the partial-observability problem explicitly: we instead approximatestacked pixel observations as the fully-observed system state s(explained further in Appendix B).Bisimulation is a form of state abstraction that groups states siandsjthat are “behaviorally equiv-alent” (Li et al., 2006). For any action sequence a0:1, the probabilistic sequence of rewards fromsiandsjare identical. A more compact definition has a recursive form: two states are bisimilarif they share both the same immediate reward and equivalent distributions over the next bisimilarstates (Larsen & Skou, 1989; Givan et al., 2003).Definition 1 (Bisimulation Relations (Givan et al., 2003)) .Given an MDPM, an equivalencerelationBbetween states is a bisimulation relation if, for all states si;sj2S that are equivalentunderB(denoted siBsj) the following conditions hold:R(si;a) =R(sj;a)8a2A; (1)P(Gjsi;a) =P(Gjsj;a)8a2A;8G2SB; (2)whereSBis the partition ofSunder the relation B(the set of all groups Gof equivalent states), andP(Gjs;a) =Ps02GP(s0js;a):Exact partitioning with bisimulation relations is generally impractical in continuous state spaces, asthe relation is highly sensitive to infinitesimal changes in the reward function or dynamics. For thisreason, Bisimulation Metrics (Ferns et al., 2011; Ferns & Precup, 2014; Castro, 2020) softens theconcept of state partitions, and instead defines a pseudometric space (S;d), where a distance functiond:SS7! 
R0measures the “behavioral similarity” between two states1.Defining a distance dbetween states requires defining both a distance between rewards (to softenEquation (1)), and distance between state distributions (to soften Equation (2)). Prior works use theWasserstein metric for the latter, originally used in the context of bisimulation metrics by van Breugel1Note thatdis a pseudometric, meaning the distance between two different states can be zero, correspondingto behavioral equivalence.3Published as a conference paper at ICLR 2021& Worrell (2001). The pthWasserstein metric is defined between two probability distributions PiandPjasWp(Pi;Pj;d) = (inf02(Pi;Pj)RSSd(si;sj)pd0(si;sj))1=p, where (Pi;Pj)is theset of all couplings of PiandPj. This is known as the “earth mover” distance, denoting the cost oftransporting mass from one distribution to another (Villani, 2003). Finally, the bisimulation metric isthe reward difference added to the Wasserstein distance between transition distributions:Definition 2 (Bisimulation Metric) .From Theorem 2.6 in Ferns et al. (2011) with c2[0;1):d(si;sj) = maxa2A(1c)jRasiRasjj+cW1(Pasi;Pasj;d): (3)4 Learning Representations for Control with Bisimulation MetricsFigure 2: Learning a bisimulation metric represen-tation: shaded in blue is the main model architecture,it is reused for both states, like a Siamese network.The loss is the reward and discounted transition dis-tribution distances (using Wasserstein metric W).Algorithm 1 Deep Bisimulation for Control (DBC)1:forTimet= 0to1do2: Encode state zt=(st)3: Execute action at(zt)4: Record data: D D[f st;at;st+1;rt+1g5: Sample batch BiD6: Permute batch: Bj=permute (Bi)7: Train policy: EBi[J()].Algorithm 28: Train encoder: EBi;Bj[J()].Equation (4)9: Train dynamics: J(^P;)=(^P((st);at)zt+1)2We propose Deep Bisimulation for Control (DBC), a data-efficient approach to learn control policiesfrom unstructured, high-dimensional states. In contrast to prior work on bisimulation, which typicallyaims to learn a distance function of the form d:SS7! R0between states, our aim is instead tolearn representationsZunder which `1distances correspond to bisimulation metrics, and then usethese representations to improve reinforcement learning. Our goal is to learn encoders :S7!Zthat capture representations of states that are suitable to control, while discarding any informationthat is irrelevant for control. Any representation that relies on reconstruction of the state cannot dothis, as these irrelevant details are still important for reconstruction. We hypothesize that bisimulationmetrics can acquire this type of representation, without any reconstruction.Bisimulation metrics are a useful form of state abstraction, but prior methods to train distancefunctions either do not scale to pixel observations (Ferns et al., 2011) (due to the max operatorin Equation (3)), or were only designed for the (fixed) policy evaluation setting (Castro, 2020).By contrast, we learn improved representations for policy inputs, as the policy improves online.Our-bisimulation metric is learned with gradient decent, and we prove it converges to a fixedpoint in Theorem 1 under some assumptions. To train our encoder towards our desired relationd(si;sj) :=jj(si)(sj)jj1, we draw batches of state pairs, and minimise the mean square errorbetween the on-policy bisimulation metric and `1distance in the latent space:J() =jjzizjjj1 jrirjj W2^P(jzi;ai);^P(jzj;aj)2; (4)where zi=(si),zj=(sj),rare rewards, and zdenotes(s)with stop gradients. 
Equation (4)also uses a probabilistic dynamics model ^Pwhich outputs a Gaussian distribution. For this reason,we use the 2-Wasserstein metric W2in Equation (4), as opposed to the 1-Wasserstein in Equation (3),since theW2metric has a convenient closed form: W2(N(i;i);N(j;j))2=jjijjj22+jj1=2i1=2jjj2F, wherejjjjFis the Frobenius norm. For all other distances we continue using the`1norm. Our model architecture and training is illustrated by Figure 2 and Algorithm 1.Algorithm 2 Train Policy (changes to SAC in blue)1: Get value: V= mini=1;2^Qi(^(s))log(aj(s))2: Train critics: J(Qi;) = (Qi((s))rV)23: Train actor: J() =logp(aj(s))mini=1;2Qi((s))4: Train alpha: J() =logp(aj(s))5: Update target critics: ^Qi QQi+ (1Q)^Qi6: Update target encoder: ^ + (1)^Incorporating control. We combine our rep-resentation learning approach (Algorithm 1)with the soft actor-critic (SAC) algorithm(Haarnoja et al., 2018) to devise a practicalreinforcement learning method. We modifiedSAC slightly in Algorithm 2 to allow the valuefunction to backprop to our encoder, whichcan improve performance further (Yarats et al.,2019; Rakelly et al., 2019). Although, in principle, our method could be combined with any RLalgorithm, including the model-free DQN (Mnih et al., 2015), or model-based PETS (Chua et al.,4Published as a conference paper at ICLR 20212018). Implementation details and hyperparameter values of DBC are summarized in the appendix,Table 2. We train DBC by iteratively updating three components in turn: a policy (in this case SAC),an encoder, and a dynamics model ^P(lines 7–9, Algorithm 1). We found a single loss function wasless stable to train. The inputs of each loss function J()in Algorithm 1 represents which componentsare updated. After each training step, the policy is used to step in the environment, the data iscollected in a replay buffer D, and a batch is randomly selected to repeat training.5 Generalization Bounds and Links to Causal InferenceWhile DBC enables representation learning without pixel reconstruction, it leaves open the questionof how good the resulting representations really are. In this section, we present theoretical analysisthat bounds the suboptimality of a value function trained on the representation learned via DBC.First, we show that our -bisimulation metric converges to a fixed point, starting from the initializedpolicy0and converging to an optimal policy .Theorem 1. Letmetbe the space of bounded pseudometrics on Sanda policy that is continuouslyimproving. DefineF:met7!metbyF(d;)(si;sj) = (1c)jrsirsjj+cW(d)(Psi;Psj): (5)ThenFhas a least fixed point ~dwhich is a-bisimulation metric.Proof in appendix. As evidenced by Definition 2, the bisimulation metric has no direct dependence onthe state space. Pixels can change, but bisimilarity will stay the same. Instead, bisimilarity is groundedin a recursion of future transition probabilities and rewards, which is closely related to the optimalvalue function. In fact, the bisimulation metric gives tight bounds on the optimal value functionwith discount factor . We show this using the property that the optimal value function is Lipschitzwith respect to the bisimulation metric, see Theorem 5 in Appendix (Ferns et al., 2004). This resultalso implies that the closer two states are in terms of ~d, the more likely they are to share the sameoptimal actions. 
This leads us to a generalization bound on the optimal value function of an MDPconstructed from a representation space using bisimulation metrics, jj(si)(sj)jj1:=~d(si;sj):We can construct a partition of this space for some >0, giving usnpartitions where1n<(1c).We denoteas the encoder that maps from the original state space Sto each-cluster. This denotesthe amount of approximation allowed, where large leads to a more compact bisimulation partitionat the expense of a looser bound on the optimal value function.Theorem 2 (Value bound based on bisimulation metrics) .Given an MDP Mconstructed by aggre-gating states in an -neighborhood, and an encoder that maps from states in the original MDP Mto these clusters, the optimal value functions for the two MDPs are bounded asjV(s)V((s))j2(1)(1c): (6)Proof in appendix. As !0the optimal value function of the aggregated MDP converges to theoriginal. Further, by defining a learning error for ,L:= supsi;sj2Sjj(si)(sj)jj1~d(si;sj),we can update the bound in Theorem 2 to incorporate L:jV(s)V((s))j2+2L(1)(1c):MDP dynamics have a strong connection to causal inference and causal graphs, which are directedacyclic graphs (Jonsson & Barto, 2006; Schölkopf, 2019; Zhang et al., 2020). Specifically, the stateand action at time tcausally affect the next state at time t+ 1. In this work, we care about thecomponents of the state space that causally affect current and future reward. Deep bisimulation forcontrol representations connect to causal feature sets , or the minimal feature set needed to predict atarget variable (Zhang et al., 2020).Theorem 3 (Connections to causal feature sets (Thm 1 in Zhang et al. (2020))) .If we partitionobservations using the bisimulation metric, those clusters (a bisimulation partition) correspond tothe causal feature set of the observation space with respect to current and future reward.This connection tells us that these features are the minimal sufficient statistic of the current and futurereward, and therefore consist of (and only consist of) the causal ancestors of the reward variable r.Definition 3 (Causal Ancestors) .In a causal graph where nodes correspond to variables and directededges between a parent node Pand child node Care causal relationships, the causal ancestorsAN(C)of a node are all nodes in the path from Cto a root node.If there are interventions on distractor variables , or variables that control the rendering function qand therefore the rendered observation but do not affect the reward, the causal feature set will be5Published as a conference paper at ICLR 2021robust to these interventions, and correctly predict current and future reward in the linear functionapproximation setting (Zhang et al., 2020). As an example, in autonomous driving, an interventioncan be a change from day to night which affects the observation space but not the dynamics or reward.Finally, we show that a representation based on the bisimulation metric generalizes to other rewardfunctions with the same causal ancestors.Theorem 4 (Task Generalization) .Given an encoder :S7!Z that maps observations to a latentbisimulation metric representation where jj(si)(sj)jj1:=~d(si;sj),Zencodes informationabout all the causal ancestors of the reward AN(R).Proof in appendix. This result shows that the learned representation will generalize to unseen rewardfunctions, as long as the new reward function has a subset of the same causal ancestors. 
As anexample, a representation learned for a robot to walk will likely generalize to learning to run, becausethe reward function depends on forward velocity and all the factors that contribute to forward velocity.However, that representation will not generalize to picking up objects, as those objects will be ignoredby the learned representation, since they are not likely to be causal ancestors of a reward functiondesigned for walking. Theorem 4 shows that the learned representation will be robust to spuriouscorrelations, or changes in factors that are not in AN(R). This complements Theorem 5, that therepresentation is a minimal sufficient statistic of the optimal value function, improving generalizationover non-minimal representations.Theorem 5 (Vis Lipschitz with respect to ~d).LetVbe the optimal value function for a givendiscount factor . Ifc, thenVis Lipschitz continuous with respect to ~dwith Lipschitz constant11c, where ~dis a-bisimulation metric.jV(si)V(sj)j11c~d(si;sj): (7)See Theorem 5.1 in Ferns et al. (2004) for proof. We show empirical validation of these findings inSection 6.2.6 ExperimentsOur central hypothesis is that our non-reconstructive bisimulation based representation learningapproach should be substantially more robust to task-irrelevant distractors. To that end, we evaluateour method in a clean setting without distractors, as well as a much more difficult setting withdistractors. We compare against several baselines. The first is Stochastic Latent Actor-Critic (SLAC,Lee et al. (2019)), a state-of-the-art method for pixel observations on DeepMind Control that learns adynamics model with a reconstruction loss. The second is DeepMDP (Gelada et al., 2019), a recentmethod that also learns a latent representation space using a latent dynamics model, reward model, anddistributional Q learning, but for which they needed a reconstruction loss to scale up to Atari. Finally,we compare against two methods using the same architecture as ours but exchange our bisimulationloss with (1) a reconstruction loss (“ Reconstruction ”) and (2) contrastive predictive coding (Oordet al., 2018) (“ Contrastive ”) to ground the dynamics model and learn a latent representation.6.1 Control with Background DistractionIn this section, we benchmark DBC and the previously described baselines on the DeepMind Control(DMC) suite (Tassa et al., 2018) in two settings and nine environments (Figure 3), finger_spin ,cheetah_run , andwalker_walk and additional environments in the appendix.Default Setting. Here, the pixel observations have simple backgrounds as shown in Figure 3 (top row)with training curves for our DBC and baselines. We see SLAC, a recent state-of-the-art model-basedrepresentation learning method that uses reconstruction, generally performs best.Simple Distractors Setting. Next, we include simple background distractors, shown in Figure 3(middle row), with easy-to-predict motions. We use a fixed number of colored circles that obey thedynamics of an ideal gas (no attraction or repulsion between objects) with no collisions. Note theperformance of DBC remains consistent, as other methods start decreasing.Natural Video Setting. Then, we incorporate natural video from the Kinetics dataset (Kay et al.,2017) as background (Zhang et al., 2018), shown in Figure 3 (bottom row). 
The results confirm ourhypothesis: although a number of prior methods can learn effectively in the absence of distractors,when complex distractions are introduced, our non-reconstructive bisimulation based method attainssubstantially better results.6Published as a conference paper at ICLR 20210 1 2 3 4 5 6 7 8Environment Steps 1e502004006008001000AverageReturnfinger/spin0 1 2 3 4 5 6 7 8Environment Steps 1e50100200300400500600700800900AverageReturncheetah/run0 1 2 3 4 5 6 7 8Environment Steps 1e50200400600800AverageReturnwalker/walk0 1 2 3 4 5 6 7 8Environment Steps 1e50200400600800AverageReturnfinger/spin0 1 2 3 4 5 6 7 8Environment Steps 1e50100200300400500600AverageReturncheetah/run0 1 2 3 4 5 6 7 8Environment Steps 1e50100200300400500600700800AverageReturnwalker/walk0 1 2 3 4 5 6 7 8Environment Steps 1e50200400600800AverageReturnfinger/spin0 1 2 3 4 5 6 7 8Environment Steps 1e5050100150200250300AverageReturncheetah/run0 1 2 3 4 5 6 7 8Environment Steps 1e50100200300400500600700AverageReturnwalker/walkFigure 3: Left observations : Pixel observations in DMC in the default setting (top row) of the finger spin (leftcolumn), cheetah (middle column), and walker (right column), with simple distractors (middle row), and naturalvideo distractors (bottom row). Right training curves : Results comparing out DBC method to baselines on 10seeds with 1 standard error shaded in the default setting. The grid-location of each graph corresponds to thegrid-location of each observation.Figure 4: t-SNE of latent spaces learned with a bisimulation metric (left t-SNE) and V AE (right t-SNE)after training has completed, color-coded with predicted state values (higher value yellow, lower value purple).Neighboring points in the embedding space learned with a bisimulation metric have similar states and correspondto observations with the same task-related information (depicted as pairs of images with their correspondingembeddings), whereas no such structure is seen in the embedding space learned by V AE, where the same imagepairs are mapped far away from each other.To visualize the representation learned with our bisimulation metric loss function in Equation (4), weuse a t-SNE plot (Figure 4). We see that even when the background looks drastically different, our en-coder learns to ignore irrelevant information and maps observations with similar robot configurationsnear each other. See Appendix D for another visualization.6.2 Generalization ExperimentsWe test generalization of our learned representation in two ways. First, we show that the learnedrepresentation space can generalize to different types of distractors, by training with simple distractorsand testing on the natural video setting. Second, we show that our learned representation can beuseful reward functions other than those it was trained for.Generalizing over backgrounds. We first train on the simple distractors setting and eval-uate on natural video . Figure 5 shows an example of the simple distractors settingand performance during training time of two experiments, blue being the zero-shot transfer to thenatural video setting, and orange the baseline which trains on natural video . This resultempirically validates that the representations learned by DBC are able to effectively learn to ignorethe background, regardless of what the background contains or how dynamic it is.7Published as a conference paper at ICLR 2021Generalizing over reward functions. 
We evaluate (Figure 5) the generalization capabilities ofthe learned representation by training SAC with new reward functions walker_stand andwalker_run using the fixed representation learned from walker_walk . This is empiricalevidence that confirms Theorem 4: if the new reward functions are causally dependent on a subset ofthe same factors that determine the original reward function, then our representation is sufficient.0 200000 400000 600000 800000 1000000step0100200300400500600700episode_rewardwalker_walkDBC: Transfer ideal gas to kineticsDBC: Trained on kineticsDeepMDP: Transfer ideal gas to kinetics0 200000 400000 600000 800000 1000000step1002003004005006007008009001000episode_rewardwalker_standSAC trained on observationSAC trained with frozen DBC encoderSAC trained with frozen DeepMDP encoder0 200000 400000 600000 800000 1000000step50100150200250300episode_rewardwalker_runSAC trained on observationSAC trained with frozen DBC encoderSAC trained with frozen DeepMDP encoderFigure 5: Generalization of a model trained on simple distractors environment and evaluated onkinetics (left). Generalization of an encoder trained on walker_walk environment and evaluated onwalker_stand (center) and walker_run (right), all in the simple distractors setting. 10 seeds, 1standard error shaded.6.3 Comparison with other Bisimulation EncodersEven though the purpose of bisimulation metrics by Castro (2020) is learning distances d, notrepresentation spaces Z, it nevertheless implements dwith function approximation: d(si;sj) = (si);(sj)by encoding observations with before computing distances with , trained as:J(; ) = (si);(sj)jrirjj^ ^P(si;(si));^P(sj;(sj))2;(8)0 100000 200000 300000 400000 500000 600000 700000 800000step0100200300400500600700episode_rewardwalker_walk with natural videoDBCCastroFigure 6: Bisim. results. Blue is DBCand orange is Castro (2020).where ^and^ are target networks. A natural question is: howdoes the encoder above perform in control tasks? We com-bineabove with our policy in Algorithm 2 and use the samenetwork (single hidden layer 729 wide). Figure 6 shows rep-resentations from Castro (2020) can learn control (surprisinglywell given it was not designed to), but our method learns faster.Further, our method is simpler: by comparing Equation (8)to Equation (4), our method uses the `1distance between theencoding instead of introducing an addition network .6.4 Autonomous Driving with Visual RedundancyFigure 7: The driving task is to drive the red ego car(left) safely in traffic (middle) along a highway (right).Real-world control systems such as roboticsand autonomous vehicles must contend witha huge variety of task-irrelevant information,such as irrelevant objects (e.g. clouds) and ir-relevant details (e.g. obstacle color). To eval-uate DBC on tasks with more realistic obser-vations, we construct a highway driving sce-nario with photo-realistic visual observations using the CARLA simulator (Dosovitskiy et al.,2017) shown in Figure 7. The agent’s goal is to drive as far as possible along CARLA’sTown04’s figure-8 the highway in 1000 time-steps without colliding into the 20 other movingvehicles or barriers. Our objective function rewards highway progression and penalises collisions:rt=v>ego^uhighwaytiimpulsesjsteerj, where vegois the velocity vector of the ego vehi-cle, projected onto the highway’s unit vector ^uhighway , and multiplied by time discretization t= 0:05to measure highway progression in meters. Collisions result in impulses 2R+, measured in Newton-seconds. 
We found a steering penalty steer2[1;1]helped, and used weights i= 104ands= 1. While more specialized objectives exist like lane-keeping, this experiment’s purpose is onlyto compare representations with observations more characteristic of real robotic tasks. We use fivecameras on the vehicle’s roof, each with 60 degree views. By concatenating the images together, ourvehicle has a 300 degree view, observed as 84420pixels. Code and install instructions in appendix.Results in Figure 9 compare the same baselines as before, except for SLAC which is easily distracted(Figure 3). Instead we used SAC, which does not explicitly learn a representation, but performssurprisingly well from raw images. DeepMDP performs well too, perhaps given its similarly tobisimulation. But, Reconstruction and Contrastive methods again perform poorly with complex8Published as a conference paper at ICLR 2021images. More intuitive metrics are in Table 1 and Figure 8 depicts the representation space as a t-SNEwith corresponding observations. Each run took 12 hours on a GTX 1080 GPU.Figure 8: A t-SNE diagram of encoded first-person driving observations after 10k training steps of Algorithm 1,color coded by value ( Vin Algorithm 2). Top: the learned representation identifies an obstacle on the rightside. Whether that obstacle is a dark wall, bright car, or truck is task-irrelevant: these states are behaviourallyequivalent. Left: the ego vehicle has flipped onto its left side. The different wall colors, due to a setting sun, isirrelevant: all states are equally stuck and low-value (purple t-SNE color). Right : clear highway driving. Cloudsand sun position are irrelevant.0 20000 40000 60000 80000 100000step0255075100125150175episode_rewardcarlaContrastiveReconstructionSACDeepMDPDBC (ours)Figure 9: Performance comparison with 3 seeds on thedriving task. Our DBC method (red) performs betterthan DeepMDP (purple) or learning direct from pixelswithout a representation (SAC, green), and much betterthan contrastive methods (blue). Our method’s finalperformance is 46.8% better than the next best baseline.Table 1: Driving metrics, averaged over 100 episodes,after 100k training steps, with standard error. Arrowdirection indicates if metric desired larger or smaller.SAC DeepMDP DBC (ours)successes (100m)" 12% 17% 24%distance (m)" 123:27:43 106 :711:1 179:011:4crash intensity# 460430:7 195815:6 267338:5average steer#16:6%0:019% 10:4%0:015% 7:3%0:012%average brake#1:3%0:006% 4:3%0:033% 1:6%0:022%7 DiscussionThis paper presents Deep Bisimulation for Control: a new representation learning method thatconsiders downstream control. Observations are encoded into representations that are invariant todifferent task-irrelevant details in the observation. We show this is important when learning controlfrom outdoor images, or otherwise images with background “distractions”. In contrast to otherbisimulation methods, we show performance gains when distances in representation space match thebisimulation distance between observations.Future work: Several options exist for future work. First, our latent dynamics model ^Pwas onlyused for training our encoder in Equation (4), but could also be used for multi-step planning in latentspace. Second, estimating uncertainty could also be important to produce agents that can work inthe real world, perhaps via an ensemble of models f^PkgKk=1, to detect—and adapt to—distributionalshifts between training and test observations. 
Third, an undressed issue is that of partially observedsettings (that assumed approximately full observability by using stacked images), possibly usingexplicit memory or implicit memory such as an LSTM. Finally, investigating which metrics (L1 orL2) and dynamics distributions (Gaussians or not) would be beneficial.9Published as a conference paper at ICLR 2021ReferencesPablo Samuel Castro. Scalable methods for computing state similarity in deterministic Markovdecision processes. In Association for the Advancement of Artificial Intelligence (AAAI) , 2020.Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A Simple Framework forContrastive Learning of Visual Representations. arXiv:2002.05709 [cs, stat] , February 2020. URLhttp://arxiv.org/abs/2002.05709 . arXiv: 2002.05709.Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcementlearning in a handful of trials using probabilistic dynamics models. In Neural InformationProcessing Systems (NeurIPS) , pp. 4754–4765, 2018.Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA:An open urban driving simulator. arXiv preprint arXiv:1711.03938 , 2017.Simon S. Du, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal, Miroslav Dudík, and John Langford.Provably efficient RL with rich observations via latent state decoding. Computing ResearchRepository (CoRR) , abs/1901.09018, 2019. URL http://arxiv.org/abs/1901.09018 .Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite Markov decision processes.InUncertainty in Artificial Intelligence (UAI) , pp. 162–169, 2004. ISBN 0-9749039-0-6. URLhttp://dl.acm.org/citation.cfm?id=1036843.1036863 .Norm Ferns, Prakash Panangaden, and Doina Precup. Bisimulation metrics for continuous Markovdecision processes. Society for Industrial and Applied Mathematics , 40(6):1662–1714, December2011. ISSN 0097-5397. doi: 10.1137/10080484X. URL https://doi.org/10.1137/10080484X .Norman Ferns and Doina Precup. Bisimulation metrics are optimal value functions. In Uncertaintyin Artificial Intelligence (UAI) , pp. 210–219, 2014.Carles Gelada, Saurabh Kumar, Jacob Buckman, Ofir Nachum, and Marc G. Bellemare. DeepMDP:Learning continuous latent space models for representation learning. In Kamalika Chaudhuri andRuslan Salakhutdinov (eds.), International Conference on Machine Learning (ICML) , volume 97,pp. 2170–2179, Jun 2019.Robert Givan, Thomas L. Dean, and Matthew Greig. Equivalence notions and model minimization inMarkov decision processes. Artificial Intelligence , 147:163–223, 2003.Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maxi-mum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290 ,2018.Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and JamesDavidson. Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551 ,2018.Olivier J Hénaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, SM Eslami, andAaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXivpreprint arXiv:1905.09272 , 2019.Rico Jonschkowski and Oliver Brock. Learning state representations with robotic priors. AutonomousRobots , 39(3):407–428, 2015.Anders Jonsson and Andrew Barto. Causal graph based decomposition of factored MDPs. J. Mach.Learn. Res. , 7:2259–2301, December 2006. 
ISSN 1532-4435.Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan,Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman.The kinetics human action video dataset. Computing Research Repository (CoRR) , 2017. URLhttp://arxiv.org/abs/1705.06950 .10Published as a conference paper at ICLR 2021Sascha Lange and Martin Riedmiller. Deep auto-encoder neural networks in reinforcement learning.InInternational Joint Conference on Neural Networks (IJCNN) , pp. 1–8. IEEE, 2010.Sascha Lange, Martin Riedmiller, and Arne V oigtländer. Autonomous reinforcement learning on rawvisual input data in a real world application. In International Joint Conference on Neural Networks(IJCNN) , pp. 1–8, 2012. doi: 10.1109/IJCNN.2012.6252823.K. G. Larsen and A. Skou. Bisimulation through probabilistic testing (preliminary report). InSymposium on Principles of Programming Languages , pp. 344–352. Association for ComputingMachinery, 1989. ISBN 0897912942. doi: 10.1145/75277.75307. URL https://doi.org/10.1145/75277.75307 .Michael Laskin, Aravind Srinivas, and Pieter Abbeel. CURL: Contrastive unsupervised representa-tions for reinforcement learning. arXiv:2003.06417, 2020.Alex X Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic:Deep reinforcement learning with a latent variable model. arXiv preprint arXiv:1907.00953 , 2019.Lihong Li, Thomas J Walsh, and Michael L Littman. Towards a unified theory of state abstraction forMDPs. In ISAIM , 2006.V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G.Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Pe-tersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran,Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep rein-forcement learning. Nature , 518(7540):529–533, February 2015. ISSN 00280836. URLhttp://dx.doi.org/10.1038/nature14236 .Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictivecoding. arXiv preprint arXiv:1807.03748 , 2018.Kate Rakelly, Aurick Zhou, Deirdre Quillen, Chelsea Finn, and Sergey Levine. Efficient off-policymeta-reinforcement learning via probabilistic context variables. arXiv preprint arXiv:1903.08254 ,2019.Bernhard Schölkopf. Causality for machine learning, 2019.Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, DavidBudden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, and MartinRiedmiller. DeepMind control suite. Technical report, DeepMind, January 2018. URL https://arxiv.org/abs/1801.00690 .Jonathan Taylor, Doina Precup, and Prakash Panagaden. Bounding performance loss in approximateMDP homomorphisms. In Neural Information Processing (NeurIPS) , pp. 1649–1656, 2009.Franck van Breugel and James Worrell. Towards quantitative verification of probabilistic transitionsystems. In Fernando Orejas, Paul G. Spirakis, and Jan van Leeuwen (eds.), Automata, Languagesand Programming , pp. 421–432. Springer, 2001. ISBN 978-3-540-48224-6. doi: 10.1007/3-540-48224-5_35.Aäron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictivecoding. ArXiv , abs/1807.03748, 2018.Cédric Villani. Topics in optimal transportation . American Mathematical Society, 01 2003.Niklas Wahlström, Thomas Schön, and Marc Deisenroth. From pixels to torques: Policy learningwith deep dynamical models. 
arXiv preprint arXiv:1502.02251, 2015.
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Neural Information Processing Systems (NeurIPS), pp. 2728-2736, 2015.
Denis Yarats and Ilya Kostrikov. Soft actor-critic (SAC) implementation in PyTorch. https://github.com/denisyarats/pytorch_sac, 2020.
Denis Yarats, Amy Zhang, Ilya Kostrikov, Brandon Amos, Joelle Pineau, and Rob Fergus. Improving sample efficiency in model-free reinforcement learning from images. arXiv preprint arXiv:1910.01741, 2019.
Amy Zhang, Yuxin Wu, and Joelle Pineau. Natural environment benchmarks for reinforcement learning. Computing Research Repository (CoRR), abs/1811.06032, 2018. URL http://arxiv.org/abs/1811.06032.
Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, and Doina Precup. Invariant causal prediction for block MDPs. In International Conference on Machine Learning (ICML), 2020.

A Additional Theorems and Proofs

Theorem 1. Let $\mathfrak{met}$ be the space of bounded pseudometrics on $S$ and $\pi \in \Pi$ a policy that is continuously improving in the space of policies $\Pi$. Define $F: \mathfrak{met} \mapsto \mathfrak{met}$ by
$$F(d, \pi)(s_i, s_j) = (1-c)\,|r^\pi_{s_i} - r^\pi_{s_j}| + c\,W(d)(P^\pi_{s_i}, P^\pi_{s_j}). \quad (9)$$
Then $F$ has a least fixed point $\tilde{d}$ which is a $\pi^*$-bisimulation metric.

Proof. Ideally, to prove this theorem we would show that $F$ is monotonically increasing and continuous, and apply the Fixed Point Theorem to show the existence of a fixed point that $F$ converges to. Unfortunately, we can show that $F$ under $\pi$, as $\pi$ monotonically converges to $\pi^*$, is not itself monotonic, unlike the original bisimulation metric setting (Ferns et al., 2004) and the policy evaluation setting (Castro, 2020). We start the iterates $F^n$ from bottom $\bot$, denoted as $F^n(\bot)$. In Ferns et al. (2004), the $\max_{a \in A}$ can be thought of as learning a policy between every pair of states to maximize their distance, and therefore this distance can only stay the same or grow over iterations of $F$. In Castro (2020), $\pi$ is fixed, and under a deterministic MDP it can also be shown that the distance between states $d_n(s_i, s_j)$ will only expand, not contract, as $n$ increases. In the policy iteration setting, however, with $\pi$ starting from initialization $\pi_0$ and getting updated as
$$\pi_k(s) = \arg\max_{a \in A} \sum_{s' \in S} \big[ r^a_{ss'} + \gamma V^{\pi_{k-1}}(s') \big], \quad (10)$$
there is no guarantee that the distance between two states satisfies $d^{\pi_{k-1}}_{n-1}(s_i, s_j) < d^{\pi_k}_{n}(s_i, s_j)$ under policy iterations $\pi_{k-1}, \pi_k$ and distance metric iterations $d_{n-1}, d_n$ for $k, n \in \mathbb{N}$, which is required for monotonicity.

Instead, we show, using the policy improvement theorem, which gives us
$$V^{\pi_k}(s) \geq V^{\pi_{k-1}}(s), \quad \forall s \in S, \quad (11)$$
that $\pi$ will converge to a fixed point by the Fixed Point Theorem, and, taking the result by Castro (2020) that $F^\pi$ has a fixed point for every $\pi \in \Pi$, we can show that a fixed point bisimulation metric will be found with policy iteration.

Theorem 2. Given a new aggregated MDP $\bar{M}$ constructed by aggregating states in an $\epsilon$-neighborhood, and an encoder $\phi$ that maps from states in the original MDP $M$ to these clusters, the optimal value functions for the two MDPs are bounded as
$$|V^*(s) - V^*(\phi(s))| \leq \frac{2\epsilon}{(1-\gamma)(1-c)}. \quad (12)$$

Proof. From Theorem 5.1 in Ferns et al. (2004) we have:
$$(1-c)\,|V^*(s) - V^*(\phi(s))| \leq g(s, \tilde{d}) + \frac{\gamma}{1-\gamma} \max_{u \in S} g(u, \tilde{d}),$$
where $g$ is the average distance between a state and all other states in its equivalence class under the bisimulation metric $\tilde{d}$. By specifying an $\epsilon$-neighborhood for each cluster of states we can replace $g$:
$$(1-c)\,|V^*(s) - V^*(\phi(s))| \leq 2\epsilon + \frac{\gamma}{1-\gamma}\,2\epsilon,$$
$$|V^*(s) - V^*(\phi(s))| \leq \frac{1}{1-c}\Big(2\epsilon + \frac{\gamma}{1-\gamma}\,2\epsilon\Big) = \frac{2\epsilon}{(1-\gamma)(1-c)}.$$

Theorem 4. Given an encoder $\phi: S \mapsto Z$ that maps observations to a latent bisimulation metric representation where $\|\phi(s_i) - \phi(s_j)\|_1 := \tilde{d}(s_i, s_j)$, $Z$ encodes information about all the causal ancestors of the reward $AN(R)$.

Proof. We assume an MDP with a state space $S := \{S^1, \ldots, S^K\}$ that can be factorized into $K$ variables with 1-step causal transition dynamics described by a causal graph $G$ (example in Figure 10). We break the proof up into two parts: 1) show that if a factor $S^i \notin AN(R)$ changes, the bisimulation distance between the original state $s$ and the new state $s'$ is 0, and 2) show that if a factor $S^j \in AN(R)$ changes, the bisimulation distance can be $> 0$.

Figure 10: Causal graph of transition dynamics. Reward depends only on $s^1$ as a causal parent, but $s^1$ causally depends on $s^2$, so $AN(R)$ is the set $\{s^1, s^2\}$.

1) If $S^i \notin AN(R)$, an intervention on that factor does not affect current or future reward.
$$\tilde{d}(s_i, s_j) = \max_{a \in A} (1-c)\,|r^a_{s_i} - r^a_{s_j}| + c\,W(\tilde{d})(P^a_{s_i}, P^a_{s_j}) = \max_{a \in A} c\,W(\tilde{d})(P^a_{s_i}, P^a_{s_j}),$$
since $s_i$ and $s_j$ have the same reward. If $S^i$ does not affect future reward, then states $s_i$ and $s_j$ will have the same future reward conditioned on all future actions. This gives us $\tilde{d}(s, s') = 0$.

2) If there is an intervention on $S^j \in AN(R)$ then current and/or future reward can change. If current reward changes, then we already have $\max_{a \in A} (1-c)\,|r^a_{s_i} - r^a_{s_j}| > 0$, giving us $\tilde{d}(s_i, s_j) > 0$. If only future reward changes, then those future states will have nonzero bisimilarity, and $\max_{a \in A} W(\tilde{d})(P^a_{s_i}, P^a_{s_j}) > 0$, giving us $\tilde{d}(s_i, s_j) > 0$.
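The operator in Theorem 1 also indicates how the encoder can be trained in practice (cf. Equation (4) in the main text): latent L1 distances are regressed onto a one-step bisimulation target. Below is a minimal PyTorch sketch under explicit assumptions: a diagonal-Gaussian latent dynamics model, the closed-form 2-Wasserstein distance between Gaussians, a detached target, and an MSE regression. The function names are illustrative and not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def w2_gaussian(mu1, sigma1, mu2, sigma2):
    """2-Wasserstein distance between diagonal Gaussians (closed form)."""
    return torch.sqrt(((mu1 - mu2) ** 2).sum(-1)
                      + ((sigma1 - sigma2) ** 2).sum(-1) + 1e-8)

def bisim_loss(z_i, z_j, r_i, r_j, mu_i, sigma_i, mu_j, sigma_j, c=0.99):
    """Regress ||z_i - z_j||_1 onto the one-step bisimulation target of
    Eq. (9); mu/sigma parameterise the learned Gaussian dynamics model."""
    dist = (z_i - z_j).abs().sum(-1)                  # L1 in latent space
    target = (1 - c) * (r_i - r_j).abs() \
             + c * w2_gaussian(mu_i, sigma_i, mu_j, sigma_j)
    return F.mse_loss(dist, target.detach())
```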
B Definition of State
Since we are concerned primarily with learning from image observations, we could explicitly distinguish the image observation space $O$ from an unknown state space $S$. However, since we are not tackling the general POMDP problem, we consider the Block MDP (Du et al., 2019), which assumes the state space is latent, and that we are instead given access to an observation space $O$ and rendering function $q: S \mapsto O$. The crucial assumption that distinguishes the Block MDP from partially observable MDPs is the following:

Assumption 1 (Block structure (Du et al., 2019)). Each observation $o$ uniquely determines its generating state $s$. That is, the observation space $O$ can be partitioned into disjoint blocks $O_s$, each containing the support of the conditional distribution $q(o \mid s)$.

This assumption gives us the Markov property in the observation space $o \in O$. As an example, one can think of the proprioceptive state consisting of positions and velocities of actuators as the underlying state, and stacked pixel observations from a specific camera angle as a particular rendering function and corresponding observation space.

C Additional DMC Results
In Figure 11 we show performance on the default setting on 9 different environments from DMC. Figures 12 and 13 give performance on the simple distractors and natural video settings for all 9 environments.

[Figure 11: nine learning-curve panels (cartpole/swingup, cheetah/run, finger/spin, hopper/hop, hopper/stand, reacher/easy, walker/run, walker/stand, walker/walk); x-axis: environment steps, y-axis: average return.]
Figure 11: Results for DBC in the default setting, in comparison to baselines with reconstruction loss, contrastive loss, and SLAC on 10 seeds with 1 standard error shaded.

[Figure 12: the same nine panels for the simple distractors setting.]
Figure 12: Results for DBC in the simple distractors setting, in comparison to baselines with reconstruction loss, contrastive loss, DeepMDP, and SLAC on 10 seeds with 1 standard error shaded.

[Figure 13: the same nine panels for the natural video setting.]
Figure 13: Results for our bisimulation metric method in the natural video setting, in comparison to baselines with reconstruction loss, contrastive loss, DeepMDP, and SLAC on 10 seeds with 1 standard error shaded.

D Additional Visualizations
In addition to Figure 4, we also took 10 nearby points in the t-SNE plot and averaged the observations, shown on the far left of Figure 14. Note the robot agent is quite crisp, which means neighboring points encode the agent in similar positions, but the backgrounds are very different, and so are blurry when averaged.

Figure 14: t-SNE of latent spaces learned with a bisimulation metric after training has completed, color-coded with predicted state values (higher value yellow, lower value purple). Neighboring points (right) in the embedding space learned with a bisimulation metric have similar encodings (middle). When we sample from the same latent point and average the images, we see the robot configuration is crisp, meaning neighboring points encode the agent in similar positions, but the backgrounds are very different, and so are blurry when averaged.

E Implementation Details
We use the same encoder architecture as in Yarats et al. (2019), which is almost identical to the encoder in Tassa et al. (2018), with two more convolutional layers in the convnet trunk. The encoder has kernels of size 3x3 with 32 channels for all convolutional layers, with stride 1 everywhere except the first convolutional layer, which has stride 2, interleaved with ReLU activations. Finally, we add a tanh nonlinearity to the 50-dimensional output of the fully-connected layer.

For the reconstruction method, the decoder consists of a fully-connected layer followed by four deconvolutional layers. We use ReLU activations after each layer, except the final deconvolutional layer, which produces the pixel representation. Each deconvolutional layer has kernels of size 3x3 with 32 channels and stride 1, except the last layer, where the stride is 2.

The dynamics and reward models are both MLPs with two hidden layers of 200 neurons each and ReLU activations.

Soft Actor-Critic (SAC) (Haarnoja et al., 2018) is an off-policy actor-critic method that uses the maximum entropy framework for soft policy iteration. At each iteration, SAC performs soft policy evaluation and improvement steps. The policy evaluation step fits a parametric soft Q-function $Q(s_t, a_t)$ using transitions sampled from the replay buffer $D$ by minimizing the soft Bellman residual,
$$J(Q) = \mathbb{E}_{(s_t, a_t, r_t, s_{t+1}) \sim D}\Big[ \big( Q(s_t, a_t) - r_t - \gamma V(s_{t+1}) \big)^2 \Big]. \quad (13)$$
The target value function $V$ is approximated via a Monte-Carlo estimate of the following expectation,
$$V(s_{t+1}) = \mathbb{E}_{a_{t+1} \sim \pi}\big[ \bar{Q}(s_{t+1}, a_{t+1}) - \alpha \log \pi(a_{t+1} \mid s_{t+1}) \big], \quad (14)$$
where $\bar{Q}$ is the target soft Q-function, parameterized by a weight vector obtained from an exponentially moving average of the Q-function weights to stabilize training.
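As a concrete illustration of the encoder trunk just described, here is a hedged PyTorch sketch. The 3x3 kernels, 32 channels, stride pattern, ReLU activations, and tanh-squashed 50-dimensional output follow the text above; the number of convolutional layers (four), the 84x84 stacked-frame input, and the LayerNorm before the tanh are assumptions borrowed from the SAC-AE codebase it builds on.

```python
import torch
import torch.nn as nn

class PixelEncoder(nn.Module):
    """Sketch of the described encoder: 3x3 convs with 32 channels (stride 2
    on the first layer, stride 1 afterwards), ReLU activations, and a
    tanh-squashed 50-dimensional output."""
    def __init__(self, in_channels=9, feature_dim=50, num_layers=4):
        super().__init__()
        convs = [nn.Conv2d(in_channels, 32, 3, stride=2), nn.ReLU()]
        for _ in range(num_layers - 1):
            convs += [nn.Conv2d(32, 32, 3, stride=1), nn.ReLU()]
        self.convs = nn.Sequential(*convs)
        with torch.no_grad():                      # infer flattened conv size
            n_flat = self.convs(torch.zeros(1, in_channels, 84, 84)).numel()
        self.fc = nn.Linear(n_flat, feature_dim)
        self.ln = nn.LayerNorm(feature_dim)        # assumption (as in SAC-AE)

    def forward(self, obs):
        h = self.convs(obs / 255.0).flatten(1)     # normalise pixel values
        return torch.tanh(self.ln(self.fc(h)))
```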
The policy improvement step then attempts to project a parametric policy $\pi(a_t \mid s_t)$ by minimizing the KL divergence between the policy and a Boltzmann distribution induced by the Q-function, producing the following objective,
$$J(\pi) = \mathbb{E}_{s_t \sim D}\Big[ \mathbb{E}_{a_t \sim \pi}\big[ \alpha \log \pi(a_t \mid s_t) - Q(s_t, a_t) \big] \Big]. \quad (15)$$
We modify the Soft Actor-Critic PyTorch implementation by Yarats & Kostrikov (2020) and augment it with a shared encoder between the actor and critic, the general model $f_s$, and task-specific models $f_e$. The forward models are multi-layer perceptrons with ReLU non-linearities and two hidden layers of 200 neurons each. The encoder is a linear layer that maps to a 50-dim hidden representation. The hyperparameters used for the RL experiments are in Table 2.

Table 2: A complete overview of the hyperparameters used.

Parameter name                           Value
Replay buffer capacity                   10^6
Batch size                               128
Discount gamma                           0.99
Optimizer                                Adam
Critic learning rate                     10^-5
Critic target update frequency           2
Critic Q-function soft-update rate tau_Q 0.005
Critic encoder soft-update rate          0.005
Actor learning rate                      10^-5
Actor update frequency                   2
Actor log stddev bounds                  [-5, 2]
Encoder learning rate                    10^-5
Decoder learning rate                    10^-5
Decoder weight decay                     10^-7
Temperature learning rate                10^-4
Temperature Adam's beta_1                0.9
Init temperature                         0.1
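For reference, Equations (13)-(15) translate into a few lines of PyTorch. This is only a sketch: `Q`, `Q_targ`, and `pi` are assumed callables, the temperature alpha is held fixed here although the implementation learns it, and replay sampling is elided.

```python
import torch
import torch.nn.functional as F

def critic_loss(Q, Q_targ, pi, batch, gamma=0.99, alpha=0.1):
    """Soft Bellman residual of Eqs. (13)-(14); batch = (s, a, r, s2)."""
    s, a, r, s2 = batch
    with torch.no_grad():
        a2, logp2 = pi(s2)                       # sample a_{t+1} and log-prob
        v2 = Q_targ(s2, a2) - alpha * logp2      # soft value, Eq. (14)
        target = r + gamma * v2
    return F.mse_loss(Q(s, a), target)

def actor_loss(Q, pi, s, alpha=0.1):
    """Policy improvement objective of Eq. (15)."""
    a, logp = pi(s)
    return (alpha * logp - Q(s, a)).mean()
```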
KR-IAbeRpDH
NeurIPS.cc/2020/Workshop/SVRHM
2020
Assessing The Importance Of Colours For CNNs In Object Recognition
["Aditya Singh", "Alessandro Bay", "Andrea Mirabile"]
Humans rely heavily on shapes as a primary cue for object recognition. As secondary cues, colours and textures are also beneficial in this regard. Convolutional neural networks (CNNs), an imitation of biological neural networks, have been shown to exhibit conflicting properties. Some studies indicate that CNNs are biased towards textures whereas, another set of studies suggest shape bias for a classification task. However, they do not discuss the role of colours implying its possible humble role in the task of object recognition. In this paper, we empirically investigate the importance of colours in object recognition for CNNs. We are able to demonstrate that CNNs often rely heavily on colour information while making a prediction. Our results show that the degree of dependency on colours tend to vary from one dataset to another. Moreover, networks tend to rely more on colours if trained from scratch. Pre-training can allow the model to be less colour dependent. However, if it is forced to rely less on colours through data augmentations, this can negatively affect the overall accuracy. To facilitate these findings, we follow the framework often deployed in understanding role of colours in object recognition for humans. We evaluate a model trained with congruent images (images in original colours eg.\ red strawberries) on congruent, greyscale, and incongruent images (images in unnatural colours eg.\ blue strawberries). We measure and analyse network's predictive performance (top-1 accuracy) under these different stylisations. We utilise standard datasets of supervised image classification (CIFAR-100, STL-10, and Tiny ImageNet) and fine-grained image classification (CUB-200-2011, Oxford-Flowers, and Oxford-IIIT Pets) in our experiments.
["Image classification", "colour bias", "incongruent evaluation"]
Assessing The Importance Of Colours For CNNs In Object Recognition
Aditya Singh, Alessandro Bay, Andrea Mirabile
Zebra AI, Zebra Technologies, London, United Kingdom
{firstname.lastname}@zebra.com

Abstract
Humans rely heavily on shapes as a primary cue for object recognition. As secondary cues, colours and textures are also beneficial in this regard. Convolutional neural networks (CNNs), an imitation of biological neural networks, have been shown to exhibit conflicting properties. Some studies indicate that CNNs are biased towards textures, whereas another set of studies suggests a shape bias for the classification task. However, they do not discuss the role of colours, implying a possibly modest role in the task of object recognition. In this paper, we empirically investigate the importance of colours in object recognition for CNNs. We are able to demonstrate that CNNs often rely heavily on colour information while making a prediction. Our results show that the degree of dependency on colours tends to vary from one dataset to another. Moreover, networks tend to rely more on colours if trained from scratch. Pre-training can allow the model to be less colour dependent. To facilitate these findings, we follow the framework often deployed in understanding the role of colours in object recognition for humans. We evaluate a model trained with congruent images (images in original colours, e.g. red strawberries) on congruent, greyscale, and incongruent images (images in unnatural colours, e.g. blue strawberries). We measure and analyse the network's predictive performance (top-1 accuracy) under these different stylisations. We utilise standard datasets of supervised image classification and fine-grained image classification in our experiments.

1 Introduction
Colours play a vital role in our day to day life. We utilise colours for visual identification [1], search [2], gaze guidance in natural scenes [3], etc. As an example, the importance of colour can be understood whilst identifying ripe fruits against a background of foliage [1]. Initially, it was widely believed that only shapes, and not colours, influence object recognition in humans [4, 5]. However, many studies [6, 7] indicate that colours do assist object recognition in humans. The findings by Tanaka and Presnell [6] show that colours facilitate recognition of high colour-diagnostic objects (natural objects like fruits) but have little effect on low colour-diagnostic objects (man-made objects like airplanes). Their experiments were based on a variation of the Stroop effect [8]. Human participants were asked to name objects in different colour schemes. They observed that naming of objects with congruent colours was much faster than naming incongruently coloured objects. Greyscale images served as a neutral medium, and the response times for them were in between congruent and incongruent images. In a similar study conducted by Hagen et al. [9] investigating the role of colour in expert object recognition, similar findings were reported. Neural networks are models of machine learning designed to mimic the neurological working of a human brain [12]. Today, they are employed in many different fields solving numerous tasks [13-15].
Considering the widespread adoption and black-box nature of neural networks, considerable studies have also been performed to understand their inner workings [16-19]. Zeiler and Fergus [16] illustrated the hierarchical nature of learnt features. Engilberge et al. [20] investigated the colour sensitivity of neural networks and observed that at the shallow end the neurons are more sensitive to colour information in an image. A number of existing studies highlight the nature of the representations learnt by a network, but do so with conflicting results. One set of results shows that neural networks rely predominantly on shapes [21-25]. On the other hand, many more approaches oppose the theory of shape bias in CNNs [26, 10, 27-29]. Rather than relying primarily on shapes, a neural network's predictions are guided by the texture information of an image. Texture is referred to as "a function of the spatial variation in pixel intensities" [30, 31]. Gatys et al. [32] showed that a linear classifier based on the texture representations of a neural network performs comparably to the original model. Similarly, Geirhos et al. [11] demonstrated that an ImageNet [33] trained model is biased towards texture.

(a) Congruent (b) Greyscale (c) Incongruent (d) Negative [10] (e) Re-textured [11]
Figure 1: The texture and shape information is intact in the original, greyscale and incongruent images. Additionally, negative images have a larger drift in distribution than incongruent images when measured against congruent images.

The objective of our paper is to help bridge the gap between human perception and artificial intelligence, providing empirical experiments based on a classical neuroscience framework to exhaustively investigate the dependency of CNNs on colour information. We believe the majority of existing approaches do highlight the representation bias of CNNs but fail to address the role of colours. Bahng et al. [34] do address the issue of colour bias in their experiments, however they do so with the motivation of learning unbiased representations. Moreover, by either focusing on a small number of test images [21, 29] or a single dataset [11, 26], we are unable to observe the bigger picture.

We believe a fair approach to highlighting the importance of colour is to utilise the framework used by the psychophysical experiments of [1, 6, 9, 35]. We can easily observe the relevance of colours by comparing performance on congruent and greyscale images. Additionally, by comparing the performance on greyscale and incongruent images, we are able to observe the effect of incorrect colour information (see Figure 4 for sample images).

In this paper, we evaluate the importance of colours for numerous datasets under two settings:
1. Local information (Section 4.1): evaluating the importance of colours when a CNN can only attend to small patches in a global-shape-agnostic manner.
2. Global information (Section 4.2): for this setting, no such restriction is applied on the network, corresponding to the standard approach of training a CNN.

Apart from these two modes of experimentation, we also evaluate a model under four different training schemes to replicate a typical training scenario for a classification task. We train a network (i) from scratch, (ii) with fine-tuning on pre-trained weights, (iii) with colour-based augmentations (random hue, saturation, brightness, etc.), and (iv) with incongruent images to reduce colour dependency.
We conclude in Section 5.

2 Data
In our study we have employed datasets from image classification and fine-grained visual classification. The datasets used are: CIFAR-100 [36], STL-10 [37], Tiny ImageNet [38], CUB-200-2011 [39], Oxford-Flowers [40], and Oxford-IIIT Pets [41]. Table 5 in the appendix provides some common statistics of the datasets used.

3 Method
A dataset $D = \{(x_i, y_i),\ i = 1, \ldots, N\}$ is composed of images $x_i \in \mathbb{R}^{C \times H \times W}$ and their corresponding labels $y_i$. $D^{train}$, $D^{test}$ denote the split of the dataset into train and test sets respectively. We convert $D$ into different colour schemes as described below (and sketched in code after this list):

- Congruent images ($D_C$): the images in original colours. All the subsequent transformations described below are applied to $D_C$.
- Greyscale images ($D_G$): the congruent images converted into greyscale (luminance) images, $G = 0.299r + 0.587g + 0.114b$. We copy the single-channel greyscale image into 3 channels.
- Incongruent images ($D_I$): the channels of the congruent images are switched to generate unnaturally coloured images. Formally, the default correspondence for colour channels is $C[0] = \text{red}$, $C[1] = \text{green}$, $C[2] = \text{blue}$. We switch the channels such that the new ordering represents $C[0] = \text{green}$, $C[1] = \text{blue}$, $C[2] = \text{red}$. The advantage of this is that, firstly, it preserves the texture and, secondly, the changes in distribution are more contained than with the approaches deployed by [10, 11]. For example, the Jensen-Shannon divergence [42] gives $JS(D_C^{train}, D_G^{train}) = 0.06$ and $JS(D_C^{train}, D_I^{train}) = 0.1$, whereas the same measure between negative images (as in [10]) and $D_C^{train}$ is 0.3 for the STL-10 dataset. More comparison is provided in Appendix A.2.

Figure 4 shows an example of these stylisations. As humans, we learn from our surroundings, which we perceive in congruent colours. Since we are following the framework used in identifying the role of colours in humans, we train CNNs only on congruent images ($D_C^{train}$) while evaluating the top-1 accuracy (represented as Acc) on the test sets of the different stylisations described above.
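The two evaluation transformations above are straightforward to implement. A minimal NumPy sketch follows; the function names are illustrative, but the luminance coefficients and the (R, G, B) to (G, B, R) channel reordering are exactly as described in Section 3.

```python
import numpy as np

def to_greyscale(img):
    """img: HxWx3 uint8 RGB array. Luminance greyscale, copied to 3 channels."""
    g = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return np.repeat(g[..., None], 3, axis=-1).astype(np.uint8)

def to_incongruent(img):
    """Channel switch described above: new (C[0], C[1], C[2]) = (G, B, R)."""
    return img[..., [1, 2, 0]]
```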
4 Experiments
4.1 Access to local information
Baker et al. [29] report that CNNs can represent local shapes but fail to utilise them in a global context. Similarly, to highlight the absence of shape bias in a CNN, Geirhos et al. [11] utilise BagNets [28] to compare model performance under artistic stylisations. BagNets only have access to small patches within an image, by design, and utilise a bag-of-features framework for making a prediction. They do not make use of the spatial ordering of the local features, hence making them suitable for comparing the relevance of colours to local shape and texture information.

We use BagNet-9 (sourced from the official GitHub implementation), which has a 9x9 receptive field over the image and is built upon the ResNet-50 architecture. We report the mean accuracy and standard deviation over 3 runs. The network is trained from scratch, and the data augmentations are limited to random horizontal flips, random rotation, and random cropping. The details on the hyper-parameters used to train the network are provided in the supplementary document.

4.1.1 Results
Figure 2: Performance of BagNet-9 on different test stylisations.

Figure 2 lists the accuracies relative to $D_C$ for the different stylisations ($\frac{\text{Acc}(D_G)}{\text{Acc}(D_C)}$ and $\frac{\text{Acc}(D_I)}{\text{Acc}(D_C)}$). When comparing Acc($D_C$) with the corresponding $D_I$ and $D_G$, we observe a significant drop in performance. We can also see the varying nature of the gaps in performance across the datasets. This shows that colours do influence a network, but to varying degrees. Moreover, for STL-10 and Oxford-IIIT Pets, if we compare Acc($D_C$) to Acc($D_G$), we observe that the drop in accuracy is present but comparatively smaller than for the other datasets. This suggests that the network also relies on non-colour features (such as local shapes and textures) to make a decision. Additionally, by comparing Acc($D_G$) and Acc($D_I$), we notice consistently lower performance for the latter. This suggests that incorrect colour information does indeed harm the prediction accuracy.

These results indicate that even though a CNN can represent local shape [29] or is biased towards texture [11], colours play an important part in a local setting.

4.2 Access to global information
In the previous experiment, we limited the view of the network to 9x9 patches of the image in a global-shape-agnostic fashion. Here, we use ResNet-18, which has no such restriction on its receptive field. It can thus attend to global shape information in the image.

To assess the importance of colours, we train a network in the following four ways:
1. Vanilla training: similar to the setting in Section 4.1, we train the network from scratch. The data augmentations used are random rotation, random cropping, and random horizontal flips.
2. Fine-tuning: in practice, an ImageNet [33] pre-trained network is often fine-tuned on the dataset at hand [43]. In this experiment, we follow this protocol, keeping the augmentation strategy identical to vanilla training.
3. Fine-tuning with augmentations:
   - Colour augmentation: colour-based augmentations are often used in training [44, 45], encouraging the network to be colour invariant. All the training settings are identical to fine-tuning, except that we also add random colour jitter (hue, saturation, contrast, and brightness) to an image while training.
   - Channel switch pre-training: many of the previous works studying shape and texture bias have imposed learning constraints by first training the network on stylised data [10, 11] and then fine-tuning on original images. This has been shown to improve the predictive performance of the model. Following this approach, we first start with an ImageNet pre-trained model (on congruent images). Then we fine-tune the network following the 'colour augmentation' protocol with randomised channel switching as an additional data augmentation method. We do this in order to emulate stylised pre-training [11]. After fine-tuning the model, we disable the colour augmentations ('colour jitter' and 'randomised channel switching') and fine-tune the network further only on congruent images.

All the hyper-parameter details are provided in the appendix (see Appendix A.4); a sketch of the colour-based augmentations is given below. We also provide vanilla training results for MobileNet-v2 and DenseNet-121 alongside ResNet-18 and BagNet-9 (see Section 4.3).
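The augmentations used in schemes (iii) and (iv) could be composed with torchvision as sketched below. The jitter strengths, rotation magnitude, and switching probability are assumptions for illustration; the paper's exact values are in Appendix A.4 where reported.

```python
import random
import torch
from torchvision import transforms

class RandomChannelSwitch:
    """With probability p, apply a random permutation of the RGB channels
    of a CHW tensor (the 'randomised channel switching' augmentation)."""
    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, img):
        if random.random() < self.p:
            img = img[torch.randperm(3)]
        return img

train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),               # rotation magnitude assumed
    transforms.ColorJitter(brightness=0.4, contrast=0.4,
                           saturation=0.4, hue=0.1),  # strengths assumed
    transforms.ToTensor(),
    RandomChannelSwitch(p=0.5),                  # probability assumed
])
```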
4.2.1 Results & Observations
Figure 3: Top-1 $D^{test}$ accuracies for different training strategies.

The results are shown in Figure 3. Detailed results are available in the appendix (see Appendix A.3). We can draw the following observations from the results.
1. For vanilla and fine-tuned networks, Acc($D_C^{test}$) > Acc($D_G^{test}$) > Acc($D_I^{test}$). This trend is similar to what existing studies report for object recognition by humans [9, 35]. However, the differences in CNN accuracies across stylisations are significantly larger. Apart from scoring human participants solely on accuracy, their response time is also taken into account. For a CNN, there is no variation in inference time as the architecture remains constant. However, it would be an interesting extension to understand the differences arising in the predicted estimate. For instance, many approaches utilise the predicted value for the winning category as a network's confidence in its prediction [46]. The aim of such a study would then be to observe the potential impact of colours on the confidence estimate.
2. Fine-tuning a pre-trained model is widely known to improve the learnt representations of a model and subsequently its accuracy. We observe the additional benefit of fine-tuning, which leads to better performance for greyscale and incongruent images, indicating lower dependency on colours.
3. Incorporating colour augmentations and channel switching into training can force a model to rely even less on colours. However, it does not improve the network's accuracy for congruent images.
4. The variability of cross-style performance is high across datasets. For example, in the vanilla training setting, CUB-200 shows significantly lower performance on greyscale images than Oxford-IIIT Pets. We make a similar observation when comparing STL-10 with CIFAR-100. One common property of STL-10 and Pets is that they consist of relatively small numbers of classes (10 and 37, respectively) compared to CIFAR-100 and CUB-200 (100 and 200, respectively). A direction for future work is to investigate the relationship between the number of categories in the dataset and the colour dependency of a CNN. Apart from exploring the dependency on the number of categories, we can also investigate whether this variation depends on the kinds of categories in the dataset. As humans, we rely more on colours for recognising natural objects than man-made objects [6]. This property is referred to as colour diagnosticity. A similar observation, if it exists for CNNs, would be worth exploring.

(a) CIFAR-100 (b) STL-10 (c) Tiny ImageNet (d) CUB-200-2011 (e) Oxford-Flowers (f) Oxford-IIIT Pets
Figure 4: Vanilla training results for different architectures.

4.3 Vanilla Performance Across Architectures
In this experiment we include MobileNet-v2 [47] and DenseNet-121 [48] along with ResNet-18 and BagNet-9 to compare their performance across different datasets. This way we can examine whether architectural differences play a role in colour bias.

We report the results for the different CNN architectures trained under the vanilla setting in Figure 4. The results show that different architectures display similar behaviour regarding colour importance across datasets. On congruent images, the networks perform best, whereas performance is worst for incongruent images. This shows that the underlying architecture plays a less significant role in driving the colour bias of a network. The importance of colour depends more on the task at hand.

5 Conclusion
We believe ours is the first work to recognise the unattributed impact of colours in the shape/texture-driven research on understanding bias in CNNs. By adopting the psychophysical experimental framework for CNNs, we have provided empirical evidence highlighting the strong impact of colours. We showed that a variety of different CNNs exhibit high colour dependency for the classification task. This dependency appears to be tied to the dataset rather than the underlying architecture. By default, the networks are highly colour dependent, and this dependency can be reduced by utilising pre-trained weights and employing various augmentations in training, as shown in our work.

References
[1] Inês Bramão, Luís Faísca, Karl Magnus Petersson, and Alexandra Reis. The contribution of color to object recognition. In Ioannis Kypraios, editor, Advances in Object Recognition Systems, chapter 4. IntechOpen, Rijeka, 2012. doi: 10.5772/34821.
URL https://doi.org/10.5772/34821 .[2]Aave Hannus, Ronald van den Berg, Harold Bekkering, Jos B. T. M. Roerdink, and Frans W.Cornelissen. Visual search near threshold: Some features are more equal than others. Journalof Vision , 6(4):15–15, 07 2006. ISSN 1534-7362. doi: 10.1167/6.4.15. URL https://doi.org/10.1167/6.4.15 .6[3]Antje Nuthmann and George L. Malcolm. Eye guidance during real-world scene search: Therole color plays in central and peripheral vision. Journal of Vision , 16(2):3–3, 01 2016. ISSN1534-7362. doi: 10.1167/16.2.3. URL https://doi.org/10.1167/16.2.3 .[4]Irving Biederman and Ginny Ju. Surface versus edge-based determinants of visual recognition.Cognitive Psychology , 20(1):38 – 64, 1988. ISSN 0010-0285. doi: https://doi.org/10.1016/0010-0285(88)90024-2. URL http://www.sciencedirect.com/science/article/pii/0010028588900242 .[5]Irving Biederman. Recognition-by-components: a theory of human image understanding.Psychological review , 94 2:115–147, 1987.[6]James W. Tanaka and Lynn M. Presnell. Color diagnosticity in object recognition. Perception& Psychophysics , 61(6):1140–1153, Aug 1999. ISSN 1532-5962. doi: 10.3758/BF03207619.URL https://doi.org/10.3758/BF03207619 .[7]Galit Naor-Raz, Michael J Tarr, and Daniel Kersten. Is color an intrinsic property of objectrepresentation? Perception , 32(6):667–680, 2003. doi: 10.1068/p5050. URL https://doi.org/10.1068/p5050 . PMID: 12892428.[8]J. R. Stroop. Studies of interference in serial verbal reactions. Journal of ExperimentalPsychology , 18(6):643–662, 1935. ISSN 0022-1015(Print). doi: 10.1037/h0054651. URLhttps://doi.org/10.1037/h0054651 .[9]Simen Hagen, Quoc C. Vuong, Lisa S. Scott, Tim Curran, and James W. Tanaka. The role ofcolor in expert object recognition. Journal of Vision , 14(9):9–9, 08 2014. ISSN 1534-7362. doi:10.1167/14.9.9. URL https://doi.org/10.1167/14.9.9 .[10] Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, and Radha Poovendran. Assessing shapebias property of convolutional neural networks. In The IEEE Conference on Computer Visionand Pattern Recognition (CVPR) Workshops , June 2018.[11] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, andWieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape biasimproves accuracy and robustness. In International Conference on Learning Representations ,2019. URL https://openreview.net/forum?id=Bygh9j09KX .[12] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks , 61:85 – 117, 2015. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2014.09.003. URLhttp://www.sciencedirect.com/science/article/pii/S0893608014002135 .[13] O. Abdel-Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu. Convolutional neuralnetworks for speech recognition. IEEE/ACM Transactions on Audio, Speech, and LanguageProcessing , 22(10):1533–1545, 2014.[14] Yoav Goldberg. Neural network methods for natural language processing. Syn-thesis Lectures on Human Language Technologies , 10(1):1–309, 2017. doi:10.2200/S00762ED1V01Y201703HLT037. URL https://doi.org/10.2200/S00762ED1V01Y201703HLT037 .[15] Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deepdriving: Learning affor-dance for direct perception in autonomous driving. In The IEEE International Conference onComputer Vision (ICCV) , December 2015.[16] Matthew D. Zeiler and Rob Fergus. 
Visualizing and understanding convolutional networks.In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision –ECCV 2014 , pages 818–833, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10590-1.[17] Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting andunderstanding deep neural networks. Digital Signal Processing , 73:1 – 15, 2018. ISSN 1051-2004. doi: https://doi.org/10.1016/j.dsp.2017.10.011. URL http://www.sciencedirect.com/science/article/pii/S1051200417302385 .7[18] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Good-fellow, and Rob Fergus. Intriguing properties of neural networks. In International Conferenceon Learning Representations , 2014. URL http://arxiv.org/abs/1312.6199 .[19] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks:Visualising image classification models and saliency maps. In Workshop at InternationalConference on Learning Representations , 2014.[20] M. Engilberge, E. Collins, and S. Süsstrunk. Color representation in deep neural networks. In2017 IEEE International Conference on Image Processing (ICIP) , pages 2786–2790, 2017.[21] Jonas Kubilius, Stefania Bracci, and Hans P. Op de Beeck. Deep neural networks as a com-putational model for human shape sensitivity. PLOS Computational Biology , 12(4):1–26,04 2016. doi: 10.1371/journal.pcbi.1004896. URL https://doi.org/10.1371/journal.pcbi.1004896 .[22] Astrid A. Zeman, J. Brendan Ritchie, Stefania Bracci, and Hans Op de Beeck. Orthogonalrepresentations of object shape and category in deep convolutional neural networks and humanvisual cortex. Scientific Reports , 10(1):2453, Feb 2020. ISSN 2045-2322. doi: 10.1038/s41598-020-59175-0. URL https://doi.org/10.1038/s41598-020-59175-0 .[23] Samuel Ritter, David G. T. Barrett, Adam Santoro, and Matthew M Botvinick. Cognitivepsychology for deep neural networks: A shape bias case study. ArXiv , abs/1706.08606, 2017.[24] Reuben Feinman and Brenden M. Lake. Learning inductive biases with simple neural networks.ArXiv , abs/1802.02745, 2018.[25] Kamila M. Jozwik, Nikolaus Kriegeskorte, Katherine R. Storrs, and Marieke Mur. Deepconvolutional neural networks outperform feature-based but not categorical models in ex-plaining object similarity judgments. Frontiers in Psychology , 8:1726, 2017. ISSN 1664-1078. doi: 10.3389/fpsyg.2017.01726. URL https://www.frontiersin.org/article/10.3389/fpsyg.2017.01726 .[26] Gaurav Malhotra and Jeff Bowers. The contrasting roles of shape in human vision and convolu-tional neural networks. In CogSci , 2019.[27] Katherine L. Hermann and Simon Kornblith. Exploring the origins and prevalence of texturebias in convolutional neural networks, 2019.[28] Wieland Brendel and Matthias Bethge. Approximating cnns with bag-of-local-features modelsworks surprisingly well on imagenet. International Conference on Learning Representations ,2019. URL https://openreview.net/pdf?id=SkfMWhAqYQ .[29] Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J. Kellman. Deep convolutionalnetworks do not classify based on global object shape. PLOS Computational Biology , 14(12):1–43, 12 2018. doi: 10.1371/journal.pcbi.1006613. URL https://doi.org/10.1371/journal.pcbi.1006613 .[30] Mihran Tuceryan and Anil K. Jain. Texture Analysis , page 235–276. World Scientific PublishingCo., Inc., USA, 1993. ISBN 9810211368.[31] R. M. Haralick. Statistical and structural approaches to texture. 
Proceedings of the IEEE , 67(5):786–804, 1979.[32] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutionalneural networks. In Proceedings of the 28th International Conference on Neural InformationProcessing Systems - Volume 1 , NIPS’15, page 262–270, Cambridge, MA, USA, 2015. MITPress.[33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-ScaleHierarchical Image Database. In CVPR09 , 2009.[34] Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learningde-biased representations with biased representations, 2019.8[35] David J. Therriault, Richard H. Yaxley, and Rolf A. Zwaan. The role of color diagnosticity inobject recognition and representation. Cognitive Processing , 10:335–342, 2009.[36] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.[37] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsuper-vised feature learning. In Proceedings of the Fourteenth International Conference on ArtificialIntelligence and Statistics , Proceedings of Machine Learning Research, pages 215–223. PMLR,2011. URL http://proceedings.mlr.press/v15/coates11a.html .[38] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015.[39] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSDBirds 200. Technical report, California Institute of Technology, 2010.[40] Maria-Elena Nilsback and Andrew Zisserman. A visual vocabulary for flower classification. InIEEE Conference on Computer Vision and Pattern Recognition , volume 2, pages 1447–1454,2006.[41] Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V . Jawahar. Cats and dogs. InIEEE Conference on Computer Vision and Pattern Recognition , 2012.[42] B. Fuglede and F. Topsoe. Jensen-shannon divergence and hilbert space embedding. InInternational Symposium onInformation Theory, 2004. ISIT 2004. Proceedings. , pages 31–,2004.[43] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are featuresin deep neural networks? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence,and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27 ,pages 3320–3328. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks.pdf .[44] Andrew G. Howard. Some improvements on deep convolutional neural network based imageclassification. CoRR , abs/1312.5402, 2014.[45] C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V . Vanhoucke,and A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on ComputerVision and Pattern Recognition (CVPR) , pages 1–9, 2015.[46] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neuralnetworks, 2017.[47] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen. Mobilenetv2: Inverted residualsand linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 4510–4520, June 2018. doi: 10.1109/CVPR.2018.00474. URL https://ieeexplore.ieee.org/document/8578572 .[48] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connectedconvolutional networks. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.

A Appendix
A.1 Data

Table 1: Dataset statistics

Category                            Dataset                #Classes   |D^train|   |D^test|   Image size
Image classification                CIFAR-100 [36]         100        50,000      10,000     32x32
                                    STL-10 [37]            10         5,000       8,000      96x96
                                    Tiny ImageNet [38]     200        100,000     5,000      64x64
Fine-grained visual classification  CUB-200 [39]           200        5,994       5,794      224x224
                                    Oxford-Flowers [40]    102        6,149       2,040      224x224
                                    Oxford-IIIT Pets [41]  37         3,680       3,669      224x224

A.2 Jensen-Shannon Measure
To reiterate the notation used, a dataset $D = \{(x_i, y_i),\ i = 1, \ldots, N\}$ is composed of images $x_i \in \mathbb{R}^{C \times H \times W}$ and their corresponding labels $y_i$. $D^{train}$, $D^{test}$ denote the split of the dataset into train and test sets respectively. The Jensen-Shannon divergence between two stylisations of the same dataset is defined as
$$JS(D_1^{train}, D_2^{train}) = \frac{1}{|D^{train}|} \sum_{i \in D^{train}} \frac{1}{|C|} \sum_{j \in C} JS\big(T_1(x_i)[j],\ T_2(x_i)[j]\big),$$
where $T_i$ is the transformation corresponding to $D_i$ and the indexing $[j]$ returns the normalised intensity histogram for the $j$-th channel.

The corresponding results in Table 2 show that switching channels is a gentler transformation than composing negatives in terms of preserving the original shape and texture. Greyscale consistently has the least JS divergence, which may suggest why Acc($D_G$) > Acc($D_I$) holds consistently.

Table 2: JS measure between stylisations

Datasets           JS(D_C^train, D_G^train)   JS(D_C^train, D_I^train)   JS(D_C^train, D_Neg^train)
CIFAR-100          0.07                       0.12                       0.33
Tiny ImageNet      0.06                       0.1                        0.29
STL-10             0.06                       0.1                        0.3
CUB-200            0.1                        0.15                       0.34
Oxford-Flowers     0.09                       0.13                       0.32
Oxford-IIIT Pets   0.04                       0.07                       0.27
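A direct implementation of the per-channel histogram JS measure above could look as follows. This is a minimal NumPy sketch: the histogram binning (256 bins) and the base-2 logarithm are assumptions, and `t1`, `t2` stand for any two of the stylisation transforms from Section 3.

```python
import numpy as np

def channel_hist(img, j, bins=256):
    """Normalised intensity histogram of channel j for an HxWx3 uint8 image."""
    h, _ = np.histogram(img[..., j], bins=bins, range=(0, 256))
    return h / h.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two histograms (base-2 log)."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def dataset_js(images, t1, t2):
    """Average per-channel JS divergence between stylisations t1 and t2,
    following the formula above. images: iterable of HxWx3 uint8 arrays."""
    vals = [np.mean([js_divergence(channel_hist(t1(x), j),
                                   channel_hist(t2(x), j))
                     for j in range(3)]) for x in images]
    return float(np.mean(vals))
```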
A.3 Detailed Test Results

Table 3: Test accuracies (in %) for different training strategies on classification datasets.

Training approach            Dataset         Acc(D_C)         Acc(D_G)         Acc(D_I)
Vanilla                      CIFAR-100       59.10 +/- 0.11   29.90 +/- 0.03   22.86 +/- 0.16
                             STL-10          70.30 +/- 1.67   64.13 +/- 2.13   52.47 +/- 1.29
                             Tiny ImageNet   40.88 +/- 0.10   19.40 +/- 0.02   15.47 +/- 1.31
Fine-tuning                  CIFAR-100       61.39 +/- 0.12   34.13 +/- 0.16   31.53 +/- 0.55
                             STL-10          88.10 +/- 0.30   80.59 +/- 0.39   73.45 +/- 0.69
                             Tiny ImageNet   47.52 +/- 0.09   24.38 +/- 0.21   21.96 +/- 0.19
Colour augmentation          CIFAR-100       60.06 +/- 0.29   36.68 +/- 0.82   34.67 +/- 0.24
                             STL-10          88.04 +/- 0.23   83.29 +/- 0.46   80.88 +/- 0.06
                             Tiny ImageNet   47.42 +/- 0.39   27.41 +/- 1.10   28.27 +/- 0.09
Channel switch pre-training  CIFAR-100       60.24 +/- 0.22   37.76 +/- 0.26   45.84 +/- 0.44
                             STL-10          87.20 +/- 0.45   84.01 +/- 0.19   85.23 +/- 0.21
                             Tiny ImageNet   44.20 +/- 0.16   25.75 +/- 0.46   32.11 +/- 0.02

Table 4: Test accuracies (in %) for different training strategies on fine-grained datasets.

Training approach            Dataset            Acc(D_C)         Acc(D_G)         Acc(D_I)
Vanilla                      CUB-200            61.39 +/- 0.03   12.40 +/- 0.58   6.03 +/- 0.23
                             Oxford-Flowers     92.03 +/- 0.72   16.88 +/- 4.81   9.68 +/- 1.42
                             Oxford-IIIT Pets   68.61 +/- 0.63   47.75 +/- 0.80   31.77 +/- 0.15
Fine-tuning                  CUB-200            77.39 +/- 0.40   30.99 +/- 0.36   17.62 +/- 2.30
                             Oxford-Flowers     97.37 +/- 0.03   64.77 +/- 1.42   56.27 +/- 3.32
                             Oxford-IIIT Pets   88.51 +/- 0.86   72.06 +/- 1.15   61.59 +/- 2.11
Colour augmentation          CUB-200            75.07 +/- 0.09   38.85 +/- 1.33   34.09 +/- 0.15
                             Oxford-Flowers     97.50 +/- 0.14   78.82 +/- 0.67   76.37 +/- 1.82
                             Oxford-IIIT Pets   88.15 +/- 0.32   72.86 +/- 0.05   68.22 +/- 1.04
Channel switch pre-training  CUB-200            71.5 +/- 0.67    45.41 +/- 0.32   69.93 +/- 0.57
                             Oxford-Flowers     97.15 +/- 0.06   85.44 +/- 1.24   96.17 +/- 0.21
                             Oxford-IIIT Pets   87.16 +/- 1.04   77.65 +/- 1.29   85.77 +/- 1.11

A.4 Training Details
We utilised the PyTorch framework for all of our experiments. We list the detailed hyper-parameters below. Missing key-value entries in table i can be found via recursive search in table i-1. All the fine-grained datasets utilise similar training hyper-parameters, hence we provide only the details for CUB-200.

Table 5: Training details for CIFAR-100

Approach              Key                  Value
Common                Models               BagNet-9, ResNet-18, DenseNet-121, MobileNet-v2
                      Image size           32x32
                      Train aug.           Random(rotation, horizontal flip), standardisation
                      Test aug.            Standardisation
                      Batch size           128
                      Optimiser            SGD
                      LR decay rate        0.5
Vanilla               Epochs               200
                      LR                   0.1
                      Train aug.           Common
                      LR decay epochs      [50, 100, 150]
Fine-tuning           Pre-trained weights  ImageNet
                      LR                   0.01
                      Epochs               30
                      LR decay epochs      [15]
Colour augmentation   Pre-trained weights  ImageNet
                      Train aug.           Common + random colour jitters
                      LR                   0.01
                      Epochs               30
                      LR decay epochs      [15]
Incongruent training  Pre-trained weights  ImageNet
                      Train aug.           Common + random(colour jitters, channel switching)
                      Fine-tuned with      Random(rotation, horizontal flip)
                      LR                   0.01
                      Epochs               30, 30
                      LR decay epochs      [15], [15]

Table 6: Training details for STL-10

Approach              Key               Value
Common                Image size        96x96
                      Batch size        64
Vanilla               Epochs            200
                      LR                0.1
                      LR decay epochs   [40, 80, 120, 160]
Fine-tuning           Epochs            50
                      LR                0.01
                      LR decay epochs   [15, 30, 45]
Colour augmentation   Epochs            50
                      LR                0.01
                      LR decay epochs   [15, 30, 45]
Incongruent training  Epochs            50, 50
                      LR                0.01
                      LR decay epochs   [15, 30, 45], [15, 30, 45]

Table 7: Training details for Tiny ImageNet

Approach              Key               Value
Common                Image size        64x64
                      Batch size        128
Vanilla               Epochs            150
                      LR                0.1
                      LR decay epochs   [30, 60, 90, 120]
Fine-tuning           Epochs            50
                      LR                0.01
                      LR decay epochs   [15, 30, 45]
Colour augmentation   Epochs            50
                      LR                0.01
                      LR decay epochs   [15, 30, 45]
Incongruent training  Epochs            50, 50
                      LR                0.01
                      LR decay epochs   [15, 30, 45], [15, 30, 45]

Table 8: Training details for CUB-200

Approach              Key                  Value
Common                Image size           224x224
                      Train aug.           Random(rotation, horizontal flip, crop), standardisation
                      Test aug.            Center crop(224), standardisation
                      Batch size           32
                      Optimiser            SGD
                      LR decay rate        0.5
Vanilla               Epochs               200
                      LR                   0.1
                      Train aug.           Common
                      LR decay epochs      [50, 100, 150]
Fine-tuning           Pre-trained weights  ImageNet
                      LR                   0.01
                      Train aug.           Common
                      Epochs               40
                      LR decay epochs      [15, 30]
Colour augmentation   Pre-trained weights  ImageNet
                      LR                   0.01
                      Train aug.           Common + random colour jitters
                      Epochs               40
                      LR decay epochs      [15, 30]
Incongruent training  Pre-trained weights  ImageNet
                      Train aug.           Common + random(colour jitters, channel switching)
                      Fine-tuned with      Common aug.
                      LR                   0.01
                      Epochs               40, 40
                      LR decay epochs      [15, 30], [15, 30]
kMFojYbqyuf
Do humans and deep networks rely on colour information similarly?
7: Good paper, accept
I find this line of investigation to study the impact of colour and its representation in deep networks very interesting and important for human- and machine-vision community. It would be nice to read more about the shared representation between humans and machines in the conclusion/discussion section linking (more strongly) the findings of this article to human colour vision. Pro: This paper is clearly written and easy to read. Con: The number of tasks and studied datasets is limited. Major comments: - "Neural networks are models of machine learning designed to mimic the neurological working of a human brain [12]." I agree that there are many similarities between the biological and artificial neural networks, however, I'm not sure to what extent the networks were designed to mimic the human brain. For instance, very relevant to this study, the human visual system consists of three processing channels (luminance, red-green and yellow-blue) that coexist in parallel at least to a large part of V1. Contrary to this, the chromatic information is often (almost always) collapsed in the first convolutional layer.  - Authors might be interested in the following articles: + Rafegas, I. and Vanrell, M., 2018. Color encoding in biologically-inspired convolutional neural networks. Vision Research, 151, pp.7-17. Flachot, A. and Gegenfurtner, K.R., 2018. Processing of chromatic information in a deep convolutional neural network. JOSA A, 35(4), pp.B334-B346. These articles show that colour opponency emerges in deep networks. + Akbarinia, Arash, and Raquel Gil-Rodríguez. "Deciphering image contrast in object classification deep networks." Vision Research 173 (2020): 61-76. This article shows edges (contours of shapes) are of importance to object classification networks. - It would be nice to analyse the difference between C100, TIN and S10 to better understand why S10 is less dependent on colour: + Quantitative analysis, for instance, the distribution of colour in those datasets. + Visualising a few images from each dataset to facilitate their comprehension for readers. Minor comments: - It would be nice to have the reference for used datasets in Table 1 for reproduction purposes. - I think the first paragraph of Section 4.1 is more appropriate to be placed in the introduction when the concept of shape-texture is compared. - What is VA in P4-L113? - It would be nice to have consistency in the order of datasets in Tables 1-4.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Assessing The Importance Of Colours For CNNs In Object Recognition ### Paper Abstract Humans rely heavily on shapes as a primary cue for object recognition. As secondary cues, colours and textures are also beneficial in this regard. Convolutional neural networks (CNNs), an imitation of biological neural networks, have been shown to exhibit conflicting properties. Some studies indicate that CNNs are biased towards textures whereas, another set of studies suggest shape bias for a classification task. However, they do not discuss the role of colours implying its possible humble role in the task of object recognition. In this paper, we empirically investigate the importance of colours in object recognition for CNNs. We are able to demonstrate that CNNs often rely heavily on colour information while making a prediction. Our results show that the degree of dependency on colours tend to vary from one dataset to another. Moreover, networks tend to rely more on colours if trained from scratch. Pre-training can allow the model to be less colour dependent. However, if it is forced to rely less on colours through data augmentations, this can negatively affect the overall accuracy. To facilitate these findings, we follow the framework often deployed in understanding role of colours in object recognition for humans. We evaluate a model trained with congruent images (images in original colours eg.\ red strawberries) on congruent, greyscale, and incongruent images (images in unnatural colours eg.\ blue strawberries). We measure and analyse network's predictive performance (top-1 accuracy) under these different stylisations. We utilise standard datasets of supervised image classification (CIFAR-100, STL-10, and Tiny ImageNet) and fine-grained image classification (CUB-200-2011, Oxford-Flowers, and Oxford-IIIT Pets) in our experiments. ### Paper Keywords ["Image classification", "colour bias", "incongruent evaluation"] ### Paper Content Assessing The Importance Of Colours For CNNs InObject RecognitionAditya Singh, Alessandro Bay, Andrea MirabileZebra AI, Zebra TechnologiesLondon, United Kingdom{firstname.lastname}@zebra.comAbstractHumans rely heavily on shapes as a primary cue for object recognition. As sec-ondary cues, colours and textures are also beneficial in this regard. Convolutionalneural networks (CNNs), an imitation of biological neural networks, have beenshown to exhibit conflicting properties. Some studies indicate that CNNs arebiased towards textures whereas, another set of studies suggests shape bias for aclassification task. However, they do not discuss the role of colours, implying itspossible humble role in the task of object recognition. In this paper, we empiricallyinvestigate the importance of colours in object recognition for CNNs. We areable to demonstrate that CNNs often rely heavily on colour information whilemaking a prediction. Our results show that the degree of dependency on colourstend to vary from one dataset to another. Moreover, networks tend to rely moreon colours if trained from scratch. Pre-training can allow the model to be lesscolour dependent. To facilitate these findings, we follow the framework oftendeployed in understanding role of colours in object recognition for humans. Weevaluate a model trained with congruent images (images in original colours eg. 
red strawberries) on congruent, greyscale, and incongruent images (images in unnatural colours eg. blue strawberries). We measure and analyse the network's predictive performance (top-1 accuracy) under these different stylisations. We utilise standard datasets of supervised image classification and fine-grained image classification in our experiments.

1 Introduction
Colours play a vital role in our day to day life. We utilise colours for visual identification [1], search [2], gaze guidance in natural scenes [3], etc. As an example, the importance of colour can be understood whilst identifying ripe fruits in a background of foliage [1]. Initially, it was widely believed that only shapes and not colours influence object recognition in humans [4, 5]. However, many studies [6, 7] indicate that colours do assist object recognition in humans. The findings by Tanaka and Presnell [6] show that colours facilitate recognition of high colour diagnostic objects (natural objects like fruits) but have little effect on low colour diagnostic objects (man-made objects like airplanes). Their experiments were based on a variation of the Stroop effect [8]. Human participants were asked to name objects in different colour schemes. They observed that naming of objects with congruent colours was much faster than naming incongruently coloured objects. Greyscale images served as a neutral medium and the response time for them was in between congruent and incongruent images. In a similar study conducted by Hagen et al. [9], investigating the role of colour in expert object recognition, similar findings were reported. Neural networks are models of machine learning designed to mimic the neurological working of a human brain [12]. Today, they are employed in many different fields solving numerous tasks [13-15]. Considering the widespread adoption and blackbox nature of neural networks, considerable studies have also been performed to understand their inner working [16-19].

2nd Workshop on Shared Visual Representations in Human and Machine Intelligence (SVRHM), NeurIPS 2020.

Figure 1: The texture and shape information is intact in original, greyscale and incongruent images. Additionally, negative images have a larger drift in distribution than incongruent images when measured against congruent images. Panels: (a) Congruent, (b) Greyscale, (c) Incongruent, (d) Negative [10], (e) Re-textured [11].

Zeiler and Fergus [16] illustrated the hierarchical nature of learnt features. Engilberge et al. [20] investigated the colour sensitivity of neural networks and observed that at the shallow end the neurons are more sensitive to colour information in an image. A number of existing studies highlight the nature of representations learnt by the network; however, they do so with conflicting results. One set of results shows that neural networks rely predominantly on shapes [21-25]. On the other hand, many more approaches oppose the theory of shape bias in CNNs [26, 10, 27-29]. Rather than primarily relying on shapes, a neural network's predictions are guided by the texture information of an image. Texture is referred to as "a function of the spatial variation in pixel intensities" [30, 31]. Gatys et al. [32] showed that a linear classifier based on texture representations of a neural network performs comparably to the original model. Similarly, Geirhos et al.
[11] demonstrated that an ImageNet [33]-trained model is biased towards texture.

The objective of our paper is to help bridge the gap between human perception and artificial intelligence, providing empirical experiments based on a classical neuroscience framework to exhaustively investigate the dependency of CNNs on colour information. We believe the majority of existing approaches do highlight the representation bias of CNNs but fail to address the role of colours. Bahng et al. [34] tend to the issue of colour bias in their experiments; however, they do so with the motivation of learning unbiased representations. Moreover, by either focusing on a small number of test images [21, 29] or a single dataset [11, 26], we are unable to observe a bigger picture.

We believe a fair approach to highlighting the importance of colour is by utilising the framework used by the psychophysical experiments of [1, 6, 9, 35]. We can easily observe the relevance of colours by comparing performances on congruent and greyscale images. Additionally, by comparing the performances on greyscale and incongruent images, we will be able to observe the effect of incorrect colour information (see Figure 4 for sample images).

In this paper, we evaluate the importance of colours for numerous datasets under 2 settings:
1. Local Information (Section 4.1): Evaluating the importance of colours when a CNN can only attend to small patches in a global shape agnostic manner.
2. Global Information (Section 4.2): For this setting, no such restriction is applied on the network; this corresponds to the standard approach of training a CNN.

Apart from these 2 modes of experimentation, we also evaluate a model under 4 different training schemes to replicate a typical training scenario for a classification task. We train a network (i) from scratch, (ii) with fine-tuning on pre-trained weights, (iii) with colour based augmentations (random hue, saturation, brightness, etc.), and (iv) with incongruent images to reduce colour dependency. We conclude in Section 5.

2 Data
In our study we have employed datasets from image classification and fine-grained visual classification. The datasets used are: CIFAR-100 [36], STL-10 [37], Tiny ImageNet [38], CUB-200-2011 [39], Oxford-Flowers [40], and Oxford-IIIT Pets [41]. Table 5 in the appendix provides some common statistics of the datasets used.

3 Method
A dataset D = {(x_i, y_i), i = 1, ..., N} is composed of images x_i ∈ R^{C×H×W} and their corresponding labels y_i. D_train and D_test denote the split of the dataset into train and test sets respectively. We convert D into different colour schemes as described below:
- Congruent Images (D_C): These are the images in original colours. All the subsequent transformations described below are applied to D_C.
- Greyscale Images (D_G): The congruent images converted into greyscale (luminance) images, G = 0.299r + 0.587g + 0.114b. We copy the single-channel greyscale image into 3 channels.
- Incongruent Images (D_I): The channels of the congruent images are switched to generate unnaturally coloured images. Formally, the default correspondence for colour channels is C[0] = red, C[1] = green, C[2] = blue. We switch the channels such that the new ordering represents C[0] = green, C[1] = blue, C[2] = red. The advantage of this is that, firstly, it preserves the texture and, secondly, the changes in distribution are more contained than for the approaches deployed by [10, 11]. For example, the Jensen-Shannon divergence [42] JS(D_C^train, D_G^train) = 0.06 and JS(D_C^train, D_I^train) = 0.1, whereas the same quantity between negative images (as in [10]) and D_C^train is 0.3 for the STL-10 dataset. More comparison is provided in Appendix A.2.
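As a concrete illustration of the three stylisations defined above, a minimal PyTorch sketch could look like this (the helper names are ours, not the authors' code):

```python
import torch

def to_greyscale(x: torch.Tensor) -> torch.Tensor:
    """D_G: luminance greyscale replicated to 3 channels. x: (3, H, W), RGB in [0, 1]."""
    lum = 0.299 * x[0] + 0.587 * x[1] + 0.114 * x[2]
    return lum.unsqueeze(0).repeat(3, 1, 1)

def to_incongruent(x: torch.Tensor) -> torch.Tensor:
    """D_I: channel switch so that new C[0] = green, C[1] = blue, C[2] = red."""
    return x[[1, 2, 0]]

# Evaluation views of one congruent image x_c:
# views = {"D_C": x_c, "D_G": to_greyscale(x_c), "D_I": to_incongruent(x_c)}
```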
More comparison is provided inappendix A.2.Figure 4 shows an example of these stylisations. As humans, we learn from our surroundings whichwe perceive in congruent colours. Since we are following the framework used in identifying role ofcolours in humans, we train CNNs only on congruent images ( DtrainC ) while evaluating the top-1accuracy (represented as Acc) on the test sets of different stylisations described above.4 Experiments4.1 Access to local informationBaker et al. [29] report that CNNs can represent local shapes however, fail to utilise it in a globalcontext. Similarly, to highlight absence of shape bias in a CNN, Geirhos et al. [11] utilise BagNets[ 28]to compare model performance under artistic stylisations. BagNets only have access to small patcheswithin an image due to its design and utilises bag-of-features based framework for making a prediction.It does not make use of spatial ordering of the local features hence making it suitable to compare therelevance of colours to local shape and texture information.We use BagNet-91which has a 99receptive field over the image and is built upon the Resnet-50architecture. We report the mean accuracy and standard deviation over 3runs. The network is trainedfrom scratch and the data augmentations are limited to random horizontal flips, random rotationand random cropping. The details on hyper-parameters to train the network are provided in thesupplementary document.4.1.1 ResultsFigure 2 lists the relative accuracies w.r.t DCfor different stylisations(DGDCandDIDC). When comparingAcc(DC) with corresponding DIandDG, we observe significant drop in performance. We can alsosee the varying nature of the gaps in performance across the datasets. This shows that the coloursdo influence a network but in varying degree. Moreover, for STL-10 and Oxofrd-IIIT Pets, if wecompare Acc(DC) to Acc(DG) we observe that the drop in accuracy is there but comparatively lessthan the other datasets. This suggests that the network also relies on non-colour features (such aslocal shapes, textures) to make a decision. Additionally, by comparing Acc( DG) and Acc(DI) wecan notice the consistently lower performance for the latter. This suggests that incorrect colourinformation does indeed harm the prediction accuracy.These results indicate that even though a CNN can represent local shape [ 29] or is biased towardstexture[11], colours plays an important part at a local setting.1Sourced from official github implementation3Figure 2: Performance of BagNet-9 on different test stylisations4.2 Access to global informationIn the previous experiment, we limited the view of the network to only attend to 99patches of theimage in a global shape agnostic fashion. Here, we use ResNet-18 which has no such restriction onits receptive field. It can thus tend to global shape information in the image.To assess the importance of colours we train a network in the following 4ways:1.Vanilla training : Similar to the setting in Section 4.1, we train the network from scratch.The data augmentations used are random rotation, random cropping and random horizontalflips.2.Fine-tuning : In practice, often an ImageNet [ 33] pre-trained network is fine tuned on thedataset at hand [ 43]. In this experiment, we follow this protocol keeping the augmentationstrategy identical to vanilla training.3.Fine-tuning with Augmentations :Colour augmentation : Often colour based augmentations are used in training [ 44,45]allowing the network to be colour invariant. 
All the training settings are identical to fine-tuning, except for the fact that we also add random colour jitter (hue, saturation, contrast, and brightness) to an image while training.
Channel switch pre-training: Many of the previous works studying shape and texture bias have imposed learning limitations by first training the network on stylised data [10, 11] and then fine-tuning on original images. This has been shown to improve the predictive performance of the model. Following this approach, we first start with an ImageNet pre-trained model (on congruent images). Then we fine-tune the network following the 'colour augmentation' protocol with randomised channel switching as an additional data augmentation method. We do this in order to emulate stylised pre-training [11]. After fine-tuning the model, we disable the colour augmentations ('colour jitter' and 'randomised channel switching') and fine-tune the network further only on congruent images.

All the hyper-parameter details are provided in the appendix (see Appendix A.4). We also provide vanilla training results for MobileNet-v2 and DenseNet-121 alongside ResNet-18 and BagNet-9 (see Appendix 4.3).

4.2.1 Results & Observations
The results are shown in Figure 3. Detailed results are available in the appendix (see Appendix A.3). We can draw the following observations from the results.

Figure 3: Top-1 D_test accuracies for different training strategies.

1. For vanilla and fine-tuned networks, Acc(D_C^test) > Acc(D_G^test) > Acc(D_I^test). This trend is similar to what existing studies report for object recognition by humans [9, 35]. However, the differences in CNN accuracies across stylisations are significantly larger. Apart from scoring human participants solely based on accuracy, their response time is also taken into account. For a CNN, there is no variation in the inference time as the architecture remains constant. However, it can be an interesting extension to understand the differences arising in the predicted estimate. For instance, many approaches utilise the predicted value of the winning category as a network's confidence in its prediction [46]. The aim of such a study would then be to observe the potential impact of colours on the confidence estimate.
2. Fine-tuning a pre-trained model is widely known to improve the learnt representations of a model and subsequently its accuracy. We observe the additional benefit of fine-tuning, which leads to better performance on greyscale and incongruent images, indicating lower dependency on colours.
3. Incorporating colour augmentations and channel-switching into training can enforce a model to rely even less on colours. But it does not improve the network's accuracy on congruent images.
4. The variability of cross-style performance is high across datasets. For example, in the vanilla training setting, CUB-200 shows a significantly low performance on greyscale when compared to Oxford-IIIT Pets. We make a similar observation when comparing STL-10 with CIFAR-100. One common property of STL-10 and Pets is that they consist of a relatively smaller number of classes (10 and 37 respectively) when compared to CIFAR-100 and CUB-200 (100 and 200 respectively). A direction for future work can be to investigate the relationship between the number of categories in the dataset and the colour dependency of a CNN. Apart from exploring the dependency on the number of categories, we can also investigate if this variation is dependent on the categories in the dataset. As humans, we rely more on colours for recognising natural objects than man-made objects [6].
This property isreferred to as colour diagnosticity. A similar observation if it exists for CNNs can be worthexploring.5(a) CIFAR-100 (b) STL-10 (c) Tiny ImageNet(d) CUB-200-2011 (e) Oxford-Flowers (f) Oxford-IIIT PetsFigure 4: Vanilla training results for different architectures4.3 Vanilla Performance Across ArchitecturesIn this experiment we include MobileNet-v2 [ 47] and DenseNet-121 [ 48] along with ResNet-18and BagNet-9 to compare their performance across different datasets. This way we can examine ifarchitectural differences play a role in colour bias.We report the results on different CNN architectures trained under the vanilla setting in Figure 4.The results show that different architectures display similar behaviour for colour importance acrossdatasets. On the congruent images, the networks perform the best whereas the performance is worstfor incongruent images. This shows that the underlying the architecture plays a less significant rolein driving the bias of a network towards colour. The importance of colour is more dependent on thetask at hand.5 ConclusionWe believe ours is the first work to recognise unattributed impact of colours to the shape/texturedriven research for understanding bias in CNNs. By adopting the psychophysical experiment forCNNs, we have provided empirical evidence to highlight high impact of colours. We showed that avariety of different CNNs show high colour dependency for the classification task. This dependencyappears to be tied to the dataset than the underlying architecture. By default, the networks are highlycolour dependent and this dependency can be reduced by utilizing pre-trained weights and employingvarious augmentations in training as showed in our work.References[1]Inês Bramão, Luís Faísca, Karl Magnus Petersson, and Alexandra Reis. The contributionof color to object recognition. In Ioannis Kypraios, editor, Advances in Object RecognitionSystems , chapter 4. IntechOpen, Rijeka, 2012. doi: 10.5772/34821. URL https://doi.org/10.5772/34821 .[2]Aave Hannus, Ronald van den Berg, Harold Bekkering, Jos B. T. M. Roerdink, and Frans W.Cornelissen. Visual search near threshold: Some features are more equal than others. Journalof Vision , 6(4):15–15, 07 2006. ISSN 1534-7362. doi: 10.1167/6.4.15. URL https://doi.org/10.1167/6.4.15 .6[3]Antje Nuthmann and George L. Malcolm. Eye guidance during real-world scene search: Therole color plays in central and peripheral vision. Journal of Vision , 16(2):3–3, 01 2016. ISSN1534-7362. doi: 10.1167/16.2.3. URL https://doi.org/10.1167/16.2.3 .[4]Irving Biederman and Ginny Ju. Surface versus edge-based determinants of visual recognition.Cognitive Psychology , 20(1):38 – 64, 1988. ISSN 0010-0285. doi: https://doi.org/10.1016/0010-0285(88)90024-2. URL http://www.sciencedirect.com/science/article/pii/0010028588900242 .[5]Irving Biederman. Recognition-by-components: a theory of human image understanding.Psychological review , 94 2:115–147, 1987.[6]James W. Tanaka and Lynn M. Presnell. Color diagnosticity in object recognition. Perception& Psychophysics , 61(6):1140–1153, Aug 1999. ISSN 1532-5962. doi: 10.3758/BF03207619.URL https://doi.org/10.3758/BF03207619 .[7]Galit Naor-Raz, Michael J Tarr, and Daniel Kersten. Is color an intrinsic property of objectrepresentation? Perception , 32(6):667–680, 2003. doi: 10.1068/p5050. URL https://doi.org/10.1068/p5050 . PMID: 12892428.[8]J. R. Stroop. Studies of interference in serial verbal reactions. Journal of ExperimentalPsychology , 18(6):643–662, 1935. ISSN 0022-1015(Print). 
doi: 10.1037/h0054651. URLhttps://doi.org/10.1037/h0054651 .[9]Simen Hagen, Quoc C. Vuong, Lisa S. Scott, Tim Curran, and James W. Tanaka. The role ofcolor in expert object recognition. Journal of Vision , 14(9):9–9, 08 2014. ISSN 1534-7362. doi:10.1167/14.9.9. URL https://doi.org/10.1167/14.9.9 .[10] Hossein Hosseini, Baicen Xiao, Mayoore Jaiswal, and Radha Poovendran. Assessing shapebias property of convolutional neural networks. In The IEEE Conference on Computer Visionand Pattern Recognition (CVPR) Workshops , June 2018.[11] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, andWieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape biasimproves accuracy and robustness. In International Conference on Learning Representations ,2019. URL https://openreview.net/forum?id=Bygh9j09KX .[12] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks , 61:85 – 117, 2015. ISSN 0893-6080. doi: https://doi.org/10.1016/j.neunet.2014.09.003. URLhttp://www.sciencedirect.com/science/article/pii/S0893608014002135 .[13] O. Abdel-Hamid, A. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu. Convolutional neuralnetworks for speech recognition. IEEE/ACM Transactions on Audio, Speech, and LanguageProcessing , 22(10):1533–1545, 2014.[14] Yoav Goldberg. Neural network methods for natural language processing. Syn-thesis Lectures on Human Language Technologies , 10(1):1–309, 2017. doi:10.2200/S00762ED1V01Y201703HLT037. URL https://doi.org/10.2200/S00762ED1V01Y201703HLT037 .[15] Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deepdriving: Learning affor-dance for direct perception in autonomous driving. In The IEEE International Conference onComputer Vision (ICCV) , December 2015.[16] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks.In David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision –ECCV 2014 , pages 818–833, Cham, 2014. Springer International Publishing. ISBN 978-3-319-10590-1.[17] Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting andunderstanding deep neural networks. Digital Signal Processing , 73:1 – 15, 2018. ISSN 1051-2004. doi: https://doi.org/10.1016/j.dsp.2017.10.011. URL http://www.sciencedirect.com/science/article/pii/S1051200417302385 .7[18] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Good-fellow, and Rob Fergus. Intriguing properties of neural networks. In International Conferenceon Learning Representations , 2014. URL http://arxiv.org/abs/1312.6199 .[19] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks:Visualising image classification models and saliency maps. In Workshop at InternationalConference on Learning Representations , 2014.[20] M. Engilberge, E. Collins, and S. Süsstrunk. Color representation in deep neural networks. In2017 IEEE International Conference on Image Processing (ICIP) , pages 2786–2790, 2017.[21] Jonas Kubilius, Stefania Bracci, and Hans P. Op de Beeck. Deep neural networks as a com-putational model for human shape sensitivity. PLOS Computational Biology , 12(4):1–26,04 2016. doi: 10.1371/journal.pcbi.1004896. URL https://doi.org/10.1371/journal.pcbi.1004896 .[22] Astrid A. Zeman, J. Brendan Ritchie, Stefania Bracci, and Hans Op de Beeck. Orthogonalrepresentations of object shape and category in deep convolutional neural networks and humanvisual cortex. 
Scientific Reports , 10(1):2453, Feb 2020. ISSN 2045-2322. doi: 10.1038/s41598-020-59175-0. URL https://doi.org/10.1038/s41598-020-59175-0 .[23] Samuel Ritter, David G. T. Barrett, Adam Santoro, and Matthew M Botvinick. Cognitivepsychology for deep neural networks: A shape bias case study. ArXiv , abs/1706.08606, 2017.[24] Reuben Feinman and Brenden M. Lake. Learning inductive biases with simple neural networks.ArXiv , abs/1802.02745, 2018.[25] Kamila M. Jozwik, Nikolaus Kriegeskorte, Katherine R. Storrs, and Marieke Mur. Deepconvolutional neural networks outperform feature-based but not categorical models in ex-plaining object similarity judgments. Frontiers in Psychology , 8:1726, 2017. ISSN 1664-1078. doi: 10.3389/fpsyg.2017.01726. URL https://www.frontiersin.org/article/10.3389/fpsyg.2017.01726 .[26] Gaurav Malhotra and Jeff Bowers. The contrasting roles of shape in human vision and convolu-tional neural networks. In CogSci , 2019.[27] Katherine L. Hermann and Simon Kornblith. Exploring the origins and prevalence of texturebias in convolutional neural networks, 2019.[28] Wieland Brendel and Matthias Bethge. Approximating cnns with bag-of-local-features modelsworks surprisingly well on imagenet. International Conference on Learning Representations ,2019. URL https://openreview.net/pdf?id=SkfMWhAqYQ .[29] Nicholas Baker, Hongjing Lu, Gennady Erlikhman, and Philip J. Kellman. Deep convolutionalnetworks do not classify based on global object shape. PLOS Computational Biology , 14(12):1–43, 12 2018. doi: 10.1371/journal.pcbi.1006613. URL https://doi.org/10.1371/journal.pcbi.1006613 .[30] Mihran Tuceryan and Anil K. Jain. Texture Analysis , page 235–276. World Scientific PublishingCo., Inc., USA, 1993. ISBN 9810211368.[31] R. M. Haralick. Statistical and structural approaches to texture. Proceedings of the IEEE , 67(5):786–804, 1979.[32] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Texture synthesis using convolutionalneural networks. In Proceedings of the 28th International Conference on Neural InformationProcessing Systems - Volume 1 , NIPS’15, page 262–270, Cambridge, MA, USA, 2015. MITPress.[33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-ScaleHierarchical Image Database. In CVPR09 , 2009.[34] Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, and Seong Joon Oh. Learningde-biased representations with biased representations, 2019.8[35] David J. Therriault, Richard H. Yaxley, and Rolf A. Zwaan. The role of color diagnosticity inobject recognition and representation. Cognitive Processing , 10:335–342, 2009.[36] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.[37] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsuper-vised feature learning. In Proceedings of the Fourteenth International Conference on ArtificialIntelligence and Statistics , Proceedings of Machine Learning Research, pages 215–223. PMLR,2011. URL http://proceedings.mlr.press/v15/coates11a.html .[38] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, ZhihengHuang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei.ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision(IJCV) , 115(3):211–252, 2015.[39] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. Caltech-UCSDBirds 200. 
Technical report, California Institute of Technology, 2010.[40] Maria-Elena Nilsback and Andrew Zisserman. A visual vocabulary for flower classification. InIEEE Conference on Computer Vision and Pattern Recognition , volume 2, pages 1447–1454,2006.[41] Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V . Jawahar. Cats and dogs. InIEEE Conference on Computer Vision and Pattern Recognition , 2012.[42] B. Fuglede and F. Topsoe. Jensen-shannon divergence and hilbert space embedding. InInternational Symposium onInformation Theory, 2004. ISIT 2004. Proceedings. , pages 31–,2004.[43] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are featuresin deep neural networks? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence,and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27 ,pages 3320–3328. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks.pdf .[44] Andrew G. Howard. Some improvements on deep convolutional neural network based imageclassification. CoRR , abs/1312.5402, 2014.[45] C. Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V . Vanhoucke,and A. Rabinovich. Going deeper with convolutions. In 2015 IEEE Conference on ComputerVision and Pattern Recognition (CVPR) , pages 1–9, 2015.[46] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neuralnetworks, 2017.[47] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen. Mobilenetv2: Inverted residualsand linear bottlenecks. In 2018 IEEE/CVF Conference on Computer Vision and PatternRecognition , pages 4510–4520, June 2018. doi: 10.1109/CVPR.2018.00474. URL https://ieeexplore.ieee.org/document/8578572 .[48] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q Weinberger. Densely connectedconvolutional networks. In Proceedings of the IEEE Conference on Computer Vision andPattern Recognition , 2017.9A AppendixA.1 DataCategory Dataset #ClassesjDtrainj jDtestjImage sizeImage classificationCIFAR-100[36] 100 50 ;000 10 ;000 3232STL-10[37] 10 5 ;000 8 ;000 9696Tiny ImageNet[38] 200 100 ;000 5 ;000 6464Fine-Grained Visual ClassificationCUB-200[39] 200 5 ;994 5 ;794 224224Oxford-Flowers[40]2102 6 ;149 2 ;040 224224Oxford-IIIT Pets[41] 37 3 ;680 3 ;669 224224Table 1: Dataset statisticsA.2 Jensen-Shannon MeasureTo reiterate the notations used, a dataset D=f(xi; yi); i= 1; : : : ; Ngis composed of imagesxi2RCHWand their corresponding labels yi.Dtrain,Dtestdenotes the split of the dataset intotrain and test sets respectively. Jensen Shannon divergence between two stylisations of the samedataset is defined as:JS(Dtrain1;Dtrain2) =1jDtrainjXi2Dtrain1jCjXj2CJS(T1(xi)[j]; T2(xi)[j])where, Tiis the transformation corresponding to Diand indexing [j]returns the normalised intensityhistogram for the jthchannel.The corresponding results in Table 2 show that switching channels is a gentler transformation thancomposing negatives in preserving original shape and texture. Greyscale is consistently with the leastamount of JS divergence and can suggest as to why consistently Acc (DG)>Acc(DI).Table 2: JS measure between stylisationsDatasets JS(DtrainC;DtrainG)JS(DtrainC;DtrainI)JS(DtrainC;DtrainNeg)CIFAR-100 0:07 0 :12 0 :33Tiny ImageNet 0:06 0 :1 0 :29STL10 0:06 0 :1 0 :3CUB-200 0:1 0 :15 0 :34Oxford-Flowers 0:09 0 :13 0 :32Oxford-IIIT Pets 0:04 0 :07 0 :27A.3 Detailed Test resultsA.4 Training detailsWe utilised Pytorch framework for all of our experiments. 
We list the detailed hyper-parametersbelow. Missing key-values in table ican be found via. recursive search in table i1. All thefine-grained datasets utilise similar training hyper-parameters and hence we have provided only thedetails for CUB-200.10Table 3:Test accuracies(in %) for different training strategies on classification datasets.Training approach Datasets Acc( DC) Acc(DG) Acc(DI)VanillaCIFAR-100 59:100:11 29 :900:03 22 :860:16STL-10 70:301:67 64 :132:13 52 :471:29STL-10 40:880:10 19 :400:02 15 :471:31Fine-tuningCIFAR-100 61 :390:12 34 :130:16 31 :530:55STL-10 88 :100:30 80 :590:39 73 :450:69STL-10 47 :520:09 24 :380:21 21 :960:19Colour augmentationCIFAR-100 60:060:29 36 :680:82 34 :670:24STL-10 88:040:23 83 :290:46 80 :880:06STL-10 47:420:39 27 :411:10 28 :270:09Channel switch pre-trainingCIFAR-100 60:240:22 37 :760:2645 :840:44STL-10 87:200:45 84 :010:1985 :230:21Tiny Imagenet 44:200:16 25 :750:46 32 :110:02Table 4:Test accuracies(in %) for different training strategies on fine-grained datasets.Training approach Datasets Acc( DC) Acc(DG) Acc(DI)VanillaCUB 61:390:03 12 :400:58 6 :030:23Oxford-Flowers 92:030:72 16 :884:81 9 :681:42OP 68:610:63 47 :750:80 31 :770:15Fine-tuningCUB 77 :390:40 30 :990:36 17 :622:30Oxford-Flowers 97:370:03 64 :771:42 56 :273:32OP 88 :510:86 72 :061:15 61 :592:11Colour augmentationCUB 75:070:09 38 :851:33 34 :090:15Oxford-Flowers 97 :500:14 78 :820:67 76 :371:82Pets 88:150:32 72 :860:05 68 :221:04Channel switch pre-trainingCUB 71:50:67 45 :410:3269 :930:57Oxford-Flowers 97:150:06 85 :441:2496 :170:21OP 87:161:04 77 :651:2985 :771:1111Table 5: Training details for CIFAR-100Approach Key ValueCommonModels Bagnet-9, Resnet-18, Densenet-121, Mobilenet-v2Image size 3232Train aug. Random(rotation, horizontal flip), standardisationTest aug. StandardisationBatch size 128Optimiser SGDLR decay rate 0:5VanillaEpochs 200LR 0:1Train aug. CommonLR decay epochs [50;100;150]Fine-tuningPre-trained weights ImageNetLR 0:01Epochs 30LR decay epochs [15]Colour augmentationPre-trained weights ImageNetTrain aug. Common + Random colour jittersLR 0:01Epochs 30LR decay epochs [15]Incongruent TrainingPre-trained weights ImageNetTrain aug. Common + Random (colour jitters, channel switching)Finetuned with Random(rotation, horizontal flip)LR 0:01Epochs 30;30LR decay epochs [15];[15]Table 6: Training details for STL-10Approach Element ValueCommonImage size 9696Batch size 64VanillaEpochs 200LR 0:1LR decay epochs [40;80;120;160]Fine-tuningEpochs 50LR 0:01LR decay epochs [15;30;45]Colour augmentationEpochs 50LR 0:01LR decay epochs [15;30;45]Incongruent TrainingEpochs 50;50LR 0:01LR decay epochs [15;30;45];[15;30;45]12Table 7: Training details for Tiny ImageNetApproach Element ValueCommonImage size 6464Batch size 128VanillaEpochs 150LR 0:1LR decay epochs [30;60;90;120]Fine-tuningEpochs 50LR 0:01LR decay epochs [15;30;45]Colour augmentationEpochs 50LR 0:01LR decay epochs [15;30;45]Incongruent TrainingEpochs 50;50LR 0:01LR decay epochs [15;30;45];[15;30;45]Table 8: Training details for CUB-200Approach Key ValueCommonImage size 224224Train aug. Random(rotation, horizontal flip, crop), standardisationTest aug. center crop(224), standardisationBatch size 32Optimiser SGDLR decay rate 0:5VanillaEpochs 200LR 0:1Train aug. CommonLR decay epochs [50;100;150]Fine-tuningPre-trained weights ImageNetLR 0:01Train aug. CommonEpochs 40LR decay epochs [15;30]Colour augmentationPre-trained weights ImageNetLR 0:01Train aug. 
A.3 Detailed Test results

Table 3: Test accuracies (in %) for different training strategies on classification datasets (columns: Acc(D_C); Acc(D_G); Acc(D_I))
Vanilla:
  CIFAR-100: 59.10±0.11; 29.90±0.03; 22.86±0.16
  STL-10: 70.30±1.67; 64.13±2.13; 52.47±1.29
  Tiny ImageNet: 40.88±0.10; 19.40±0.02; 15.47±1.31
Fine-tuning:
  CIFAR-100: 61.39±0.12; 34.13±0.16; 31.53±0.55
  STL-10: 88.10±0.30; 80.59±0.39; 73.45±0.69
  Tiny ImageNet: 47.52±0.09; 24.38±0.21; 21.96±0.19
Colour augmentation:
  CIFAR-100: 60.06±0.29; 36.68±0.82; 34.67±0.24
  STL-10: 88.04±0.23; 83.29±0.46; 80.88±0.06
  Tiny ImageNet: 47.42±0.39; 27.41±1.10; 28.27±0.09
Channel switch pre-training:
  CIFAR-100: 60.24±0.22; 37.76±0.26; 45.84±0.44
  STL-10: 87.20±0.45; 84.01±0.19; 85.23±0.21
  Tiny ImageNet: 44.20±0.16; 25.75±0.46; 32.11±0.02

Table 4: Test accuracies (in %) for different training strategies on fine-grained datasets (columns: Acc(D_C); Acc(D_G); Acc(D_I))
Vanilla:
  CUB-200: 61.39±0.03; 12.40±0.58; 6.03±0.23
  Oxford-Flowers: 92.03±0.72; 16.88±4.81; 9.68±1.42
  Oxford-IIIT Pets: 68.61±0.63; 47.75±0.80; 31.77±0.15
Fine-tuning:
  CUB-200: 77.39±0.40; 30.99±0.36; 17.62±2.30
  Oxford-Flowers: 97.37±0.03; 64.77±1.42; 56.27±3.32
  Oxford-IIIT Pets: 88.51±0.86; 72.06±1.15; 61.59±2.11
Colour augmentation:
  CUB-200: 75.07±0.09; 38.85±1.33; 34.09±0.15
  Oxford-Flowers: 97.50±0.14; 78.82±0.67; 76.37±1.82
  Oxford-IIIT Pets: 88.15±0.32; 72.86±0.05; 68.22±1.04
Channel switch pre-training:
  CUB-200: 71.5±0.67; 45.41±0.32; 69.93±0.57
  Oxford-Flowers: 97.15±0.06; 85.44±1.24; 96.17±0.21
  Oxford-IIIT Pets: 87.16±1.04; 77.65±1.29; 85.77±1.11

A.4 Training details
We utilised the PyTorch framework for all of our experiments. We list the detailed hyper-parameters below. Missing key-values in table i can be found via recursive search in table i−1. All the fine-grained datasets utilise similar training hyper-parameters and hence we have provided only the details for CUB-200.

Table 5: Training details for CIFAR-100
Common: Models BagNet-9, ResNet-18, DenseNet-121, MobileNet-v2; Image size 32×32; Train aug. Random(rotation, horizontal flip), standardisation; Test aug. standardisation; Batch size 128; Optimiser SGD; LR decay rate 0.5
Vanilla: Epochs 200; LR 0.1; Train aug. Common; LR decay epochs [50, 100, 150]
Fine-tuning: Pre-trained weights ImageNet; LR 0.01; Epochs 30; LR decay epochs [15]
Colour augmentation: Pre-trained weights ImageNet; Train aug. Common + Random colour jitters; LR 0.01; Epochs 30; LR decay epochs [15]
Incongruent Training: Pre-trained weights ImageNet; Train aug. Common + Random (colour jitters, channel switching), finetuned with Random(rotation, horizontal flip); LR 0.01; Epochs 30, 30; LR decay epochs [15], [15]

Table 6: Training details for STL-10
Common: Image size 96×96; Batch size 64
Vanilla: Epochs 200; LR 0.1; LR decay epochs [40, 80, 120, 160]
Fine-tuning: Epochs 50; LR 0.01; LR decay epochs [15, 30, 45]
Colour augmentation: Epochs 50; LR 0.01; LR decay epochs [15, 30, 45]
Incongruent Training: Epochs 50, 50; LR 0.01; LR decay epochs [15, 30, 45], [15, 30, 45]

Table 7: Training details for Tiny ImageNet
Common: Image size 64×64; Batch size 128
Vanilla: Epochs 150; LR 0.1; LR decay epochs [30, 60, 90, 120]
Fine-tuning: Epochs 50; LR 0.01; LR decay epochs [15, 30, 45]
Colour augmentation: Epochs 50; LR 0.01; LR decay epochs [15, 30, 45]
Incongruent Training: Epochs 50, 50; LR 0.01; LR decay epochs [15, 30, 45], [15, 30, 45]

Table 8: Training details for CUB-200
Common: Image size 224×224; Train aug. Random(rotation, horizontal flip, crop), standardisation; Test aug. center crop(224), standardisation; Batch size 32; Optimiser SGD; LR decay rate 0.5
Vanilla: Epochs 200; LR 0.1; Train aug. Common; LR decay epochs [50, 100, 150]
Fine-tuning: Pre-trained weights ImageNet; LR 0.01; Train aug. Common; Epochs 40; LR decay epochs [15, 30]
Colour augmentation: Pre-trained weights ImageNet; LR 0.01; Train aug. Common + Random colour jitters; Epochs 40; LR decay epochs [15, 30]
Incongruent Training: Pre-trained weights ImageNet; Train aug. Common + Random (colour jitters, channel switching), finetuned with Common aug.; LR 0.01; Epochs 40, 40; LR decay epochs [15, 30], [15, 30]
<|im_end|> <|im_start|>assistant ### Review Title Do humans and deep networks rely on colour information similarly? ### Review Text I find this line of investigation, studying the impact of colour and its representation in deep networks, very interesting and important for the human- and machine-vision communities. It would be nice to read more about the shared representation between humans and machines in the conclusion/discussion section, linking (more strongly) the findings of this article to human colour vision. Pro: This paper is clearly written and easy to read. Con: The number of tasks and studied datasets is limited. Major comments: - "Neural networks are models of machine learning designed to mimic the neurological working of a human brain [12]." I agree that there are many similarities between biological and artificial neural networks; however, I'm not sure to what extent the networks were designed to mimic the human brain. For instance, very relevant to this study, the human visual system consists of three processing channels (luminance, red-green and yellow-blue) that coexist in parallel at least to a large part of V1. Contrary to this, the chromatic information is often (almost always) collapsed in the first convolutional layer. - The authors might be interested in the following articles: + Rafegas, I. and Vanrell, M., 2018. Color encoding in biologically-inspired convolutional neural networks. Vision Research, 151, pp.7-17. + Flachot, A. and Gegenfurtner, K.R., 2018. Processing of chromatic information in a deep convolutional neural network. JOSA A, 35(4), pp.B334-B346. These articles show that colour opponency emerges in deep networks. + Akbarinia, Arash, and Raquel Gil-Rodríguez. "Deciphering image contrast in object classification deep networks." Vision Research 173 (2020): 61-76. This article shows that edges (contours of shapes) are of importance to object classification networks. - It would be nice to analyse the differences between C100, TIN and S10 to better understand why S10 is less dependent on colour: + Quantitative analysis, for instance, of the distribution of colour in those datasets. + Visualising a few images from each dataset to facilitate their comprehension for readers. Minor comments: - It would be nice to have references for the used datasets in Table 1, for reproduction purposes. - I think the first paragraph of Section 4.1 would be more appropriately placed in the introduction, where the concept of shape vs. texture is compared. - What is VA in P4-L113? - It would be nice to have consistency in the order of datasets in Tables 1-4. ### Review Rating 7: Good paper, accept ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
SygT21SFvB
ICLR.cc/2020/Conference
2020
Towards Understanding Generalization in Gradient-Based Meta-Learning
["Simon Guiroy", "Vikas Verma", "Christopher J. Pal"]
In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes. We experimentally demonstrate that as meta-training progresses, the meta-test solutions, obtained by adapting the meta-train solution of the model to new tasks via a few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution. We also show that those meta-test solutions become flatter even as generalization starts to degrade, thus providing experimental evidence against the correlation between generalization and flat minima in the paradigm of gradient-based meta-learning. Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions, starting from the same meta-train solution. We also show that the coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at the meta-train solution, is correlated with generalization.
["meta-learning", "objective landscapes"]
ABSTRACT
In this work we study generalization of neural networks in gradient-based meta-learning by analyzing various properties of the objective landscapes. We experimentally demonstrate that as meta-training progresses, the meta-test solutions, obtained by adapting the meta-train solution of the model to new tasks via a few steps of gradient-based fine-tuning, become flatter, lower in loss, and further away from the meta-train solution. We also show that those meta-test solutions become flatter even as generalization starts to degrade, thus providing experimental evidence against the correlation between generalization and flat minima in the paradigm of gradient-based meta-learning. Furthermore, we provide empirical evidence that generalization to new tasks is correlated with the coherence between their adaptation trajectories in parameter space, measured by the average cosine similarity between task-specific trajectory directions, starting from the same meta-train solution. We also show that the coherence of meta-test gradients, measured by the average inner product between the task-specific gradient vectors evaluated at the meta-train solution, is correlated with generalization.

1 INTRODUCTION
To address the problem of few-shot learning, many meta-learning approaches have been proposed recently (Finn et al., 2017), (Ravi and Larochelle, 2017), (Rothfuss et al., 2018), (Oreshkin et al., 2018) and (Snell et al., 2017), among others. In this work, we take steps towards understanding the characteristics of the landscapes of the loss functions, and their relation to generalization, in the context of gradient-based few-shot meta-learning. While we are interested in understanding the properties of optimization landscapes that are linked to generalization in gradient-based meta-learning in general, we focus our experimental work here on a setup that follows the recently proposed Model Agnostic Meta-Learning (MAML) algorithm (Finn et al., 2017). The MAML algorithm is a good candidate for studying gradient-based meta-learning because of its independence from the underlying network architecture.

Our main insights and contributions can be summarized as follows:
1. As gradient-based meta-training progresses:
   - the adapted meta-test solutions become flatter on average, while the opposite occurs when using a finetuning baseline;
   - the adapted final solutions reach lower average support loss values, which never increase, while the opposite occurs when using a finetuning baseline.
2. When generalization starts to degrade due to overtraining, meta-test solutions keep getting flatter, implying that, in the context of gradient-based meta-learning, flatness of minima is not correlated with generalization to new tasks.
3. We empirically show that generalization to new tasks is correlated with the coherence between their adaptation trajectories, measured by the average cosine similarity between trajectory directions. Also correlated with generalization is the coherence between meta-test gradients, measured by the average inner product between meta-test gradient vectors evaluated at the meta-train solution.
We also show that this metric is correlated with generalization for few-shot regression tasks where the model must learn to fit sine function curves. Furthermore, based on these observations, we take initial steps to propose a regularizer for MAML-based training and provide experimental evidence for its effectiveness.

2 RELATED WORK
There have been extensive research efforts on studying the optimization landscapes of neural networks in the standard supervised learning setup. Such work has focused on the presence of saddle points versus local minima in high dimensional landscapes (Pascanu et al., 2014), (Dauphin et al., 2014), the role of overparametrization in generalization (Freeman and Bruna, 2016), and loss barriers between minima and their connectivity along low-loss paths (Garipov et al., 2018); (Draxler et al., 2018), to name a few examples.

One hypothesis that has gained popularity is that the flatness of minima of the loss function found by stochastic gradient-based methods results in good generalization (Hochreiter and Schmidhuber, 1997); (Keskar et al., 2016). (Xing et al., 2018) and (Li et al., 2017) measure flatness by the spectral norm of the hessian of the loss, with respect to the parameters, at a given point in the parameter space. Both (Smith and Le, 2017) and (Jastrzebski et al., 2017) consider the determinant of the hessian of the loss, with respect to the parameters, as the measure of flatness. For all of the work on flatness of minima cited above, the authors have found that flatter minima correlate with better generalization.

In contrast to previous work on understanding the objective landscapes of neural networks in the classical supervised learning paradigm, in our work we explore the properties of objective landscapes in the setting of gradient-based meta-learning.

3 GRADIENT-BASED META-LEARNING
We consider the meta-learning scenario where we have a distribution over tasks p(T), and a model f parametrized by θ, that must learn to adapt to tasks T_i sampled from p(T). The model is trained on a set of training tasks {T_i}^train and evaluated on a set of testing tasks {T_i}^test, all drawn from p(T). In this work we only consider classification tasks, with {T_i}^train and {T_i}^test using disjoint sets of classes to constitute their tasks. Here we consider the setting of k-shot learning, that is, when f adapts to a task T_i^test, it only has access to a set of few support samples D_i = {(x_i^(1), y_i^(1)), ..., (x_i^(k), y_i^(k))} drawn from T_i^test. We then evaluate the model's performance on T_i^test using a new set of target samples D'_i. By gradient-based meta-learning, we imply that f is trained using information about the gradient of a certain loss function L(f(D_i; θ)) on the tasks. Throughout this work the loss function is the cross-entropy between the predicted and true class.

3.1 MODEL-AGNOSTIC META-LEARNING (MAML)
MAML learns an initial set of parameters θ such that on average, given a new task T_i^test, only a few samples are required for f to learn and generalize well to that task. During a meta-training iteration s, where the current parametrization of f is θ_s, a batch of n training tasks is sampled from p(T). For each task T_i, a set of support samples D_i is drawn and f adapts to T_i by performing T steps of full-batch gradient descent on L(f(D_i; θ)) w.r.t. θ, obtaining the adapted solution θ̃_i:

θ̃_i = θ_s − α Σ_{t=0}^{T−1} ∇_θ L(f(D_i; θ_i^(t)))   (1)

where θ_i^(t) = θ_i^(t−1) − α ∇_θ L(f(D_i; θ_i^(t−1))) and all adaptations are independent and start from θ_s, i.e. θ_i^(0) = θ_s, ∀i. Then from each T_i, a set of target samples D'_i is drawn, and the adapted meta-training solution θ_{s+1} is obtained by averaging the target gradients, such that:

θ_{s+1} = θ_s − β (1/n) Σ_{i=1}^{n} ∇_θ L(f(D'_i; θ̃_i))   (2)
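A minimal sketch of the update rules in Eqs. 1-2; for simplicity it implements the first-order variant discussed just below, where the second-order terms of the meta-gradient are dropped (names and structure are ours, not the authors' code):

```python
import torch

def maml_outer_step(model, loss_fn, task_batch, alpha=0.01, beta=0.001, T=5):
    """One meta-iteration of first-order MAML (Eqs. 1-2, second-order terms dropped).

    task_batch: list of ((x_support, y_support), (x_target, y_target)) tuples.
    """
    theta_s = [p.detach().clone() for p in model.parameters()]
    meta_grads = [torch.zeros_like(p) for p in theta_s]

    for (x_s, y_s), (x_t, y_t) in task_batch:
        # Inner loop (Eq. 1): T steps of full-batch gradient descent from theta_s.
        for p, p0 in zip(model.parameters(), theta_s):
            p.data.copy_(p0)
        for _ in range(T):
            loss = loss_fn(model(x_s), y_s)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            for p, g in zip(model.parameters(), grads):
                p.data.sub_(alpha * g)
        # Target gradient at the adapted solution (the term averaged in Eq. 2).
        target_loss = loss_fn(model(x_t), y_t)
        grads = torch.autograd.grad(target_loss, list(model.parameters()))
        for mg, g in zip(meta_grads, grads):
            mg.add_(g / len(task_batch))

    # Outer update (Eq. 2): theta_{s+1} = theta_s - beta * averaged target gradient.
    for p, p0, mg in zip(model.parameters(), theta_s, meta_grads):
        p.data.copy_(p0 - beta * mg)
```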
As one can see in Eq. 1 and Eq. 2, deriving the meta-gradients implies computing second-order derivatives, which can come at a significant computational expense. The authors introduced a first-order approximation of MAML, where these second-order derivatives are omitted, and we refer to that other algorithm as First-Order MAML.

3.2 FINETUNING BASELINE
For the finetuning baseline, the model is trained in a standard supervised learning setup: the model is trained to classify all the classes from the training split using a stochastic gradient-based optimization algorithm, its output layer size being equal to the number of meta-train classes. During evaluation on meta-test tasks, the model's final layer (fully-connected) is replaced by a layer with the appropriate size for the given meta-test task (e.g. for 5-way classification, the output layer has five logits), with its parameter values initialized to random values or with another initialization algorithm; then all the model parameters are optimized on the meta-test task, just like for the other meta-learning algorithms.
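The head swap for the finetuning baseline could be sketched as follows (assuming a torchvision-style model that exposes a `.fc` attribute; the attribute name is our assumption, not from the paper):

```python
import torch.nn as nn

def prepare_for_meta_test(model, n_way: int):
    """Replace the final fully-connected layer so the output matches the meta-test
    task (e.g. 5 logits for 5-way classification), randomly initialized. All
    parameters are then finetuned on the task's support set, not just the head."""
    in_features = model.fc.in_features  # '.fc' assumed, as in torchvision ResNets
    model.fc = nn.Linear(in_features, n_way)
    for p in model.parameters():
        p.requires_grad_(True)
    return model
```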
From here on, we drop the superscript test from our notation, aswe exclusively deal with objective landscapes involving meta-test tasks Ti, unless specified otherwise.3Under review as a conference paper at ICLR 20204.1 F LATNESS OF MINIMAWe start our analysis of the objective loss landscapes by measuring properties of the landscapes atthe adapted meta-test solutions ~i. More concretely, we measure the curvature of the loss at thoseminima, and whether flatter minima are indicative of better generalization for the meta-test tasks.Aftersmeta-training iterations, we have a model fparametrized by s. During the meta-test, fmust adapt to several meta-test tasks Tiindependently. For a given Ti,fadapts by performing afew steps of full-batch gradient descent on the objective landscape L(f(Di;)), using the set ofsupport samplesDi, and reaches an adapted solution ~i. Here we are interested in the curvatureofL(f(Di;~i)), that is, the objective landscape when evaluated at such solution, and whether onaverage, flatter solutions favour better generalization. Considering the hessian matrix of this loss w.r.tthe model parameters, defined as H(Di;~i):=r2L(f(Di;~i)), we measure the curvature of theloss surface around ~iusing the spectral norm kkof this hessian matrix:H(Di;~i)=rmaxH(Di;~i)HH(Di;~i)=max(H(Di;~i)) (3)as illustrated in Figure 1 (1). (We get kH(Di;~i)k=max(H(Di;~i))sinceH(Di;~i)is realand symmetric.)We define the average loss curvature for meta-test solutions ~i, obtained from a meta-train solutions, as:ETip(T)[kH(Di;~i)k] (4)Note that we do not measure curvature of the loss at s, sincesis not a point of convergence of ffor the meta-test tasks. In fact, at s, since the model has not been adapted to the unseen meta-testclasses, the target accuracy for the meta-test tasks is random chance on average. Thus, measuringthe curvature of the meta-test support loss at sdoes not relate to the notion of flatness of minima.Instead, in this work we characterize the meta-train solution sby measuring the average innerproduct between the meta-test gradients, as explained later in Section 4.3.4.2 C OHERENCE OF ADAPTATION TRAJECTORIESOther than analyzing the objective landscapes at the different minima reached when fadapts to newtasks, we also analyze the adaptation trajectories to those new tasks, and whether some similaritybetween them can be indicative of good generalization. Let’s consider a model fadapting to a taskTiby starting from s, moving in parameter space by performing Tsteps of full-batch gradient descentwithrL(f(Di;))until reaching ~i. We define the adaptation trajectory to a task Tistarting fromsas the sequence of iterates (s;(1)i;(2)i;:::;~i). To simplify the analyses and alleviate some of thechallenges in dealing with trajectories of multiple steps in a parameter space of very high dimension,we define the trajectory displacement vector (~is). We define a trajectory direction vector ~iasthe unit vector: ~i:= (~is)=k~isk2.We define a metric for the coherence of adaptation trajectories to meta-test tasks Ti, starting from ameta-train solution s, as the average inner product between their direction vectors:ETi;Tjp(T)[~Ti~j] (5)The inner product between two meta-test trajectory direction vectors is illustrated in Figure 1 (2).4.3 CHARACTERIZING META -TRAIN SOLUTIONS BY THE AVERAGE INNER PRODUCT BETWEENMETA -TEST GRADIENTSIn addition to characterizing the adaptation trajectories at meta-test time, we characterize the objectivelandscapes at the meta-train solutions s. 
4.2 COHERENCE OF ADAPTATION TRAJECTORIES
Other than analyzing the objective landscapes at the different minima reached when f adapts to new tasks, we also analyze the adaptation trajectories to those new tasks, and whether some similarity between them can be indicative of good generalization. Let's consider a model f adapting to a task T_i by starting from θ_s, moving in parameter space by performing T steps of full-batch gradient descent with ∇_θ L(f(D_i; θ)) until reaching θ̃_i. We define the adaptation trajectory to a task T_i starting from θ_s as the sequence of iterates (θ_s, θ_i^(1), θ_i^(2), ..., θ̃_i). To simplify the analyses and alleviate some of the challenges in dealing with trajectories of multiple steps in a parameter space of very high dimension, we define the trajectory displacement vector (θ̃_i − θ_s). We define a trajectory direction vector θ̂_i as the unit vector θ̂_i := (θ̃_i − θ_s)/‖θ̃_i − θ_s‖₂.

We define a metric for the coherence of adaptation trajectories to meta-test tasks T_i, starting from a meta-train solution θ_s, as the average inner product between their direction vectors:

E_{T_i,T_j∼p(T)}[θ̂_iᵀ θ̂_j]   (5)

The inner product between two meta-test trajectory direction vectors is illustrated in Figure 1 (2).

4.3 CHARACTERIZING META-TRAIN SOLUTIONS BY THE AVERAGE INNER PRODUCT BETWEEN META-TEST GRADIENTS
In addition to characterizing the adaptation trajectories at meta-test time, we characterize the objective landscapes at the meta-train solutions θ_s. More concretely, we measure the coherence of the meta-test gradients ∇_θ L(f(D_i; θ_s)) evaluated at θ_s.

The coherence between the meta-test gradients can be viewed in relation to the metric for coherence of adaptation trajectories of Eq. 5 from Section 4.2. Even after simplifying an adaptation trajectory by its displacement vector, measuring distances between trajectories of multiple steps in the parameter space can be problematic: because of the symmetries within the architectures of neural networks, where neurons can be permuted, different parameterizations can represent identically the same function f that maps inputs to outputs. This problem is even more prevalent for networks with a higher number of parameters. Since here we ultimately care about the functional differences that f undergoes along the adaptation trajectories, measuring distances between functions in the parameter space, either using the Euclidean norm or the cosine similarity between direction vectors, can be problematic (Benjamin et al., 2018).

Thus, to further simplify the analyses on adaptation trajectories, we can measure coherence between trajectories of only one step (T = 1). Since we are interested in the relation between such trajectories and the generalization performance of the models, we measure the target accuracy at those meta-test solutions obtained after only one step of gradient descent. We define those solutions as θ_s − α g_i, with meta-test gradient g_i = ∇_θ L(f(D_i; θ_s)). To make meta-training consistent with meta-testing, for the meta-learning algorithms we also use T = 1 for the inner loop updates of Eq. 1.

We thus measure coherence between the meta-test gradient vectors g_i that lead to those solutions. Note that the learning rate α is constant and is the same for all experiments on a same dataset. In contrast to Section 4.2, here we observed in practice that the average inner product between meta-test gradient vectors, and not just between their direction vectors, is more correlated to the average target accuracy. The resulting metric is thus the average inner product between meta-test gradients evaluated at θ_s.

We define the average inner product between meta-test gradient vectors g_i, evaluated at a meta-train solution θ_s, as:

E_{T_i,T_j∼p(T)}[g_iᵀ g_j]   (6)

The inner product between two meta-test gradients, evaluated at θ_s, is illustrated in Figure 1 (3). We show in the experimental results in Sections 5.2 and 5.3 that the coherence of the adaptation trajectories, as well as of the meta-test gradients, correlates with generalization on the meta-test tasks.
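Both coherence metrics reduce to an average of pairwise dot products over per-task flattened vectors; a sketch with our own helper naming:

```python
import torch

def avg_pairwise_inner_product(vecs, normalize=False):
    """E[v_i . v_j] over distinct pairs (i != j).
    With normalize=True this is Eq. 5 (trajectory direction vectors / cosine
    similarity); with normalize=False and v_i = g_i it is Eq. 6 (raw gradients)."""
    V = torch.stack(vecs)                      # (n_tasks, n_params)
    if normalize:
        V = V / V.norm(dim=1, keepdim=True)
    G = V @ V.T                                # all pairwise inner products
    n = V.shape[0]
    off_diag = G.sum() - G.diagonal().sum()    # exclude the i == j terms
    return (off_diag / (n * (n - 1))).item()

# Eq. 5: vecs = [theta_tilde_i - theta_s for each task], normalize=True
# Eq. 6: vecs = [g_i for each task], normalize=False
```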
5 EXPERIMENTS
We apply our analyses to the two most widely used benchmark datasets for few-shot classification problems: the Omniglot and MiniImagenet datasets. We use the standardized CNN architecture used by (Vinyals et al., 2016) and (Finn et al., 2017). We perform our experiments using three different gradient-based meta-learning algorithms: MAML, First-Order MAML and a Finetuning baseline. For more details on the meta-learning datasets, architecture and meta-learning hyperparameters, see Appendix A.

We closely follow the experimental setup of (Finn et al., 2017). Except for the Finetune baseline, the meta-learning algorithms use during meta-training the same number of ways and shots as during meta-testing. For our experiments, we follow the setting of (Vinyals et al., 2016): for MiniImagenet, training and testing our models on 5-way classification 1-shot learning, as well as 5-way 5-shot; and for Omniglot, 5-way 1-shot, 5-way 5-shot, 20-way 1-shot and 20-way 5-shot. Each experiment was repeated for five independent runs. For the meta-learning algorithms, the choice of hyperparameters closely follows (Finn et al., 2017). For our finetuning baseline, most of the original MAML hyperparameters were left unchanged, as we want to compare the effect of the pre-training procedure; the architecture and meta-test procedures are thus kept fixed. We kept the same optimizer as for the meta-update of MAML (ADAM), and performed a hyperparameter search on the mini-batch size to use for each setting that we present. (For our reproduction results on the meta-train and meta-test accuracy, see Figures 10a and 10b in B.1.)

5.1 FLATNESS OF META-TEST SOLUTIONS
After each training epoch, we compute E[‖H(D_i; θ̃_i)‖] using a fixed set of 60 randomly sampled meta-test tasks T_i. Across all settings, we observe that MAML first finds sharper solutions θ̃_i until reaching a peak, then as the number of epochs grows, those solutions become flatter, as seen in Figure 2. To verify the correlation between E[‖H(D_i; θ̃_i)‖] and E[Acc(f(D'_i; θ̃_i))], we train models for an extra number of epochs until clearly observing a decrease in the generalization performance E[Acc(f(D'_i; θ̃_i))], using First-Order MAML with 5-way 1-shot learning on MiniImagenet, and we verify whether this is reflected by an increase in E[‖H(D_i; θ̃_i)‖].

Figure 2: Flatness of meta-test solutions for MAML and First-Order MAML, on Omniglot and MiniImagenet. Panels: (a) Omniglot 5-way; (b) Omniglot 20-way; (c) MiniImagenet 5-way, 1-shot; (d) MiniImagenet 5-way, 5-shot.

On the contrary, and remarkably, even as f starts to show poorer generalization (see Figure 3a), the solutions keep getting flatter, as shown in Figure 3c. Thus, for the case of gradient-based meta-learning, flatter minima don't appear to favour better generalization. We perform the same analysis for our finetuning baseline (Figures 4a, 4c), with results suggesting that flatness of solutions might be more linked with E[L(f(D_i; θ̃_i))], the average level of support loss attained by the solutions θ̃_i (see Figures 4b and 3b), which is not an indicator of generalization. We also noted that across all settings involving MAML and First-Order MAML, this average meta-test support loss E[L(f(D_i; θ̃_i))] decreases monotonically as meta-training progresses.

Figure 3: MAML: Characterization of meta-test solutions. Panels: (a) Target Accuracy; (b) Support loss; (c) Curvature of solutions.

Figure 4: Finetune baseline: Characterization of meta-test solutions. Panels: (a) Target accuracy; (b) Support loss; (c) Curvature of solutions.

5.2 COHERENCE OF ADAPTATION TRAJECTORIES
In this section, we use the same experimental setup as in Section 5.1, except here we measure E[θ̂_iᵀ θ̂_j].
5.2 COHERENCE OF ADAPTATION TRAJECTORIES

In this section, we use the same experimental setup as in Section 5.1, except that here we measure $\mathbb{E}[\vec{\theta}_i^{\,T}\vec{\theta}_j]$, the average inner product between trajectory direction vectors. To reduce the variance of our results, we sample 500 tasks after each meta-training epoch. Also, for the experiments on Omniglot, we drop the analyses with First-Order MAML, since it yields performance very similar to that of Second-Order MAML. We start our analyses with the setting of "MiniImagenet, First-Order MAML, 5-way 1-shot", as it allowed us to test and invalidate the correlation between flatness of solutions and generalization, earlier in Section 5.1.

Figure 5: Comparison between the average inner product between meta-test trajectory direction vectors (orange) and the average target accuracy on meta-test tasks (blue), for First-Order and Second-Order MAML on MiniImagenet 5-way 1-shot. See Figure 11 in Appendix B.2 for the full set of experiments.

We clearly observe a correlation between the coherence of adaptation trajectories and generalization to new tasks, with a higher average inner product between trajectory directions, and thus smaller angles, being linked to a higher average target accuracy on those new tasks, as shown in Figure 5a. We then performed the analysis on the other settings, with the same observations (see Figure 5b, and Figure 11 in Appendix B.2 for the full set of experiments). We also perform the analysis on the Finetuning baselines, which reach much lower target accuracies, and where we see that $\mathbb{E}[\vec{\theta}_i^{\,T}\vec{\theta}_j]$ remains much closer to zero, meaning that the trajectory directions are roughly orthogonal to each other, akin to random vectors in high dimension (see Figure 6a). As an added observation, we include our experimental results on the average meta-test trajectory norm $\mathbb{E}[\|\tilde\theta_i - \theta_s\|_2]$ in Figures 6c and 6d: $\mathbb{E}[\|\tilde\theta_i - \theta_s\|_2]$ grows as meta-training progresses when $f$ is meta-trained with MAML, as opposed to the Finetune baseline, and we note that this norm does not reflect generalization.

Figure 6: (a) Average inner product between meta-test adaptation direction vectors, for the Finetuning baseline on MiniImagenet. (b) Average inner product between meta-test gradients, for the Finetuning baseline on MiniImagenet. (c), (d) Average $\ell_2$ norm of meta-test adaptation trajectories, for all algorithms on MiniImagenet, for 1-shot and 5-shot learning respectively.
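For reference, the trajectory-coherence metric of Eq. (5) reduces to averaging cosine similarities between normalized displacement vectors. A minimal sketch, assuming flattened parameter vectors (the names are illustrative):

import itertools
import numpy as np

def trajectory_coherence(theta_s, adapted_thetas):
    # Eq. (5): average cosine similarity between adaptation-trajectory
    # directions; each direction is the displacement from the
    # meta-train solution theta_s to an adapted solution, normalized
    # to unit length.
    dirs = []
    for theta_i in adapted_thetas:
        d = theta_i - theta_s
        dirs.append(d / (np.linalg.norm(d) + 1e-12))
    return float(np.mean([u @ v for u, v in itertools.combinations(dirs, 2)]))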
5.3 CHARACTERIZING META-TRAIN SOLUTIONS BY THE AVERAGE INNER PRODUCT BETWEEN META-TEST GRADIENTS

Despite the clear correlation between $\mathbb{E}[\vec{\theta}_i^{\,T}\vec{\theta}_j]$ and generalization for the settings that we show in Figures 5 and 11, we observed that for some other settings this relationship appears less linear. We conjecture that such behavior might arise from the difficulties of measuring distances between networks in the parameter space, as explained in Section 4.3. Here we present our results on the characterization of the objective landscapes at the meta-train solutions $\theta_s$, by measuring the average inner product between meta-test gradient vectors $g_i$.

We observe that the coherence between meta-test gradients is correlated with generalization, which is consistent with the observations on the coherence of adaptation trajectories from Section 5.2. In Figure 7, we compare $\mathbb{E}[g_i^T g_j]$ to the target accuracy (here we show results for individual model runs rather than averages over the runs); see Figure 12 in Appendix B.3 for the full set of experiments. This metric consistently correlates with generalization across the different settings. Similarly to Section 5.2, for our Finetuning baselines we observe a very low coherence between meta-test gradients (see Figure 6b). Based on the observations we make in Sections 5.2 and 5.3, we propose to regularize gradient-based meta-learning as described in Section 6.

Figure 7: Comparison between the average inner product between meta-test gradient vectors, evaluated at the meta-train solution, and the average target accuracy on meta-test tasks (MiniImagenet 5-way 5-shot, First-Order and Second-Order MAML, individual seeds), with a higher average inner product being linked to better generalization. See Figure 12 in Appendix B.3 for the full set of experiments.

5.3.1 FEW-SHOT REGRESSION: AVERAGE INNER PRODUCT BETWEEN META-TEST GRADIENTS

Here we extend our analysis by presenting experimental results on $\mathbb{E}[g_i^T g_j]$ for few-shot regression. Specifically, we use a learning problem composed of training tasks and test tasks, where each task is a sine function parameterized as $y = a\sin(bx + c)$. We train a two-layer MLP which learns to fit meta-training sine functions using only a few support samples, and generalization implies reaching a low Mean Squared Error (MSE) averaged over the target sets of many meta-test sine functions. The results are presented in Figure 8. Similarly to our analysis of the few-shot classification setting, we observe that in few-shot regression, generalization (the negative average target MSE on meta-test tasks) strongly correlates with $\mathbb{E}[g_i^T g_j]$. See Appendix A.4 for the experimental details.
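To make the regression setup concrete, each task differs only in the amplitude, frequency and phase of its sine curve. Below is a small task sampler sketched for illustration; the parameter ranges are assumptions made for the sketch, not the exact values used in our experiments:

import numpy as np

def sample_sine_task(rng, k_support=5, k_query=50):
    # One few-shot regression task y = a*sin(b*x + c): a small support
    # set for adaptation and a larger query (target) set for
    # evaluation. The parameter ranges below are illustrative.
    a = rng.uniform(0.1, 5.0)    # amplitude
    b = rng.uniform(0.5, 2.0)    # frequency
    c = rng.uniform(0.0, np.pi)  # phase
    def split(k):
        x = rng.uniform(-5.0, 5.0, size=(k, 1))
        return x, a * np.sin(b * x + c)
    return split(k_support), split(k_query)

rng = np.random.default_rng(0)
(xs, ys), (xq, yq) = sample_sine_task(rng)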
Figure 8: Analysis for few-shot regression: comparison between $\mathbb{E}[g_i^T g_j]$ and the average negative target Mean Squared Error on meta-test tasks (generalization performance). (a) and (b) show that generalization performance correlates with $\mathbb{E}[g_i^T g_j]$ throughout meta-training; (c) and (d) show the correlation across many values of $k$ (the number of shots), while (e) shows the correlation coefficient $R$ between $\mathbb{E}[g_i^T g_j]$ and the final generalization performance, for models with $k$ varying between 2 and 15.

6 FIRST STEPS TOWARDS REGULARIZING MAML

Although MAML has become a popular method for meta-training, there exists a significant generalization gap between its performance on the target sets of the meta-train tasks and on the target sets of the meta-test tasks, and regularizing MAML has not received much research attention yet. Based on our observations on the coherence of adaptation trajectories, we take first steps in this direction by adding a regularization term based on $\mathbb{E}[\vec{\theta}_i^{\,T}\vec{\theta}_j]$. Within a meta-training iteration, we first let $f$ adapt to the $n$ training tasks $T_i$ following Eq. 1. We then compute the average direction vector $\vec{\theta} = \frac{1}{n}\sum_{i=1}^{n}\vec{\theta}_i$. For each task, we want to reduce the angle between $\vec{\theta}_i$ and $\vec{\theta}$, and thus introduce the penalty $\Omega(\tilde\theta_i) = -\vec{\theta}_i^{\,T}\vec{\theta}$, obtaining the regularized solutions $\hat\theta_i$. The outer-loop gradients are then computed, just as in MAML following Eq. 2, but using these regularized solutions $\hat\theta_i$ instead of $\tilde\theta_i$. We obtain the variant of MAML with regularized inner-loop updates, as detailed in Algorithm 1. We used this regularizer with (Second-Order) MAML for "Omniglot 20-way 1-shot", thereby tackling the most challenging few-shot classification setting for Omniglot. As shown in Figure 9, we observed an increase in meta-test target accuracy: the performance increases from 94.05% to 95.38% (average over five trials, 600 test tasks each), providing a 23% relative reduction in meta-test target error.

Algorithm 1 Regularized MAML: added penalty on the angles between inner-loop updates
1: Sample a batch of $n$ tasks $T_i \sim p(T)$
2: for all $T_i$ do
3:   Perform the inner-loop adaptation as in Eq. 1: $\tilde\theta_i = \theta_s - \alpha\sum_{t=0}^{T-1}\nabla_\theta\mathcal{L}(f(D_i;\theta_i^{(t)}))$
4: end for
5: Compute the average direction vector: $\vec{\theta} = \frac{1}{n}\sum_{i=1}^{n}\vec{\theta}_i$
6: Compute the corrected inner-loop updates:
7: for all $T_i$ do
8:   $\hat\theta_i = \tilde\theta_i - \gamma\nabla_\theta\Omega(\tilde\theta_i)$, where $\Omega(\tilde\theta_i) = -\vec{\theta}_i^{\,T}\vec{\theta}$
9: end for
10: Perform the meta-update as in Eq. 2, but using the corrected solutions: $\theta_{s+1} = \theta_s - \beta\frac{1}{n}\sum_{i=1}^{n}\nabla_\theta\mathcal{L}(f(D'_i;\hat\theta_i))$

Figure 9: Average target accuracy on meta-test tasks using our proposed regularizer on MAML, for Omniglot 20-way 1-shot learning, with regularization coefficient $\gamma = 0.5$ (regularized vs. no regularization, against the training epoch).
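For illustration, the correction of line 8 of Algorithm 1 can be implemented with automatic differentiation. The sketch below is our own illustrative code, assuming flattened adapted parameter vectors that require gradients: it treats the batch-average direction as a constant and detaches the corrected solutions (a first-order variant), both simplifying assumptions rather than a definitive implementation.

import torch

def regularize_inner_solutions(theta_s, adapted, gamma=0.5):
    # Correction step of Algorithm 1 (sketch): nudge each adapted
    # solution so that its trajectory direction aligns with the
    # batch-average direction.
    with torch.no_grad():
        dirs = [(t - theta_s) / (t - theta_s).norm() for t in adapted]
        mean_dir = torch.stack(dirs).mean(dim=0)
    corrected = []
    for t in adapted:
        d = (t - theta_s) / (t - theta_s).norm()
        omega = -(d @ mean_dir)  # penalty: negative alignment with mean
        (grad,) = torch.autograd.grad(omega, t)
        corrected.append((t - gamma * grad).detach())
    return corrected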
7 CONCLUSION

We experimentally demonstrate that when using gradient-based meta-learning algorithms such as MAML, the meta-test solutions, obtained after adapting neural networks to new tasks via few-shot learning, become flatter, lower in loss, and further away from the meta-train solution as meta-training progresses. We also show that those meta-test solutions keep getting flatter even when generalization starts to degrade, thus providing an experimental argument against the correlation between generalization and flat minima. More importantly, we empirically show that generalization to new tasks is correlated with the coherence between their adaptation trajectories, measured by the average cosine similarity between the adaptation trajectory directions, but also with the coherence between the meta-test gradients, measured by the average inner product between meta-test gradient vectors evaluated at the meta-train solution. We also show this correlation for few-shot regression tasks. Based on these observations, we take first steps towards regularizing MAML-based meta-training. As future work, we plan to test the effectiveness of this regularizer on various datasets, meta-learning problem settings, architectures and gradient-based meta-learning algorithms.
rkeynNmgqB
Official Blind Review #3
3: Weak Reject
This paper addresses an empirical study of generalization behavior in gradient-based meta-learning. To this end, the authors evaluated: (1) the flatness of minima, in terms of the spectral norm of the Hessian matrix; (2) the coherence of adaptation trajectories; (3) the average inner product between meta-test gradient vectors. Finally, a regularized MAML is proposed, adding a penalty on the angles between inner-loop updates. Experiments are shown, measuring the three quantities mentioned above.
---Strength---
- The empirical analysis of various properties of the objective landscape in gradient-based meta-learning is interesting and new.
---Weakness---
- Recent theoretical work on meta-generalization bounds and on the convergence properties of MAML is available. For instance, [1] M. Khodak (2019), "Provable guarantees for gradient-based meta-learning," ICML. [2] N. Golmant (2018), "On the convergence of MAML," NeurIPS.
- While the empirical study could be interesting, not much insight is provided.
- A regularized MAML is proposed, but its effectiveness is not well studied yet.
To sum up, the paper provides a few interesting empirical results, but it is not clear what benefits are gained from them.
BkByNuojz
MIDL.amsterdam/2018/Conference
2018
Fully Automatic Segmentation of Sphenoid Sinus in CT Images with 3D Convolutional Neural Networks
["Kamal Souadih", "Ahror Belaid", "Douraied Ben Salem"]
Today, deep learning algorithms have quickly become essential in the field of medical image analysis. Compared to traditional methods, these deep learning techniques are more efficient at extracting compact information, leading to significant performance improvements in medical image analysis systems. We present in this paper a new technique for automatic sphenoid sinus segmentation using 3D Convolutional Neural Networks (CNNs). Due to the scarcity of medical data, we chose to use a 3D CNN model learned on a small training set. Mathematical morphology operations are then used to automatically detect and segment the region of interest. Our proposed method is tested and compared with a semi-automatic method and with manual delineations made by a specialist. The preliminary results on head Computed Tomography (CT) volumes seem very promising.
["3D Convolutional neural network", "Computed Tomography", "Automatic Segmentation", "Sphenoid Sinus"]
Fully Automatic Segmentation of Sphenoid Sinus in CT Images with 3D Convolutional Neural Networks

Kamal Souadih, Medical Computing Laboratory (LIMED), University of Abderrahmane Mira, 06000, Bejaia, Algeria, kamal.souadih@univ-bejaia.dz
Ahror Belaid, Medical Computing Laboratory (LIMED), University of Abderrahmane Mira, 06000, Bejaia, Algeria, ahror.belaid@univ-bejaia.dz
Douraied Ben Salem, INSERM UMR 1101, Laboratory of Medical Information Processing (LaTIM), 5 avenue Foch, 29200 Brest, France; Neuroradiology and Forensic Imaging Department, CHRU Brest, La Cavale Blanche Hospital, Boulevard Tanguy Prigent, 29609 Brest, France, douraied.bensalem@chu.brest.fr

Abstract

Today, deep learning algorithms have quickly become essential in the field of medical image analysis. Compared to traditional methods, these deep learning techniques are more efficient at extracting compact information, leading to significant performance improvements in medical image analysis systems. We present in this paper a new technique for automatic sphenoid sinus segmentation using 3D Convolutional Neural Networks (CNNs). Due to the scarcity of medical data, we chose to use a 3D CNN model learned on a small training set. Mathematical morphology operations are then used to automatically detect and segment the region of interest. Our proposed method is tested and compared with a semi-automatic method and with manual delineations made by a specialist. The preliminary results on Computed Tomography (CT) volumes seem very promising.

1 Introduction

The anatomy of the sinuses in general is very complex and variable [1]. The sphenoid sinus, too, is a highly variable cavity, an important landmark in surgery, and at the same time hard to isolate [2-3-4]. Fig. 1 shows a diagrammatic representation of the location of the paranasal sinuses. Another difficulty is that the sinuses can also be divided into many nooks, which communicate with each other through an incomplete bone wall [5], which further complicates their localization; see e.g. [5]. The complications while operating on the sphenoid sinus are easily avoided if we know its anatomical features [6]. As has been established, the sphenoid sinus is the most inaccessible part of the face, being inside the sphenoid bone and involving a number of different structures. Its deep anatomical location makes it difficult to approach.
This deep location can be beneficial in the case of forensic identification: unlike the other sinuses, the sphenoid sinus is well protected from traumatic degradation resulting from external causes. Sphenoid sinuses can be classified according to their position relative to the sella turcica into four types [7]:

- Conchal: completely missing or minimal sphenoid sinus;
- Pre-sellar: the posterior wall of the sphenoid sinus is in front of the anterior wall of the sella turcica;
- Sellar: the posterior wall of the sphenoid sinus is between the anterior and posterior walls of the sella turcica;
- Post-sellar: the posterior wall of the sphenoid sinus is behind the posterior wall of the sella turcica.

These types of sphenoid sinuses and their basic dimensions (height, width and depth) can generally help predict the risk of accidental injury, but they are also useful for individual identification, as can be seen in [8].

Figure 1: Diagrammatic representation of the paranasal sinuses.

Computed tomography (CT) is an excellent imaging method for the assessment of sinus anatomy, as it allows a precise evaluation of the craniofacial bones and of the extent of their pneumatization [9-4]. By segmenting three-dimensional (3D) CT images of the sphenoid sinus, we can make useful measurements of its volumetric anatomy [10].

3D segmentation is a technique that consists of labeling each voxel in an image and assigning it to a group of voxels that defines an anatomical structure. This technique has a wide variety of applications in medical research and computer-aided diagnosis. It is a very useful method: it allows extracting and recognizing organs such as the heart, the brain, the spine and the blood vessels. It is also used to improve the visualization of medical images and to allow quantitative measurements of organ structures in the image. Segmentation is also important for building anatomical atlases, studying the shapes of anatomical structures and tracking their changes over time [11].

Artificial intelligence techniques, represented by machine learning, are increasingly used in medical image analysis and segmentation. In recent years, the emergence of deep learning techniques has contributed to significantly improving medical image analysis, based on convolutional neural networks (CNNs), which give the ability to automatically learn significant patterns and extract real structures from images [3-12].

One of the main reasons for the success of the CNN model is that it is possible to directly use a pre-trained model for various other tasks it was not originally intended for. It has become remarkably easy to download a learned model and then tweak it slightly to suit the application at hand [13]. To the best of our knowledge, there is no automatic segmentation approach dedicated to the sphenoid sinuses. This is probably due to their complex anatomy and high anatomical variability. Another particular challenge we had to overcome is the opening of the sphenoid sinus ostium, which makes the wall delimitation very difficult. In this paper, we built an automatic sphenoid sinus segmentation tool that uses conventional CT images and is based on a 3D CNN. The proposed method is efficient and robust, and is able to obtain good results with a small training dataset.

2 Method

Our automatic sphenoid sinus segmentation method consists of three main steps, where the result of each step is the input of the next one.
The first step is a preprocessing step: we automatically transform the image volume retrieved from a PACS into an image of the region of interest. Then we perform a segmentation with a deep 3D CNN [14], which we adapted and parameterized to produce a highly accurate sinus segmentation. Finally, a post-processing step based on mathematical morphology operations carries out the sinus measurement and refines the segmentation (Figure 2). This splitting into stages allowed us to improve and simplify the use of the CNN on CPU. In the following, we describe each stage of the method.

Figure 2: Flow chart of the sphenoid sinus segmentation scheme.

2.1 Automatic ROI extraction from the CT image

The preprocessing step uses some interesting techniques with slight transformations that are adapted to improve the effectiveness of the specific segmentation method used in the next step. These transformations are made so that common parameters can be used for all images across all intensity ranges. In other words, we aim to operate only on a reduced 3D region, a region of interest centered on the sinus in question, and not on the whole image. This region of interest must have the same dimensions for all images of the training and test sets. To achieve this, we first selected a target image with a well-oriented head and a clear sinus. We manually traced a rectangle large enough to contain the sinus whatever its shape, whose size does not exceed 200 x 200 x 200 pixels. This rectangle also serves as a reference bounding box. Then, all other database images are registered onto this target image with its bounding box. As the images come from different persons, we chose to use a rigid registration, allowing a correction of the different positions and orientations arising from the clinical exam. Since the natural size of the skull differs from one person to another, we avoided using an affine registration [15], which risks distorting the volume estimate that will be used later as a parameter for identification. Thereby, we were able to build a new database consisting only of regions of interest, all with the same size as the reference box.
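As an illustration of this preprocessing, the rigid registration onto the reference image can be implemented, for example, with SimpleITK; the metric and optimizer settings below are illustrative assumptions, not the exact configuration we used:

import SimpleITK as sitk

def register_to_target(target, moving):
    # Rigidly align a head CT volume with the reference (target) volume
    # so that the reference bounding box can be applied to every scan.
    init = sitk.CenteredTransformInitializer(
        target, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(init, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)
    tx = reg.Execute(sitk.Cast(target, sitk.sitkFloat32),
                     sitk.Cast(moving, sitk.sitkFloat32))
    return sitk.Resample(moving, target, tx, sitk.sitkLinear)

# The fixed-size ROI is then cropped with the reference bounding box,
# e.g. roi = sitk.RegionOfInterest(aligned, size, corner_index).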
Thisarchitecture[16]based on :Two parallel convolutional pathways that process the input at multiple scales to achieve alarge receptive field for the final classification while keeping the computational cost low.A small convolutional kernels. That gives efficiency to building deeper CNNs withoutseverely increasing the number of trainable parameters and Inspired by VGG (Very deepconvolutional networks)[18].Building high performing and efficient 3D CNNs thanks to themuch smaller computation required for the convolution with small 33kernels .Full convolutional fashion on image segments in both training and testing stage.In what follows we will present the main algorithms that make up this architecture, In [14] thecreators and authors of this architecture presented a very clear and detailed of DeepMedic architecturewith its theoretical background, here we just giving a summary of each step, which make up thissoftware:1- Each layer l2[1;L]consists ofClfeaturemaps (FM)also referred to as Channels2- EveryFM represents a group of neurons that detect a particular pattern (a feature, in the channelsof the previous layer).3- APattern is defined by kernel weights associated with the FM4- If the neurons of the mthFM in thelthlayer are arranged in a 3D grid, their activationsconstitute the image defined bye:yml=f cl1Xn=1km;nl!ynl1+bml:ymlis the result of convolving each of the previous layer channels with a 3-dimensional.km;nlIs akernel , adding a learned biasbmlapplying a non-linearity fThe imageyn0is the input to the first layer, correspond to the channels of the original inputimage.5- Each kernel is a matrix of learned hidden weights Wm;nl6- Eachclass of segments has a Clnumber of .47- The activations of Clare fed into a position-wise softmax function that produces the predictedposteriorpc= exp (ycL(X))=CLXc=1exp(ycL):ycLis the activation of the FM at position l2N38- The size of the neighbourhood of voxels 'lin the input that influence the activation of a neuron isa receptive field, increases at each subsequent layer and is given by the 3-dimensional vector:'fx;y;zgl='fx;y;zgl1+kfx;y;zgl1fx;y;zgl: (1)Wherekl;l2N3are vectors expressing the size of the kernels and stride of the receptive field atlayerllis given by the product of the strides of kernels in layers preceding, in this system thel= (1;1;1)'CNN ='L: This is called theCNN’s receptive field; the receptive field of a neuron inthe classification layer corresponds to the image patch that influences the prediction for itscentral voxel.9- The dimensions of the FMs in Layer lis given by:fx;y;zgl="fx;y;zgl'fx;y;zglfx;y;zgl+ 1#: (2)10- If an input of size inis provided, in='CNN is a size of input patch in the commonpatch-wise. The FMs of this classification layer have 13.11- CNNs are trained patch-by-patch and random patches of size 'CNN are extracted from thetraining images.12- To maximize the log likelihood of the data or, equally, minimize the Cross Entropy via the costfunction is used:J;Ii;Ci=1=BBXi=1log(P(Y=cijIi;)) =1=BBXi=1logpCi: (3)Bis the size of batch, which is then processed by the network for one training iteration ofStochastic Gradient Descent (SGD).The pair (Ii;Ci);8i2[1;B]is theithpatch in the batch and the true label of its centralvoxel.The scalarpCiis the predicted posterior for Class CiRegularization terms were omitted for simplicity. 
Memory requirements and computing time increase with the batch size, which is the main limitation of 3D CNNs; DeepMedic therefore uses a strategy that exploits dense inference on image segments. Following from Eq. (2), if an image segment of size greater than φ_CNN is given as input to the network, the output is a posterior probability for multiple voxels, V = Π_{i ∈ {x,y,z}} δ_L^i. If the training batches are formed of B segments extracted from the training images, the cost function of Eq. (3) becomes, in the case of dense training [14],
J_D(θ; I_s, C_s) = −(1/(B·V)) Σ_{s=1}^{B} Σ_{v=1}^{V} log p^{c_s^v}(x^v),   (4)
where I_s and C_s are the s-th segment of the batch and the true labels of its voxels, c_s^v is the true label of the v-th voxel, x^v is the corresponding position in the classification FMs, and p is the output of the softmax function. The effective batch size is thus increased by a factor V without a corresponding increase in computation and memory requirements. DeepMedic is also a deep architecture based on small 3x3x3 kernels, which are faster to convolve with and contain fewer weights [14].
We adapted the 3D CNN to five layers, with a receptive field of size 17^3 and one modality. The classification layer (the last layer) is implemented as a convolutional layer with 1^3 kernels, which enables efficient dense inference. When the network segments an input, it predicts multiple voxels simultaneously, one for each shift of its receptive field over the input (see Figure 3). The training time required for convergence of the final system is roughly 20 minutes using an Intel i5-7300 CPU at 2x2.5 GHz. Segmentation of a 3D scan of a sphenoid sinus requires 1 minute.
Figure 3: Architecture of DeepMedic for automatic sphenoid sinus segmentation.
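A minimal sketch of the dense-training objective of Eq. (4), written here in PyTorch for illustration; the tensor shapes and the use of F.cross_entropy are assumptions about a generic implementation, not the actual DeepMedic code.

# Sketch of the dense cross-entropy cost of Eq. (4): the network outputs
# class scores for every voxel of the segment, and the loss averages the
# negative log-posterior over the B segments and the V predicted voxels.
import torch
import torch.nn.functional as F

def dense_segment_loss(logits, labels):
    # logits: (B, C, dx, dy, dz) raw scores from the 1x1x1 classification layer
    # labels: (B, dx, dy, dz) true class of each predicted voxel
    # cross_entropy applies log-softmax and averages over the B * V terms.
    return F.cross_entropy(logits, labels)

logits = torch.randn(2, 2, 9, 9, 9)            # B=2 segments, C=2 classes
labels = torch.randint(0, 2, (2, 9, 9, 9))
loss = dense_segment_loss(logits, labels)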
2.3 Post processing
The segmentation result obtained by the 3D CNN in the preceding step does not make it possible to distinguish the sphenoid sinus from the other sinuses: the nasal cavities as well as the paranasal sinuses have almost the same gray-level intensity. To differentiate the sinuses, we use prior knowledge about their positions. Indeed, the sphenoid sinus is the deepest cavity starting from the front of the face, and it is therefore the first cavity encountered from the back of the skull at the midline. Thus, using mathematical morphology operations, we are able to locate the sphenoid sinus. We first apply an erosion operation to the segmented image, which removes residues and, especially, the potential connections between the sphenoid sinus and other cavities. More precisely, the erosion removes the ostium and cleanly separates the two hemisinuses of the sphenoid sinus.
Once the sphenoid sinus is cleared, we compute the centers of gravity of all the regions in the image. After sorting the center coordinates along the coronal axis, the deepest center corresponds to the region of the sphenoid sinus, or more precisely to its deepest hemisinus. Once this hemisinus is separated from the rest of the cavities, a dilation operation (with the same parameters as the previous erosion) is applied to recover the shape details lost during erosion. As can be seen, the detection of the two hemisinuses is sequential: after removing the first detected hemisinus, the same process is launched on the initially segmented image.
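As an illustration of this post-processing chain, here is a minimal sketch using scipy.ndimage; the structuring element, the number of erosion iterations, and the coronal-axis convention are illustrative assumptions, not the exact parameters of this study.

# Sketch of the morphological post-processing: erode to disconnect cavities,
# pick the deepest connected component along the coronal axis, then dilate.
import numpy as np
from scipy import ndimage

def extract_deepest_hemisinus(binary_seg, coronal_axis=1, iterations=2):
    structure = ndimage.generate_binary_structure(3, 1)
    eroded = ndimage.binary_erosion(binary_seg, structure, iterations=iterations)

    # Label connected components and compute their centers of gravity.
    labels, n = ndimage.label(eroded)
    centers = ndimage.center_of_mass(eroded, labels, range(1, n + 1))

    # The component whose center lies deepest along the coronal axis is
    # taken as the sphenoid hemisinus (max vs. min depends on orientation).
    deepest = 1 + int(np.argmax([c[coronal_axis] for c in centers]))
    hemisinus = labels == deepest

    # Dilate with the same parameters to recover the eroded shape details.
    return ndimage.binary_dilation(hemisinus, structure, iterations=iterations)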
3 Result
3.1 Dataset
Our dataset consists of 24 head CT images, which were acquired on a helical, multi-detector CT scanner. Some data exclusion criteria were set: all CT exams with head fractures, tumors, or any pathological process involving the sphenoid bone and the surrounding structures, but also with sinus mucosa thickening or any abnormality of the sinus contents, were excluded from the study. After the preprocessing we obtained 3D CT images of size at most 200 x 200 x 200. We used 15 images for the training step (training and validation) of the 3D CNN algorithm and 9 images for testing. The training dataset requires a manual segmentation of the sphenoid sinus for each image, so we performed this manual segmentation with the assistance of a radiologist. A description of the dataset images used in the 3D CNN algorithm is summarized in Table 1.
Table 1: Elements of our dataset
Data set                                                                  Number of images
Total CT exams considered                                                 24
Total CT exams in the training step of the 3D CNN algorithm               5
Total CT exams in the validation step of the 3D CNN algorithm             10
Total images in the test step (automatically segmented)                   9
Total CT exams manually segmented with expert assistance for training     5
Total CT exams manually segmented with expert assistance for validation   10
Figure 4: Segmentation examples for 3 CT images, showing superior, left, interior and front views.
3.2 Results
An example of 3 segmentations is reported in Figure 4. It shows the result of the segmentation and the extracted sphenoid sinus as explained in the previous sections. The segmentation is performed using the 3D CNN and refined with the morphological operations.
3.3 Validation
To evaluate the accuracy and robustness of the proposed automated approach, the 9 sphenoid sinuses automatically segmented with our tool were compared with a semi-automatic clustering segmentation obtained with the ITK-SNAP software and with a manual segmentation performed by an experienced radiologist using a standard procedure. Each image was segmented by carefully tracing the outlines of the sphenoid sinus while following the inner bone surface, proceeding in the axial direction. An example of the manual segmentation process of the sphenoid sinus on one slice is shown in Figure 5.
Figure 5: Example of the process of manual segmentation on one slice. From left to right: axial, sagittal and coronal views.
The Dice Similarity Coefficient (DSC), Hausdorff distance (HD) and Mean Absolute Distance (MAD) were used to evaluate the proposed method. The DSC, one of the most common measures for evaluating segmentation results, indicates the level of similarity between the reference (manual segmentation) and the segmented result (automatic segmentation); it is given by
DSC = 2 N(S1 ∩ S2) / (N(S1) + N(S2)),   (5)
where S1 and S2 represent the obtained segmentation and the ground truth (manual segmentation), respectively, and N(·) denotes the number of voxels. DSC ∈ [0, 1], and the closer the DSC value is to 1, the better the segmentation. The Hausdorff distance is a metric representing the spatial distance between two point sets C1 and C2, i.e., the maximum distance from each point a ∈ C1 to the set C2 and vice versa:
HD(C1, C2) = max(h(C1, C2), h(C2, C1)).   (6)
The Mean Absolute Distance (MAD) metric is given by
MAD(C1, C2) = (1/2) [ (1/n) Σ_{i=1}^{n} d(a_i, C2) + (1/m) Σ_{j=1}^{m} d(b_j, C1) ],
where the distance between a point a_i and the closest point b_j ∈ C2 is given by d(a_i, C2) = min_{b_j ∈ C2} ||b_j − a_i||.
The three metrics DSC, HD and MAD were measured for all segmentations. Tables 2 and 3 report the results and a comparison, for the nine CT images, between our automatic segmentation and the semi-automatic clustering of ITK-SNAP, each evaluated against the manual segmentation. The related mean, median and standard deviation are shown in the same tables.
Table 2: Comparison results with manual delineations: DSC, Hausdorff (HD) and MAD measures for the proposed approach and the semi-automatic method using ITK-SNAP.
           DSC (%)               HD (mm)               MAD (mm)
           Mean   Median  SD     Mean   Median  SD     Mean   Median  SD
Our tool   95.81  96.16   1.48   9.87   8.39    5.31   3.24   2.22    2.20
ITK-SNAP   96.01  95.94   0.54   9.23   8.19    5.30   3.16   2.10    2.20
Table 3: Detailed comparison between the proposed automatic and the semi-automatic (ITK-SNAP) segmentations for the 9 volumes, using the DSC, MAD and HD distances.
CT volume              1      2      3      4      5      6      7      8      9
DSC (%)   Our tool    96.52  96.10  95.59  92.10  95.71  96.84  95.91  95.48  96.87
          ITK-SNAP    95.78  96.10  95.74  96.74  95.21  97.15  96.16  95.77  96.33
MAD (mm)  Our tool     4.09   7.02   6.62   2.71   0.96   2.08   1.58   1.85   2.22
          ITK-SNAP     4.08   6.88   6.61   2.06   0.93   2.10   1.54   2.01   2.20
HD (mm)   Our tool    10.43  15.43  19.46  13.32   3.79   8.07   4.76   5.15   8.39
          ITK-SNAP    43.76  15.43  19.46   6.29   3.87   8.19   4.67   5.18   8.41
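For reference, here is a minimal sketch of how the three reported metrics can be computed from a pair of binary masks, assuming numpy and scipy; for simplicity, the point sets C1 and C2 are taken to be all foreground voxels rather than extracted surfaces.

# Sketch of the evaluation metrics: DSC (Eq. 5), HD (Eq. 6) and MAD,
# computed from two binary 3D masks on the same voxel grid.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def dice(seg, ref):
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

def hd_and_mad(seg, ref, spacing=(1.0, 1.0, 1.0)):
    # Point sets C1, C2: coordinates (in mm) of the foreground voxels.
    c1 = np.argwhere(seg) * np.asarray(spacing)
    c2 = np.argwhere(ref) * np.asarray(spacing)
    hd = max(directed_hausdorff(c1, c2)[0], directed_hausdorff(c2, c1)[0])
    d12, _ = cKDTree(c2).query(c1)   # d(a_i, C2) for every a_i in C1
    d21, _ = cKDTree(c1).query(c2)   # d(b_j, C1) for every b_j in C2
    mad = 0.5 * (d12.mean() + d21.mean())
    return hd, mad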
4 Discussion and conclusion
To our knowledge, only manual or semi-automatic methods have previously been applied to sphenoid sinus segmentation. These techniques present inter- and intra-observer variability and are time-consuming. In the present work, we have developed a fully automated method for sphenoid sinus segmentation based on CT images. The statistical comparisons of our automated tool and of the ITK-SNAP semi-automatic clustering segmentation against the manual segmentation revealed strong agreement and low dispersion between the variables. These promising findings were maintained over the entire range of the sphenoid sinus segmentation evaluation, and the mean difference between the automated and manual techniques was approximately 5% for both measurements. These differences are sufficiently small, which gives us good confidence in this segmentation method. The automated tool is able to segment a 3D CT image in under approximately 1 minute. Furthermore, the tool does not require complex or expensive equipment although it uses a 3D CNN; the method may be run on conventional computers, thus allowing better adoption in clinical practice.
The present study has some limitations. Our methodology was analyzed using only one protocol, with a slice thickness of 0.5 mm. Prionas et al. [19] reported a greater volume quantification error for thicker slices. Further studies are needed to evaluate volume in patient groups of different ages, genders, and ethnicities. Nevertheless, the automated tool may be adapted to quantify volume in other paranasal sinuses.
In conclusion, the present study found a good correlation between the manual and automated sphenoid sinus volume estimation techniques. Our automated measurements of sphenoid sinus volume based on CT exams were reliable, robust, and accurate compared with the manual method. Our findings suggest that this automated tool may be applied in clinical practice: it does not require substantial user expertise, and it is reproducible and fast.
Acknowledgments
The authors would like to thank Rabeh Djabri for English proofreading.
References
[1] Giacomini, G., Pavan, A.L.M., Altemani, J.M.C., Duarte, S.B., Fortaleza, C.M.C.B., Miranda, J.R. & Pina, D.R. (2018) Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus. PLoS ONE 13(1): e0190770. doi:10.1371/journal.pone.0190770
[2] Knisely, A., Holmes, T., Barham, H., Sacks, R. & Harvey, R. (2016) Isolated sphenoid sinus opacification: A systematic review. American Journal of Otolaryngology - Head and Neck Medicine and Surgery.
[3] Stokovic, N., Trkulja, V., Dumic-Cule, V., Cukovic-Bagic, I., Vukicevic, T.L.S. & Grgurevic, L. (2015) Sphenoid sinus types, dimensions and relationship with surrounding structures. Annals of Anatomy. http://dx.doi.org/10.1016/j.aanat.2015.02.013
[4] Burke, M.C., Taheri, R., Bhojwani, R. & Singh, A. (2015) A practical approach to the imaging interpretation of sphenoid sinus pathology. Current Problems in Diagnostic Radiology. http://dx.doi.org/10.1067/j.cpradiol.2015.02.002
[5] Hacl, A., Costa, A.L.F., Oliveira, J.M., Tucunduva, M.J., Girondi, J.R., Raphaelli, A.C. & Scocate, N. (2017) Three-dimensional volumetric analysis of frontal sinus using medical software. Journal of Forensic Radiology and Imaging. http://dx.doi.org/10.1016/j.jofri.2017.08.004
[6] Wu, H.B., Zhu, L., Yuan, H.S. & Hou, C. (2011) Surgical measurement of the sphenoid sinus for the Chinese in Asia based on CT using sagittal reconstruction images. Eur Arch Otorhinolaryngol 268: 241-246.
[7] Guldner, C., Pistorius, S., Diogo, I., Bien, S., Sesterhenn, A. & Werner, J. (2012) Analysis of pneumatization and neurovascular structures of the sphenoid sinus using cone-beam tomography (CBT). Acta Radiol. 53(2): 214-219.
[8] Auffret, M., Garetier, M., Diallo, I., Aho, S. & Ben Salem, D. (2016) Contribution of the computed tomography of the anatomical aspects of the sphenoid sinuses to forensic identification. J. Neuroradiol. 43(6): 404-414.
[9] Uthman, A.T., AL-Rawi, N.H., Al-Naaimi, A.S., Tawfeeq, A.S. & Suhail, E.H. (2009) Evaluation of frontal sinus and skull measurements using spiral CT scanning: An aid in unknown person identification. Forensic Science International 197 (2010): 124.e1-124.e7.
[10] Kawari, Y., Fukushima, K., Ogawa, T., Nishizaki, K., Gunduz, M., Fujimoto, M. & Masuda, Y. (1999) Volume quantification of healthy paranasal cavity by three-dimensional CT imaging. Acta Otolaryngol (Stockh) Suppl 540: 45-49.
[11] Ahirwar, A. (2013) Study of techniques used for medical image segmentation and computation of statistical test for region classification of brain MRI. I.J. Information Technology and Computer Science 5: 44-53.
[12] Shen, D., Wu, G. & Suk, H. (2017) Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 19: 221-248.
[13] Srinivas, S., Sarvadevabhatla, R.K., Mopuri, K.R., Prabhu, N., Kruthiventi, S.S.S. & Babu, R.V. (2017) Introduction to deep convolutional neural nets for computer vision. In Deep Learning for Medical Image Analysis, pp. 25-52.
[14] Kamnitsas, K., Ledig, C., Newcombe, V.F.J., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D. & Glocker, B. (2015) Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Proceedings of the ISLES Challenge, MICCAI 2015.
[15] Jani, A., Savsani, V. & Pandya, A. (2013) 3D affine registration using teaching-learning based optimization. 3D Research Center, Kwangwoon University and Springer.
[16] Kamnitsas, K., Ferrante, E., Parisot, S., Ledig, C., Nori, A., Criminisi, A., Rueckert, D. & Glocker, B. (2016) DeepMedic for brain tumor segmentation. Biomedical Image Analysis Group.
[17] https://github.com/Kamnitsask/deepmedic/blob/master/LICENSE.txt
[18] Simonyan, K. & Zisserman, A. (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[19] Prionas, N.D., Ray, S. & Boone, J.M. (2011) Volume assessment accuracy in computed tomography: a phantom study. J Appl Clin Med Phys 11(2): 3037.
SJgc5z9af
limited contribution, preliminary validation
1: Strong rejection
This paper addresses the problem of segmenting the sphenoid sinus in CT scans. This is an application paper, where a standard CNN model is applied to perform patch-based segmentation, with pre- and post-processing steps. The pre-processing step involves the extraction of a region of interest and a registration step. The post-processing step is composed of mathematical morphology operations. Pros: + relevant problem. Cons: - contribution is limited, validation is preliminary. The main concerns about this paper relate to its contribution and experimental evaluation. *Although it might be the case that CNNs have not previously been applied to segment the sphenoid sinus, the proposed pipeline has been heavily exploited in the medical imaging literature. *The paper is longer than what is suggested by the conference guidelines. Given that a couple of pages detail how to perform a convolution operation, explaining concepts such as receptive fields and feature maps, it does not seem necessary to go beyond 8 pages. *Please review all equations in the paper. The output of a layer seems to sum the kernel values and apply a non-linearity to the sum, prior to the convolution operation. There is no need to have two notations for the kernel (k and W). *The model is trained in a very constrained setting, where data containing fractures and so on has been removed. Why not train the models with all the data you have instead of removing those samples? *Table 1: it is not clear whether you actually have expertly labeled segmentations for the test samples. *Comparison to the state-of-the-art literature is very limited; the method is only compared to a semi-automatic toolkit, without highlighting the benefits and limitations of each. *Reported results are preliminary and might constitute a good baseline if properly compared to previous approaches. *Why not use fully convolutional networks as an alternative segmentation model? *Why not train the model end to end, including ROI extraction and registration?
2: The reviewer is fairly confident that the evaluation is correct
c77KhoLYSwF
ICLR.cc/2021/Conference
2021
Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks
["Avi Schwarzschild", "Micah Goldblum", "Arjun Gupta", "John P Dickerson", "Tom Goldstein"]
Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, we find that the impressive performance evaluations from data poisoning attacks are, in large part, artifacts of inconsistent experimental design. Moreover, we find that existing poisoning methods have been tested in contrived scenarios, and many fail in more realistic settings. In order to promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks.
["Poisoning", "backdoor", "attack", "benchmark"]
ABSTRACT
Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, we find that the impressive performance evaluations from data poisoning attacks are, in large part, artifacts of inconsistent experimental design. Moreover, we find that existing poisoning methods have been tested in contrived scenarios, and many fail in more realistic settings. In order to promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks.
1 INTRODUCTION
Data poisoning is a security threat to machine learning systems in which an attacker controls the behavior of a system by manipulating its training data. This class of threats is particularly germane to deep learning systems because they require large amounts of data to train and are therefore often trained (or pre-trained) on large datasets scraped from the web. For example, the Open Images and the Amazon Products datasets contain approximately 9 million and 233 million samples, respectively, that are scraped from a wide range of potentially insecure, and in many cases unknown, sources (Kuznetsova et al., 2020; Ni, 2018). At this scale, it is often infeasible to properly vet content. Furthermore, many practitioners create datasets by harvesting system inputs (e.g., emails received, files uploaded) or scraping user-created content (e.g., profiles, text messages, advertisements) without any mechanisms to bar malicious actors from contributing data. The dependence of industrial AI systems on datasets that are not manually inspected has led to fear that corrupted training data could produce faulty models (Jiang et al., 2017). In fact, a recent survey of 28 industry organizations found that these companies are significantly more afraid of data poisoning than other threats from adversarial machine learning (Kumar et al., 2020).
A spectrum of poisoning attacks exists in the literature. Backdoor data poisoning causes a model to misclassify test-time samples that contain a trigger – a visual feature in images or a particular character sequence in the natural language setting (Chen et al., 2017; Dai et al., 2019; Saha et al., 2019; Turner et al., 2018). For example, one might tamper with training images so that a vision system fails to identify any person wearing a shirt with the trigger symbol printed on it. In this threat model, the attacker modifies data at both train time (by placing poisons) and at inference time (by inserting the trigger). Triggerless poisoning attacks, on the other hand, do not require modification at inference time (Biggio et al., 2012; Huang et al., 2020; Muñoz-González et al., 2017; Shafahi et al., 2018; Zhu et al., 2019; Aghakhani et al., 2020b; Geiping et al., 2020). A variety of innovative backdoor and triggerless poisoning attacks – and defenses – have emerged in recent years, but inconsistent and perfunctory experimentation has rendered performance evaluations and comparisons misleading.
In this paper, we develop a framework for benchmarking and evaluating a wide range of poison attacks on image classifiers. Specifically, we provide a way to compare attack strategies and shed light on the differences between them. Our goal is to address the following weaknesses in the current literature.
First, we observe that the reported success of poisoning attacks in the literature is often dependent on specific (and sometimes unrealistic) choices of network architecture and training protocol, making it difficult to assess the viability of attacks in real-world scenarios. Second, we find that the percentage of training data that an attacker can modify, the standard budget measure in the poisoning literature, is not a useful metric for comparisons. The flaw in this metric invalidates comparisons because even with a fixed percentage of the dataset poisoned, the success rate of an attack can still be strongly dependent on the dataset size, which is not standardized across experiments to date. Third, we find that some attacks that claim to be "clean label," such that poisoned data still appears natural and properly labeled upon human inspection, are not.
Our proposed benchmarks measure the effectiveness of attacks in standardized scenarios using modern network architectures. We benchmark from-scratch training scenarios and also white-box and black-box transfer learning settings. Also, we constrain poisoned images to be clean in the sense of small perturbations. Furthermore, our benchmarks are publicly available as a proving ground for existing and future data poisoning attacks.
The data poisoning literature contains attacks in a variety of settings including image classification, facial recognition, and text classification (Shafahi et al., 2018; Chen et al., 2017; Dai et al., 2019). Attacks on the fairness of models, on speech recognition, and on recommendation engines have also been developed (Solans et al., 2020; Aghakhani et al., 2020a; Li et al., 2016; Fang et al., 2018; Hu et al., 2019; Fang et al., 2020). While we acknowledge the merits of studying poisoning in a range of modalities, our benchmark focuses on image classification since it is by far the most common setting in the existing literature.
2 A SYNOPSIS OF TRIGGERLESS AND BACKDOOR DATA POISONING
Early poisoning attacks targeted support vector machines and simple neural networks (Biggio et al., 2012; Koh & Liang, 2017). As poisoning gained popularity, various strategies for triggerless attacks on deep architectures emerged (Muñoz-González et al., 2017; Shafahi et al., 2018; Zhu et al., 2019; Huang et al., 2020; Aghakhani et al., 2020b; Geiping et al., 2020). The early backdoor attacks contained triggers in the poisoned data and in some cases changed the label, and thus were not clean-label (Chen et al., 2017; Gu et al., 2017; Liu et al., 2017). However, methods that produce poison examples which do not visibly contain a trigger have also shown positive results (Chen et al., 2017; Turner et al., 2018; Saha et al., 2019). Poisoning attacks have also precipitated several defense strategies, but sanitization-based defenses may be overwhelmed by some attacks (Koh et al., 2018; Liu et al., 2018; Chacon et al., 2019; Peri et al., 2019).
We focus on attacks that achieve targeted misclassification. That is, under both the triggerless and backdoor threat models, the end goal of an attacker is to cause a target sample to be misclassified as another specified class. Other objectives, such as decreasing overall test accuracy, have been studied, but less work exists on this topic with respect to neural networks (Xiao et al., 2015; Liu et al., 2019). In both triggerless and backdoor data poisoning, the clean images, called base images, that are modified by an attacker come from a single class, the base class.
This class is often chosen to be precisely the same class into which the attacker wants the target image or class to be misclassified. There are two major differences between the triggerless and backdoor threat models in the literature. First and foremost, backdoor attacks alter their targets during inference by adding a trigger. In the works we consider, triggers take the form of small patches added to an image (Turner et al., 2018; Saha et al., 2019). Second, these works on backdoor attacks cause a victim to misclassify any image containing the trigger rather than a particular sample. Triggerless attacks instead cause the victim to misclassify an individual image called the target image (Shafahi et al., 2018; Zhu et al., 2019; Aghakhani et al., 2020b; Geiping et al., 2020). This second distinction between the two threat models is not essential; for example, triggerless attacks could be designed to cause the victim to misclassify a collection of images rather than a single target. To be consistent with the literature at large, we focus on triggerless attacks that target individual samples and backdoor attacks that target whole classes of images.
We focus on the clean-label backdoor attack and the hidden trigger backdoor attack, where poisons are crafted with optimization procedures and do not contain noticeable patches (Saha et al., 2019; Turner et al., 2018). For triggerless attacks, we focus on the feature collision and convex polytope methods, the most highly cited attacks of the last two years that have appeared at prominent ML conferences (Shafahi et al., 2018; Zhu et al., 2019). We include the recent triggerless methods Bullseye Polytope (BP) and Witches' Brew (WiB) in the section where we present metrics on our benchmark problems (Aghakhani et al., 2020b; Geiping et al., 2020). The following section details the attacks that serve as the subjects of our experiments.
Technical details. Before formally describing the various poisoning methods, we begin with notation. Let X_c be the set of all clean training data, and let X_p = {x_p^{(j)}}_{j=1}^J denote the set of J poison examples with corresponding clean base images {x_b^{(j)}}_{j=1}^J. Let x_t be the target image. Labels are denoted by y and Y for a single image and a set of images, respectively, and are indexed to match the data. We use f to denote a feature extractor network.
Feature Collision (FC). Poisons in this attack are crafted by adding small perturbations to base images so that their feature representations lie extremely close to that of the target (Shafahi et al., 2018). Formally, each poison is the solution to the following optimization problem:
x_p^{(j)} = argmin_x ||f(x) − f(x_t)||_2^2 + β ||x − x_b^{(j)}||_2^2.   (1)
When we enforce ℓ∞-norm constraints, we drop the last term in Equation (1) and instead enforce ||x_p^{(j)} − x_b^{(j)}||_∞ ≤ ε, ∀j, by projecting onto the ℓ∞ ball after each iteration.
Convex Polytope (CP). This attack crafts poisons such that the target's feature representation is a convex combination of the poisons' feature representations, by solving the following optimization problem (Zhu et al., 2019):
X_p = argmin_{{c_j}, {x^{(j)}}} (1/2) ||f(x_t) − Σ_{j=1}^J c_j f(x^{(j)})||_2^2 / ||f(x_t)||_2^2
subject to Σ_{j=1}^J c_j = 1 and c_j ≥ 0 ∀j, and ||x^{(j)} − x_b^{(j)}||_∞ ≤ ε ∀j.   (2)
Clean Label Backdoor (CLBD). This backdoor attack begins by computing an adversarial perturbation to each base image (Turner et al., 2018). Formally,
x̂_p^{(j)} = x_b^{(j)} + argmax_{||δ||_∞ ≤ ε} L(x_b^{(j)} + δ, y^{(j)}; θ),   (3)
where L denotes the cross-entropy loss. Then, a patch is added to each image in {x̂_p^{(j)}} to generate the final poisons {x_p^{(j)}}. The patched image is subject to an ℓ∞-norm constraint.
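To make the optimization-based crafting concrete, here is a minimal PyTorch sketch of producing a single poison under the ℓ∞ constraint of the FC attack; the feature extractor f, the step size, and the iteration count are illustrative assumptions, and the same projected loop applies, with a patched target, to the HTBD objective of Eq. (4) below.

# Sketch: craft one feature-collision poison by gradient descent on Eq. (1)
# with the penalty dropped and an epsilon-ball (ell-infinity) projection.
# x_base and x_target are assumed to be float image tensors in [0, 1].
import torch

def craft_fc_poison(f, x_base, x_target, eps=8/255, lr=0.01, steps=250):
    with torch.no_grad():
        target_feats = f(x_target)
    x = x_base.clone().requires_grad_(True)
    for _ in range(steps):
        loss = (f(x) - target_feats).pow(2).sum()   # feature-collision term
        loss.backward()
        with torch.no_grad():
            x -= lr * x.grad                        # illustrative step size
            # Project back onto the ell-infinity ball around the base image
            # and onto the valid pixel range.
            x.copy_(torch.clamp(x, x_base - eps, x_base + eps).clamp(0, 1))
        x.grad.zero_()
    return x.detach()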
Hidden Trigger Backdoor (HTBD). A backdoor analogue of the FC attack, where poisons are crafted to remain close to the base images but to collide in feature space with a patched image from the target class (Saha et al., 2019). Let $\tilde{x}_t^{(j)}$ denote a patched training image from the target class (this image is not clean); we then solve the following optimization problem to find poison images:

$$x_p^{(j)} = \arg\min_x \|f(x) - f(\tilde{x}_t^{(j)})\|_2^2 \quad \text{s.t.} \quad \|x - x_b^{(j)}\|_\infty \le \varepsilon. \quad (4)$$

3 WHY DO WE NEED BENCHMARKS?

Backdoor and triggerless attacks have been tested in a wide range of disparate settings. From model architecture to target/base class pairs, the literature is inconsistent. Experiments are also lacking in the breadth of trials performed, sometimes using only one model initialization for all experiments, or testing against one single target image. We find that inconsistencies in experimental settings have a large impact on performance evaluations and have resulted in comparisons that are difficult to interpret. For example, in CP the authors compare their $\ell_\infty$-constrained attack to FC, which is crafted with an $\ell_2$ penalty. In other words, these methods have never been compared on a level playing field.

To study these attacks thoroughly and rigorously, we employ sampling techniques that allow us to draw conclusions about the attacks while taking into account variance across model initializations and class choices. For a single trial, we sample one of ten checkpoints of a given architecture, then randomly select the target image, base class, and base images. In Section 4, all figures are averages over 100 trials with our sampling techniques.
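A seeded per-trial sampling routine of the kind just described might look like the sketch below, which guarantees that every attack is evaluated against the identical sequence of checkpoints, targets, and bases. The class counts and the number of checkpoints here are placeholders, not the benchmark's exact implementation.

```python
import random

def sample_trial(trial_idx, num_classes=10, num_checkpoints=10,
                 test_per_class=1000, train_per_class=5000, num_poisons=25):
    """Seeded sampling for one evaluation trial: a pre-trained checkpoint,
    a target image, a base class, and the base images to perturb."""
    rng = random.Random(trial_idx)                   # same index -> same trial
    checkpoint = rng.randrange(num_checkpoints)      # one of ten checkpoints
    target_class = rng.randrange(num_classes)
    target_image = rng.randrange(test_per_class)     # index into the test set
    # the base class is drawn from the remaining classes
    base_class = rng.choice([c for c in range(num_classes) if c != target_class])
    base_images = rng.sample(range(train_per_class), num_poisons)
    return checkpoint, target_class, target_image, base_class, base_images
```

Keying the random state to the trial index removes base/target selection as a source of variation when two attacks are compared on the same trial.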
Disparate evaluation settings from the literature. To understand how differences in evaluation settings impact results, we re-create the various original performance tests for each of the methods described above within our common evaluation framework. We try to be as faithful as possible to the original works; however, we employ our own sampling techniques, described above, to increase statistical significance. Then, we tweak these experiments one component at a time, revealing the fragility of each method to changes in experimental design.

Establishing baselines. For the FC setting, following one of the main setups in the original paper, we craft 50 poisons on an AlexNet variant (for details on the specific architecture, see (Krizhevsky et al., 2012; Shafahi et al., 2018)) pre-trained on CIFAR-10 (Krizhevsky et al., 2009), and we use the $\ell_2$-norm penalty version of the attack. We then evaluate poisons on the same AlexNet, using the same CIFAR-10 data to train for 20 more epochs to "fine tune" the model end to end. Note that this is not really transfer learning in the usual sense, as the fine tuning utilizes the same dataset as pre-training, except with poisons inserted (Shafahi et al., 2018).

The CP setting involves crafting 5 poisons using a ResNet-18 model pre-trained on CIFAR-10, and then fine tuning the linear layer of the same ResNet-18 model with a subset of the CIFAR-10 training set comprising 50 images per class (including the poisons) (He et al., 2016). This setup is also not representative of typical transfer learning, as the fine-tuning data is sub-sampled from the pre-training dataset. In this baseline we set $\varepsilon = 25.5/255$, matching the original work (Zhu et al., 2019).

One of the original evaluation settings for CLBD uses 500 poisons. We craft these on an adversarially trained ResNet-18 and modify them with a 3x3 patch in the lower right-hand corner. The perturbations are bounded with $\varepsilon = 16/255$. We then train a narrow ResNet model from scratch on the CIFAR-10 training set (including the poisons) (Turner et al., 2018).

For the HTBD setting, we generate 800 poisons with another modified AlexNet (for architectural details, see Appendix A.13), which is pre-trained on the CIFAR-10 dataset. Then, an 8x8 trigger patch is added to the lower right corner of the target image, and the perturbations are bounded with $\varepsilon = 16/255$. We use the entire CIFAR-10 dataset (including the poisons) to fine tune the last fully connected layer of the same model used for crafting. Once again, the fine-tuning data in this setup is not disjoint from the pre-training data (Saha et al., 2019). See the left-most bars of Figure 3 for all baseline results.

Inconsistencies in previous work. The baselines defined above do not serve as a fair comparison across methods, since the original works to which we try to stay faithful are inconsistent. Table 1 summarizes the experimental settings in the original works. If a particular component (column header) was considered anywhere in the original paper's experiments, we mark a check, leaving an ex when something was not present in any experiments. Table 1 shows the presence of data normalization and augmentation as well as optimizers (SGD or ADAM). It also shows which learning setups the original works considered: frozen feature extractor (FFE), end-to-end fine tuning (E2E), or from-scratch training (FST), as well as which threat levels were tested: white, grey, or black box (WB, GB, BB). We also consider whether or not an ensembled attack was used. The $\varepsilon$ values reported are out of 255 and represent the smallest bound considered in the papers; note that FC uses an $\ell_2$ penalty, so no bound is enforced despite the attack being called "clean-label" in the original work. We conclude from Table 1 that experimental design in this field is extremely inconsistent.

Table 1: Various experimental designs used in data poisoning research. Columns: data normalization (Norm.), augmentation (Aug.), SGD, transfer-learning setting (FFE / E2E / FST), threat model (WB / GB / BB), ensembles, and $\varepsilon$. [The per-attack check marks are not recoverable from the extracted text; the $\varepsilon$ column reads - for FC, 25.5 for CP, and 8 for both CLBD and HTBD.]

4 JUST HOW TOXIC ARE POISONING METHODS REALLY?

In this section, we look at weaknesses and inconsistencies in existing experimental setups, and at how these lead to potentially misleading comparisons between methods. We use our testing framework to put triggerless and backdoor attacks to the test under a variety of circumstances, and get a tighter grip on just how reliable existing poisoning methods are.

Training without SGD or data augmentation. Both FC and CP attacks have been tested with victim models pre-trained with the ADAM optimizer. However, SGD with momentum has become the dominant optimizer for training CNNs (Wilson et al., 2017). Interestingly, we find that models trained with SGD are significantly harder to poison, rendering these attacks ineffective in practical settings. Moreover, none of the baselines include simple data augmentation such as horizontal flips and random crops. We find that data augmentation, standard in the deep learning literature, also greatly reduces the effectiveness of all of the attacks. For example, FC and CP success rates plummet in this setting to 51.00% and 19.09%, respectively. Complete results, including hyperparameters, success rates, and confidence intervals, are reported in Appendix A.3. We conclude that these attacks may be significantly less effective against a real practitioner than originally advertised.
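As a point of reference, a victim pipeline with the augmentations and optimizer discussed above might be set up as follows; the normalization statistics, momentum, and weight decay are standard placeholder values rather than settings taken from any of the original attack papers.

```python
import torch
import torchvision
import torchvision.transforms as T

# Random crops and horizontal flips -- the simple augmentations shown
# above to sharply reduce attack success -- plus per-channel normalization.
train_tf = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
trainset = torchvision.datasets.CIFAR10(root="./data", train=True,
                                        download=True, transform=train_tf)
loader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)
# SGD with momentum, rather than ADAM, mirroring dominant practice.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
```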
Victim architecture matters. Two attacks, FC and HTBD, are originally tested on AlexNet variants, and CLBD is tested with a narrow ResNet. These models are not widely used, and they are unlikely to be employed by a realistic victim. We observe that many attacks are significantly less effective against ResNet-18 victims. See Figure 3, where, for example, the success rate of HTBD on these victims is as low as 18%. See Appendix A.4 for a table of numerical results. These ablation studies are conducted in the baseline settings but with a ResNet-18 victim architecture. These ResNet experiments serve as an example of how performance can be highly dependent on the selection of architecture.

"Clean" attacks are sometimes dirty. Each of the original works we consider purports to produce "clean-label" poison examples that look like natural images. However, these methods often produce easily visible image artifacts and distortions due to the large values of $\varepsilon$ used. See Figure 1 for examples generated by two of the methods, where FC perturbs a clean "cat" into an unrecognizable poison (left), and CP generates an extremely noisy poison from a base in the "airplane" class (right). These images are not surprising, since the FC method is tested with an $\ell_2$ penalty in the original work, and CP is $\ell_\infty$ constrained with a large radius of $25.5/255$.

Figure 1: Bases (top) and poisons (bottom).

In many contexts, avoiding detection by automated systems may be more important than maintaining perceptual similarity. In our work, we focus on perceptual similarity as defined by the $\ell_\infty$ constraint, as this reflects the explicit goal of most of the attacks we examine, and it is, in general, a much more common area of study. Adaptive attacks that avoid defenses or detection are relatively unexplored and an interesting area for future research (Koh et al., 2018). Borrowing from common practice in the evasion attack and defense literature, we test each method with an $\ell_\infty$ constraint of radius $8/255$ and find that the effectiveness of every attack is significantly diminished (Madry et al., 2017; Dong et al., 2020). Thus, a standardized constraint on poison examples is necessary for fair comparison of attacks, and these previous attacks are not nearly as threatening under constraints that enforce clean poisons. See Figure 3, and see Appendix A.5 for a table of numerical results.

Proper transfer learning is less vulnerable. Of the attacks we study here, FC, CP, and HTBD were originally proposed in settings referred to as "transfer learning." Each particular setup varies, but none are true transfer learning, since the pre-training datasets and fine-tuning datasets overlap. For example, FC uses the entire CIFAR-10 training dataset for both pre-training and fine tuning. Thus, their threat model entails allowing an adversary to modify the training dataset, but only for the last few epochs. Furthermore, these attacks use inconsistently sized fine-tuning datasets.

To simulate transfer learning, we test each attack with ResNet-18 feature extractors pre-trained on CIFAR-100. We fine tune with CIFAR-10 data in both cases, showing how these methods actually perform in the setting of real transfer learning, i.e., where the pre-training data and fine-tuning data are not from the same datasets and do not contain the same classes.
In Figure 3, every attack aside from CP shows worse performance when transfer learning is done on data that is disjoint from the pre-training dataset. The attacks designed for transfer learning may not work as advertised in more realistic transfer learning settings. See Appendix A.6.

Performance is not invariant to dataset size. Existing work on data poisoning measures an attacker's budget in terms of what percentage of the training data they may modify. This raises the question of whether percentage alone is enough to characterize the budget. Does the actual size of the training set matter? We find that the number of images in the training set has a large impact on attack performance, and that the performance curves for FC and CP intersect. When we hold the percentage poisoned constant at 1%, but change the number of poisons and the size of the training set accordingly, we see no consistent trends in how the attacks are affected. Figure 2 shows the success of each attack as a function of dataset size (the shaded region is one standard error). This observation suggests that one cannot compare attacks tested on different sized datasets by only fixing the percent of the dataset poisoned. See Appendix A.7.

Figure 2: Scaling the dataset size while fixing the poison budget. (Success rate (%) of FC, CP, CLBD, and HTBD as a function of training-set size, using subsets of CIFAR-10 of up to 50,000 images.)

Black-box performance is low. Whether considering transfer learning or training from scratch, testing these methods against a black-box victim is surely one of the most realistic tests of the threat they pose. Since FC, CP, and HTBD do not consider the black-box scenario in their original works, we take the poisons crafted using the baseline methods and evaluate them on models with architectures different from those used for crafting. The attacks show much lower performance in the black-box settings than in the baselines; in particular, FC, CP, and HTBD all have success rates lower than 20%. See Figure 3, and see Appendix A.8 for more details.

Small sample sizes and non-random targets. On top of inconsistencies in experimental setups, existing work on data poisoning often tests only specific target/base class pairs. For example, FC largely uses "frog" as the base class and "airplane" as the target class. CP, on the other hand, only uses "ship" and "frog" as the base and target classes, respectively. Neither work contains experiments where each trial consists of a randomly selected target/base class pair. We find that success rates are highly class-pair dependent and change dramatically under random class-pair sampling. Thus, random sampling is critical for performance evaluation. See Appendix A.9 for a comparison of the specific class pairs from these original works with randomly sampled class pairs.

In addition to inconsistent class pairs, data poisoning papers often evaluate performance with very few trials, since the methods are computationally expensive. In their original works, FC and CP use 30 and 50 trials, respectively, for each experiment, and these experiments are performed on the same exact pre-trained models each time. And while HTBD does test randomized pairs, they only show results for ten trials on CIFAR-10. These small sample sizes yield wide error bars in performance evaluation. We choose to run 100 trials per experiment in our own work. While we acknowledge that a larger number would be even more compelling, 100 is a compromise between thorough experimentation and practicality, since each trial requires re-training a classifier.
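The width of those error bars is easy to quantify. The snippet below is a generic normal-approximation confidence interval for a success rate estimated from Bernoulli trials, not anything specified by the paper, and it illustrates why 30 to 50 trials leave substantial uncertainty.

```python
import math

def success_rate_ci(successes, trials, z=1.96):
    """Normal-approximation 95% confidence interval for an attack's
    success rate estimated from independent Bernoulli trials."""
    p = successes / trials
    se = math.sqrt(p * (1 - p) / trials)        # standard error of the mean
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# A 50% success rate looks very different at 30 vs. 100 trials:
print(success_rate_ci(15, 30))    # roughly (0.50, 0.32, 0.68)
print(success_rate_ci(50, 100))   # roughly (0.50, 0.40, 0.60)
```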
Attacks are highly specific to the target image. Triggerless attacks have been proposed as a threat against systems deployed in the physical world. For example, blue Toyota sedans may go undetected by a poisoned system so that an attacker may fly under the radar. However, triggerless attacks are generally crafted against a specific target image, while a physical object may appear differently under different real-world circumstances. We upper-bound the robustness of poison attacks by applying simple horizontal flips to the target images, and we find that poisoning methods are significantly less successful when the exact target image is unknown. For example, FC is only successful 7% of the time when simply flipping the target image. See Figure 3 and Appendix A.10.

Backdoor success depends on patch size. Backdoor attacks add a patch to target images to trigger misclassification. In real-world scenarios, a small patch may be critical to avoid being caught. The original HTBD attack uses an 8x8 patch, while the CLBD attack originally uses a 3x3 patch (Saha et al., 2019; Turner et al., 2018). In order to understand the impact on attack performance, we test different patch sizes. We find a strong correlation between patch size and attack performance; see Appendix A.12. We conclude that backdoor attacks must be compared using identical patch sizes.

Figure 3: We show the fragility of poisoning methods to experimental design. This figure depicts baselines along with the results of ablation studies. Different methods respond differently to these testing scenarios, supporting the need for consistent and thorough testing. Horizontal lines denote performance on the baselines described in Section 3, and bars represent the results of changing a specific feature in an individual method's baseline. (Success rate (%) of Feature Collision, Convex Polytope, Clean Label Backdoor, and Hidden Trigger Backdoor under the scenarios Baseline, Data Aug., SGD, ResNet-18, Transfer, epsilon = 8/255, Black-box, and Flip.) Tables of these results with confidence intervals can be found in the appendices.

5 UNIFIED BENCHMARKS FOR DATA POISONING ATTACKS

Our Benchmark. We propose new benchmarks for measuring the efficacy of both backdoor and triggerless data poisoning attacks. We standardize the datasets and problem settings for our benchmarks below (code is available at a link suppressed for anonymity). Target and base images are chosen from the testing and training sets, respectively, according to a seeded/reproducible random assignment. Poison examples crafted from the bases must remain within the $\ell_\infty$-ball of radius $8/255$ centered at the corresponding base images. Seeding the random assignment allows us to test against a significant number of different random choices of base/target, while always using the same choices for each method, thus removing a source of variation from the results. We consider two different training modes (a sketch of the first mode follows the list):

I. Transfer Learning: A feature extractor pre-trained on clean data is frozen and used while training a linear classification head on a disjoint set of training data that contains poisons.

II. Training From Scratch: A network is trained from random initialization on data containing poison examples in the training set.
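As a sketch of mode I, the loop below freezes a pre-trained feature extractor, trains a linear head on the poisoned fine-tuning set, and then checks whether the target is classified into the attacker's intended class. The feature dimension, optimizer settings, and epoch count are illustrative assumptions, not the benchmark's exact implementation.

```python
import torch
import torch.nn as nn

def evaluate_transfer(feature_extractor, poisoned_loader, x_target,
                      intended_class, feat_dim=512, num_classes=10, epochs=40):
    """Mode I sketch: frozen feature extractor plus a trainable linear head."""
    feature_extractor.eval()
    for p in feature_extractor.parameters():
        p.requires_grad_(False)                     # freeze the backbone
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in poisoned_loader:                # fine-tuning data incl. poisons
            opt.zero_grad()
            with torch.no_grad():
                feats = feature_extractor(x)
            loss_fn(head(feats), y).backward()
            opt.step()
    with torch.no_grad():
        pred = head(feature_extractor(x_target)).argmax(dim=1).item()
    return pred == intended_class                   # did the attack succeed?
```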
To further standardize these tests, we provide pre-trained architectures to test against. The parameters of one model are given to the attacker. We then evaluate the strength of the attacks in white-box and black-box scenarios. For white-box tests in the transfer learning benchmarks, we use the same frozen feature extractor that is given to the attacker for evaluation, while in the black-box setting we craft poisons using the known model but test on the two models the attacker has not seen, averaging the results. When training from scratch, models are trained from a random initialization on the poisoned dataset. We report averages over 100 independent trials for each test. Backdoor attacks can use any 5x5 patch. Note that the number of attacker-victim network pairs is kept small in our benchmark because each of the 100 trials requires re-training (in some cases from scratch), and we want to keep the benchmark within reach for researchers with modest computing resources.

CIFAR-10 benchmarks. Models are pre-trained on CIFAR-100, and the fine-tuning data is a subset of CIFAR-10. We choose this subset to be the first 250 images from each class, allowing for 25 poison examples. This amount of data motivates the use of transfer learning, since training from scratch on only 2,500 images yields poor generalization. See Appendix A.13 for examples. We allow 500 poisons when training from scratch; see Appendix A.15 for a case study in which we investigate how many poisons an attacker may be able to place in a dataset compiled by querying the internet for images. We allow the attacker access to a ResNet-18, and we run black-box tests on a VGG11 (Simonyan & Zisserman, 2014) and a MobileNetV2 (Sandler et al., 2018); when training from scratch, we use one of each model and report the average.

TinyImageNet benchmarks. Additionally, we pre-train VGG16, ResNet-34, and MobileNetV2 models on the first 100 classes of the TinyImageNet dataset (Le & Yang, 2015). We fine tune these models on the second half of the dataset, allowing for 250 poison images. As above, the attacker has access to the particular VGG16 model, and black-box tests are done on the other two models. In the from-scratch setting, we train a VGG16 model on the entire TinyImageNet dataset with 250 images poisoned. (The TinyImageNet from-scratch benchmark is done with 25 independent trials to keep this problem within reach for researchers with modest resources.)

Benchmark hyperparameters. We pre-train models on CIFAR-100 with SGD for 400 epochs, starting with a learning rate of 0.1 that decays by a factor of 10 after epochs 200, 300, and 350. Models pre-trained on the first half of TinyImageNet are trained with SGD for 200 epochs, starting with a learning rate of 0.1 that decays by a factor of 10 after epochs 100 and 150. In both cases, we apply per-channel data normalization, random crops, and horizontal flips, and we use batches of 128 images (augmentation is also applied to the poisoned images). We then fine tune with poisoned data for 40 epochs with a learning rate that starts at 0.01 and drops to 0.001 after the 30th epoch (this applies to the transfer learning settings).

When training from scratch on CIFAR-10, we include the 500 perturbed poisons in the standard training set. We use SGD and train for 200 epochs with batches of 128 images and an initial learning rate of 0.1 that decays by a factor of 10 after epochs 100 and 150. Here too, we use data normalization and augmentation as described above. When training from scratch on TinyImageNet, we allow for 250 poisoned images. All other hyperparameters are identical.
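The stated schedules map directly onto a standard step-decay training loop. The sketch below encodes the CIFAR-100 pre-training recipe (400 epochs, learning rate 0.1 decayed tenfold at epochs 200, 300, and 350); the architecture choice, momentum, and weight decay are placeholders not specified above.

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=100)   # stand-in architecture
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# Decay the learning rate by a factor of 10 after epochs 200, 300, and 350.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200, 300, 350], gamma=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

def pretrain(train_loader, epochs=400):
    """CIFAR-100 pre-training loop following the schedule stated above."""
    model.train()
    for _ in range(epochs):
        for x, y in train_loader:          # batches of 128 augmented images
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        scheduler.step()                    # one scheduler step per epoch
```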
Our evaluations of six different attacks are shown in Table 2. These attacks are not easily ranked, as the strongest attacks in some settings are not the strongest in others. Witches' Brew (WiB) is not evaluated in the transfer learning settings, since it is not considered in the original work (Geiping et al., 2020). See Appendix A.16 for tables with confidence intervals. We find that by using disjoint and standardized datasets for transfer learning, and common training practices like data normalization and scheduled learning rate decay, we overcome the deficits in previous work. Our benchmarks can provide useful evaluations of data poisoning methods and meaningful comparisons between them.

Table 2: Benchmark success rates (%); the best result in each column is marked with *.

                      CIFAR-10                          TinyImageNet
Attack   Transfer WB  Transfer BB  From Scratch   Transfer WB  Transfer BB  From Scratch
FC           22.0         7.0          1.33           49.0         2.0          4.0
CP           33.0         7.0          0.67           14.0         1.0          0.0
BP           85.0*        8.5          2.33          100.0*       10.5*        44.0*
WiB           -            -          26.0*            -            -          32.0
CLBD          5.0         6.5          1.00            3.0         1.0          0.0
HTBD         10.0         9.5*         2.67            3.0         0.5          0.0

6 CONCLUSION

The threat of data poisoning is at the forefront of fears around emerging ML systems (Siva Kumar et al., 2020). While many of the methods claiming to pose such a threat do not do so in practice, some of the recent methods are cause for practitioner concern. With real threats emerging, there is a need for fair comparison. The diversity of attacks, and in particular the difficulty of ordering them by efficacy, calls for a diverse set of benchmarks. With those we present here, practitioners and researchers can compare attacks on a level playing field and gain an understanding of how existing methods match up with one another and where they might fail.

Since the future advancement of these methods is inevitable, our benchmarks will also serve the data poisoning community as a standardized test problem on which to evaluate existing and future attack methodologies. As even stronger attacks emerge, trepidation on the part of practitioners will be matched by the potential harm of poisoning attacks. We are arming the community with the high-quality metrics this evolving situation calls for.
OvP4cdmNanw
Review for Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks
4: Ok but not good enough - rejection
Reject

This paper consists of two main parts, essentially inspired by the argumentation of the Baconian method. In particular, the first part criticizes current work in the area of data poisoning attacks (pars destruens), while the second part tries to overcome the limits described in the first half of the paper (pars construens).

In their pars destruens, the authors heavily criticize the lack of standardized evaluations across different papers, claiming that "inconsistent and perfunctory experimentation has rendered performance evaluations and comparisons misleading". The authors then continue by highlighting the inconsistencies observed across different papers, summarizing them in Table 1.

First, I think that this criticism is not fully justified. None of the considered papers had the goal of proposing a common benchmark evaluation. They were all proposing different backdoor attacks, under different scenarios. Hence, it is unfair to claim that their evaluations are inconsistent. Of course they are. Every paper considers a more or less different experimental setup according to its hypotheses, to validate or reject them. And this is true also for many other research topics and areas. I believe that the authors should revise the presentation of their paper, acknowledging that a benchmark methodology is lacking and required, but without blaming others for not developing it. Their goal was different.

Second, this work does not consider the whole family of poisoning attacks, but only "targeted" (or better, integrity) ones: this includes backdoor attacks and attacks aimed at misclassifying only specific test samples, but not poisoning attacks that aim to indiscriminately increase the test error (i.e., availability / denial-of-service attacks). In addition, the whole paper only considers deep neural networks, and not other models. This should thus be clarified from the beginning of the paper, and better reflected also in the title. A clearer taxonomy of the whole family of data poisoning attacks should also be reported (e.g., in the form of a table) to help the reader understand the different types of threats in the area of data poisoning. I recommend that the authors refer to these papers, which may help categorize data poisoning attacks (and systematize nomenclature):
- https://arxiv.org/abs/1910.03137
- https://arxiv.org/abs/1712.03141
- https://dl.acm.org/doi/10.1145/2046684.2046692

The main inconsistencies/issues identified by the authors in the evaluation of backdoor attacks are delineated in Sect. 4:
1. Training without SGD or data augmentation;
2. Victim architecture matters;
3. "Clean" attacks are sometimes dirty;
4. Proper transfer learning is less vulnerable;
5. Performance is not invariant to dataset size;
6. Black-box performance is low;
7. Small sample sizes and non-random targets;
8. Attacks are highly specific to the target image;
9. Backdoor success depends on patch size.

These 9 causes, according to the authors of this work, hinder the impact of the 4 poisoning attacks (FC, CP, CLBD, HTBD) considered in this paper. I am quite convinced that in some specific cases, such as the ones identified in Sect. 4, the attacks may fail, and I agree with the arguments posed by the authors in this section. I am only concerned by Issue no. 3, about the need for "clean-label" attacks in realistic settings. This is a common criticism/misconception also related to adversarial examples with imperceptible perturbations. Why should the perturbation be required to be small?
Are there practical scenarios where humans are going to observe the samples and be trained to detect that these samples are "dangerous"? As the authors of this work seem to be quite concerned about the realism of these attacks, this point should be better discussed, as well as the need to fix the perturbation model to ℓ∞ with size 8/255 (or any other single choice). In this respect, note that a more pertinent motivation for requiring small perturbations may be the detectability of the attack by automatic tools (rather than imperceptibility to the human eye); see, e.g., the discussion in https://arxiv.org/abs/1802.07295, and consider expanding the paper to discuss this issue.

A final comment on the pars destruens is that, in the end, it is not well systematized. Beyond the 9 issues delineated above, a clear systematization/taxonomy of the potential issues is lacking. For example, issues can be related to the model (architecture), the training algorithm (SGD/Adam, etc.), the training hyperparameters, etc. Unfortunately, this step is missing from the paper. And, as we will see, this impacts the development of a proper evaluation framework.

After their pars destruens, in Section 5, the authors move to the pars construens of their argumentation, in which they propose a standardized benchmark for the evaluation of clean-label and hidden trigger backdoor attacks. Again, I agree that providing a benchmark to assess poisoning attack effectiveness is a valuable contribution, and the authors have done a good job highlighting the factors which may impact the performance of these attacks. However, I am also concerned about the proposed benchmarking framework. In particular, as the authors have shown, factors such as the type of the target model, the training dataset size, and the size of the perturbation that the attacker can inject into the poisoning samples substantially impact attack effectiveness. However, in the proposed framework, those factors assume a single value that may unreasonably favor one approach over another. When a factor has a substantial impact on the results, it is recommended to analyze the performance as that factor assumes different values, as is usually done for the size of the perturbation that an attacker can add to evasion samples (see, e.g., http://arxiv.org/abs/1902.06705).

More generally, a clear evaluation procedure or methodology is neither discussed nor provided, and this stems from the fact that a clear systematization of the causes of failure is lacking in the previous part of the paper. If we identify, e.g., that the model architecture, the training algorithm, and the perturbation size all affect the attack impact, then a proper evaluation framework for attacks (and defenses) should consider variants of all these factors, which means:
- testing the attack/defense on different models;
- (for each model) testing with different training algorithms;
- (for each model and training algorithm) testing with different perturbation sizes.

This would indeed give a much more detailed understanding of how an attack/defense performs w.r.t. previously proposed or existing ones. To conclude, I mostly liked the idea presented in the paper, but a much higher level of systematization is required to propose a comprehensive framework for the evaluation of poisoning attacks and defenses, as well as a clarification that the scope of the framework is restricted to backdoor/integrity attacks on DNNs.
I would nonetheless encourage the authors to continue working on this benchmark to make it more systematic, fairer, and more inclusive.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks ### Paper Abstract Data poisoning and backdoor attacks manipulate training data in order to cause models to fail during inference. A recent survey of industry practitioners found that data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks. However, we find that the impressive performance evaluations from data poisoning attacks are, in large part, artifacts of inconsistent experimental design. Moreover, we find that existing poisoning methods have been tested in contrived scenarios, and many fail in more realistic settings. In order to promote fair comparison in future work, we develop standardized benchmarks for data poisoning and backdoor attacks. ### Paper Keywords ["Poisoning", "backdoor", "attack", "benchmark"] ### Paper Content ABSTRACTData poisoning and backdoor attacks manipulate training data in order to causemodels to fail during inference. A recent survey of industry practitioners foundthat data poisoning is the number one concern among threats ranging from modelstealing to adversarial attacks. However, we find that the impressive performanceevaluations from data poisoning attacks are, in large part, artifacts of inconsistentexperimental design. Moreover, we find that existing poisoning methods have beentested in contrived scenarios, and many fail in more realistic settings. In order topromote fair comparison in future work, we develop standardized benchmarks fordata poisoning and backdoor attacks.1 I NTRODUCTIONData poisoning is a security threat to machine learning systems in which an attacker controls thebehavior of a system by manipulating its training data. This class of threats is particularly germaneto deep learning systems because they require large amounts of data to train and are therefore oftentrained (or pre-trained) on large datasets scraped from the web. For example, the Open Images andthe Amazon Products datasets contain approximately 9 million and 233 million samples, respectively,that are scraped from a wide range of potentially insecure, and in many cases unknown, sources(Kuznetsova et al., 2020; Ni, 2018). At this scale, it is often infeasible to properly vet content.Furthermore, many practitioners create datasets by harvesting system inputs (e.g., emails received,files uploaded) or scraping user-created content (e.g., profiles, text messages, advertisements) withoutany mechanisms to bar malicious actors from contributing data. The dependence of industrial AIsystems on datasets that are not manually inspected has led to fear that corrupted training data couldproduce faulty models (Jiang et al., 2017). In fact, a recent survey of 28industry organizationsfound that these companies are significantly more afraid of data poisoning than other threats fromadversarial machine learning (Kumar et al., 2020).A spectrum of poisoning attacks exists in the literature. Backdoor data poisoning causes a modelto misclassify test-time samples that contain a trigger – a visual feature in images or a particularcharacter sequence in the natural language setting (Chen et al., 2017; Dai et al., 2019; Saha et al.,2019; Turner et al., 2018). For example, one might tamper with training images so that a vision systemfails to identify any person wearing a shirt with the trigger symbol printed on it. 
In this threat model,the attacker modifies data at both train time (by placing poisons) and at inference time (by insertingthe trigger). Triggerless poisoning attacks, on the other hand, do not require modification at inferencetime (Biggio et al., 2012; Huang et al., 2020; Muñoz-González et al., 2017; Shafahi et al., 2018; Zhuet al., 2019; Aghakhani et al., 2020b; Geiping et al., 2020). A variety of innovative backdoor andtriggerless poisoning attacks – and defenses – have emerged in recent years, but inconsistent andperfunctory experimentation has rendered performance evaluations and comparisons misleading.In this paper, we develop a framework for benchmarking and evaluating a wide range of poisonattacks on image classifiers. Specifically, we provide a way to compare attack strategies and shedlight on the differences between them.Our goal is to address the following weaknesses in the current literature. First, we observe that thereported success of poisoning attacks in the literature is often dependent on specific (and sometimesunrealistic) choices of network architecture and training protocol, making it difficult to assess the1Under review as a conference paper at ICLR 2021viability of attacks in real-world scenarios. Second, we find that the percentage of training datathat an attacker can modify, the standard budget measure in the poisoning literature, is not a usefulmetric for comparisons. The flaw in this metric invalidates comparisons because even with a fixedpercentage of the dataset poisoned, the success rate of an attack can still be strongly dependent on thedataset size, which is not standardized across experiments to date. Third, we find that some attacksthat claim to be “clean label,” such that poisoned data still appears natural and properly labeled uponhuman inspection, are not.Our proposed benchmarks measure the effectiveness of attacks in standardized scenarios usingmodern network architectures. We benchmark from-scratch training scenarios and also white-boxand black-box transfer learning settings. Also, we constrain poisoned images to be clean in the senseof small perturbations. Furthermore, our benchmarks are publicly available as a proving ground forexisting and future data poisoning attacks.The data poisoning literature contains attacks in a variety of settings including image classification,facial recognition, and text classification (Shafahi et al., 2018; Chen et al., 2017; Dai et al., 2019).Attacks on the fairness of models, on on speech recognition, and recommendation engines have alsobeen developed (Solans et al., 2020; Aghakhani et al., 2020a; Li et al., 2016; Fang et al., 2018; Huet al., 2019; Fang et al., 2020). While we acknowledge the merits of studying poisoning in a range ofmodalities, our benchmark focuses on image classification since it is by far the most common settingin the existing literature.2 A SYNOPSIS OF TRIGGERLESS AND BACKDOOR DATA POISONINGEarly poisoning attacks targeted support vector machines and simple neural networks (Biggio et al.,2012; Koh & Liang, 2017). As poisoning gained popularity, various strategies for triggerless attackson deep architectures emerged (Muñoz-González et al., 2017; Shafahi et al., 2018; Zhu et al., 2019;Huang et al., 2020; Aghakhani et al., 2020b; Geiping et al., 2020). The early backdoor attackscontained triggers in the poisoned data and in some cases changed the label, thus were not clean-label(Chen et al., 2017; Gu et al., 2017; Liu et al., 2017). 
However, methods that produce poison exampleswhich don’t visibly contain a trigger also show positive results (Chen et al., 2017; Turner et al.,2018; Saha et al., 2019). Poisoning attacks have also precipitated several defense strategies, butsanitization-based defenses may be overwhelmed by some attacks (Koh et al., 2018; Liu et al., 2018;Chacon et al., 2019; Peri et al., 2019).We focus on attacks that achieve targeted misclassification. That is, under both the triggerless andbackdoor threat models, the end goal of an attacker is to cause a target sample to be misclassifiedas another specified class. Other objectives, such as decreasing overall test accuracy, have beenstudied, but less work exists on this topic with respect to neural networks (Xiao et al., 2015; Liu et al.,2019). In both triggerless and backdoor data poisoning, the clean images, called base images , that aremodified by an attacker come from a single class, the base class . This class is often chosen to beprecisely the same class into which the attacker wants the target image or class to be misclassified.There are two major differences between triggerless and backdoor threat models in the literature.First and foremost, backdoor attacks alter their targets during inference by adding a trigger. In theworks we consider, triggers take the form of small patches added to an image (Turner et al., 2018;Saha et al., 2019). Second, these works on backdoor attacks cause a victim to misclassify any imagecontaining the trigger rather than a particular sample. Triggerless attacks instead cause the victimto misclassify an individual image called the target image (Shafahi et al., 2018; Zhu et al., 2019;Aghakhani et al., 2020b; Geiping et al., 2020). This second distinction between the two threat modelsis not essential; for example, triggerless attacks could be designed to cause the victim to misclassify acollection of images rather than a single target. To be consistent with the literature at large, we focuson triggerless attacks that target individual samples and backdoor attacks that target whole classes ofimages.We focus on the clean-label backdoor attack and the hidden trigger backdoor attack , where poisonsare crafted with optimization procedures and do not contain noticeable patches (Saha et al., 2019;Turner et al., 2018). For triggerless attacks, we focus on the feature collision andconvex polytopemethods, the most highly cited attacks of the last two years that have appeared at prominent MLconferences (Shafahi et al., 2018; Zhu et al., 2019). We include the recent triggerless methods2Under review as a conference paper at ICLR 2021Bullseye Polytope (BP) and Witches’ Brew (WiB) in the section where we present metrics on ourbenchmark problems (Aghakhani et al., 2020b; Geiping et al., 2020). The following section detailsthe attacks that serve as the subjects of our experiments.Technical details Before formally describing various poisoning methods, we begin with notation.LetXcbe the set of all clean training data, and let Xp=fx(j)pgJj=1denote the set of Jpoisonexamples with corresponding clean base image fx(j)bgJj=1. Letxtbe the target image. Labels aredenoted byyandYfor a single image and a set of images, respectively, and are indexed to match thedata. We use fto denote a feature extractor network.Feature Collision (FC) Poisons in this attack are crafted by adding small perturbations to baseimages so that their feature representations lie extremely close to that of the target (Shafahi et al.,2018). 
Formally, each poison is the solution to the following optimization problem.x(j)p= argminxkf(x)f(xt)k22+kxx(j)bk22: (1)When we enforce `1-norm constraints, we drop the last term in Equation (1)and instead enforcekx(j)px(j)bk1";8jby projecting onto the `1ball after each iteration.Convex Polytope (CP) This attack crafts poisons such that the target’s feature representation is aconvex combination of the poisons’ feature representations by solving the following optimizationproblem (Zhu et al., 2019).X=pargminfcjg;fx(j)g12kf(xt)PJj=1cjf(x(j))k22kf(xt)k22subject toPJj=1cj= 1andcj08j;andkx(j)x(j)bk1"8j(2)Clean Label Backdoor (CLBD) This backdoor attack begins by computing an adversarial pertur-bation to each base image (Turner et al., 2018). Formally,^x(j)p=x(j)b+ argmaxkk1"L(x(j)b+;y(j);); (3)whereLdenotes cross-entropy loss. Then, a patch is added to each image in f^x(j)pgto generate thefinal poisonsfx(j)pg. The patched image is subject to an `1-norm constraint.Hidden Trigger Backdoor (HTBD) A backdoor analogue of the FC attack, where poisons arecrafted to remain close to the base images but collide in feature space with a patched image from thetarget class (Saha et al., 2019). Let ~x(j)tdenote a patched training image from the target class (thisimage is not clean), then we solve the following optimization problem to find poison images.x(j)p= argminxkf(x)f(~x(j)t)k22s.t.kxx(j)bk1" (4)3 W HY DO WE NEED BENCHMARKS ?Backdoor and triggerless attacks have been tested in a wide range of disparate settings. From modelarchitecture to target/base class pairs, the literature is inconsistent. Experiments are also lacking inthe breadth of trials performed, sometimes using only one model initialization for all experiments, ortesting against one single target image. We find that inconsistencies in experimental settings havea large impact on performance evaluations, and have resulted in comparisons that are difficult tointerpret. For example, in CP the authors compare their `1-constrained attack to FC, which is craftedwith an`2penalty. In other words, these methods have never been compared on a level playing field.To study these attacks thoroughly and rigorously, we employ sampling techniques that allow us todraw conclusions about the attacks taking into account variance across model initializations and classchoice. For a single trial, we sample one of ten checkpoints of a given architecture, then randomlyselect the target image, base class, and base images. In Section 4, all figures are averages from 100trials with our sampling techniques.3Under review as a conference paper at ICLR 2021Disparate evaluation settings from the literature To understand how differences in evaluationsettings impact results, we re-create the various original performance tests for each of the methodsdescribed above within our common evaluation framework. We try to be as faithful as possible tothe original works, however we employ our own sampling techniques described above to increasestatistical significance. Then, we tweak these experiments one component at a time revealing thefragility of each method to changes in experimental design.Establishing baselines For the FC setting, following one of the main setups in the original paper,we craft 50 poisons on an AlexNet variant (for details on the specific architecture, see (Krizhevskyet al., 2012; Shafahi et al., 2018)) pre-trained on CIFAR-10 (Krizhevsky et al., 2009), and we use the`2-norm penalty version of the attack. 
We then evaluate poisons on the same AlexNet, using the sameCIFAR-10 data to train for 20 more epochs to “fine tune” the model end to end. Note that this is notreally transfer learning in the usual sense, as the fine tuning utilizes the same dataset as pre-training,except with poisons inserted (Shafahi et al., 2018).The CP setting involves crafting 5 poisons using a ResNet-18 model pre-trained on CIFAR-10, andthen fine tuning the linear layer of the same ResNet-18 model with a subset of the CIFAR-10 trainingcomprising 50 images per class (including the poisons) (He et al., 2016). This setup is also notrepresentative of typical transfer learning, as the fine-tuning data is sub-sampled from the pre-trainingdataset. In this baseline we set "=25:5=255matching the original work (Zhu et al., 2019).One of the original evaluation settings for CLBD uses 500 poisons. We craft these on an adversariallytrained ResNet-18 and modify them with a 33patch in the lower right-hand corner. The pertur-bations are bounded with "=16=255. We then train a narrow ResNet model from scratch with theCIFAR-10 training set (including the poisons) (Turner et al., 2018).For the HTBD setting, we generate 800 poisons with another modified AlexNet (for architecturaldetails, see Appendix A.13) which is pre-trained on CIFAR-10 dataset. Then, an 88trigger patch isadded to the lower right corner of the target image, and the perturbations are bounded with "=16=255.We use the entire CIFAR-10 dataset (including the poisons) to fine tune the last fully connected layerof the same model used for crafting. Once again, the fine-tuning data in this setup is not disjoint fromthe pre-training data (Saha et al., 2019). See the left-most bars of Figure 3 for all baseline results.Inconsistencies in previous work The baselines defined above do not serve as a fair comparisonacross methods, since the original works to which we try and stay faithful are inconsistent. Table 1summarizes experimental settings in the original works. If a particular component (column header)was considered anywhere in the original paper’s experiments, we mark a ( X), leaving exes () whensomething was not present in any experiments. Table 1 shows the presence of data normalization andaugmentation as well as optimizers (SGD or ADAM). It also shows which learning setup the originalworks considered: frozen feature extractor (FFE), end-to-end fine tuning (E2E), or from-scratchtraining (FST), as well as which threat levels were tested, white, grey or black box (WB, GB, BB).We also consider whether or not an ensembled attack was used. The "values reported are out of 255and represent the smallest bound considered in the papers; note FC uses an `2penalty so no bound isenforced despite the attack being called “clean-label” in the original work. We conclude from Table 1that experimental design in this field is extremely inconsistent.Table 1: Various experimental designs used in data poisoning research.Data Opt. Transfer Learning Threat ModelAttack Norm. Aug. SGD FFE E2E FST WB GB BB Ensembles "FC X XX -CP X X X X X X 25.5CLBD X X X X 8HTBD X X X X 84Under review as a conference paper at ICLR 20214 J UST HOW TOXIC ARE POISONING METHODS REALLY ?In this section, we look at weaknesses and inconsistencies in existing experimental setups, and howthese lead to potentially misleading comparisons between methods. 
We use our testing framework toput triggerless and backdoor attacks to the test under a variety of circumstances, and get a tighter gripon just how reliable existing poisoning methods are.Training without SGD or data augmentation Both FC and CP attacks have been tested withvictim models pre-trained with the ADAM optimizer. However, SGD with momentum has becomethe dominant optimizer for training CNNs (Wilson et al., 2017). Interestingly, we find that modelstrained with SGD are significantly harder to poison, rendering these attacks ineffective in practicalsettings. Moreover, none of the baselines include simple data augmentation such as horizontal flipsand random crops. We find that data augmentation, standard in the deep learning literature, alsogreatly reduces the effectiveness of all of the attacks. For example, FC and CP success rates plummetin this setting to 51.00% and 19.09%, respectively. Complete results including hyperparameters,success rates, and confidence intervals are reported in Appendix A.3. We conclude that these attacksmay be significantly less effective against a real practitioner than originally advertised.Victim architecture matters Two attacks, FC and HTBD, are originally tested on AlexNet variants,and CLBD is tested with a narrow ResNet. These models are not widely used, and they are unlikely tobe employed by a realistic victim. We observe that many attacks are significantly less effective againstResNet-18 victims. See Figure 3, where for example, the success rate of HTBD on these victims is aslow as 18%. See Appendix A.4 for a table of numerical results. These ablation studies are conductedin the baseline settings but with a ResNet-18 victim architecture. These ResNet experiments serve asan example of how performance can be highly dependent on the selection of architecture.“Clean” attacks are sometimes dirty Each of the original works we consider purports to produce“clean-label” poison examples that look like natural images. However these methods often produceeasily visible image artifacts and distortions due to the large values of used. See Figure 1 forexamples generated by two of the methods, where FC perturbs a clean “cat” into an unrecognizablepoison (left), and CP generates an extremely noisy poison from a base in the “airplane” class (right).These images are not surprising since the FC method is tested with an `2penalty in the original work,and CP is`1constrained with a large radius of 25:5=255.Figure 1: Bases (top) and poisons (bot-tom).In many contexts, avoiding detection by automated sys-tems may be more important than maintaining perceptualsimilarity. In our work, we focus on perceptual similarityas defined by the `1constraint as this reflects the explicitgoal of most of the attacks we examine, and it is, in gen-eral, a much more common area of study. Adaptive attacksthat avoid defense or detection is relatively unexplored andan interesting area for future research (Koh et al., 2018).Borrowing from common practice in the evasion attackand defense literature, we test each method with an `1constraint of radius 8=255and find that the effectivenessof every attack is significantly diminished (Madry et al.,2017; Dong et al., 2020). Thus, a standardized constrainton poison examples is necessary for fair comparison ofattacks, and these previous attacks are not nearly as threatening under constraints that enforce cleanpoisons. 
See Figure 3, and see Appendix A.5 for a table of numerical results.Proper transfer learning is less vulnerable Of the attacks we study here, FC, CP, and HTBDwere originally proposed in settings referred to as “transfer learning.” Each particular setup varies,but none are true transfer learning since the pre-training datasets and fine-tuning datasets overlap. Forexample, FC uses the entire CIFAR-10 training dataset for both pre-training and fine tuning. Thus,their threat model entails allowing an adversary to modify the training dataset but only for the lastfew epochs. Furthermore, these attacks use inconsistently sized fine-tuning datasets.5Under review as a conference paper at ICLR 2021To simulate transfer learning, we test each attack with ResNet-18 feature extractors pre-trained onCIFAR-100. We fine tune with CIFAR-10 data in both cases, showing that these methods actuallyperform better in the setting with real transfer learning, i.e.where the pre-training data and fine-tuningdata are not from the same datasets and do not contain the same classes. In Figure 3, every attackaside from CP shows worse performance when transfer learning is done on data that is disjoint fromthe pre-training dataset. The attacks designed for transfer learning may not work as advertised inmore realistic transfer learning settings. See Appendix A.6.Performance is not invariant to dataset size Existing work on data poisoning measures an at-tacker’s budget in terms of what percentage of the training data they may modify. This begs thequestion whether percentage alone is enough to characterize the budget. Does the actual size of thetraining set matter? We find the number of images in the training set has a large impact on attackperformance, and that performance curves for FC and CP intersect. When we hold the percentagepoisoned constant at 1%, but we change the number of poisons and the size of the training setaccordingly, we see no consistent trends in how the attacks are affected. Figure 2 shows the successof each attack as a function of dataset size (shaded region is one standard error). This observationsuggests that one cannot compare attacks tested on different sized datasets by only fixing the percentof the dataset poisoned. See Appendix A.7.Figure 2: Scaling the dataset size while fixing thepoison budget.0 10,000 20,000 30,000 40,000 50,000Trainset Size (subsets of CIFAR-10)0255075100Success Rate (%)AttackFCCPCLBDHTBDBlack-box performance is low Whether con-sidering transfer learning or training fromscratch, testing these methods against a black-box victim is surely one of the most realistictests of the threat they pose. Since, FC, CP andHTBD do not consider the black-box scenario inthe original works, we take the poisons craftedusing baseline methods and evaluate them onmodels of different architectures than those usedfor crafting. The attacks show much lower per-formance in the black-box settings than in thebaselines, in particular FC, CP, and HTBD allhave success rates lower than 20%. See Figure3, and see Appendix A.8 for more details.Small sample sizes and non-random targetsOn top of inconsistencies in experimental setups,existing work on data poisoning often test onlyon specific target/base class pairs. For example,FC largely uses “frog” as the base class and“airplane” as the target class. CP, on the other hand, only uses “ship” and “frog” as the base and targetclasses, respectively. Neither work contains experiments where each trial consists of a randomlyselected target/base class pair. 
We find that the success rates are highly class pair dependent andchange dramatically under random class pair sampling. Thus, random sampling is critical forperformance evaluation. See Appendix A.9 for a comparison of the specific class pairs from theseoriginal works with randomly sampled class pairs.In addition to inconsistent class pairs, data poisoning papers often evaluate performance with very fewtrials since the methods are computationally expensive. In their original works, FC and CP use 30and50trials, respectively, for each experiment, and these experiments are performed on the same exactpre-trained models each time. And while HTBD does test randomized pairs, they only show resultsfor ten trials on CIFAR-10. These small sample sizes yield wide error bars in performance evaluation.We choose to run 100trials per experiment in our own work. While we acknowledge that a largernumber would be even more compelling, 100is a compromise between thorough experimentationand practicality since each trial requires re-training a classifier.Attacks are highly specific to the target image Triggerless attacks have been proposed as a threatagainst systems deployed in the physical world. For example, blue Toyota sedans may go undetectedby a poisoned system so that an attacker may fly under the radar. However, triggerless attacks aregenerally crafted against a specific target image, while a physical object may appear differently under6Under review as a conference paper at ICLR 2021difference real-world circumstances. We upper-bound the robustness of poison attacks by applyingsimple horizontal flips to the target images, and we find that poisoning methods are significantly lesssuccessful when the exact target image is unknown. For example, FC is only successful 7% of thetime when simply flipping the target image. See Figure 3 and Appendix A.10.Backdoor success depends on patch size Backdoor attacks add a patch to target images to triggermisclassification. In real-world scenarios, a small patch may be critical to avoid being caught. Theoriginal HTBD attack uses an 88patch, while the CLBD attack originally uses a 33patch (Sahaet al., 2019; Turner et al., 2018). In order to understand the impact on attack performance, we testdifferent patch sizes. We find a strong correlation between the patch size and attack performance, seeAppendix A.12. We conclude that backdoor attacks must be compared using identical patch sizes.Figure 3: We show the fragility of poisoning methods to experimental design. This figure depictsbaselines along with the results of ablation studies. Different methods respond differently to thesetesting scenarios, supporting the need for consistent and thorough testing. Horizontal lines denoteperformance on baselines described in Section 3, and bars represent the results of changing a specificfeature in an individual method’s baseline. Tables of these results with confidence intervals can befound in the appendices.0255075100Success Rate (%)Baseline Data Aug. SGD ResNet-18 Transfer = 8/255 Black-box FlipFeature Collision Convex Polytope Clean Label Backdoor Hidden Trigger Backdoor5 U NIFIED BENCHMARKS FOR DATA POISONING ATTACKSOur Benchmark We propose new benchmarks for measuring the efficacy of both backdoorand triggerless data poisoning attacks. We standardize the datasets and problem settings for ourbenchmarks below.1Target and base images are chosen from the testing and training sets, respectively,according to a seeded/reproducible random assignment. 
Seeding the random assignment allows us to test against a significant number of different random choices of base/target, while always using the same choices for each method, thus removing a source of variation from the results. We consider two different training modes:

I. Transfer Learning: A feature extractor pre-trained on clean data is frozen and used while training a linear classification head on a disjoint set of training data that contains poisons.
II. Training From Scratch: A network is trained from random initialization on data containing poison examples in the training set.

To further standardize these tests, we provide pre-trained architectures to test against. The parameters of one model are given to the attacker. We then evaluate the strength of the attacks in white-box and black-box scenarios. For white-box tests in the transfer learning benchmarks, we use the same frozen feature extractor that is given to the attacker for evaluation, while in the black-box setting we craft poisons using the known model but test on the two models the attacker has not seen, averaging the results. When training from scratch, models are trained from a random initialization on the poisoned dataset. We report averages over 100 independent trials for each test. Backdoor attacks can use any 5×5 patch. Note that the number of attacker-victim network pairs is kept small in our benchmark because each of the 100 trials requires re-training (in some cases from scratch), and we want to keep the benchmark within reach for researchers with modest computing resources.

[1] Code is available at (suppressed for anonymity).

CIFAR-10 benchmarks. Models are pre-trained on CIFAR-100, and the fine-tuning data is a subset of CIFAR-10. We choose this subset to be the first 250 images from each class, allowing for 25 poison examples. This amount of data motivates the use of transfer learning, since training from scratch on only 2,500 images yields poor generalization. See Appendix A.13 for examples. We allow 500 poisons when training from scratch; see Appendix A.15 for a case study in which we investigate how many poisons an attacker may be able to place in a dataset compiled by querying the internet for images. We allow the attacker access to a ResNet-18, we do black-box tests on a VGG11 (Simonyan & Zisserman, 2014) and a MobileNetV2 (Sandler et al., 2018), and we use one of each model when training from scratch and report the average.

TinyImageNet benchmarks. Additionally, we pre-train VGG16, ResNet-34, and MobileNetV2 models on the first 100 classes of the TinyImageNet dataset (Le & Yang, 2015). We fine-tune these models on the second half of the dataset, allowing for 250 poison images. As above, the attacker has access to the particular VGG16 model, and black-box tests are done on the other two models. In the from-scratch setting, we train a VGG16 model on the entire TinyImageNet dataset with 250 images poisoned.[2]

Benchmark hyperparameters. We pre-train models on CIFAR-100 with SGD for 400 epochs, starting with a learning rate of 0.1 which decays by a factor of 10 after epochs 200, 300, and 350. Models pre-trained on the first half of TinyImageNet are trained with SGD for 200 epochs, starting with a learning rate of 0.1 which decays by a factor of 10 after epochs 100 and 150.
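These schedules map directly onto a standard PyTorch setup. A minimal sketch follows (our own; the momentum and weight-decay values are assumptions, since they are not specified here):

```python
import torch
from torch.optim.lr_scheduler import MultiStepLR
from torchvision.models import resnet18

# Any CIFAR-100 classifier and data loader can stand in here.
model = resnet18(num_classes=100)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)  # assumed values

# CIFAR-100 pre-training: 400 epochs, lr 0.1, divided by 10 at 200/300/350.
scheduler = MultiStepLR(optimizer, milestones=[200, 300, 350], gamma=0.1)
# TinyImageNet pre-training instead uses 200 epochs with drops at 100 and 150:
# scheduler = MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)

for epoch in range(400):
    # ... one epoch over normalized, augmented (crops + flips) batches of 128 ...
    scheduler.step()
```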
In both cases, we apply per-channel data normalization, random crops, and horizontal flips, and we use batches of 128 images (augmentation is also applied to the poisoned images). We then fine-tune with poisoned data for 40 epochs with a learning rate that starts at 0.01 and drops to 0.001 after the 30th epoch (this applies to the transfer learning settings).

When training from scratch on CIFAR-10, we include the 500 perturbed poisons in the standard training set. We use SGD and train for 200 epochs with batches of 128 images and an initial learning rate of 0.1 that decays by a factor of 10 after epochs 100 and 150. Here too, we use data normalization and augmentation as described above. When training from scratch on TinyImageNet, we allow for 250 poisoned images. All other hyperparameters are identical.

Our evaluations of six different attacks are shown in Table 2. These attacks are not easily ranked, as the strongest attacks in some settings are not the strongest in others. Witches' Brew (WiB) is not evaluated in the transfer learning settings, since it is not considered in the original work (Geiping et al., 2020). See Appendix A.16 for tables with confidence intervals. We find that by using disjoint and standardized datasets for transfer learning, and common training practices like data normalization and scheduled learning rate decay, we overcome the deficits in previous work. Our benchmarks can provide useful evaluations of data poisoning methods and meaningful comparisons between them.

Table 2: Benchmark success rates (%); the best value in each column is marked with an asterisk.

          CIFAR-10                        TinyImageNet
          Transfer         From Scratch   Transfer         From Scratch
Attack    WB       BB                     WB       BB
FC        22.0     7.0     1.33           49.0     2.0      4.0
CP        33.0     7.0     0.67           14.0     1.0      0.0
BP        85.0*    8.5     2.33           100.0*   10.5*    44.0*
WiB       -        -       26.0*          -        -        32.0
CLBD      5.0      6.5     1.00           3.0      1.0      0.0
HTBD      10.0     9.5*    2.67           3.0      0.5      0.0

[2] The TinyImageNet from-scratch benchmark is done with 25 independent trials to keep this problem within reach for researchers with modest resources.

6 CONCLUSION

The threat of data poisoning is at the forefront of fears around emerging ML systems (Siva Kumar et al., 2020). While many of the proposed methods do not pose a practical threat, some of the recent methods are cause for practitioner concern. With real threats emerging, there is a need for fair comparison. The diversity of attacks, and in particular the difficulty in ordering them by efficacy, calls for a diverse set of benchmarks. With those we present here, practitioners and researchers can compare attacks on a level playing field and gain an understanding of how existing methods match up with one another and where they might fail.

Since the future advancement of these methods is inevitable, our benchmarks will also serve the data poisoning community as a standardized test problem on which to evaluate future attack methodologies. As even stronger attacks emerge, trepidation on the part of practitioners will be matched by the potential harm of poisoning attacks. We are arming the community with the high quality metrics this evolving situation calls for.<|im_end|> <|im_start|>assistant ### Review Title Review for Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks ### Review Text Reject. This paper consists of two main parts, essentially inspired by the argumentation of the Baconian method.
In particular, the first part criticizes current work in the area of data poisoning attacks (pars destruens), while the second part tries to overcome the limits described in the first half of the paper (pars construens). In their pars destruens, the authors heavily criticize the lack of standardized evaluations across different papers, claiming that "inconsistent and perfunctory experimentation has rendered performance evaluations and comparisons misleading". The authors then continue by highlighting the inconsistencies observed across different papers and summarize them in Table 1. First, I think that this criticism is not fully justified. None of the considered papers had the goal of proposing a common benchmark evaluation. They were all proposing different backdoor attacks, under different scenarios. Hence, it is unfair to claim that their evaluations are inconsistent. Of course they are. Every paper considers a more or less different experimental setup according to its hypotheses, to validate or reject them. And this is true also for many other research topics and areas. I believe that the authors should revise the presentation of their paper, acknowledging that a benchmark methodology is lacking and is required, but without blaming the others because they did not develop it. Their goal was different. Second, this work does not consider the whole family of poisoning attacks, but only "targeted" (or better, integrity) ones - this includes backdoor attacks or attacks aimed to misclassify only specific test samples, but not poisoning attacks that aim to indiscriminately increase the test error (i.e., availability / denial-of-service attacks). In addition, the whole paper only considers deep neural networks, and not other models. This should thus be clarified from the beginning of the paper, and better reflected in the title. A clearer taxonomy of the whole family of data poisoning attacks should also be reported (e.g., in the form of a table to help understand the different types of threat in the area of data poisoning). I recommend the authors refer to these papers, which may help categorize data poisoning attacks (and systematize nomenclature): - https://arxiv.org/abs/1910.03137 - https://arxiv.org/abs/1712.03141 - https://dl.acm.org/doi/10.1145/2046684.2046692 The main inconsistencies/issues identified by the authors in the evaluation of backdoor attacks are delineated in Sect. 4: 1. Training without SGD or data augmentation; 2. Victim architecture matters; 3. "Clean" attacks are sometimes dirty; 4. Proper transfer learning is less vulnerable; 5. Performance is not invariant to dataset size; 6. Black-box performance is low; 7. Small sample sizes and non-random targets; 8. Attacks are highly specific to the target image; 9. Backdoor success depends on patch size. These 9 causes, according to the authors of this work, hinder the impact of the 4 backdoor attacks (FC, CP, CLBD, HTBD) considered in this paper. I am quite convinced that in some specific cases, such as the ones identified in Sect. 4, the attacks may fail, and I agree with the arguments posed by the authors in this section. I am only concerned by Issue no. 3 about the need for "clean-label" attacks in realistic settings. This is a common criticism/misconception also related to adversarial examples with imperceptible perturbations. Why should the perturbation be required to be small?
Are there practical scenarios where humans are going to observe the samples and be trained to detect that these samples are "dangerous"? As the authors of this work seem to be quite concerned with the realism of these attacks, this point should be better discussed, as should the need to consider the perturbation model to be ℓ∞ with size 8/255 (or, in any case, fixed). In this respect, note that a more pertinent motivation for requiring small perturbations may be the detectability of the attack by automatic tools (rather than imperceptibility to the human eye); see, e.g., the discussion in https://arxiv.org/abs/1802.07295, and consider expanding the paper to discuss this issue. A final comment on the pars destruens is that, eventually, it is not well systematized. Besides the 9 issues delineated above, a clear systematization/taxonomy of the potential issues is lacking. For example, issues can be related to the model (architecture), the training algorithm (SGD/Adam, etc.), training hyperparameters, etc. Unfortunately, this step is lacking in the paper. And, as we will see, this impacts the development of a proper evaluation framework. After their pars destruens, in Section 5, the authors move to the pars construens of their argumentation, in which they propose a standardized benchmark for the evaluation of clean-label and hidden-trigger backdoor attacks. Again, I agree that providing a benchmark to assess poisoning attack effectiveness is a valuable contribution, and the authors have done a good job highlighting the factors which may impact the performance of these attacks. However, I am also concerned about the proposed benchmarking framework. In particular, as the authors have shown, factors such as the type of the target model, the training dataset size, and the size of the perturbation that the attacker can inject into the poisoning samples substantially impact the attack effectiveness. However, in the proposed framework, those factors assume a single value that may unreasonably favor one approach over another. When a factor has a substantial impact on the results, it is recommended to analyze the performance when that factor assumes different values, as is usually done for the size of the perturbation that an attacker can add to evasion samples (see, e.g., http://arxiv.org/abs/1902.06705). More generally, a clear evaluation procedure or methodology is neither discussed nor provided, and this stems from the fact that a clear systematization of the causes of failure is lacking in the previous part of the paper. If we identify, e.g., that the model architecture, the training algorithm, and the perturbation size all affect the attack impact, a proper evaluation framework for attacks (and defenses) should then consider variants of all these factors, which means: - testing the attack/defense on different models; - (for each model) testing with different training algorithms; - (for each model and training algorithm) testing with different perturbation sizes. This would indeed give a much more detailed understanding of how an attack/defense performs w.r.t. previously proposed or existing ones. To conclude, I mostly liked the idea presented in the paper, but a much higher level of systematization is required to propose a comprehensive framework for the evaluation of poisoning attacks and defenses, as well as clarifying that the scope of the framework is restricted to backdoor/integrity attacks on DNNs.
I would anyway encourage the authors to continue working on this benchmark to make it more systematic, fairer, and more inclusive. ### Review Rating 4: Ok but not good enough - rejection ### Review Confidence 5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
B1fysiAqK7
ICLR.cc/2019/Conference
2019
Probabilistic Binary Neural Networks
["Jorn W.T. Peters", "Tim Genewein", "Max Welling"]
Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks. In this work, we present a probabilistic training method for Neural Networks with both binary weights and activations, called PBNet. By embracing stochasticity during training, we circumvent the need to approximate the gradient of functions for which the derivative is zero almost always, such as $\textrm{sign}(\cdot)$, while still obtaining a fully Binary Neural Network at test time. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution. Since all operations in a layer of the PBNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. We evaluate two related training methods for the PBNet: one in which activation distributions are propagated throughout the network, and one in which binary activations are sampled in each layer. Our experiments indicate that sampling the binary activations is an important element for stochastic training of binary Neural Networks.
["binary neural Network", "efficient deep learning", "stochastic training", "discrete neural network", "efficient inference"]
ABSTRACT

Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks. In this work, we present a probabilistic training method for Neural Networks with both binary weights and activations, called PBNet. By embracing stochasticity during training, we circumvent the need to approximate the gradient of functions for which the derivative is zero almost always, such as $\mathrm{sign}(\cdot)$, while still obtaining a fully Binary Neural Network at test time. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution. Since all operations in a layer of the PBNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. We evaluate two related training methods for the PBNet: one in which activation distributions are propagated throughout the network, and one in which binary activations are sampled in each layer. Our experiments indicate that sampling the binary activations is an important element for stochastic training of binary Neural Networks.

1 INTRODUCTION

Deep Neural Networks are notorious for having vast memory and computation requirements, both during training and test/prediction time. As such, Deep Neural Networks may be unfeasible in various environments such as battery powered devices, embedded devices (because of memory requirements), on-body devices (due to heat dissipation), or environments in which constraints may be imposed by a limited economical budget. Hence, there is a clear need for Neural Networks that can operate in these resource limited environments.

One method for reducing the memory and computational requirements of Neural Networks is to reduce the bit-width of the parameters and activations of the Neural Network. This can be achieved either during training (e.g., Ullrich et al. (2017); Achterhold et al. (2018)) or using post-training mechanisms (e.g., Louizos et al. (2017); Han et al. (2015)). By taking the reduction of the bit-width for weights and activations to the extreme, i.e., a single bit, one obtains a Binary Neural Network. Binary Neural Networks have several advantageous properties, i.e., a 32× reduction in memory requirements, and a forward pass that can be implemented using XNOR operations and bit-counting, which results in a 58× speedup on CPU (Rastegari et al., 2016). Moreover, Binary Neural Networks are more robust to adversarial examples (Galloway et al., 2018).

Shayer et al. (2018) introduced a probabilistic training method for Neural Networks with binary weights, but allow for full precision activations. In this paper, we propose a probabilistic training method for Neural Networks with both binary weights and binary activations, which are even more memory and computation efficient. In short, we obtain a closed-form forward pass for probabilistic neural networks if we constrain the input and weights to binary (random) variables. The output of the Multiply and Accumulate (MAC) operations, or pre-activation, is approximated using a factorized Normal distribution. Subsequently, we introduce stochastic versions of Max-Pooling and Batch Normalization that allow us to propagate the pre-activations through a single layer. By applying the $\mathrm{sign}(\cdot)$ activation function to the random pre-activation, we not only obtain a distribution over binary activations, it also allows for backpropagation through the $\mathrm{sign}(\cdot)$ operation.
This is especially convenient since in a deterministic Neural Network all gradient information is zeroed out when using sign as the activation. We explore two different methods for training this probabilistic binary neural network: in the first method the activation distribution of layer $l$ is propagated to layer $(l+1)$, which means the MAC operation is performed on two binary random variables. In the second method the binary activation is sampled as the last operation in a layer using the concrete relaxation (Maddison et al., 2016). This can be thought of as a form of local reparametrization (Kingma et al., 2015). We call the networks obtained using these methods PBNet and PBNet-S, respectively.

At test time, we obtain a single deterministic Binary Neural Network, an ensemble of Binary Neural Networks by sampling from the parameter distribution, or a Ternary Neural Network based on the Binary weight distribution. An advantage of our method is that we can take samples from the parameter distribution indefinitely, without retraining. Hence, this method allows for anytime ensemble predictions and uncertainty estimates. Note that while in this work we only consider the binary case, our method supports any discrete distribution over weights and activations.

2 PROBABILISTIC BINARY NEURAL NETWORK

Algorithm 1: Pseudo-code for the forward pass of a single layer in PBNet(-S). $a_{l-1}$ denotes the activation of the previous layer, $B$ the random binary weight matrix, $\tau$ the temperature used for the concrete distribution, $f(\cdot,\cdot)$ the linear transformation used in the layer, $\epsilon > 0$ a small constant for numerical stability, $D$ the dimensionality of the inner product in $f$, and $\gamma$ & $\beta$ the parameters for batch normalization.

    Input: $a_{l-1}$, $B \sim p(B)$, $\tau$, $f(\cdot,\cdot)$, $\gamma$, $\beta$, $\epsilon$
    Result: binary activation $a_l$
    // CLT approximation
    if $a_{l-1}$ is a binary random variable then
        $\mu = f(\mathbb{E}[B], \mathbb{E}[a_{l-1}])$;  $\sigma^2 = D - f((\mathbb{E}[B])^2, (\mathbb{E}[a_{l-1}])^2)$
    else
        $\mu = f(\mathbb{E}[B], a_{l-1})$;  $\sigma^2 = f(\mathbb{V}[B], a_{l-1}^2)$
    end
    // Batch normalization
    $m$ = channel-wise-mean($\mu$);  $v$ = channel-wise-variance($\mu$, $\sigma^2$, $m$)
    $\mu = \gamma(\mu - m)/\sqrt{v + \epsilon} + \beta$;  $\sigma^2 = \gamma^2 \sigma^2 / (v + \epsilon)$
    // Max pooling
    if max pooling is required then
        $n \sim \mathcal{N}(0, I)$;  $s = \mu + \sigma \odot n$
        $\iota$ = max-pooling-indices($s$);  $\mu, \sigma^2$ = select-at-indices($\mu$, $\sigma^2$, $\iota$)
    end
    // Binarization and sampling
    $p = 1 - \Phi(0 \mid \mu, \sigma^2)$
    if sample activation then
        $a_l \sim \text{BinaryConcrete}(p, \tau)$;  return $a_l$
    else
        return Binary($p$)
    end

In this section we introduce the probabilistic setting of the PBNet. Moreover, the approximation of the distribution on the pre-activations is introduced. For an explanation of the other operations in the PBNet, see Section 2.1 for the activation, Section 2.1.1 for the sampling of activations, and Section 2.2 for pooling and normalization.

We aim to train a probabilistic Binary Neural Network. As such, we pose a binary distribution over the weights of the network and optimize the parameters of this distribution instead of the parameters directly. This way, we obtain a distribution over parameters, but also deal with the inherent discreteness of a Binary Neural Network. Given an objective function $\mathcal{L}(\cdot)$, this approach can be thought of in terms of the variational optimization framework (Staines & Barber, 2012). Specifically, by optimizing the parameters $\theta$ of the weight distributions, we optimize a bound on the actual loss:

$\min_B \mathcal{L}(B) \le \mathbb{E}_{q_\theta(B)}[\mathcal{L}(B)]$,   (1)

where $B$ are the binary weights of the network and $q_\theta(B)$ is a distribution over the binary weights. For $q_\theta(B)$ a slight reparametrization of the Bernoulli distribution is used, which we will refer to as the Binary distribution. This distribution is parameterized by $\theta \in [-1, 1]$ and is defined by:

$a \sim \text{Binary}(\theta) \iff \frac{a + 1}{2} \sim \text{Bernoulli}\left(\frac{\theta + 1}{2}\right)$.   (2)

For the properties of this distribution, please refer to Appendix A.
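To make the Binary distribution concrete, here is a small self-contained sketch (our own illustration, not code from the paper) that samples from Binary(θ) via Eq. (2) and checks the moments E[a] = θ and V[a] = 1 − θ² that the CLT approximation below relies on:

```python
import torch

def sample_binary(theta: torch.Tensor) -> torch.Tensor:
    """Draw a ~ Binary(theta), theta in [-1, 1]: (a+1)/2 ~ Bernoulli((theta+1)/2)."""
    p = (theta + 1.0) / 2.0
    return 2.0 * torch.bernoulli(p) - 1.0  # map {0, 1} -> {-1, +1}

theta = torch.full((100_000,), 0.3)   # 100k i.i.d. copies of Binary(0.3)
a = sample_binary(theta)
print(a.mean().item())                # ~0.3         (E[a] = theta)
print(a.var(unbiased=False).item())   # ~1 - 0.3**2  (V[a] = 1 - theta^2)
```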
We will now consider using the Binary distribution for both the weights and the activations in a Neural Network. Since the pre-activations in a Neural Network are computed using MAC operations, which are the same for each value in the pre-activation, we will only consider a single value in our discussion here. Let $w \sim \text{Binary}(\theta)$ and $h \sim \text{Binary}(\phi)$ be the weight and input random variables for a given layer. As such, the inner product between the weights and input is distributed according to a translated and scaled Poisson binomial distribution:

$\frac{\mathbf{w} \cdot \mathbf{h} + D}{2} \sim \text{PoiBin}\left(\frac{\boldsymbol{\theta} \odot \boldsymbol{\phi} + 1}{2}\right)$,   (3)

where $D$ is the dimensionality of $\mathbf{h}$ and $\mathbf{w}$, and $\odot$ denotes element-wise multiplication. See the picket fence on the top in Figure 1 for an illustration of the PMF of a Poisson binomial distribution. Although the scaled and translated Poisson binomial distribution is the exact solution for the inner product between the weight and activation random variables, it is hard to work with in subsequent layers. For this reason, and the fact that the Poisson binomial distribution is well approximated by a Normal distribution (Wang & Manning, 2013), we use a Normal approximation to the Poisson binomial distribution, which allows for easier manipulations. Using the properties of the Binary distribution and the Poisson binomial distribution, the approximation for the pre-activation $a$ is given by:

$a = \mathbf{w} \cdot \mathbf{h} \,\dot\sim\, \mathcal{N}\left(\sum_{d=1}^{D} \theta_d \phi_d,\; D - \sum_{d=1}^{D} \theta_d^2 \phi_d^2\right)$.   (4)

Note that this is exactly the approximation one would obtain by using the Lyapunov Central Limit Theorem (CLT), which was used by Shayer et al. (2018).
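As a sanity check on Eq. (4), the sketch below (again our own illustration) draws many Monte-Carlo samples of the exact inner product w·h for random θ, φ and compares the empirical moments with the Gaussian approximation:

```python
import torch

D = 256
theta = torch.rand(D) * 2 - 1   # weight means in [-1, 1]
phi = torch.rand(D) * 2 - 1     # activation means in [-1, 1]

def sample_binary(mean):
    return 2.0 * torch.bernoulli((mean + 1.0) / 2.0) - 1.0

# Monte-Carlo samples of the exact (Poisson-binomial) pre-activation w . h
samples = torch.stack([
    (sample_binary(theta) * sample_binary(phi)).sum() for _ in range(20_000)
])

mu = (theta * phi).sum()                 # Eq. (4) mean
var = D - (theta ** 2 * phi ** 2).sum()  # Eq. (4) variance
print(samples.mean().item(), mu.item())                 # should be close
print(samples.var(unbiased=False).item(), var.item())   # should be close
```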
First, the derivatives @qi=@iand@qi=@iare not zero almosteverywhere, in contrast to the derivatives of bdetandbstochwhen applied to a deterministic input.Second, the distribution over hireflects the true uncertainty about the sign of the activation, giventhe stochastic weights, whereas bstochuses the magnitude of the pre-activation as a substitute. Forexample, a pre-activation with a high positive magnitude and high variance will be deterministicallymapped to 1 by bstoch. In contrast, our method takes the variance into account and correctly assignssome probability mass to 1. See Figure 1 for a graphical depiction of the stochastic binary activation.2.1.1 S AMPLING THE BINARY ACTIVATIONSSo far, we have discussed propagating distributions throughout the network. Alternatively, the binaryactivations can be sampled using the Concrete distribution (Maddison et al., 2016) during training.specifically, we use the hard sample method as discussed by Jang et al. (2016). By sampling theactivations, the input for subsequent layers will match the input that is observed at test time moreclosely.As a consequence of sampling the activation, the input to a layer is no longer a distribution but ah2f 1;+1gDvector instead. As such, the normal approximation to the pre-activation is computed3Under review as a conference paper at ICLR 201900 -1 +1Figure 1: The discrete Poisson binomial distribution (in green) is approximated by a continuousGaussian distribution (in purple). By applying bdetto a random pre-activation we obtain a binaryactivation distribution.slightly different. From the Lyapunov CLT it follows that the approximation to the distribution of thepre-activation is given by:a=wh_ N (DXd=1dhd;DXd=12dh2d); (7)where wBinary ()is a random weight. Similarly, the pre-activation of the input layer is alsocomputed using this approximation—given a real-valued input vector. We will refer to a PBNet thatuses activation sampling as PBNet-S.2.2 N ORMALIZATION AND POOLINGOther than a linear operation and an (non-linear) activation function, Batch Normalization (Ioffe &Szegedy, 2015) and pooling are two popular building blocks for Convolutional Neural Networks.For Binary Neural Networks, applying Batch Normalization to a binarized activation will result ina non-binary result. Moreover, the application of max pooling on a binary activation will result ina feature map containing mostly +1s. Hence, both operations must be applied before binarization.However, in the PBNet, the binarization operation is applied before sampling. As a consequence, theBatch Normalization and pooling operations can only be applied on random pre-activations. For thisreason, we define these methods for random variables. Although there are various ways to definethese operation in a stochastic fashion, our guiding principle is to only leverage stochasticity duringtraining, i.e., at test time, the stochastic operations are replaced by their conventional implementationsand parameters learned in the stochastic setting must be transferred to their deterministic counterparts.2.2.1 S TOCHASTIC BATCH NORMALIZATIONBatch Normalization (BN) (Ioffe & Szegedy, 2015) — including an affine transformation — isdefined as follows:^ai=aimpv++; (8)where aidenotes the pre-activation before BN, ^athe pre-activation after BN, and m&vdenotethe sample mean and variance of faigMi=1, for anM-dimensional pre-activation. In essence, BNtranslates and scales the pre-activations such that they have approximately zero mean and unitvariance, followed by an affine transformation. 
Hence, in the stochastic case, our aim is that samplesfrom the pre-activation distribution after BN also have approximately zero mean and unit variance—toensure that the stochastic batch normalization can be transfered to a deterministic binary neuralnetwork. This is achieved by subtracting the population mean from each pre-activation randomvariable and by dividing by the population variance. However, since aiis a random variable in thePBNet, simply using the population mean and variance equations will result in non-standardizedoutput. Instead, to ensure a standardized distribution over activations, we compute the expected4Under review as a conference paper at ICLR 2019population mean and variance under the pre-activation distribution:Ep(ajB;h)[m] =E"1MMXi=1ai#=1MMXi=1E[ai] =1MMXi=1i (9)Ep(ajB;h)[v] =E"1M1MXi=1(aiE[m])2#=1M1(KXi=12i+MXi=1(iE[m])2);(10)whereMis the total number of activations and aiN(i;i)are the random pre-activations. Bysubstituting mandvin Equation 8 by Equation 9 and 10, we obtain the following batch normalizedGaussian distributions for the pre-activations:^ai=aiE[m]pE[v] ++) ^aiN iE[m]pE[v] ++;2E[v] +2i!: (11)Note that this assumes a single channel, but is easily extended to 2d batch norm in a similar fashionas conventional Batch Normalization. At test time, Batch Normalization in a Binary Neural Networkcan be reduced to an addition and sign flip of the activation, see Appendix B for more details.2.2.2 S TOCHASTIC MAXPOOLINGIn general, pooling applies an aggregation operation to a set of (spatially oriented) pre-activations.Here we discuss max pooling for stochastic pre-activations, however, similar considerations apply forother types of aggregation functions.In the case of max-pooling, given a spatial region containing stochastic pre-activations a1;:::;aK,we aim to stochastically select one of the ai. Note that, although the distribution of max(a1;:::;aK)is well-defined (Nadarajah & Kotz, 2008), its distribution is not Gaussian and thus does not matchone of the input distributions. Instead, we sample one of the input random variables in everyspatial region according to the probability of that variable being greater than all other variables,i.e.,i=p(ai>zni), where zni= max(fajgj6=i).icould be obtained by evaluating the CDFof(zniai)at 0, but to our knowledge this has no analytical form. Alternatively, we can useMonte-Carlo integration to obtain :1LLXl=1one-hot (arg max s(l));s(l)p(a1;a2;:::;aK) =KYi=1N(i;2i) (12)where one-hot (i)returns aK-dimensional one-hot vector with the ith elements set to one. The poolingindex is then sampled from Cat(). However, more efficiently, we can sample sp(a1;:::;aK)and select the index of the maximum in s, which is equivalent sampling from Cat(). Hence, for agiven max pooling region, it is sufficient to obtain a single sample from each normal distributionassociated with each pre-activation and keep the random variable for which this sample is maximum.A graphical overview of this is given in Figure 2.Other forms of stochastic or probabilistic max pooling were introduced by Lee et al. (2009) and Zeiler& Fergus (2013), however, in both cases a single activation is sampled based on the magnitude of theactivations. 
In contrast, in our procedure we stochastically propagate one of the input distributionsover activations.2.3 W EIGHT INITIALIZATIONFor the PBNet the parameters forq(B)are initialized from a uniform U(1;1)distribution.Although the final parameter distribution more closely follows a Beta(;)distribution, for <1,we did not observe any significant impact choosing another initialization method for the PBNet.In the case of the PBNet-S, we observed a significant improvement in training speed and performanceby initializing the parameters based on the parameters of a pre-trained full precission Neural Network.This initializes the convolutional filters with more structure than a random initialization. This isdesirable as in order to flip the value of a weight, the parameter governing the weight has to passthrough a high variance regime, which can slow down convergence considerably.5Under review as a conference paper at ICLR 2019Select maximumper region2Sample frominput distributions1Keep maximum distributionfor each region3Figure 2: Max pooling for random variables is performed by taking a single sample from each of theinput distributions. The output random variable for each pooling region is the random variable that isassociated with the maximum sample.For the PBNet-S, We use the weight transfer method introduced by Shayer et al. (2018) in which theparameters of the weight distribution for each layer are initialized such that the expected value of therandom weights equals the full precision weight divided by the standard deviation of the weights inthe given layer. Since not all rescaled weights lay in the [1;1]range, all binary weight parametersare clipped between [0:9;0:9]. This transfer method transfers the structure present in the filtersof the full precision network and ensures that a significant part of the parameter distributions isinitialized with low variance.2.4 D ETERMINISTIC BINARY NEURAL NETWORKIn our training procedure, a stochastic neural network is trained. However, at test time (or onhardware) we want to leverage all the advantages of a full binary Neural Network. Therefore, weobtain a deterministic binary Neural Network from the parameter distribution q(B)at test time. Weconsider three approaches for obtaining a deterministic network: a deterministic network based on themode ofq(B)called PBN ET-MAP, an ensemble of binary Neural Networks sampled from q(B)named PBN ET-x, and a ternary Neural Network ( PBN ET-TERNARY ), in which a single parameterWimay be set to zero based on q, i.e.:Wi=8<:+1 ifq(Bi= +1)3=41ifq(Bi=1)3=40 otherwise(13)The ternary network can also be viewed as a sparse PBNet, however, sparse memory look-ups mayslow down inference.Note that, even when using multiple binary neural networks in an ensemble, the ensemble is stillmore efficient in terms of computation and memory when compared to a full precision alternative.Moreover, it allows for anytime ensemble predictions for improved performance and uncertaintyestimates by sampling from the weight distribution.Since the trained weight distribution is not fully deterministic, the sampling of individual weightinstantiations will result in a shift of the batch statistics. As a consequence, the learned batch normstatistics no longer closely match the true statistics. This is alleviated by re-estimating the batchnorm statistics based on (a subset of) the training set after weight sampling using a moving mean andvariance estimator. 
We observed competitive results using as little as 20 batches from the training set.3 R ELATED WORKBinary and low precision neural networks have received significant interest in recent years. Mostsimilar to our work, in terms of the final neural network, is the work on Binarized Neural Networksby Hubara et al. (2016). in this work a real-valued shadow weight is used and binary weights are6Under review as a conference paper at ICLR 2019obtained by binarizing the shadow weights. Similarly the pre-activations are binarized using thesame binarization function. In order to back-propagate through the binarization operation the straight-through estimator (Hinton, 2012) is used. Several extensions to Binarized Neural Networks havebeen proposed which — more or less — qualify as binary neural networks: XNOR-net (Rastegariet al., 2016) in which the real-valued parameter tensor and activation tensor is approximated by abinary tensor and a scaling factor per channel. ABC-nets Lin et al. (2017) take this approach onestep further and approximate the weight tensor by a linear combination of binary tensors. Both ofthese approaches perform the linear operations in the forward pass using binary weights and/or binaryactivations, followed by a scaling or linear combination of the pre-activations. In McDonnell (2018),similar methods to Hubara et al. (2016) are used to binarize a wide resnet (Zagoruyko & Komodakis,2016) to obtain results on ImageNet very close to the full precision performance. Another method fortraining binary neural networks is Expectation Backpropagation (Soudry et al., 2014) in which thecentral limit theorem and online expectation propagation is used to find an approximate posterior.This method is similar in spirit to ours, but the training method is completely different. Most relatedto our work is the work by Shayer et al. (2018) which use the local reparametrization trick to traina Neural Network with binary weights and the work by Baldassi et al. (2018) which also discussa binary Neural Network in which the activation distribution are propagated through the network.Moreover, in (Wang & Manning, 2013) the CLT was used to approximate dropout noise duringtraining in order to speed up training, however, there is no aim to learn binary (or discrete) weights oruse binary activations in this work.4 E XPERIMENTSWe evaluate the PBNet on the MNIST and CIFAR-10 benchmarks and compare the results toBinarized Neural Networks (Hubara et al., 2016), since the architectures of the deterministic networksobtained by training the PBNet are equivalent.4.1 E XPERIMENTAL DETAILSThe PBNets are trained using either a cross-entropy (CE) loss or a binary cross entropy for eachclass (BCE). For the CE loss there is no binarization step in the final layer, instead the mean of theGaussian approximation is used as the input to a softmax layer. For BCE, there is a binarization step,and we treat the probability of the ith output being +1as the probability of the input belonging to theith class. Specifically, for an output vector p2[0;1]CforCclasses and the true class y, the BCEloss for a single sample is defined asLBCE(p;y) =CXc=1[c=y] logpc+ [c6=y] log(1pc): (14)The weights for the PBNet-S networks are initialized using the transfer method described in Sec-tion 2.3 and the PBNets are initialized using a uniform initialization scheme. All models are optimizedusing Adam (Kingma & Ba, 2014) and a validation loss plateau learning rate decay scheme. 
2.3 WEIGHT INITIALIZATION

For the PBNet, the parameters $\theta$ for $q_\theta(B)$ are initialized from a uniform $\mathcal{U}(-1, 1)$ distribution. Although the final parameter distribution more closely follows a $\text{Beta}(\alpha, \alpha)$ distribution, for $\alpha < 1$, we did not observe any significant impact from choosing another initialization method for the PBNet.

In the case of the PBNet-S, we observed a significant improvement in training speed and performance by initializing the parameters based on the parameters of a pre-trained full precision Neural Network. This initializes the convolutional filters with more structure than a random initialization. This is desirable as, in order to flip the value of a weight, the parameter governing the weight has to pass through a high variance regime, which can slow down convergence considerably.

Figure 2: Max pooling for random variables: (1) sample from the input distributions, (2) select the maximum sample per region, (3) keep the maximum distribution for each region. The output random variable for each pooling region is the random variable that is associated with the maximum sample.

For the PBNet-S, we use the weight transfer method introduced by Shayer et al. (2018), in which the parameters of the weight distribution for each layer are initialized such that the expected value of the random weights equals the full precision weight divided by the standard deviation of the weights in the given layer. Since not all rescaled weights lie in the $[-1, 1]$ range, all binary weight parameters are clipped between $[-0.9, 0.9]$. This transfer method transfers the structure present in the filters of the full precision network and ensures that a significant part of the parameter distributions is initialized with low variance.

2.4 DETERMINISTIC BINARY NEURAL NETWORK

In our training procedure, a stochastic neural network is trained. However, at test time (or on hardware) we want to leverage all the advantages of a fully binary Neural Network. Therefore, we obtain a deterministic binary Neural Network from the parameter distribution $q_\theta(B)$ at test time. We consider three approaches for obtaining a deterministic network: a deterministic network based on the mode of $q_\theta(B)$ called PBNet-MAP, an ensemble of binary Neural Networks sampled from $q_\theta(B)$ named PBNet-X, and a Ternary Neural Network (PBNet-Ternary), in which a single parameter $W_i$ may be set to zero based on $q_\theta$, i.e.:

$W_i = \begin{cases} +1 & \text{if } q_\theta(B_i = +1) \ge 3/4 \\ -1 & \text{if } q_\theta(B_i = -1) \ge 3/4 \\ 0 & \text{otherwise} \end{cases}$   (13)

The ternary network can also be viewed as a sparse PBNet; however, sparse memory look-ups may slow down inference.

Note that, even when using multiple binary neural networks in an ensemble, the ensemble is still more efficient in terms of computation and memory when compared to a full precision alternative. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution.

Since the trained weight distribution is not fully deterministic, the sampling of individual weight instantiations will result in a shift of the batch statistics. As a consequence, the learned batch norm statistics no longer closely match the true statistics. This is alleviated by re-estimating the batch norm statistics based on (a subset of) the training set after weight sampling, using a moving mean and variance estimator. We observed competitive results using as little as 20 batches from the training set.

3 RELATED WORK

Binary and low precision neural networks have received significant interest in recent years. Most similar to our work, in terms of the final neural network, is the work on Binarized Neural Networks by Hubara et al. (2016). In this work a real-valued shadow weight is used, and binary weights are obtained by binarizing the shadow weights. Similarly, the pre-activations are binarized using the same binarization function. In order to back-propagate through the binarization operation, the straight-through estimator (Hinton, 2012) is used. Several extensions to Binarized Neural Networks have been proposed which more or less qualify as binary neural networks: XNOR-net (Rastegari et al., 2016), in which the real-valued parameter tensor and activation tensor are approximated by a binary tensor and a scaling factor per channel. ABC-nets (Lin et al., 2017) take this approach one step further and approximate the weight tensor by a linear combination of binary tensors. Both of these approaches perform the linear operations in the forward pass using binary weights and/or binary activations, followed by a scaling or linear combination of the pre-activations. In McDonnell (2018), methods similar to Hubara et al. (2016) are used to binarize a wide resnet (Zagoruyko & Komodakis, 2016) to obtain results on ImageNet very close to the full precision performance. Another method for training binary neural networks is Expectation Backpropagation (Soudry et al., 2014), in which the central limit theorem and online expectation propagation are used to find an approximate posterior. This method is similar in spirit to ours, but the training method is completely different. Most related to our work are the work by Shayer et al. (2018), which uses the local reparametrization trick to train a Neural Network with binary weights, and the work by Baldassi et al. (2018), which also discusses a binary Neural Network in which the activation distributions are propagated through the network. Moreover, in Wang & Manning (2013) the CLT was used to approximate dropout noise during training in order to speed up training; however, there is no aim to learn binary (or discrete) weights or to use binary activations in this work.

4 EXPERIMENTS

We evaluate the PBNet on the MNIST and CIFAR-10 benchmarks and compare the results to Binarized Neural Networks (Hubara et al., 2016), since the architectures of the deterministic networks obtained by training the PBNet are equivalent.

4.1 EXPERIMENTAL DETAILS

The PBNets are trained using either a cross-entropy (CE) loss or a binary cross-entropy for each class (BCE). For the CE loss there is no binarization step in the final layer; instead, the mean of the Gaussian approximation is used as the input to a softmax layer. For BCE, there is a binarization step, and we treat the probability of the $i$th output being $+1$ as the probability of the input belonging to the $i$th class. Specifically, for an output vector $\mathbf{p} \in [0, 1]^C$ for $C$ classes and the true class $y$, the BCE loss for a single sample is defined as

$\mathcal{L}_{\text{BCE}}(\mathbf{p}, y) = -\sum_{c=1}^{C} \left([c = y] \log p_c + [c \ne y] \log(1 - p_c)\right)$.   (14)

The weights for the PBNet-S networks are initialized using the transfer method described in Section 2.3 and the PBNets are initialized using a uniform initialization scheme. All models are optimized using Adam (Kingma & Ba, 2014) and a validation loss plateau learning rate decay scheme. We keep the temperature for the binary concrete distribution static at 1.0 during training. For all settings, we optimize model parameters until convergence, after which the best model is selected based on a validation set. Our code is implemented using PyTorch (Paszke et al., 2017).

For Binarized Neural Networks we use the training procedure described by Hubara et al. (2016), i.e., a squared hinge loss and layer-specific learning rates that are determined based on the Glorot initialization method (Glorot & Bengio, 2010). Experimental details specific to the datasets are given in Appendix C and the results are presented in Table 1. We report both the test set accuracy obtained after binarizing the network and the test set accuracy obtained by the stochastic network during training (i.e., by propagating activation distributions).

4.2 ENSEMBLE BASED UNCERTAINTY ESTIMATION

As presented in Table 1, the accuracy improves when using an ensemble. Moreover, the predictions of the ensemble members can be used to obtain an estimate of the certainty of the ensemble as a whole.

Table 1: Test accuracy on MNIST and CIFAR-10 for Binarized NN (Hubara et al., 2016), PBNet, and a full precision network (FPNet). PBNet-MAP refers to a deterministic PBNet using the MAP estimate, PBNet-Ternary is a ternary deterministic network obtained from $q_\theta$, and PBNet-X refers to an ensemble of X networks, each sampled from the same weight distribution. For the ensemble results both mean and standard deviation are presented. The "propagate" columns contain results obtained using the stochastic network, whereas results in the "binarized" columns are obtained using a deterministic binary Neural Network.

                          MNIST                        CIFAR-10
                          Propagate    Binarized       Propagate    Binarized
Binarized NN              -            99.17           -            88.17
PBNet-MAP (BCE)           99.35        99.13           88.24        79.98
PBNet-MAP (CE)            99.24        98.64           86.73        75.05
PBNet-S-MAP (BCE)         99.26        99.22           89.58        89.10
PBNet-S-MAP (CE)          99.14        99.05           88.67        88.54
PBNet-S-Ternary (BCE)                  99.26                        89.70
PBNet-S-2 (BCE)                        99.25±0.047                  89.75±0.205
PBNet-S-5 (BCE)                        99.29±0.036                  90.75±0.202
PBNet-S-16 (BCE)                       99.30±0.025                  91.28±0.112
FPNet                     99.48                        92.45

To evaluate this, we plot an error-coverage curve (Geifman & El-Yaniv, 2017) in Figure 3a. This curve is obtained by sorting the samples according to a statistic and computing the error percentage in the top x% of the samples, according to that statistic. For the Binarized Neural Network and PBNet-MAP the highest softmax score is used, whereas for the ensembles the variance in the prediction of the top class is used. The figure suggests that the ensemble variance is a better estimator of network certainty, and moreover, the estimation improves as the ensemble size increases.

4.3 EFFECT OF BATCH STATISTICS RE-ESTIMATION

As discussed in Section 2.4, after sampling the parameters of a deterministic network the batch statistics used by Batch Normalization must be re-estimated. Figure 3b shows the results obtained using a varying number of batches from the training set to re-estimate the statistics. This shows that even a small number of samples is sufficient to estimate the statistics.

4.4 ABLATION STUDIES

We perform an ablation study on both the use of (stochastic) Batch Normalization and the use of weight transfer for the PBNet-S on CIFAR-10. For Batch Normalization, we removed all batch normalization layers from the PBNet-S and retrained the model on CIFAR-10. This resulted in a test set accuracy of 79.21%. For the weight initialization experiment, the PBNet-S weights are initialized using a uniform initialization scheme and the model is trained on CIFAR-10, resulting in a test set accuracy of 83.61%. Moreover, the accuracy on the validation set during training is presented in Figure 3c. Note that these numbers are obtained without sampling a binarized network from the weight distribution, i.e., local reparametrization and binary activation samples are used. The PBNet-S that uses both weight transfer and stochastic Batch Normalization results in a significant performance improvement, indicating that both stochastic Batch Normalization and weight transfer are necessary components for the PBNet-S.
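The error-coverage curve from Section 4.2 is a useful diagnostic beyond this paper; here is a minimal sketch of how it can be computed (our own illustration on dummy data):

```python
import torch

def error_coverage_curve(scores, correct):
    """Sort samples by a confidence score (descending) and compute the error
    rate among the top-x% most confident samples, for every coverage x."""
    order = torch.argsort(scores, descending=True)
    err = (~correct[order]).float()
    n = torch.arange(1, len(scores) + 1, dtype=torch.float)
    return n / len(scores), torch.cumsum(err, 0) / n  # coverage, error rate

# Ensemble confidence: negative variance of the top-class probability across
# members (lower variance = more certain), as described in Section 4.2.
probs = torch.rand(16, 1000, 10).softmax(-1)    # (members, samples, classes), dummy
top = probs.mean(0).argmax(1)
score = -probs[:, torch.arange(1000), top].var(0)
correct = top == torch.randint(0, 10, (1000,))  # dummy labels
coverage, error = error_coverage_curve(score, correct)
```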
Figure 3: (a) Error-coverage curves for CIFAR-10 (PBNet-2, PBNet-5, PBNet-16, PBNet, Binarized Net). (b) Test set performance with an increasing number of batches used to re-estimate the batch statistics on CIFAR-10. (c) Accuracy on the validation set during training, i.e., using stochastic weights, local reparametrization, and binary activation sampling (PBNet-MAP, no BatchNorm, uniform initialization).

4.5 SAMPLING OF BINARY ACTIVATIONS

The results of our experiments show that, following our training procedure, sampling of the binary activations is a necessary component. Although the stochastic PBNet generalizes well to unseen data, there is a significant drop in test accuracy when a binary Neural Network is obtained from the stochastic PBNet. In contrast, this performance drop is not observed for PBNet-S. A potential explanation of this phenomenon is that, by sampling the binary activation during training, the network is forced to become more robust to the inherent binarization noise that is present at test time in the binarized Neural Network. If this is the case, then sampling the binary activation can be thought of as a regularization strategy that prepares the network for a noisier binary setting. However, other regularization strategies may also exist.

5 CONCLUSION

We have presented a stochastic method for training Binary Neural Networks. The method is evaluated on multiple standardized benchmarks and reached competitive results. The PBNet has various advantageous properties as a result of the training method. The weight distribution allows one to generate ensembles online, which results in improved accuracy and better uncertainty estimations. Moreover, the Bayesian formulation of the PBNet allows for further pruning of the network, which we leave as future work.
H1eCvLlshm
The investigated problem is interesting, but methods used to handle binary networks are not impressive
5: Marginally below acceptance threshold
To reduce deep neural networks' reliance on memory and high power consumption, the paper proposed a kind of probabilistic neural network with both binary hidden nodes and binary weights. The paper presents a probabilistic way to binarize the activation and weight values. Also, it proposed stochastic versions of batch normalization and max pooling. The binarization of hidden nodes is achieved through stochastic sampling according to the sign of the stochastic pre-activation values. The weight binarization is analogously done by sampling from a binary distribution. There is not too much new here; this is a standard way to obtain binary values probabilistically. The paper said in the introduction that the binary model will be trained with the re-parameterization trick, through either propagating the distributions or the samples from the concrete distribution. But I am still not very clear how this training process is done, especially for the training of the weight parameters. Overall, the problem investigated in this paper is very interesting and of practical importance, and the experimental results are preliminary but encouraging. But all the techniques used in this paper to binarize neural networks are standard, and there is not too much new here.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Probabilistic Binary Neural Networks ### Paper Abstract Low bit-width weights and activations are an effective way of combating the increasing need for both memory and compute power of Deep Neural Networks. In this work, we present a probabilistic training method for Neural Network with both binary weights and activations, called PBNet. By embracing stochasticity during training, we circumvent the need to approximate the gradient of functions for which the derivative is zero almost always, such as $\textrm{sign}(\cdot)$, while still obtaining a fully Binary Neural Network at test time. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution. Since all operations in a layer of the PBNet operate on random variables, we introduce stochastic versions of Batch Normalization and max pooling, which transfer well to a deterministic network at test time. We evaluate two related training methods for the PBNet: one in which activation distributions are propagated throughout the network, and one in which binary activations are sampled in each layer. Our experiments indicate that sampling the binary activations is an important element for stochastic training of binary Neural Networks. ### Paper Keywords ["binary neural Network", "efficient deep learning", "stochastic training", "discrete neural network", "efficient inference"] ### Paper Content ABSTRACTLow bit-width weights and activations are an effective way of combating theincreasing need for both memory and compute power of Deep Neural Networks. Inthis work, we present a probabilistic training method for Neural Network with bothbinary weights and activations, called PBNet. By embracing stochasticity duringtraining, we circumvent the need to approximate the gradient of functions for whichthe derivative is zero almost always, such as sign(), while still obtaining a fullyBinary Neural Network at test time. Moreover, it allows for anytime ensemblepredictions for improved performance and uncertainty estimates by sampling fromthe weight distribution. Since all operations in a layer of the PBNet operate onrandom variables, we introduce stochastic versions of Batch Normalization andmax pooling, which transfer well to a deterministic network at test time. Weevaluate two related training methods for the PBNet: one in which activationdistributions are propagated throughout the network, and one in which binaryactivations are sampled in each layer. Our experiments indicate that sampling thebinary activations is an important element for stochastic training of binary NeuralNetworks.1 I NTRODUCTIONDeep Neural Networks are notorious for having vast memory and computation requirements, bothduring training and test/prediction time. As such, Deep Neural Networks may be unfeasible in variousenvironments such as battery powered devices, embedded devices (because of memory requirement),on body devices (due to heat dissipation), or environments in which constrains may be imposed by alimited economical budget. Hence, there is a clear need for Neural Networks that can operate in theseresource limited environments.One method for reducing the memory and computational requirements for Neural Networks is toreduce the bit-width of the parameters and activations of the Neural Network. 
This can be achievedeither during training (e.g., Ullrich et al. (2017); Achterhold et al. (2018)) or using post-trainingmechanisms (e.g., Louizos et al. (2017), Han et al. (2015)). By taking the reduction of the bit-widthfor weights and activations to the extreme, i.e., a single bit, one obtains a Binary Neural Network.Binary Neural Networks have several advantageous properties, i.e., a 32reduction in memoryrequirements and the forward pass can be implemented using XNOR operations and bit-counting,which results in a 58speedup on CPU (Rastegari et al., 2016). Moreover, Binary Neural Networksare more robust to adversarial examples (Galloway et al., 2018).Shayer et al. (2018) introduced a probabilistic training method for Neural Networks with binaryweights, but allow for full precision activations. In this paper, we propose a probabilistic trainingmethod for Neural Networks with both binary weights and binary activations, which are even morememory and computation efficient. In short, obtain a closed form forward pass for probabilisticneural networks if we constrain the input and weights to binary (random) variables. The output of theMultiply and Accumulate (MAC) operations, or pre-activation, is approximated using a factorizedNormal distribution. Subsequently, we introduce stochastic versions of Max-Pooling and BatchNormalization that allow us to propagate the pre-activatoins throughout a single layer. By applyingthesign()activation function to the random pre-activation, we not only obtain a distribution overbinary activations, it also allows for backpropagation through the sign()operation. This is especiallyconvenient as this in a deterministic Neural Network all gradient information is zeroed out whenusing sign as activation. We explore two different methods for training this probabilistic binary neuralnetwork: In the first method the activation distribution of layer lis propagated to layer (l+ 1) , which1Under review as a conference paper at ICLR 2019means the MAC operation is performed on two binary random variables. In the second method thebinary activation is sampled as the last operation in a layer using the concrete relaxation Maddisonet al. (2016). This can be thought of as a form of local reparametrization Kingma et al. (2015). Wecall the networks obtained using these methods PBNet and PBNet-S, respectively.At test time, we obtain a single deterministic Binary Neural Network, an ensemble of Binary NeuralNetworks by sampling from the parameter distribution, or a Ternary Neural Network based onthe Binary weight distribution. An advantage of our method is that we can take samples from theparameter distribution indefinitely—without retraining. Hence, this method allows for anytimeensemble predictions and uncertainty estimates. Note that while in this work we only consider thebinary case, our method supports any discrete distribution over weights and activations.2 P ROBABILISTIC BINARY NEURAL NETWORKAlgorithm 1: Pseudo code for forward passof single layer in PBNet(-S). 
al1denotesthe activation of the previous layer, Btherandom binary weight matrix, is the tem-perature used for the concrete distribution,f(;)the linear transformation used in thelayer,>0a small constant for numericalstability,Dthe dimensionality of the innerproduct inf, and&are the parametersfor batch normalization.Input: al1,Bp(B),,f(;),,,Result: Binary activation al// CLT approximationifal1is a binary random variable then=f(E[B];E[al1]);2=Df((E[B])2;(E[al1])2);else=f(E[B];al1);2=f(V[B];a2l1);end// Batch normalizationm=channel-wise-mean ();v=channel-wise-variance (;2;m);=(m)=pv++;2=22=(v+);// Max poolingifmax pooling required thennN(0;I);s=+n;=max-pooling-indices (s);;2=select-at-indices (;2;);end// Binarization and samplingp= 1(0j;2);ifsample activation thenalBinaryConcrete (p;);return al;elsereturn Binary( p)endIn this section we introduce the probabilistic settingof the PBNet. Moreover, the approximation of thedistribution on the pre-activations is introduced. Foran explanation of the other operations in the PBNet,see Section 2.1 for the activation, Section 2.1.1 for thesampling of activations, and Section 2.2 for Poolingand Normalization.We aim to train a probabilistic Binary Neural Network.As such, we pose a binary distribution over the weightsof the network and optimize the parameters of this dis-tribution instead of the parameters directly. This way,we obtain a distribution over parameters, but also dealwith the inherent discreteness of a Binary Neural Net-work. Given an objective function L(), this approachcan be thought of in terms of the variational optimiza-tion framework Staines & Barber (2012). Specifically,by optimizing the parameters of the weight distribu-tions, we optimize a bound on the actual loss:minBL(B)Eq(B)[L(B)]; (1)where Bare the binary weights of the network andq(B)is a distribution over the binary weights. Forq(B)a slight reparametrization of the Bernoulli dis-tribution is used, which we will refer to as the Bi-nary distribution. This distribution is parameterizedby2[1;1]and is defined by:aBinary ()()a+ 12Bernoulli (+ 12):(2)For the properties of this distribution, please refer toAppendix A.We will now consider using the Binary distributionfor both the weights and the activations in a NeuralNetwork. Since the pre-activations in a Neural Networkare computed using MAC operations, which are thesame for each value in the pre-activation, we will onlyconsider a single value in our discussion here. Let wBinary ()andhBinary ()be the weight and inputrandom variable for a given layer. As such, the inner-product between the weights and input is distributedaccording to a translated and scaled Poisson binomialdistribution:wh+D2PoiBin (2[]1): (3)2Under review as a conference paper at ICLR 2019WhereDis the dimensionality of handwanddenotes element-wise multiplication. See thepicket fence on the top in Figure 1 for an illustration of the PMF of a Poisson binomial distribution.Although the scaled and translated Poisson binomial distribution is the exact solution for the innerproduct between the weight and activation random variables, it is hard to work with in subsequentlayers. For this reason, and the fact that the Poisson binomial distribution is well approximated bya Normal distribution (Wang & Manning, 2013), we use a Normal approximation to the Poissonbinomial distribution, which allows for easier manipulations. 
So far, only the MAC operation in a given layer has been discussed. The application of the binary activation is described in Section 2.1, and specifics on sampling the binary activation are given in Section 2.1.1. The stochastic versions of Batch Normalization and Max Pooling are introduced in Section 2.2. The full forward pass for a single layer is given in detail in Algorithm 1.
2.1 STOCHASTIC BINARY ACTIVATION
Since the output of a linear operation using binary inputs is not restricted to be binary, it is required to apply a binarization operation to the pre-activation in order to obtain binary activations. Various works – e.g., Hubara et al. (2016) and Rastegari et al. (2016) – use either deterministic or stochastic binarization functions, i.e.,

$b_{\mathrm{det}}(a) = \begin{cases} +1 & \text{if } a \geq 0 \\ -1 & \text{otherwise} \end{cases}$   $b_{\mathrm{stoch}}(a) = \begin{cases} +1 & \text{with probability } p = \mathrm{sigmoid}(a) \\ -1 & \text{with probability } 1-p \end{cases}$   (5)

In our case the pre-activations are random variables. Hence, applying a deterministic binarization function to a random pre-activation results in a stochastic binary activation. Specifically, let $a_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ be a random pre-activation obtained using the normal approximation, as introduced in the previous section; then the activation (after binarization) is again given as a Binary random variable. Interestingly, the Binary probability can be computed in closed form by evaluating the probability density that lies above the binarization threshold:

$h_i = b_{\mathrm{det}}(a_i) \sim \mathrm{Binary}(q_i), \quad q_i = 1 - \Phi(0 \mid \mu_i, \sigma_i^2)$,   (6)

where $\Phi(\cdot \mid \mu, \sigma^2)$ denotes the CDF of $\mathcal{N}(\mu, \sigma^2)$. Applying the binarization function to a random pre-activation has two advantages. First, the derivatives $\partial q_i / \partial \mu_i$ and $\partial q_i / \partial \sigma_i$ are not zero almost everywhere, in contrast to the derivatives of $b_{\mathrm{det}}$ and $b_{\mathrm{stoch}}$ when applied to a deterministic input. Second, the distribution over $h_i$ reflects the true uncertainty about the sign of the activation, given the stochastic weights, whereas $b_{\mathrm{stoch}}$ uses the magnitude of the pre-activation as a substitute. For example, a pre-activation with a high positive magnitude and high variance will be deterministically mapped to 1 by $b_{\mathrm{stoch}}$. In contrast, our method takes the variance into account and correctly assigns some probability mass to $-1$. See Figure 1 for a graphical depiction of the stochastic binary activation.

Figure 1: The discrete Poisson binomial distribution (in green) is approximated by a continuous Gaussian distribution (in purple). By applying $b_{\mathrm{det}}$ to a random pre-activation we obtain a binary activation distribution.
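As an illustration of Eq. 6, a possible PyTorch sketch of the stochastic binarization (our own simplification): mu and var are assumed to hold the Gaussian pre-activation parameters, and the returned probabilities parameterize the Binary activation distribution.

import torch

def stochastic_binarize(mu, var, eps=1e-8):
    # q_i = 1 - Phi(0 | mu_i, sigma_i^2) = Phi(mu_i / sigma_i)
    sigma = (var + eps).sqrt()
    std_normal = torch.distributions.Normal(0.0, 1.0)
    q = std_normal.cdf(mu / sigma)
    # q is P(h_i = +1); in the paper's Binary(theta) parameterization,
    # this corresponds to theta = 2q - 1. Unlike a hard sign, the Gaussian
    # CDF is differentiable, so gradients flow to mu and sigma.
    return q

This differentiability is precisely the first advantage noted above: the derivatives of q with respect to the pre-activation moments are non-zero almost everywhere.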
2.1.1 SAMPLING THE BINARY ACTIVATIONS
So far, we have discussed propagating distributions throughout the network. Alternatively, the binary activations can be sampled using the Concrete distribution (Maddison et al., 2016) during training. Specifically, we use the hard sample method as discussed by Jang et al. (2016). By sampling the activations, the input for subsequent layers will match the input that is observed at test time more closely.
As a consequence of sampling the activation, the input to a layer is no longer a distribution but a $h \in \{-1, +1\}^D$ vector instead. As such, the normal approximation to the pre-activation is computed slightly differently. From the Lyapunov CLT it follows that the approximation to the distribution of the pre-activation is given by:

$a = w \cdot h \;\dot\sim\; \mathcal{N}\!\left(\sum_{d=1}^{D} \theta_d h_d,\; \sum_{d=1}^{D} \sigma_d^2 h_d^2\right)$,   (7)

where $w \sim \mathrm{Binary}(\theta)$ is a random weight and $\sigma_d^2 = \mathbb{V}[w_d]$. Similarly, the pre-activation of the input layer is also computed using this approximation—given a real-valued input vector. We will refer to a PBNet that uses activation sampling as PBNet-S.
2.2 NORMALIZATION AND POOLING
Other than a linear operation and a (non-linear) activation function, Batch Normalization (Ioffe & Szegedy, 2015) and pooling are two popular building blocks for Convolutional Neural Networks. For Binary Neural Networks, applying Batch Normalization to a binarized activation will result in a non-binary result. Moreover, the application of max pooling on a binary activation will result in a feature map containing mostly +1s. Hence, both operations must be applied before binarization. However, in the PBNet, the binarization operation is applied before sampling. As a consequence, the Batch Normalization and pooling operations can only be applied on random pre-activations. For this reason, we define these methods for random variables. Although there are various ways to define these operations in a stochastic fashion, our guiding principle is to only leverage stochasticity during training, i.e., at test time, the stochastic operations are replaced by their conventional implementations and parameters learned in the stochastic setting must be transferred to their deterministic counterparts.
2.2.1 STOCHASTIC BATCH NORMALIZATION
Batch Normalization (BN) (Ioffe & Szegedy, 2015) — including an affine transformation — is defined as follows:

$\hat{a}_i = \gamma \frac{a_i - m}{\sqrt{v + \epsilon}} + \beta$,   (8)

where $a_i$ denotes the pre-activation before BN, $\hat{a}$ the pre-activation after BN, and $m$ & $v$ denote the sample mean and variance of $\{a_i\}_{i=1}^{M}$, for an $M$-dimensional pre-activation. In essence, BN translates and scales the pre-activations such that they have approximately zero mean and unit variance, followed by an affine transformation. Hence, in the stochastic case, our aim is that samples from the pre-activation distribution after BN also have approximately zero mean and unit variance—to ensure that the stochastic batch normalization can be transferred to a deterministic binary neural network. This is achieved by subtracting the population mean from each pre-activation random variable and by dividing by the population variance. However, since $a_i$ is a random variable in the PBNet, simply using the population mean and variance equations will result in non-standardized output. Instead, to ensure a standardized distribution over activations, we compute the expected population mean and variance under the pre-activation distribution:

$\mathbb{E}_{p(a \mid B, h)}[m] = \mathbb{E}\!\left[\frac{1}{M}\sum_{i=1}^{M} a_i\right] = \frac{1}{M}\sum_{i=1}^{M} \mathbb{E}[a_i] = \frac{1}{M}\sum_{i=1}^{M} \mu_i$,   (9)

$\mathbb{E}_{p(a \mid B, h)}[v] = \mathbb{E}\!\left[\frac{1}{M-1}\sum_{i=1}^{M} (a_i - \mathbb{E}[m])^2\right] = \frac{1}{M-1}\!\left(\sum_{i=1}^{M} \sigma_i^2 + \sum_{i=1}^{M} (\mu_i - \mathbb{E}[m])^2\right)$,   (10)

where $M$ is the total number of activations and $a_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ are the random pre-activations. By substituting $m$ and $v$ in Equation 8 by Equations 9 and 10, we obtain the following batch-normalized Gaussian distributions for the pre-activations:

$\hat{a}_i = \gamma \frac{a_i - \mathbb{E}[m]}{\sqrt{\mathbb{E}[v] + \epsilon}} + \beta \;\Rightarrow\; \hat{a}_i \sim \mathcal{N}\!\left(\gamma \frac{\mu_i - \mathbb{E}[m]}{\sqrt{\mathbb{E}[v] + \epsilon}} + \beta,\; \frac{\gamma^2 \sigma_i^2}{\mathbb{E}[v] + \epsilon}\right)$.   (11)

Note that this assumes a single channel, but is easily extended to 2d batch norm in a similar fashion as conventional Batch Normalization.
At test time, Batch Normalization in a Binary Neural Network can be reduced to an addition and sign flip of the activation; see Appendix B for more details.
2.2.2 STOCHASTIC MAX POOLING
In general, pooling applies an aggregation operation to a set of (spatially oriented) pre-activations. Here we discuss max pooling for stochastic pre-activations; however, similar considerations apply for other types of aggregation functions.
In the case of max pooling, given a spatial region containing stochastic pre-activations $a_1, \ldots, a_K$, we aim to stochastically select one of the $a_i$. Note that, although the distribution of $\max(a_1, \ldots, a_K)$ is well-defined (Nadarajah & Kotz, 2008), its distribution is not Gaussian and thus does not match one of the input distributions. Instead, we sample one of the input random variables in every spatial region according to the probability of that variable being greater than all other variables, i.e., $\pi_i = p(a_i > z_{\setminus i})$, where $z_{\setminus i} = \max(\{a_j\}_{j \neq i})$. $\pi_i$ could be obtained by evaluating the CDF of $(z_{\setminus i} - a_i)$ at 0, but to our knowledge this has no analytical form. Alternatively, we can use Monte-Carlo integration to obtain $\pi$:

$\pi \approx \frac{1}{L}\sum_{l=1}^{L} \text{one-hot}(\arg\max s^{(l)}), \quad s^{(l)} \sim p(a_1, a_2, \ldots, a_K) = \prod_{i=1}^{K} \mathcal{N}(\mu_i, \sigma_i^2)$,   (12)

where one-hot($i$) returns a $K$-dimensional one-hot vector with the $i$-th element set to one. The pooling index is then sampled from Cat($\pi$). However, more efficiently, we can sample $s \sim p(a_1, \ldots, a_K)$ and select the index of the maximum in $s$, which is equivalent to sampling from Cat($\pi$). Hence, for a given max pooling region, it is sufficient to obtain a single sample from each normal distribution associated with each pre-activation and keep the random variable for which this sample is maximum. A graphical overview of this is given in Figure 2.
Other forms of stochastic or probabilistic max pooling were introduced by Lee et al. (2009) and Zeiler & Fergus (2013); however, in both cases a single activation is sampled based on the magnitude of the activations. In contrast, in our procedure we stochastically propagate one of the input distributions over activations.

Figure 2: Max pooling for random variables is performed by taking a single sample from each of the input distributions. The output random variable for each pooling region is the random variable that is associated with the maximum sample. (Steps: 1. sample from the input distributions; 2. select the maximum per region; 3. keep the maximum distribution for each region.)
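A minimal PyTorch sketch of this sampling-based pooling follows (our illustration; the NCHW shapes and helper choices are assumptions rather than the paper's code):

import torch
import torch.nn.functional as F

def stochastic_max_pool2d(mu, var, kernel=2):
    # Draw one sample per pre-activation and locate the per-region argmax ...
    sample = mu + var.sqrt() * torch.randn_like(mu)
    _, idx = F.max_pool2d(sample, kernel, return_indices=True)
    # ... then keep the (mu, sigma^2) of the winning random variable,
    # i.e., propagate one of the input distributions per pooling region.
    n, c, h, w = idx.shape
    flat = idx.flatten(2)
    mu_p = mu.flatten(2).gather(2, flat).view(n, c, h, w)
    var_p = var.flatten(2).gather(2, flat).view(n, c, h, w)
    return mu_p, var_p

Note that only a single Gaussian sample per location is required, matching the efficient variant described above.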
2.3 WEIGHT INITIALIZATION
For the PBNet, the parameters $\theta$ for $q(B)$ are initialized from a uniform $U(-1, 1)$ distribution. Although the final parameter distribution more closely follows a Beta$(\alpha, \beta)$ distribution, for $\alpha, \beta < 1$, we did not observe any significant impact when choosing another initialization method for the PBNet.
In the case of the PBNet-S, we observed a significant improvement in training speed and performance by initializing the parameters based on the parameters of a pre-trained full-precision Neural Network. This initializes the convolutional filters with more structure than a random initialization. This is desirable, as in order to flip the value of a weight, the parameter governing the weight has to pass through a high-variance regime, which can slow down convergence considerably.
For the PBNet-S, we use the weight transfer method introduced by Shayer et al. (2018), in which the parameters of the weight distribution for each layer are initialized such that the expected value of the random weights equals the full-precision weight divided by the standard deviation of the weights in the given layer. Since not all rescaled weights lie in the $[-1, 1]$ range, all binary weight parameters are clipped between $[-0.9, 0.9]$. This transfer method transfers the structure present in the filters of the full-precision network and ensures that a significant part of the parameter distributions is initialized with low variance.
2.4 DETERMINISTIC BINARY NEURAL NETWORK
In our training procedure, a stochastic neural network is trained. However, at test time (or on hardware) we want to leverage all the advantages of a full binary Neural Network. Therefore, we obtain a deterministic binary Neural Network from the parameter distribution $q(B)$ at test time. We consider three approaches for obtaining a deterministic network: a deterministic network based on the mode of $q(B)$, called PBNet-MAP; an ensemble of binary Neural Networks sampled from $q(B)$, named PBNet-X; and a Ternary Neural Network (PBNet-Ternary), in which a single parameter $W_i$ may be set to zero based on $q$, i.e.:

$W_i = \begin{cases} +1 & \text{if } q(B_i = +1) \geq 3/4 \\ -1 & \text{if } q(B_i = -1) \geq 3/4 \\ 0 & \text{otherwise} \end{cases}$   (13)

The ternary network can also be viewed as a sparse PBNet; however, sparse memory look-ups may slow down inference.
Note that, even when using multiple binary neural networks in an ensemble, the ensemble is still more efficient in terms of computation and memory when compared to a full-precision alternative. Moreover, it allows for anytime ensemble predictions for improved performance and uncertainty estimates by sampling from the weight distribution.
Since the trained weight distribution is not fully deterministic, the sampling of individual weight instantiations will result in a shift of the batch statistics. As a consequence, the learned batch norm statistics no longer closely match the true statistics. This is alleviated by re-estimating the batch norm statistics based on (a subset of) the training set after weight sampling, using a moving mean and variance estimator. We observed competitive results using as little as 20 batches from the training set.
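In code, this re-estimation step can be as simple as the following PyTorch sketch (ours; it relies only on the standard behaviour that BatchNorm layers update their running statistics during forward passes in train mode):

import torch

def reestimate_bn_stats(binary_net, train_loader, num_batches=20):
    binary_net.train()          # BatchNorm refreshes running mean/var in train mode
    with torch.no_grad():       # no gradients or weight updates are needed
        for i, (x, _) in enumerate(train_loader):
            if i >= num_batches:
                break
            binary_net(x)       # forward pass updates the moving estimates
    binary_net.eval()
    return binary_net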
3 RELATED WORK
Binary and low-precision neural networks have received significant interest in recent years. Most similar to our work, in terms of the final neural network, is the work on Binarized Neural Networks by Hubara et al. (2016). In this work, a real-valued shadow weight is used, and binary weights are obtained by binarizing the shadow weights. Similarly, the pre-activations are binarized using the same binarization function. In order to back-propagate through the binarization operation, the straight-through estimator (Hinton, 2012) is used. Several extensions to Binarized Neural Networks have been proposed which — more or less — qualify as binary neural networks: XNOR-net (Rastegari et al., 2016), in which the real-valued parameter tensor and activation tensor are approximated by a binary tensor and a scaling factor per channel; and ABC-nets (Lin et al., 2017), which take this approach one step further and approximate the weight tensor by a linear combination of binary tensors. Both of these approaches perform the linear operations in the forward pass using binary weights and/or binary activations, followed by a scaling or linear combination of the pre-activations. In McDonnell (2018), methods similar to Hubara et al. (2016) are used to binarize a wide resnet (Zagoruyko & Komodakis, 2016) to obtain results on ImageNet very close to the full-precision performance. Another method for training binary neural networks is Expectation Backpropagation (Soudry et al., 2014), in which the central limit theorem and online expectation propagation are used to find an approximate posterior. This method is similar in spirit to ours, but the training method is completely different. Most related to our work are the work by Shayer et al. (2018), which uses the local reparametrization trick to train a Neural Network with binary weights, and the work by Baldassi et al. (2018), which also discusses a binary Neural Network in which the activation distributions are propagated through the network. Moreover, in Wang & Manning (2013) the CLT was used to approximate dropout noise during training in order to speed up training; however, there is no aim to learn binary (or discrete) weights or to use binary activations in this work.
4 EXPERIMENTS
We evaluate the PBNet on the MNIST and CIFAR-10 benchmarks and compare the results to Binarized Neural Networks (Hubara et al., 2016), since the architectures of the deterministic networks obtained by training the PBNet are equivalent.
4.1 EXPERIMENTAL DETAILS
The PBNets are trained using either a cross-entropy (CE) loss or a binary cross-entropy for each class (BCE). For the CE loss there is no binarization step in the final layer; instead, the mean of the Gaussian approximation is used as the input to a softmax layer. For BCE, there is a binarization step, and we treat the probability of the $i$-th output being +1 as the probability of the input belonging to the $i$-th class. Specifically, for an output vector $p \in [0, 1]^C$ for $C$ classes and the true class $y$, the BCE loss for a single sample is defined as

$\mathcal{L}_{\mathrm{BCE}}(p, y) = -\sum_{c=1}^{C} [c = y] \log p_c + [c \neq y] \log(1 - p_c)$.   (14)
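For clarity, Eq. 14 corresponds to the following PyTorch computation (a sketch we provide for illustration, not the authors' code):

import torch
import torch.nn.functional as F

def bce_per_class_loss(p, y, eps=1e-7):
    # p: (batch, C) probabilities of each output being +1; y: (batch,) class ids
    target = F.one_hot(y, num_classes=p.size(1)).float()
    p = p.clamp(eps, 1.0 - eps)          # numerical stability for the logs
    return -(target * p.log() + (1.0 - target) * (1.0 - p).log()).sum(dim=1).mean()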
The weights for the PBNet-S networks are initialized using the transfer method described in Section 2.3, and the PBNets are initialized using a uniform initialization scheme. All models are optimized using Adam (Kingma & Ba, 2014) and a validation-loss-plateau learning rate decay scheme. We keep the temperature for the binary concrete distribution static at 1.0 during training. For all settings, we optimize model parameters until convergence, after which the best model is selected based on a validation set. Our code is implemented using PyTorch (Paszke et al., 2017).
For Binarized Neural Networks we use the training procedure described by Hubara et al. (2016), i.e., a squared hinge loss and layer-specific learning rates that are determined based on the Glorot initialization method (Glorot & Bengio, 2010).
Experimental details specific to datasets are given in Appendix C and the results are presented in Table 1. We report both the test set accuracy obtained after binarizing the network as well as the test set accuracy obtained by the stochastic network during training (i.e., by propagating activation distributions).
4.2 ENSEMBLE BASED UNCERTAINTY ESTIMATION
As presented in Table 1, the accuracy improves when using an ensemble. Moreover, the predictions of the ensemble members can be used to obtain an estimate of the certainty of the ensemble as a whole.

Table 1: Test accuracy on MNIST and CIFAR-10 for Binarized NN (Hubara et al., 2016), PBNet, and a full-precision network (FPNet). PBNet-MAP refers to a deterministic PBNet using the MAP estimate, PBNet-Ternary is a ternary deterministic network obtained from q, and PBNet-X refers to an ensemble of X networks, each sampled from the same weight distribution. For the ensemble results both mean and standard deviation are presented. The propagate column contains results obtained using the stochastic network, whereas results in the binarized column are obtained using a deterministic binary Neural Network.

                           MNIST                         CIFAR-10
                           PROPAGATE   BINARIZED         PROPAGATE   BINARIZED
Binarized NN               –           99.17             –           88.17
PBNet-MAP (BCE)            99.35       99.13             88.24       79.98
PBNet-MAP (CE)             99.24       98.64             86.73       75.05
PBNet-S-MAP (BCE)          99.26       99.22             89.58       89.10
PBNet-S-MAP (CE)           99.14       99.05             88.67       88.54
PBNet-S-Ternary (BCE)      –           99.26             –           89.70
PBNet-S-2 (BCE)            –           99.25 ± 0.047     –           89.75 ± 0.205
PBNet-S-5 (BCE)            –           99.29 ± 0.036     –           90.75 ± 0.202
PBNet-S-16 (BCE)           –           99.30 ± 0.025     –           91.28 ± 0.112
FPNet                      –           99.48             –           92.45

To evaluate this, we plot an error-coverage curve (Geifman & El-Yaniv, 2017) in Figure 3a. This curve is obtained by sorting the samples according to a statistic and computing the error percentage in the top x% of the samples – according to the statistic. For the Binarized Neural Network and PBNet-MAP the highest softmax score is used, whereas for the ensembles the variance in the prediction of the top class is used. The figure suggests that the ensemble variance is a better estimator of network certainty, and moreover, the estimation improves as the ensemble size increases.
4.3 EFFECT OF BATCH STATISTICS RE-ESTIMATION
As discussed in Section 2.4, after sampling the parameters of a deterministic network, the batch statistics used by Batch Normalization must be re-estimated. Figure 3b shows the results obtained using a varying number of batches from the training set to re-estimate the statistics. This shows that even a small number of samples is sufficient to estimate the statistics.
4.4 ABLATION STUDIES
We perform an ablation study on both the use of (stochastic) Batch Normalization and the use of weight transfer for the PBNet-S on CIFAR-10. For Batch Normalization, we removed all batch normalization layers from the PBNet-S and retrained the model on CIFAR-10. This resulted in a test set accuracy of 79.21%. For the weight initialization experiment, the PBNet-S weights are initialized using a uniform initialization scheme and trained on CIFAR-10, resulting in a test set accuracy of 83.61%. Moreover, the accuracy on the validation set during training is presented in Figure 3c. Note that these numbers are obtained without sampling a binarized network from the weight distribution, i.e., local reparametrization and binary activation samples are used. The PBNet-S that uses both weight transfer and stochastic Batch Normalization results in a significant performance improvement, indicating that both stochastic Batch Normalization and weight transfer are necessary components for the PBNet-S.
4.5 SAMPLING OF BINARY ACTIVATIONS
The results of our experiments show that, following our training procedure, sampling of the binary activations is a necessary component. Although the stochastic PBNet generalizes well to unseen data, there is a significant drop in test accuracy when a binary Neural Network is obtained from the stochastic PBNet. In contrast, this performance drop is not observed for PBNet-S. A potential explanation of this phenomenon is that by sampling the binary activation during training, the network is forced to become more robust to the inherent binarization noise that is present at test time of the binarized Neural Network. If this is the case, then sampling the binary activation can be thought of as a regularization strategy that prepares the network for a more noisy binary setting. However, other regularization strategies may also exist.

Figure 3: Error coverage curve, batch statistics re-estimation results, and ablation study results for CIFAR-10. (a) Error coverage for CIFAR-10: error (%) over coverage (%) for PBNet-2, PBNet-5, PBNet-16, PBNet, and Binarized Net. (b) Test set performance with increasing number of batches used to re-estimate the batch statistics on CIFAR-10. (c) Accuracy on the validation set during training, i.e., using stochastic weights, local reparametrization and binary activation sampling (PBNet-MAP, No Bnorm, Uniform Init).

5 CONCLUSION
We have presented a stochastic method for training Binary Neural Networks. The method is evaluated on multiple standardized benchmarks and reached competitive results. The PBNet has various advantageous properties as a result of the training method. The weight distribution allows one to generate ensembles online, which results in improved accuracy and better uncertainty estimations. Moreover, the Bayesian formulation of the PBNet allows for further pruning of the network, which we leave as future work.<|im_end|> <|im_start|>assistant ### Review Title The investigated problem is interesting, but methods used to handle binary networks are not impressive ### Review Text To reduce deep neural networks' reliance on memory and high power consumption, the paper proposes a kind of probabilistic neural network with both binary hidden nodes and binary weights. The paper presents a probabilistic way to binarize the activation and weight values. Also, it proposes a random version of batch normalization and max pooling. The binarization of hidden nodes is achieved through stochastic sampling according to the sign of the stochastic pre-activation values. The weight binarization is analogously done by sampling from a binary distribution. There is not much new here, and it is a standard way to obtain binary values probabilistically. The paper said in the introduction that the binary model will be trained with the re-parameterization trick, through either propagating the distributions or the samples from the concrete distribution. But I am still not very clear how this training process is done, especially for the training of the weight parameters. Overall, the problem investigated in this paper is very interesting and of practical importance; the experimental results are preliminary but encouraging. But all the techniques used in this paper to binarize neural networks are standard, and there is not too much new here. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper<|im_end|> <|im_end|>
W59BHjEDfz
logconference.io/LOG/2022/Conference
2022
Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery
["Niklas Funk", "Svenja Menzenbach", "Georgia Chalvatzaki", "Jan Peters"]
Robot assembly discovery is a challenging problem that lives at the intersection of resource allocation and motion planning. The goal is to combine a predefined set of objects to form something new while considering task execution with the robot-in-the-loop. In this work, we tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robot. Our novel hierarchical approach aims at efficiently decomposing the overall task into three feasible levels that benefit mutually from each other. On the high level, we run a classical mixed-integer program for global optimization of blocktype selection and the blocks’ final poses to recreate the desired shape. Its output is then exploited as a prior to efficiently guide the exploration of an underlying reinforcement learning (RL) policy handling decisions regarding structural stability and robotic feasibility. This RL policy draws its generalization properties from a flexible graph-based neural network that is learned through Q-learning and can be refined with search. Lastly, a grasp and motion planner transforms the desired assembly commands into robot joint movements. We demonstrate our proposed method’s performance on a set of competitive simulated and real-world robot assembly discovery environments and report performance and robustness gains compared to an unstructured graph-based end-to-end approach. Videos are available at https://sites.google.com/view/milp-gnn-for-rad .
["graph neural networks", "reinforcement learning", "robotics"]
Graph-based Reinforcement Learning meets Mixed IntegerPrograms: An application to 3D robot assembly discoveryNiklas FunkTechnical University of Darmstadtniklas@robot-learning.deSvenja MenzenbachTechnical University of Darmstadtsvenja.menzenbach@stud.tu-darmstadt.deGeorgia ChalvatzakiTechnical University of Darmstadtgeorgia@robot-learning.deJan PetersTechnical University of DarmstadtGerman Research Center for AI (DFKI)Hessian.AICentre for Cognitive Sciencejan@robot-learning.deAbstractRobot assembly discovery is a challenging problem that lives at the intersection ofresource allocation and motion planning. The goal is to combine a predefined set ofobjects to form something new while considering task execution with the robot-in-the-loop. In this work, we tackle the problem of building arbitrary, predefined targetstructures entirely from scratch using a set of Tetris-like building blocks and a robot.Our novel hierarchical approach aims at efficiently decomposing the overall taskinto three feasible levels that benefit mutually from each other. On the high level,we run a classical mixed-integer program for global optimization of blocktypeselection and the blocks’ final poses to recreate the desired shape. Its outputis then exploited as a prior to efficiently guide the exploration of an underlyingreinforcement learning (RL) policy handling decisions regarding structural stabilityand robotic feasibility. This RL policy draws its generalization properties froma flexible graph-based neural network that is learned through Q-learning andcan be refined with search. Lastly, a grasp and motion planner transforms thedesired assembly commands into robot joint movements. We demonstrate ourproposed method’s performance on a set of competitive simulated and real-worldrobot assembly discovery environments and report performance and robustnessgains compared to an unstructured graph-based end-to-end approach. Videos areavailable at https://sites.google.com/view/milp-gnn-for-rad .1 Introduction & Problem DefinitionFigure 1: Illustrating a simulated RAD en-vironment (left) and all three components ofour proposed hierarchical approach (right).A common desire amongst many industry sectors isto increase resource efficiency. The construction in-dustry could significantly reduce its environmentalimpact by re-using existing material more efficiently[1]. There is a fundamental need for combining in-telligent algorithms for reasoning on how existingmaterial can be recombined to form something new,with autonomous execution [2].Herein, we tackle the problem of autonomous roboticassembly discovery (RAD), i.e., a robotic agentshould reason about abstract 3D target shapes thatN. Funk et al., Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robotassembly discovery (Extended Abstract). Presented at the First Learning on Graphs Conference (LoG 2022),Virtual Event, December 9–12, 2022.need to be fulfilled given a set of available building blocks (cf. Fig. 1). Unlike other assemblyproblems with known instructions, in RAD, the agent does neither have any prior information aboutwhich blocks to use and their final poses, nor about the execution sequence. Contrarily, the RADagent should discover the possible ways of combining the building blocks, find appropriate actionsequences, and put them into practice. RAD can thus be structured into two difficulty levels. 
On the high level, a goal-defined resource allocation problem has to be solved, which is typically NP-complete for discrete resources and can be viewed as a real-world version of the Knapsack Problem [3]. The low level requires solving a motion planning problem, i.e., having to come up with an overall feasible sequence of picking and placing actions, taking into account the robot's kinematics, structural stability throughout the assembly, and avoiding any collisions.
One way of approaching RAD are end-to-end approaches that directly map from problem definition to low-level actions [4–6]. They are typically straightforward to design, and based on learned graph neural network (GNN) representations. Due to their ability to learn relational encodings [7,8] and invariant representations, they can overcome combinatorial barriers [9], and be combined with search for improved generalization [5,6,10]. Yet, they often require extensive training in combinatorial action spaces, and typically lack interpretability. On the other end of the spectrum are Task and Motion Planning approaches [11,12], which naturally represent the problem's hierarchy and necessitate full prior knowledge of geometry and kinematics. They are usually unsuitable for real-time reactive control, as the full joint optimization suffers from combinatorics and non-convex constraints.
We propose a novel hierarchical method for 3D RAD that addresses both resource allocation and motion planning. On the high level, a model-based mixed-integer linear program (MILP), handling the process of block-type selection and optimizing the blocks' final poses for optimally resembling the desired target shape, is solved. The MILP's solution is then used as a guiding exploration signal in a graph-based Reinforcement Learning (RL) framework. We define a GNN for capturing the geometric, structural, and physical relationships between building blocks, robot, and target shape, thereby incorporating all effects that have not been modelled on the higher level. The GNN is trained through model-free Q-learning, allowing the integration with tree search for improved long-term decisions [10]. To put the previous reasoning into practice, at the lowest level we rely on simple grasp and motion planning. We present an empirical evaluation of our proposed approach in a set of competitive simulated RAD tasks. The results show superior performance of our approach against both empirical and end-to-end GNN baselines, thereby underlining its effectiveness.
Problem Definition

Figure 2: 2D RAD environment with one placed block consisting of two primitive elements (shown in brown/blue). The grid-cells are visualized through their centre points. Pink points represent target grid-cells that are to be filled, and non-target grid-cells (green) should remain unoccupied.

We formulate the problem of having to combine rectilinear blocks into a desired target shape as a Markov Decision Process. Its state is given by four sets: i) the set of unplaced blocks that encodes the remaining blocks, ii) the set of placed blocks that have already been used, iii) the set of target grid-cells (pink) that are part of the target shape and should all be filled, and iv) the set of non-target grid-cells (green) that should remain unoccupied (cf. Fig. 2). We also assume that all building blocks are a combination of primitive blocks. This choice allows to modularly represent any more complicated block through primitive elements.
For block placement, we use a discrete, time-varying action space.
Every unplaced primitive block can be placed with respect to all available grid-cells, while additionally selecting from four actions that rotate the block by 0°, ±90°, or 180° around the z-axis. We also add one termination action that results in stopping the assembly process. The resulting action space of combinatorial complexity thus contains #unplaced primitive elements × #grid-cells × 4 + 1 = $U_p \times G_c \times 4 + 1$ actions. After every placement action, the sets of placed/unplaced elements and target/non-target grid-cells are updated, and a reward is assigned. The reward is positive when the action reduced the number of target grid-cells, and negative if non-target grid-cells are being filled, therefore actively enforcing resource efficiency. The conditions for a successful placement action are that the block can be placed by the robot without moving or colliding with any other block, and that it is placed in a stable configuration. On any invalid action, the episode is terminated and a high negative reward is assigned. Otherwise, the episode is terminated upon the events of i) the agent choosing the termination action, ii) no more available building blocks, or iii) the completion of RAD.

2 Method
We now introduce the two upper levels of our proposed tri-level hybrid approach for reliable RAD (cf. Fig. 1). For the lowest level, which only realizes the commanded assembly actions, we refer to the appendix.
High Level: MILP for optimal geometric target filling. We first solve a MILP for optimizing the blocks' placing poses to optimally fill the desired shape in light of the problem's combinatorial complexity. Yet, to render the problem tractable, we do not consider the sequencing and robotic constraints. Based on the previous definitions (reward & voxelization), the MILP's objective (subject to maximization) equates to $O_{\mathrm{MILP}} = c^T g$, with vector $g \in \mathbb{R}^{G_c \times 1}$ representing the grid-state, and $c \in \mathbb{R}^{G_c \times 1}$ containing weights that indicate whether a grid-cell should be filled (1) or not (−1). We essentially flatten the 2D grid from Fig. 2 into a single vector by converting the discrete coordinates $d_x, d_y$ of every grid-cell to a single index $j = d_x + d_y n_x$ (with grid width $n_x$). As every grid-cell can be occupied by at most one block, we add $g[i] \leq 1, \forall g[i] \in g$ as constraints. Next, we determine how every potential action changes the grid-state. E.g., placing the horizontal block from Fig. 2 in the lowest left position results in a grid state of $p^T_{i=1,k=1} = [1, 1, 0, \ldots, 0] \in \mathbb{R}^{1 \times G_c}$, with block type index $i$ and placement action $k$. By additionally assigning a binary decision variable $w_{i,k}$ and taking all object types into account, we can define the change in the grid-state according to $g = \sum_{\hat{i}=1}^{P} \sum_{\hat{k}=1}^{K(\hat{i})} w_{\hat{i},\hat{k}} \, p_{\hat{i},\hat{k}}$, with a total of $P$ different block types and $K(i)$ admissible actions per type. While the binary decision variables prohibit any partial block placement by definition, we still have to restrict that any type of block can only be placed according to its appearance in the scene ($N_i$): $\sum_{\hat{k}=1}^{K(i)} w_{i,\hat{k}} \leq N_i, \ \forall i \in P$. We solve the resulting MILP through Gurobi [13] and obtain the optimal values for the decision variables, thereby revealing the final poses for every block type.
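For illustration, this high-level program can be written down in a few lines with the gurobipy API (a sketch under our own data layout: placements[i] is assumed to list the 0/1 occupancy vectors p_{i,k} of all admissible placements of block type i, and counts[i] the multiplicity N_i; this is not the authors' implementation):

import gurobipy as gp
from gurobipy import GRB

def solve_grid_milp(c, placements, counts):
    m = gp.Model("rad-high-level")
    # one binary decision variable per (block type, placement action)
    w = {}
    for i, plist in enumerate(placements):
        for k in range(len(plist)):
            w[i, k] = m.addVar(vtype=GRB.BINARY, name=f"w_{i}_{k}")
    Gc = len(c)
    # grid state g[j]: sum of occupancies of all selected placements
    g = [gp.quicksum(placements[i][k][j] * w[i, k] for (i, k) in w)
         for j in range(Gc)]
    for j in range(Gc):
        m.addConstr(g[j] <= 1)                       # at most one block per cell
    for i, plist in enumerate(placements):           # respect block multiplicities
        m.addConstr(gp.quicksum(w[i, k] for k in range(len(plist))) <= counts[i])
    m.setObjective(gp.quicksum(c[j] * g[j] for j in range(Gc)), GRB.MAXIMIZE)
    m.optimize()
    return [(i, k) for (i, k) in w if w[i, k].X > 0.5]

Each selected (i, k) pair fixes a block type and a final pose; sequencing and robotic feasibility are deliberately left to the levels below.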
Medium Level: GNN for task sequencing. The high level only partially resolves the problem's combinatorial aspect. It lacks i) the placement actions' sequencing, ii) the exact assignment of which block to use for each placement, and iii) the consideration of robotic feasibility, the blocks' initial positions, and structural stability. Thus, we require this level, which is tasked to decide upon either executing one of the proposed actions from the higher-level MILP or terminating the current assembly. We propose an approach based on combining GNNs and Q-learning [5,6]. The GNN is capable of providing the required representational flexibility and invariance to problem size, while performing action selection based on Q-learning is desirable as i) the action space is discrete, ii) the estimation of all the actions' quality as the basis for action selection allows to efficiently incorporate the MILP's prior knowledge by masking out all actions that are not inside its solution, iii) potential multimodalities in the MILP solution can be captured, and iv) it allows easy and time-effective combination with search-based methods, i.e., Monte Carlo Tree Search (MCTS), to improve performance [10].
We now describe the action selection process (cf. Fig. 3). We refer to [6] (which essentially uses the same GNN) for the details. We first transform the environment's state into a graph by creating nodes for all primitive blocks and grid-cells. Every node has 5 initial features, i.e., the 3D world coordinates of its centre ∈ R³, and 2 booleans indicating the node type, i.e., placed/unplaced primitive block ([1,0]/[0,0]), target/non-target grid-cell ([1,1]/[0,1]). Almost all nodes of the graph are fully connected with each other – we only omit the connections in-between the unplaced primitive blocks if they do not belong to the same block, to provide an inductive bias for the object shape. Upon graph creation follow three rounds of message passing using an attention mechanism [6,9], in which we sequentially build an encoded graph. The encodings are the basis for computing Q-values for all available actions (i.e., predicting every action's quality). As any unplaced primitive block can be placed w.r.t. every grid-cell, a standard feedforward NN is used that takes as input the encoded node values of i) the primitive block-to-be-placed and ii) the grid-cell, and outputs the Q-values for all four rotational-placement actions between these nodes. This process is repeated for all pairs of unplaced primitive blocks and grid-cells.

Figure 3: Illustrating action selection. First, the current scene is transformed into a graph. Note: Only a subset of the target (pink) and non-target (green) grid-cells is shown. White nodes depict the unplaced primitive blocks. Next follows message passing updating the nodes' features. The actions' Q-values are predicted based on the nodes' features of the respective unplaced primitive block and the grid-cells using a feedforward neural network (NN). To incorporate the prior knowledge, we only consider actions that are part of the MILP solution (shown in red).

The action decision is made using an ε-greedy policy, yet only allowing to choose from the set of actions proposed by the MILP (we mask out all the other ones), as well as the termination action. The ε-greedy policy controls the tradeoff between randomly exploring actions and exploiting, i.e., selecting the action with the highest Q-value during training and evaluation. The graph's weights are refined through temporal-difference learning, thereby attempting to improve the estimation of the Q-values by minimizing the difference between the predicted quality of the action and the observed outcome. While this Q-learning procedure by itself already results in good policies, during test time we additionally consider action selection based on the combination of Q-learning and MCTS (DQN+MCTS).
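The pairwise Q-value readout can be sketched as follows in PyTorch (our simplified illustration; the embedding dimension, the boolean mask layout, and the separate termination head are assumptions on top of the paper's description):

import torch
import torch.nn as nn

class PairwiseQHead(nn.Module):
    """Predicts Q-values for the 4 rotational placements of every
    (unplaced-block node, grid-cell node) pair from their GNN embeddings."""
    def __init__(self, emb_dim, hidden=128, n_rot=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_rot),
        )

    def forward(self, block_emb, cell_emb, milp_mask):
        # block_emb: (U, d), cell_emb: (G, d), milp_mask: (U, G, 4) booleans
        U, G = block_emb.size(0), cell_emb.size(0)
        pairs = torch.cat(
            [block_emb.unsqueeze(1).expand(U, G, -1),
             cell_emb.unsqueeze(0).expand(U, G, -1)], dim=-1)
        q = self.mlp(pairs)                            # (U, G, 4)
        # mask out everything outside the MILP solution
        return q.masked_fill(~milp_mask, float("-inf"))

An ε-greedy step then either samples uniformly from the unmasked entries or takes the argmax over the masked tensor together with the termination action's Q-value.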
For more details, please see [14] and the appendix.

3 Experimental Results
We evaluate our proposed MILP-DQN method, and potentially adding MCTS (search budget of 5), in simulation (Fig. 1) and reality (cf. link to videos in abstract). We aim to answer two questions: 1) Does the MILP's guided exploration signal improve performance compared to end-to-end approaches? 2) How effective is the medium-level GNN compared to a heuristic approach for task sequencing? The training is conducted as in [6], and we also reuse their simulation, yet allowing block placements throughout the whole assembly area and voxelizing the target shape. In the evaluations, we describe the environment's difficulty through the grid size, i.e., Fig. 2 shows a potential target shape for a grid size of 3. The star (*) indicates the agents' evaluation in their training conditions, while the other experiments are out-of-distribution. The results are obtained by averaging the agents' performance in 200 scenes. We report the discounted reward R, the fraction of runs that ended upon failure f, and the target grid-cell coverage ā, i.e., the fraction of initially unfilled and finally filled target grid-cells.

A) Is the high-level MILP needed? We consider scenarios without the robot, which reduces the task's complexity to placing the blocks in a stable configuration while trying to optimally fill the desired shape. We compare against two baselines. The first one (DQN) does not consider the MILP's prior knowledge and can therefore place any of the available blocks at all currently unoccupied grid-cells. The second one (DQN-REL) follows [6], in which the available blocks can only be placed next to already placed blocks, thus reducing the action space. In the first step, we allow to place the blocks at any target grid-cell.

Table 1: Comparing our proposed method with two learned baselines in the two-sided environment without robot.

Grid Size   Method         R             ā
3*          DQN            0.63 (0.02)   0.71
            DQN-REL [6]    0.67 (0.01)   0.68
            MILP-DQN       1.22 (0.01)   0.87
4           DQN            0.71 (0.08)   0.69
            DQN-REL [6]    0.75 (0.08)   0.66
            MILP-DQN       1.56 (0.03)   0.87
5           MILP-DQN       1.92 (0.05)   0.85

The results in Table 1 reveal that the MILP provides a strong inductive bias that is effective in guiding the exploration. The agents trained using our proposed MILP-DQN approach outperform the two baselines, which in turn exhibit very similar performance. Compared to the baselines, MILP-DQN agents achieve an increase in the success rate and discounted reward by a factor of 2. These results confirm the task's combinatorial complexity. Performing an ε-greedy exploration without using an informed prior does not allow for discovering good action sequences. The results also reveal that the MILP-DQN agents generalize well to the out-of-distribution environments, as the desired target grid-cell coverage remains high at 0.87 and 0.85 (grid sizes of 4 and 5), despite the significant increase in task complexity; i.e., the number of blocks in the scene increases in line with the average number of target grid-cells that should be filled. The latter increases from roughly 5 to 12 while increasing the grid size from 3 to 5.

B) How effective is the GNN for robotic execution? We now consider the scenario with the robot (Fig. 1) and investigate the GNN's effectiveness. For this purpose, we compare the GNN with a heuristic (HEUR). The HEUR agents perform action selection as follows: based on the MILP's proposed actions, the heuristic only considers those which will result in a stable block placement and selects one of them at random.
If there is no such action, the termination action is selected.

Table 2: Comparing our proposed method with a heuristic in the two-sided environment with the robot-in-the-loop.

Grid Size   Method           R      f      ā
4*          HEUR             0.57   0.4    0.62
            MILP-DQN         1.03   0.16   0.7
            MILP-DQN-MCTS    1.24   0.05   0.75
5           HEUR             0.34   0.58   0.47
            MILP-DQN         0.98   0.25   0.58
            MILP-DQN-MCTS    1.38   0.08   0.65

As shown in Table 2, in both environments our proposed agents (MILP-DQN & MILP-DQN-MCTS) clearly outperform the heuristic. Already in the environment with fewer blocks, the heuristic results in 40% failures, indicating that a more informed method for action sequencing is required. An example of such a failure is depicted in Fig. 5, where, due to bad action sequencing by the heuristic, the two blocks collide. Our proposed approaches effectively reduce the failure rates, with MILP-DQN achieving a decrease by a factor of 2, while adding MCTS leads to a decrease by a factor of almost 8. Those results show that our learned graph-based representations are indeed capable of effectively capturing the environment's state and making informed decisions regarding the action sequencing – a crucial component of RAD. Overall, we conclude that our proposed hierarchical approach is indeed capable of resolving the inherent difficulties of RAD, as also illustrated in Fig. 4, where we show the successful assembly of a desired target shape using 4 blocks of 3 different types. Moreover, on our website, we also showcase real-world transfer of the learned policies.

4 Conclusions
We have presented a novel hierarchical approach for robot assembly discovery (RAD). Our approach combines global reasoning through mixed-integer programming, which forms a powerful inductive bias for the subsequent graph-based reinforcement learning for local decision-making, together with grasp and motion planning for realizing the assembly actions. The hierarchy efficiently decomposes the problem's huge combinatorial action space and results in robust and reliable RAD policies. The proposed approach is validated in a set of simulated RAD and real-world experiments that illustrate its effectiveness. As graph structures are already widely used in robotics (i.e., kinematic/dynamic chains, scene graphs, factor graphs), in the future we want to investigate how our approach and learning on graphs can be applied in different problem settings and domains.

Figure 4: Illustration of a successful RAD sequence using our proposed MILP-DQN-MCTS approach. The agent completes the assembly successfully using in total 4 blocks and 3 different block types.

Figure 5: Illustration of an unsuccessful RAD sequence using the heuristic agent introduced in Sec. 3-B. As shown in the images, it is important to make informed decisions about the assembly sequence, as wrong sequencing can result in collisions between the block that is placed and other blocks in the scene.

Acknowledgements
This work is supported by the AICO grant by the Nexplore/Hochtief Collaboration with TU Darmstadt, and the Emmy Noether DFG Programme (No. 448644653). Calculations for this research were conducted on the Lichtenberg high performance computer of the TU Darmstadt.

References
[1] Elma Durmisevic. Circular economy in construction design strategies for reversible buildings. BAMB, Netherlands, 2019.
[2] Skylar Tibbits. Autonomous assembly: designing for a new era of collective construction. John Wiley & Sons, 2017.
[3] Harvey M Salkin and Cornelis A De Kluyver. The knapsack problem: a survey. Naval Research Logistics Quarterly, 1975.
2[4]Victor Bapst, Alvaro Sanchez-Gonzalez, and Carl Doersch et al. Structured agents for physicalconstruction. In ICML , 2019. 2[5]Jessica B Hamrick, Victor Bapst, and Alvaro Sanchez-Gonzalez et al. Combining q-learningand search with amortized value estimates. In ICLR , 2019. 2, 3[6]Niklas Funk, Georgia Chalvatzaki, Boris Belousov, and Jan Peters. Learn2assemble withstructured representations and search for robotic architectural construction. In CoRL , 2021. 2,3, 4[7]Ashish Vaswani, Noam Shazeer, Niki Parmar, and Jakob Uszkoreit et al. Attention is all youneed. In NeurIPS , 2017. 2[8]Petar Veli ˇckovi ́c, Guillem Cucurull, and Arantxa Casanova et al. Graph attention networks.arXiv:1710.10903 , 2017. 2[9]Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! InICLR , 2018. 2, 4[10] David Silver, Julian Schrittwieser, and Karen Simonyan et al. Mastering the game of go withouthuman knowledge. nature , 2017. 2, 3[11] Marc Toussaint. Logic-geometric programming: An optimization-based approach to combinedtask and motion planning. In IJCAI , 2015. 2[12] Leslie Pack Kaelbling and Tomás Lozano-Pérez. Hierarchical planning in the now. In WorkshopsAAAI , 2010. 2[13] Gurobi Optimization, LLC. Gurobi Optimizer, 2022. 3[14] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction . MIT press,2018. 4[15] V olodymyr Mnih, Koray Kavukcuoglu, David Silver, and Andrei A Rusu et al. Human-levelcontrol through deep reinforcement learning. nature , 2015. 106A AppendixA.1 Visualization of successful RAD sequencesTo support the experimental evaluations presented in Section III-B, we also present videos on ourwebsite https://sites.google.com/view/milp-gnn-for-rad that illustrate the differencebetween the agents.We also show on the website that we can successfully transfer our learned policies to real-world RADinstances. To get all the information about the initial scene, we use an OptiTrack motion capturesystem. Afterwards, we can use this information to create a digital twin and subsequently plan insimulation and execute the respective actions also in the real world as shown in the videos.A.2 Further details regarding our proposed approachIn this section, we aim to summarize the overall working of our algorithms and provide a morethorough description of the individual components that are involved.A.2.1 Formulating RAD as MDPAs already mentioned in the problem definition section, we describe RAD using the notation of aMarkov Decision Process (MDP) with state and action space, S,A, transition probabilities p, rewardfunction r, and discount factor γ.State spaceThe state sis given by the combination of four sets, s=(SU,SP,TF,TE), with|SU|=NU,|SP|=NP,|TF|=NF,|TE|=NE. The set SUencodes the unplaced primitive units thatare still available for construction, SPthe primitive units that have already been used, TFandTEcontain the so-called target grid-cells and non-target grid-cells, respectively. These grid-cellsare parameterized through their respective 3D center coordinate x∈R3, i.e.TF={xi, i∈NF},TE={xi, i∈NE}(visualized in pink and green). By projecting all grid-cells centre coordinatexiinto the yellow target shape (cf. Fig. 2), we decide whether it should be occupied or remainunoccupied during RAD.Moreover, we assume that all building blocks are a combination of primitive units. More specifically,we consider that there is only one type of primitive unit: a unit cube uc= 13. Thus, all the blocksin the scene are a combination of primitive units (cf. Fig. 
2), i.e., block $i$ is defined by the union of $N_{b_i}$ primitive units, $b_i = \bigcup_{j=1}^{N_{b_i}} u_c$. Representing blocks as concatenations of primitives allows for a universal interface with graph-based representations, as any Tetris-like block can easily be represented. Simply put, each primitive unit induces a node in the graph, and the connectivity information encodes whether or not multiple primitive units form a larger block (cf. the two leftmost frames in Fig. 3). This choice also allows us to describe the placed and unplaced blocks through the primitive units' 3D positions $x_k$ and connectivity information $y_k = [y_{k,1}, \ldots, y_{k,N_U}]$, i.e., $S_U = \{(x_k, y_k),\, k \in N_U\}$. If primitive unit $k$ is connected with primitive unit 1 to form a larger block, $y_{k,1}$ equals 1; otherwise $y_{k,1} = 0$. We follow the same procedure for the set of already placed elements $S_P$.
Action space
For placing blocks in the scene, we use a discrete, time-varying action space. In particular, every primitive unit which is at the moment unplaced can be placed w.r.t. all available grid-cells. As more complicated blocks might also require rotations, we augment all placement actions with four rotational actions, i.e., rotating the block by 0, ±90, or 180 degrees around the upward-pointing z-axis. Furthermore, we add one termination action that enables the agent to indicate that the current assembly is finished or not possible to continue, as there are no feasible actions left. Thus, the resulting action space contains $N_a = N_U \times (N_F + N_E) \times 4 + 1 = N_U \times G_c \times 4 + 1$ actions. Note that the MDP is focused on high-level decision making. It does not account for the low-level motion generation, namely grasp selection and robot motion planning, as this would further increase the already large action space. Nevertheless, given the action, the motion generation problem is well defined, as it specifies the block that is to be moved, the required relative change in orientation, and its placement location. After every placement action, all primitive units belonging to the moved block are transferred from the set of unplaced elements to the set of placed ones. We also update the set of grid-cells by removing all cells that are now occupied.
Reward definition
On every successful placement action, we assign a reward of $r(s_t, a_t) = 0.2\,(N_{F_t} - N_{F_{t+1}} + N_{E_{t+1}} - N_{E_t})$, thereby giving a positive signal when the action reduced the number of target grid-cells, while also penalizing unnecessary filling of non-target grid-cells. Thus, this choice actively enforces resource efficiency. The conditions for a successful placement action (i.e., a valid action) are:
• the block is placed by the robot without moving or colliding with any other block
• the block is placed in a stable configuration (i.e., the resulting structure is not falling apart due to gravity).
If any of these conditions is violated, the action will be marked as invalid. This results in terminating the current episode and assigning a reward of −1.
To summarize the previous definitions, the reward is given by

$r(s_t, a_t) = \begin{cases} 0.2\,(N_{F_t} - N_{F_{t+1}} + N_{E_{t+1}} - N_{E_t}) & \text{if a valid action is executed}, \\ -1 & \text{if executing an invalid action}, \\ 0.2\,(N_{F_t} - N_{F_{t+1}} + N_{E_{t+1}} - N_{E_t}) + 1 & \text{if valid action and } N_{F_{t+1}} = 0 \text{, i.e., the completion of RAD}. \end{cases}$   (1)

As the last case corresponds to the desired behaviour, i.e., successful completion of RAD, we increase the final reward by +1 upon this event.
Episode termination
We additionally want to point out which events result in terminating the current episode. Terminating the episode requires having to sample a new RAD scene before taking the next action.
Every episodeterminates upon one of the following events occurring:•the agent selecting an invalid action (i.e., one of the following: 1) upon block placement, therobot or the block, or both collide with any other block, 2) the block placement results in anunstable configuration, i.e., the resulting structure falling apart due to gravity)• the agent choosing the termination action• no more available building blocks• the completion of RAD, i.e., the filling of all target grid-cells.Discount factorFinally, to reflect the long-horizon of the considered task, we set the discount factor γto0.999. Forthe definition of the reward, we refer to the next section.A.2.2 High Level: MILP for optimal geometric target fillingThe pseudocode in Alg. 1 contains all the logic to obtain and update the MILP’s solution. In particular,the function computeMILPSol from Line 4 onwards describes the necessary steps to obtain the MILPsolution given the current state. The pseudocode closely follows our descriptions from Sec. 2 for thehighest level.As the environment state changes constantly during the assembly process, we also require anotherfunction that updates the actions that are available to the agent. This function is called updateAvailAc-tions and described from Line 16 onwards. Please note that the update function does not requiresolving the MILP again (it takes as input the previously calculated solution) and is therefore waymore efficient.8Algorithm 1 MILP1: Grid is of size nx, ny, nz(x-, y-, z-direction, respectively)2: Grid has Gc=nxnynzcells3: Conversion from grid coordinate dx, dy, dzto index via j=dx+dynx+dznxny4:procedure computeMILPSol (s)5: Extract grid state g∈RGc×1froms6: Add Gcconstraints ensuring only 1 primitive block at each cell: g[i]≤1,∀g[i]∈g7: From SU, extract the Pdifferent block types that are in the scene, and their quantities Ni8: K,M=computeUniquePlacements (SU, P,g)(cf. Line 20)9: Introduce vector of binary decision variables: w∈{0,1}(PPˆi=1K(ˆi))×110: Add contraint on decision variables regarding occurance of each block type:PK(i)l=K(i−1)w[l]≤Ni, fori in1..P, and with K(0) = 111: Define cost vector c∈RGc×112: Define Objective: maxwcTMw13: Solve optimization problem, returns optimal entries for w14: compute all available actions: AMILP=computeAvailGNNActions (SU,g,w,M)(cf. 
Line 34)15: return AMILP,w,M16:procedure updateAvailActions (s,w,M)17: Extract grid state g∈RGc×1froms18: compute all available actions: AMILP=computeAvailGNNActions (SU,g,w,M)19: return AMILP,w,M20:procedure computeUniquePlacements (SU, P,g)21: Define empty Matrix M22: forˆiin1..Pdo23: Number of valid placing actions: K(ˆi) = 024: formin1..Gcdo25: /* Iterate over all potential placing positions26: forhin1..4do27: /* Iterate over all rotational actions28: Starting from empty grid with all zeros: g1= 0Gc×129: Attempt to place the first primitive unit p1of block ˆiat grid-cell mwhile applying rotationh, and capture resulting grid-state g130: ifAll primitive units inside cell && g1not equal to any column in M&&gT1g= 0then31: Append vector g1as a new column to M32: K(ˆi)+=133: return K,M34:procedure computeAvailGNNActions (SU,g,w,M)35: Define empty list of available actions AMILP36: From SU, extract the Ptdifferent block types that are currently in the scene37: forˆiin1..Ptdo38: formin1..Gcdo39: /* Iterate over all potential placing positions40: forhin1..4do41: /* Iterate over all rotational actions42: Starting from empty grid with all zeros: g1= 0Gc×143: Attempt to place the first primitive unit p1of block ˆiat grid-cell mwhile applying rotationh, and capture resulting grid-state g144: ifAll primitive units inside cell && g1equal to any column jinMfor which holdsw[j] == 1 &&gT1g= 0then45:/* This if checks: 1) is the block entirely in the target area?, 2) is the action that is to be executed inside thesolution space?, 3) is any of the voxels, where the block might be placed already filled?46: Append Triple ( p1, m, h ) toAMILP47: Append termination action ( 0,0,5) toAMILP48: return AMILPA.2.3 Medium Level: GNN & Q-Learning for task sequencing.On the medium level, we now get as input the solution from the higher level and use it to train ourgraph-based reinforcement learning agents.9To start with, we present the general loop that is used to train the graph-based representations inAlg. 2. It consists of two main components. We first collect experience through interacting with theenvironment (cf. while loop in Line 6), and secondly, we use the obtained samples to refine our GNN(cf. Line 23) that estimates the quality of all the actions and thus directly influences the actions thatare being taken in the environment.Algorithm 2 Training Loop for the medium level GNN-RL1:foriin1..NumberEpochs do2: /* Collect experience3: j= 0, Buffer B= []4: /* Define number of samples to collect5: Γ = 1006: while j <Γdo7: Sample initial state s8: /* Obtain MILP solution by running computeMILPSol from Alg. 1, cf. Line 49: AMILP,w,M=computeMILPSol (s)10: finished=False11: while finished==False do12: Sample action a=act(Q, s,AMILP)using Q-function approximator Q. This calls Alg. 3 Line 213: Execute a, i.e., move robot to pick and place the part & obtain r(s, a)14: Receive next state s′15: B.append([ s, a, r (s, a), s′])16: j=j+ 117: s=s′18: /* Update MILP solution by running updateAvailActions from Alg. 1, cf. Line 1619: AMILP=updateAvailActions (s,w,M)20: ifAny of the termination criteria (cf. A.2.1) is true then21: finished=True22: /* Update weights of Q-function23: π=update (Q,B)During training and also during evaluation of our proposed MILP-DQN approach, we perform actionselection as shown in Alg. 3 Line 2. In both cases of either exploring a random action (cf. Line 5) orselecting the action with highest predicted Q-value (i.e., exploitation, cf. 
During training and also during evaluation of our proposed MILP-DQN approach, we perform action selection as shown in Alg. 3, Line 2. In both cases of either exploring a random action (cf. Line 5) or selecting the action with the highest predicted Q-value (i.e., exploitation, cf. Line 7), we only allow choosing from the set of actions that has been previously proposed by the high-level MILP. For training the GNN to predict the correct Q-values, we exploit the collected experience and perform temporal-difference learning, as shown in Alg. 3 from Line 8 onwards.
Algorithm 3 DQN
1: Number of update steps χ
2: procedure act(Q, s, AMILP)
3:  /* ε-greedy policy
4:  if RandomVariable < ε then
5:   a = RandomChoice(AMILP)
6:  else
7:   a = max_{a′} Q(s, a′ | a′ ∈ AMILP)
  return a
8: procedure update(Q, B)
9:  Add B to Replay Memory
10: for i in 1..χ do
11:  Sample random subset from Replay Memory
12:  /* Temporal-difference learning using target network Q_T as in [15].
13:  loss = smoothL1(Q(s, a) − (r(s, a) + γ max_{a′} Q_T(s′, a′ | a′ ∈ AllPossibleActions(s))))
14:  Update Q-function approximator Q with parameters θ
15:  θ = θ − α ∂loss/∂θ
16: return Q
Lastly, the only missing details regard action selection for our proposed MILP-DQN-MCTS method. Contrary to the previous MILP-DQN approach, here we additionally add model-based search through MCTS. This has the potential to further improve performance, robustness, and generalization.
Alg. 4 provides the details for action selection in MILP-DQN-MCTS agents. Please note that Alg. 4 is still only capable of selecting from the set of actions proposed by the high-level MILP. As can be seen in the pseudocode, we now simulate the outcome of multiple actions and subsequently exploit this experience to decide upon the desired action that should be executed. We provide the code for the search process from Line 7 onwards. Moreover, as pointed out in the second line of the algorithm, we only consider a rollout depth of 1. This means that we stop the model-based rollouts after the first action and estimate the expected reward of the remaining trajectory by again querying our Q-function estimator. This possibility of clipping the rollouts already after the first, or generally speaking after very few actions, is another reason why the combination of Q-learning and MCTS is appealing and efficient.
Algorithm 4 DQN + MCTS
1: /* Note, this is only used during evaluation.
2: Rollout depth η = 1 if not stated otherwise
3: Search budget τ = 5 if not stated otherwise
4: procedure act(Q, s, AMILP)
5:  Given: state s, set containing the explored actions SA = {}
6:  ∀a ∈ AMILP, initialize W(s, a) = 1, QS(s, a) = Q(s, a)
7:  for i in 1..τ do
8:   if RandomVariable < ε then
9:    a = RandomChoice(a ∈ AMILP | W(s, a) = 1)
10:  else
11:   a = max_{a′} Q(s, a′ | a′ ∈ AMILP, W(s, a′) = 1)
12:  Add a to SA, collect r(s, a), update AMILP = updateAvailActions(s, w, M)
13:  for j in 1..η−1 do
14:   a = DQN-act(Q, s) (Alg. 3, Line 2), collect current single-step reward r̃
15:   Update: r(s, a) = r(s, a) + γ^j r̃, and update AMILP
16:  Update: W(s, a) = W(s, a) + 1, QS(s, a) = (1/2)(QS(s, a) + r(s, a) + γ^η max_{a′} Q(s′, a′))
17: ar = max_{a′} QS(s, a′ | a′ ∈ SA)
18: return ar
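As a compact illustration of the depth-1 case of Alg. 4 (η = 1), the following sketch tries a handful of MILP-proposed actions in simulation, scores each by its immediate reward plus a bootstrapped Q-value, and returns the best-scoring one. The environment-simulation details are abstracted into a hypothetical simulate function, and for brevity this version explores the top-budget actions by predicted Q-value rather than ε-greedily; both simplifications are ours:

def act_with_search(q_values, milp_actions, simulate, gamma, budget=5):
    """Depth-1 variant of Alg. 4: evaluate up to `budget` MILP actions in
    simulation and pick the action with the best backed-up score."""
    scores = {}
    candidates = sorted(milp_actions, key=lambda a: -q_values[a])[:budget]
    for a in candidates:
        # One simulated step: reward, successor state info, and the GNN's
        # Q-estimates for the remaining MILP actions in the successor state.
        r, next_actions, next_q = simulate(a)
        bootstrap = max((next_q[a2] for a2 in next_actions), default=0.0)
        # Average of prior estimate and one-step return, as in Line 16 of Alg. 4.
        scores[a] = 0.5 * (q_values[a] + r + gamma * bootstrap)
    return max(scores, key=scores.get)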
A.3 Additional details on the lowest level: Grasp and Motion Planning (GAMP)
The lowest level is tasked with the conversion of the previous level's actions into robot joint commands, and performs the final robot execution of block grasping and moving such that the block is placed in the desired pose. While it would be possible to add those decisions to the higher levels, we decided to consider motion generation as a separate module in our hierarchical framework, as these decisions are heavily dependent on the actual robot manipulator. Moreover, we want to avoid increasing the action space of the previous level. Robotic block grasping and placing is achieved by first checking the feasibility of a predefined set of top-down grasping poses and subsequently checking if this grasp results in a feasible final placement pose. If there exists a pair of feasible grasping and placing poses, we move the robot by approaching the grasping pose from the top, then move to a position that is slightly above the placing location, and finally approach the placement pose. All intermediate waypoints are computed based on inverse kinematics.
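A rough sketch of this feasibility check using PyBullet's inverse kinematics (the simulator used in the paper) is given below. The grasp-pose set, the collision/joint-limit validity check, and the robot indices are placeholders we introduce for illustration; the actual checks depend on the manipulator:

import pybullet as p

def find_feasible_grasp(robot_id, ee_link, grasp_poses, place_pos, place_orn, is_valid):
    """Try predefined top-down grasps; return the first (grasp, place) IK pair
    for which both configurations pass a user-supplied validity check."""
    for grasp_pos, grasp_orn in grasp_poses:
        q_grasp = p.calculateInverseKinematics(robot_id, ee_link, grasp_pos, grasp_orn)
        if not is_valid(q_grasp):
            continue  # e.g., joint limits violated or in collision
        q_place = p.calculateInverseKinematics(robot_id, ee_link, place_pos, place_orn)
        if is_valid(q_place):
            return q_grasp, q_place  # feasible grasp/placement pair found
    return None  # no feasible combination; the action is marked infeasible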
A.4 Additional details on running times
Lastly, we want to provide the running times of our individual components. We focus on the environment with the robot-in-the-loop, and thus report the running times for the experiments presented in Section 3 - B). Please note that we have not yet properly optimized our code; thus, we think there is still a lot of room for improvement in the running times reported in this section. The results are again obtained by averaging across all the 200 RAD scenes that have been presented to the agents. We also want to emphasize that computing the initial MILP solution is only required once per scene, whereas all the other components have to be run per action, i.e., per step that is taken in the environment.
Table 3: Reporting the average running times of all our components in the same experimental setting as presented in Section 3 - B). All the running times have been acquired on a computer with 64 GB RAM, an NVIDIA GeForce RTX 2080 SUPER GPU, as well as an AMD Ryzen 9 3900X CPU (24 cores).
Grid Size   Compute MILP solution (per scene)   Update MILP solution (per step)   GNN-DQN (per step)   GNN-DQN + MCTS (per step)   GAMP (per step)
4           0.0178 s                            6.54 × 10^−5 s                    0.0069 s             1.2094 s                    0.0324 s
5           0.0259 s                            7.51 × 10^−5 s                    0.0069 s             1.3968 s                    0.0370 s
The results from this experiment are shown in Table 3. Computing the initial MILP solution, i.e., calling the function computeMILPSol (cf. Alg. 1), takes around 18 milliseconds (ms) and 26 ms for the environments with grid sizes of 4 and 5, respectively. Please note again that computing the initial MILP solution is only required once for every RAD scene. All the other components have to be run for every action, i.e., every step that is taken in the scene. Further, the table shows that updating the MILP solution, i.e., calling the function updateAvailActions (cf. Alg. 1), requires by far the least amount of time and is negligible compared to the other running times. Calculating the desired action based on the GNN-DQN approach only (cf. Alg. 3) is also very efficient, as it takes only about 7 ms for both of the environments. However, if we take more than 3 actions per RAD scene, then the total time required for the GNN-DQN already exceeds the time taken to compute the initial MILP solution. Moreover, running the grasp and motion planning (GAMP, cf. Sec. A.3), which is required for every action, takes on average around 32 and 37 ms (for the two different versions of the environment) and thus consumes even more time than computing the initial MILP solution. Finally, performing the action decision based on the combination of GNN-DQN + search (MCTS), as described in Alg. 4, requires 1.2 s and even 1.4 s on average for the environments with grid sizes of 4 and 5. The big increase in runtime compared to the GNN-DQN approach can be explained by the fact that we explore five different actions before we decide upon the one that should be taken. This means that we have to query the GNN five times, perform GAMP five times, and lastly, evaluate the outcomes of the five actions using our PyBullet simulation, which is very costly. Nevertheless, we still want to point out that our approach is targeted at high-level decision-making and that the robot motion in the real world (i.e., picking and placing the block) takes on the order of 20 s, which is still much longer than the time taken to decide upon the action. However, as we plan to apply our proposed algorithms to different domains, speeding up this combination of DQN+MCTS is at the top of our priority list, as it performed best across experiments.
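For reference, per-component timings like those in Table 3 can be collected with a simple wall-clock wrapper such as the one below; this is merely one possible way to gather such measurements, not the instrumentation actually used in our experiments:

import time
from collections import defaultdict

timings = defaultdict(list)

def timed(name, fn, *args, **kwargs):
    """Run fn, record its wall-clock duration under `name`, and return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    timings[name].append(time.perf_counter() - start)
    return result

# e.g., actions, w, M = timed("compute_milp", compute_milp_solution, M, c, g, ranges, N)
# After all scenes: report sum(t) / len(t) for each component name.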
XEq-lQiNrtF
Contributions:
1. The paper focuses on an interesting problem, autonomous robotic assembly discovery (RAD). The goal is that a robotic agent should reason about abstract 3D target shapes that need to be fulfilled given a set of available building blocks.
2. The problem is challenging to solve.
3. The authors use a mixed-integer program formulation for global optimization of block-type selection and of the blocks' final positions in order to recreate shapes.
4. The MILP's solution is used as a guiding exploration signal in a graph reinforcement learning (RL) framework.
5. The authors use a GNN for capturing the geometric, structural, and physical relationships between entities.
6. They assume that all building blocks are a combination of primitive blocks. This helps with a modular representation of complicated blocks through primitive elements.
7. The authors train the GNN through model-free Q-learning. Further, they also show that it can be integrated with tree search (MCTS) for improved long-term decisions.
8. Empirically, the results are significantly better than other methods and also generalize to out-of-distribution data.
Pros:
1. Good empirical evaluation.
2. Comparison with baselines.
3. Ablation study to show the importance of each component.
Cons:
1. Presentation could have been better. For example, in Sec. 3 the dimensions of the variables g, p, c should be clarified. Further, can you clarify how g becomes a vector after the summation upon the grid-state change in Section 3, and how the values in g remain <= 1?
2. Slightly more explanation of how the scene is converted to a graph would be better. Although the authors have cited the relevant paper, it would be better if some more details of the construction were presented in the paper itself.
3. Code is not shared.
Recommendation: I vote for weak accept. There are some presentation issues that should be clarified. Further, the code is not shared.
SyG1QnRqF7
ICLR.cc/2019/Conference
2019
Towards Resisting Large Data Variations via Introspective Learning
["Yunhan Zhao", "Ye Tian", "Wei Shen", "Alan Yuille"]
Learning deep networks which can resist large variations between training and testing data is essential to build accurate and robust image classifiers. Towards this end, a typical strategy is to apply data augmentation to enlarge the training set. However, standard data augmentation is essentially a brute-force strategy which is inefficient, as it performs all the pre-defined transformations to every training sample. In this paper, we propose a principled approach to train networks with significantly improved resistance to large variations between training and testing data. This is achieved by embedding a learnable transformation module into the introspective networks (Jin et al., 2017; Lazarow et al., 2017; Lee et al., 2018), which is a convolutional neural network (CNN) classifier empowered with generative capabilities. Our approach alternatively synthesizes pseudo-negative samples with learned transformations and enhances the classifier by retraining it with synthesized samples. Experimental results verify that our approach significantly improves the ability of deep networks to resist large variations between training and testing data and achieves classification accuracy improvements on several benchmark datasets, including MNIST, affNIST, SVHN and CIFAR-10.
["Introspective learning", "Large variations resistance", "Image classification", "Generative models"]
ABSTRACT
Learning deep networks which can resist large variations between training and testing data is essential to build accurate and robust image classifiers. Towards this end, a typical strategy is to apply data augmentation to enlarge the training set. However, standard data augmentation is essentially a brute-force strategy which is inefficient, as it performs all the pre-defined transformations to every training sample. In this paper, we propose a principled approach to train networks with significantly improved resistance to large variations between training and testing data. This is achieved by embedding a learnable transformation module into the introspective networks (Jin et al., 2017; Lazarow et al., 2017; Lee et al., 2018), which is a convolutional neural network (CNN) classifier empowered with generative capabilities. Our approach alternatively synthesizes pseudo-negative samples with learned transformations and enhances the classifier by retraining it with synthesized samples. Experimental results verify that our approach significantly improves the ability of deep networks to resist large variations between training and testing data and achieves classification accuracy improvements on several benchmark datasets, including MNIST, affNIST, SVHN and CIFAR-10.
1 INTRODUCTION
Classification problems have rapidly progressed with advancements in convolutional neural networks (CNNs) (LeCun et al., 1989; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Huang et al., 2017). CNNs are able to produce promising performance, given sufficient training data. However, when the training data is limited and unable to cover all the data variations in the testing data (e.g., the training set is MNIST, while the testing set is affNIST), the trained networks generalize poorly on the testing data. Consequently, how to learn deep networks which can resist large variations between training and testing data is a significant challenge for building accurate and robust image classifiers.
To address this issue, a typical strategy is to apply data augmentation to enlarge the training set, i.e., applying various transformations, including random translations, rotations and flips as well as Gaussian noise injection, to the existing training data. This strategy is very effective in improving the performance, but it is essentially a brute-force strategy which is inefficient, as it exhaustively performs all these transformations to every training sample. Neither is it theoretically formulated. Alternatively, we realize that we can synthesize extra training samples with generative models. But the problem is how to generate synthetic samples which are able to improve the robustness of CNNs to large variations between training and testing data. In this paper, we achieve this by embedding a learnable transformation module into introspective networks (Jin et al., 2017; Lazarow et al., 2017), a CNN classifier empowered with generative capabilities. We name our approach introspective transformation network (ITN), which performs training by a reclassification-by-synthesis algorithm. It alternatively synthesizes samples with learned transformations and enhances the classifier by retraining it with synthesized samples. We use a min-max formulation to learn our ITN, where the transformation module transforms the synthesized pseudo-negative samples to maximize their variations to the original training samples, and the CNN classifier is updated by minimizing the classification loss of the transformed synthesized pseudo-negative samples. The transformation modules are learned jointly with the CNN classifier, which augments training data in an intelligent manner by narrowing down the search space for the variations.
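To make the alternating min-max scheme concrete, a rough PyTorch-style sketch of one training step is given below. This is our illustration, not the authors' implementation: the synthesize routine (introspective sampling from the current classifier), the use of the classification loss as a proxy for "maximizing the variations", and the reserved pseudo-negative class index are all assumptions of this sketch:

import torch
import torch.nn.functional as F

def itn_step(classifier, transformer, opt_cls, opt_t, x_real, y_real,
             synthesize, neg_label=0):
    """One alternating min-max update in the spirit of the ITN description above.

    `synthesize` draws pseudo-negatives from the current classifier;
    `neg_label` is the class index we reserve for pseudo-negatives (assumption).
    """
    x_pn = synthesize(classifier, n=x_real.size(0)).detach()
    neg = torch.full((x_pn.size(0),), neg_label, dtype=torch.long)

    # Max step: update the transformation module so the transformed pseudo-negatives
    # deviate more from what the classifier handles (proxy: maximize its loss on them).
    loss_t = -F.cross_entropy(classifier(transformer(x_pn)), neg)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # Min step: retrain the classifier on real samples plus transformed pseudo-negatives.
    x_t = transformer(x_pn).detach()
    loss_cls = F.cross_entropy(classifier(torch.cat([x_real, x_t])),
                               torch.cat([y_real, neg]))
    opt_cls.zero_grad(); loss_cls.backward(); opt_cls.step()
    return loss_cls.item()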
We use a min-max formulation to learn our ITN, in which the transformation module transforms the synthesized pseudo-negative samples to maximize their variations from the original training samples, while the CNN classifier is updated by minimizing the classification loss on the transformed synthesized pseudo-negatives. The transformation module is learned jointly with the CNN classifier, which augments the training data in an intelligent manner by narrowing down the search space for the variations.

Our approach can work with any model that has both generative and discriminative abilities, such as generative adversarial networks (GANs) and introspective networks. In this paper, we choose introspective networks to generate extra training samples rather than GANs, because introspective networks have several advantages over GANs. The introspective learning framework maintains a single CNN discriminator that is itself also a generator, while GANs have separate discriminators and generators; the generative and discriminative models are simultaneously refined over iterations. Additionally, introspective networks are easier to train than GANs with gradient descent algorithms, since adversarial learning is avoided.

The main contribution of the paper is a principled approach that endows classifiers with the ability to resist large variations between training and testing data in an intelligent and efficient manner. Experimental results show that our approach achieves better performance than standard data augmentation on both classification and cross-dataset generalization. Furthermore, we also show that our approach is able to resist several different types of variations between training and testing data.

2 RELATED WORK
In recent years, a significant number of works have emerged that focus on resisting large variations between training and testing data. The most widely adopted approach is data augmentation, which applies pre-defined transformations to the training data. Nevertheless, this method lacks efficiency and stability, since the user has to anticipate the types of transformations and manually apply them to the training set. Better methods have been proposed that build connections between generative models and discriminative classifiers (Friedman et al., 2001; Liang & Jordan, 2008; Tu et al., 2008; Jebara, 2012; Welling et al., 2003). This type of method captures the underlying generation process of the entire dataset; the discrepancy between training and test data is reduced by generating more samples from the data distribution.

GANs (Goodfellow et al., 2014) have led a huge wave in exploring generative adversarial structures. Combining this structure with deep CNNs produces models that have stronger generative abilities. In GANs, generators and discriminators are trained simultaneously: generators try to generate fake images that fool the discriminators, while discriminators try to distinguish real from fake images. Many variations of GANs have emerged in the past three years, such as DCGAN (Radford et al., 2015), WGAN (Arjovsky et al., 2017) and WGAN-GP (Gulrajani et al., 2017). These GAN variants show stronger learning ability and can generate complex images.
Techniques have been proposed to improve adversarial learning for image generation (Salimans et al., 2016; Gulrajani et al., 2017; Denton et al., 2015) as well as for training better image generative models (Radford et al., 2015; Isola et al., 2017).

Introspective networks (Tu, 2007; Lazarow et al., 2017; Jin et al., 2017; Lee et al., 2018) provide an alternative approach to generating samples. Introspective networks are closely related to GANs, since both have generative and discriminative abilities, but they differ in several ways. Introspective networks maintain a single model that is simultaneously discriminative and generative, while GANs have distinct generators and discriminators. Introspective networks focus on introspective learning, which synthesizes samples from the classifier itself; GANs instead emphasize adversarial learning, which guides generators with separate discriminators. The generators in GANs are mappings from features to images, whereas introspective networks directly model the underlying statistics of an image with an efficient sampling/inference process.

3 METHOD
We now describe the details of our approach. We first briefly review the introspective learning framework proposed by Tu (2007), followed by a detailed mathematical explanation of our approach. In particular, we focus on explaining how our model generates unseen examples that complement the training dataset.

3.1 INTROSPECTIVE LEARNING
We briefly review introspective learning for binary-class problems only, since the same idea extends easily to multi-class problems. Let $x \in \mathbb{R}^d$ denote a data sample and $y \in \{+1, -1\}$ the corresponding label of $x$. The goal of introspective learning is to model the positive samples by learning the generative model $p(x \mid y=+1)$. Under Bayes' rule, we have

$$p(x \mid y=+1) = \frac{p(y=+1 \mid x)\, p(y=-1)}{p(y=-1 \mid x)\, p(y=+1)}\; p(x \mid y=-1), \qquad (1)$$

where $p(y \mid x)$ is a discriminative model. For pedagogical simplicity, we assume $p(y=+1) = p(y=-1)$, so the equation simplifies to

$$p(x \mid y=+1) = \frac{p(y=+1 \mid x)}{p(y=-1 \mid x)}\; p(x \mid y=-1). \qquad (2)$$

This equation suggests that a generative model for the positives, $p(x \mid y=+1)$, can be obtained from the discriminative model $p(y \mid x)$ and a generative model $p(x \mid y=-1)$ for the negatives. However, to faithfully learn $p(x \mid y=+1)$ we need a representative $p(x \mid y=-1)$, which is very difficult to obtain. A solution was provided in Tu (2007), which learns $p(x \mid y=-1)$ by an iterative process starting from an initial reference distribution of the negatives $p_0(x \mid y=-1)$, e.g., a Gaussian distribution $U(x)$ over the entire space $\mathbb{R}^d$. This is updated by

$$p_{t+1}(x \mid y=-1) = \frac{1}{Z_t} \frac{q_t(y=+1 \mid x)}{q_t(y=-1 \mid x)}\; p_t(x \mid y=-1), \qquad (3)$$

where $q_t(y \mid x)$ is a discriminative model learned on the given set of positives and a limited number of pseudo-negatives sampled from $p_t(x \mid y=-1)$, and $Z_t = \int \frac{q_t(y=+1 \mid x)}{q_t(y=-1 \mid x)}\, p_t(x \mid y=-1)\, dx$ is the normalizing factor. It was proven in Tu (2007) that $KL\big(p(x \mid y=+1) \,\|\, p_{t+1}(x \mid y=-1)\big) \le KL\big(p(x \mid y=+1) \,\|\, p_t(x \mid y=-1)\big)$, where $KL(\cdot \| \cdot)$ denotes the Kullback-Leibler divergence (the inequality holds as long as each $q_t(y \mid x)$ makes a better-than-random prediction); this implies $p_t(x \mid y=-1) \rightarrow p(x \mid y=+1)$ as $t \rightarrow \infty$. Therefore, by gradually learning $p_t(x \mid y=-1)$ through the iterative process of Eqn. (3), the samples drawn from $x \sim p_t(x \mid y=-1)$ become indistinguishable from the given training samples.
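To build intuition for the update in Eqn. (3), the following toy sketch (our illustration, not code from the paper; the logistic model stands in for a CNN and all names are placeholders) reweights samples from $p_t(x \mid y=-1)$ by $\exp(f_t(x)) = q_t(y=+1 \mid x) / q_t(y=-1 \mid x)$ and resamples them, with self-normalization absorbing the intractable $Z_t$:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x_pos = 2.0 + 0.3 * torch.randn(512, 1)   # positives clustered around +2
x_neg = torch.randn(512, 1)               # reference negatives p_0 = N(0, 1)

f = torch.nn.Linear(1, 1)                 # logit f_t(x); q_t(y|x) = sigmoid(y * f_t(x))
opt = torch.optim.SGD(f.parameters(), lr=0.5)
for _ in range(200):                      # fit q_t on positives vs. pseudo-negatives
    loss = F.softplus(-f(x_pos)).mean() + F.softplus(f(x_neg)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                     # Eqn. (3): reweight by exp(f_t), then resample
    w = torch.exp(f(x_neg)).squeeze(-1)
    idx = torch.multinomial(w / w.sum(), len(x_neg), replacement=True)
    x_neg = x_neg[idx]                    # now approximates p_{t+1}(x | y = -1)
print(x_neg.mean())                       # drifts from 0 toward the positive cluster

Repeating the fit-then-resample cycle moves the pseudo-negatives toward the positives, which is exactly the convergence statement above; ICN and WINN replace the resampling step with the gradient-based synthesis of Eqn. (12) below.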
3.2 LARGE VARIATION RESISTANCE VIA INTROSPECTIVE LEARNING
Introspective Convolutional Networks (ICN) (Jin et al., 2017) and Wasserstein Introspective Neural Networks (WINN) (Lee et al., 2018) adopt the introspective learning framework and strengthen their classifiers with a reclassification-by-synthesis algorithm. However, both fail to capture large data variations between the training and testing data, since most of the generated pseudo-negatives are very similar to the original samples. In practice, however, it is very common for the test data to contain unseen variations absent from the training data, such as the same objects viewed from different angles or subjected to shape deformation.

To address this issue, we present an approach built upon the introspective learning framework that resists large data variations between training and test data. Arguably, even large training sets cannot fully contain all possible variations. Our goal is to quickly generate extra training samples with beneficial unseen variations that are not covered by the training data, in order to help classifiers become robust. We assume that such training samples can be generated by applying a transformation function $T(\cdot; \phi)$, parametrized by learnable parameters $\phi$, to the original training samples. Let $g(\cdot; \psi)$ denote the function that maps a sample $x$ to its transformation parameters, where $\psi$ is the model parameter of $g$. The generated samples still belong to the same category as the original samples, since the transformation function $T$ only changes high-level geometric properties of the samples. The outline of the ITN training procedure is presented in Algorithm 1. We denote by $S^+ = \{(x_i^+, +1) : i = 1, \ldots, |S^+|\}$ the positive sample set, by $T_t(S^+) = \{(x_i^T, +1) : i = 1, \ldots, |S^+|,\ x_i^T = T(x_i^+; \phi_t)\}$ the transformed positive sample set at the $t$-th iteration with transformation parameter $\phi_t$, and by $S_t^- = \{(x_i^-, -1) : i = 1, \ldots, |S^-|\}$ the set of pseudo-negatives drawn from $p_t(x \mid y=-1)$. We now describe the training procedure in detail.

Discriminative model. We first demonstrate how to build robust classifiers for a given $\phi_t$. For a binary classification problem, at the $t$-th iteration the discriminative model is represented as

$$q_t(y \mid x; \theta_t) = \frac{1}{1 + \exp(-y f_t(x; \theta_t))}, \qquad (4)$$

where $\theta_t$ represents the model parameters at iteration $t$, and $f_t(x; \theta_t)$ represents the model output at the $t$-th iteration.

Algorithm 1: Outline of the ITN training algorithm
1: Input: positive sample set $S^+$, initial reference distribution $p_0(x \mid y=-1)$ and transformation function $T$
2: Output: parameters $\theta$, $\omega$ and $\psi$
3: Build $S_0^-$ by sampling $|S^+|$ pseudo-negative samples from $p_0(x \mid y=-1)$
4: Initialize parameters $\theta$, $\omega$ and $\psi$; set $t = 1$
5: while not converged do
6:   for each $x_i^+ \in S^+$ and $x_i^- \in S_t^-$ do
7:     Compute transformation parameters $\phi_i = g(x_i^+; \psi)$
8:     Choose $\epsilon_i \sim U(0,1)$ and compute $\hat{x}_i = \epsilon_i\, T(x_i^+; \phi_i) + (1 - \epsilon_i)\, x_i^-$
9:   end for
10:  Compute $\theta, \omega$ by Eqn. (6)
11:  Compute $\psi$ by Eqn. (8)
12:  Sample pseudo-negatives $Z_t = \{z_i^t : i = 1, \ldots, |S^+|\}$ from $p_0(x \mid y=-1)$
13:  Update all samples in $Z_t$ by Eqn. (12)
14:  Augment the pseudo-negative sample set $S_t^- = S_{t-1}^- \cup \{(z_i^t, -1) : i = 1, \ldots, |S^+|\}$ and set $t = t + 1$
15: end while

Note that $q_t(y \mid x; \theta_t)$ is trained on $S^+$, $T(S^+; \phi_t)$ and pseudo-negatives drawn from $p_t(x \mid y=-1)$.
In order to achieve a stronger ability to resist unseen variations, we want the distribution of $T(S^+; \phi_t)$ to be approximated by the distribution of the pseudo-negatives $p_t(x \mid y=-1)$, which can be achieved by minimizing the following Wasserstein distance (Gulrajani et al., 2017):

$$D(\theta_t, \omega_t) = \mathbb{E}_{x^T \sim T(S^+; \phi_t)}\big[f_t(x^T; \theta_t, \omega_t)\big] - \mathbb{E}_{x^- \sim S_t^-}\big[f_t(x^-; \theta_t, \omega_t)\big] + \mathbb{E}_{\hat{x} \sim \hat{X}_t}\big[(\|\nabla_{\hat{x}} f_t(\hat{x}; \theta_t, \omega_t)\|_2 - 1)^2\big], \qquad (5)$$

where $\omega_t$ is an extra parameter used together with $f_t(\cdot; \theta_t)$ to compute the Wasserstein distance. Each $\hat{x}$ in the set $\hat{X}_t$ is computed as $\hat{x} = \epsilon\, x^T + (1 - \epsilon)\, x^-$, where $\epsilon$ is sampled from the uniform distribution $U(0,1)$, $x^T \in T(S^+; \phi_t)$ and $x^- \in S_t^-$. The term $(\|\nabla_{\hat{x}} f_t(\hat{x}; \theta_t)\|_2 - 1)^2$ is the gradient penalty that stabilizes the training of the Wasserstein loss.

The goal of the discriminative model is to correctly classify any given $x^+$, $x^T$ and $x^-$. Thus, the objective for learning the discriminative model at iteration $t$ is

$$\min_{\theta_t, \omega_t} J(\theta_t) + D(\theta_t, \omega_t), \quad \text{where} \quad J(\theta_t) = \mathbb{E}_{(x,y) \sim S^+ \cup S_t^- \cup T(S^+; \phi_t)}\big[-\log q_t(y \mid x; \theta_t)\big]. \qquad (6)$$

The classifier gains a strong ability to resist unseen variations by training on the extra samples while preserving its ability to correctly classify the original samples. The discussion above covers the binary case; for multi-class classification problems, the reclassification-by-synthesis scheme must be adapted to the multi-class case. We directly follow the strategies proposed in (Jin et al., 2017) to extend ITN to multi-class problems, learning either a series of one-vs-all classifiers or a single CNN classifier.

Exploring variations. The preceding paragraphs describe how to learn robust classifiers when $\phi_t$ is given. However, $\phi_t$ is unknown, and there is a huge number of possible choices. The problem then becomes how to learn $\phi_t$ in a principled manner and apply it towards building robust classifiers. We solve this by forming a min-max problem on top of Eqn. (6):

$$\min_{\theta, \omega} \max_{\phi}\; J(\theta, \phi) + D(\theta, \omega, \phi). \qquad (7)$$

Here we rewrite $J(\theta)$ and $D(\theta, \omega)$ from Eqn. (6) and Eqn. (5) as $J(\theta, \phi)$ and $D(\theta, \omega, \phi)$, since $\phi$ is now an unknown variable; we also drop the subscript $t$ for notational simplicity. This formulation gives a unified perspective that encompasses some prior work on building robust classifiers. The inner maximization aims to find the transformation parameters $\phi$ that achieve high loss values, while the outer minimization seeks model parameters $\theta$ that enable the discriminator to correctly classify $x^T$, and $\omega$ that lets the negative distribution approximate the distribution of $T(S^+; \phi_t)$ well. However, directly solving Eqn. (7) is difficult, so we break the learning process apart and first find a $\phi$ that satisfies

$$\max_{\phi}\; \mathbb{E}_{(x^T, y) \sim T(S^+; \phi)}\big[-\log q(y \mid x^T)\big] + \mathbb{E}_{x^T \sim T(S^+; \phi)}\big[f(x^T; \theta, \omega)\big] + \mathbb{E}_{\hat{x} \sim \hat{X}}\big[(\|\nabla_{\hat{x}} f(\hat{x}; \theta, \omega)\|_2 - 1)^2\big], \qquad (8)$$

where $\theta$ and $\omega$ are fixed. Then $\theta$ and $\omega$ are learned with Eqn. (6) while keeping $\phi$ fixed at this value. Empirically, the first term of Eqn. (8) dominates the others, so we drop the second and third terms and focus on learning more robust classifiers. The purpose of this empirical approximation is to find the $\phi$ that makes $x^T$ hard to classify correctly. Instead of enumerating all possible examples as in data augmentation, Eqn. (8) efficiently and precisely finds a proper $\phi$ that increases the robustness of the current classifier.

We use $g(\cdot; \psi)$ to learn $\phi$, i.e., $\phi = g(x; \psi) + \delta$, where $\delta$ is random noise drawn from the standard normal distribution, and the function parameter $\psi$ is learned via Eqn. (8). Notably, following the standard backpropagation procedure, we need to compute the derivative of the transformation function $T$ at each step; in other words, $T(\cdot; \phi)$ must be differentiable with respect to $\phi$, so that gradients can flow through $T$ when learning $\psi$ by backpropagation.
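To make Eqns. (5)-(8) concrete, here is a minimal PyTorch sketch of one discriminator step and one exploration step (our reading of the equations, not the authors' released code; `f`, `critic` and `g_net` are placeholder modules, the separate critic head standing in for the $\omega$-parametrized score is a simplifying assumption, and an affine spatial transformer stands in for $T$):

import torch
import torch.nn.functional as F

def gradient_penalty(critic, x_t, x_neg):
    # x_hat = eps * x^T + (1 - eps) * x^-, with per-sample eps ~ U(0, 1)
    eps = torch.rand(x_t.size(0), 1, 1, 1, device=x_t.device)
    x_hat = (eps * x_t + (1 - eps) * x_neg).requires_grad_(True)
    g = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((g.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def discriminator_step(f, critic, x_pos, x_t, x_neg, opt):
    # Eqn. (6): logistic loss J with y = +1 on x^+ and x^T, y = -1 on x^-
    j = (F.softplus(-f(x_pos)).mean() + F.softplus(-f(x_t)).mean()
         + F.softplus(f(x_neg)).mean())
    # Eqn. (5): Wasserstein term pulling the pseudo-negatives toward T(S+)
    d = critic(x_t).mean() - critic(x_neg).mean() + gradient_penalty(critic, x_t, x_neg)
    opt.zero_grad(); (j + d).backward(); opt.step()

def exploration_step(g_net, f, x_pos, opt_psi, noise_std=1.0):
    # Eqn. (8), dominant first term: choose phi = g(x; psi) + delta so that the
    # transformed positives become HARD to classify (gradient ascent on the loss);
    # only psi (g_net's parameters) is updated here.
    phi = g_net(x_pos) + noise_std * torch.randn(x_pos.size(0), 6, device=x_pos.device)
    grid = F.affine_grid(phi.view(-1, 2, 3), list(x_pos.size()), align_corners=False)
    x_t = F.grid_sample(x_pos, grid, align_corners=False)  # differentiable T(x; phi)
    loss = F.softplus(-f(x_t)).mean()                      # y = +1 on transformed positives
    opt_psi.zero_grad(); (-loss).backward(); opt_psi.step()
    return x_t.detach()

Here `opt` covers the parameters of `f` and `critic` ($\theta$ and $\omega$), while `opt_psi` covers only `g_net` ($\psi$); alternating `exploration_step` and `discriminator_step` is the approximate solution of the min-max problem in Eqn. (7).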
Generative model. The updated discriminative model $p(y \mid x)$ is learned by Eqn. (6) and then used to compute the generative model via Eqn. (3) of Section 3.1. The generative model is learned by maximizing the likelihood function $p(x)$; however, learning it directly is cumbersome, since we only ever need samples from the latest generative model.

Let us denote the initial reference distribution by $p_0(x)$ and abbreviate $p_n(x \mid y=-1)$ as $p_n(x)$ for simplicity. Following standard introspective learning, we approximate samples from the latest negative distribution by first sampling from $p_0(x)$ and then iteratively updating those samples toward the desired ones. With $p_0$ and Eqn. (3), we have

$$p_n(x) = \Big(\prod_{t=1}^{n-1} \frac{1}{Z_t} \frac{q_t(y=+1 \mid x)}{q_t(y=-1 \mid x)}\Big)\, p_0(x), \qquad (9)$$

where $Z_t$ is the normalizing factor at the $t$-th iteration. The random samples $x$ are updated so as to maximize the log-likelihood of $p_n(x)$. Note that maximizing $\log p_n(x)$ reduces to maximizing $\prod_{t=1}^{n-1} \frac{q_t(y=+1 \mid x)}{q_t(y=-1 \mid x)}$, since the $Z_t$ and $p_0$ are fixed in Eqn. (9). From this observation, we can directly learn a model $h_t(x)$ such that

$$h_t(x) = \frac{q_t(y=+1 \mid x)}{q_t(y=-1 \mid x)} = \exp\big(f_t(x; \theta_t)\big). \qquad (10)$$

Taking the natural logarithm on both sides gives $\ln h_t(x) = f_t(x; \theta_t)$, so $\log p_n(x)$ can be rewritten as

$$\log p_n(x) = \log \Big(\prod_{t=1}^{n-1} \frac{1}{Z_t} \frac{q_t(y=+1 \mid x)}{q_t(y=-1 \mid x)}\Big)\, p_0(x) = C + \sum_{t=1}^{n-1} f_t(x; \theta_t) + \log p_0(x), \qquad (11)$$

where $C$ is a constant computed from the normalizing factors $Z_t$. This conversion allows us to maximize $\log p_n(x)$ by maximizing $\sum_{t=1}^{n-1} f_t(x; \theta_t)$. Taking the derivative of $\log p_n(x)$, the update step $\nabla x$ is

$$\nabla x = \epsilon\, \nabla\Big(\sum_{t=1}^{n-1} f_t(x; \theta_t)\Big) + \eta, \qquad (12)$$

where $\eta \sim N(0,1)$ is random Gaussian noise and $\epsilon$ is a step size that is annealed during the sampling process. In practice, we update from the samples generated in previous iterations to reduce time and memory complexity. An update threshold is introduced to guarantee that the generated negative images exceed a certain criterion, ensuring the quality of the negative samples. We modify the update threshold proposed in (Lee et al., 2018) and keep track of $f_t(x; \theta_t)$ at every iteration: we build a set $D$ by recording $\mathbb{E}[f_t(x; \theta_t)]$ over $x \in S^+$ at every iteration, form a normal distribution $N(a, b)$ whose mean $a$ and standard deviation $b$ are computed from $D$, and set the stopping threshold to a random number sampled from this distribution. The rationale is to make sure the generated negative images are close to the majority of the transformed positive images in feature space.
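A minimal sketch of the sampling step in Eqn. (12) follows (our code; `logit_fns`, the step schedule and the noise scale are assumptions): gradient ascent on $\sum_t f_t(x)$ with an annealed step size and additive Gaussian noise.

import torch

def synthesize_pseudo_negatives(z0, logit_fns, steps=60, eps0=0.1, noise_std=0.01):
    # Move reference samples z toward p_n(x | y = -1) by ascending sum_t f_t(z).
    z = z0.clone().requires_grad_(True)
    for k in range(steps):
        score = sum(f(z).sum() for f in logit_fns)   # sum_t f_t(z; theta_t)
        grad, = torch.autograd.grad(score, z)
        eps = eps0 * (1.0 - k / steps)               # annealed step size epsilon
        with torch.no_grad():
            z += eps * grad + noise_std * torch.randn_like(z)  # noise eta
    return z.detach()

In the paper, only the samples from the previous iteration are refreshed and the loop stops early once the scores pass the randomly drawn threshold described above; both refinements are omitted here for brevity.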
4 EXPERIMENTS
In this section, we demonstrate the ability of our algorithm to resist large variations between training and testing data through a series of experiments. First, we show the strong classification performance of ITN on several benchmark datasets and analyze the properties of the generated examples from different perspectives. We then further probe the ability of our algorithm to resist large variations with two challenging classification tasks, on which it performs consistently better. Finally, we illustrate the flexibility of our architecture in addressing different types of unseen variations.

Baselines. We compare our method against CNNs, DCGAN (Radford et al., 2015), WGAN-GP (Gulrajani et al., 2017), ICN (Jin et al., 2017) and WINN (Lee et al., 2018). For the generative models DCGAN and WGAN-GP, we adopt the evaluation metric proposed in (Jin et al., 2017): training becomes a two-step procedure in which we first generate negative samples with the original implementation, then use the generated negative images to augment the original training set and train on it a simple CNN with a structure identical to our method's. All results reported in this section are averages over multiple repetitions.

Experiment setup. All experiments are conducted with a simple CNN architecture (Lee et al., 2018) that contains 4 convolutional layers, each with a 5×5 filter size, 64 channels and stride 2. Each convolutional layer is followed by a batch normalization layer (Ioffe & Szegedy, 2015) and a swish activation function (Ramachandran et al., 2018). The last convolutional layer is followed by two consecutive fully connected layers that compute the logits and the Wasserstein distances. Training runs for 200 epochs for both our method and all baselines, using the Adam optimizer (Kingma & Ba, 2014) with parameters $\beta_1 = 0$ and $\beta_2 = 0.9$. Our method relies on the transformation function $T(\cdot)$ to convert the original samples into unseen variations; in the following experiments, unless specified otherwise, we use spatial transformers (STs) (Jaderberg et al., 2015) as the transformation function. In theory, STs can represent all affine transformations, which endows ITN with a flexible ability to resist unseen variations. More importantly, STs are fully differentiable, which allows learning through standard backpropagation.
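For concreteness, a sketch of the backbone just described (our reconstruction from the text above; the paper does not spell out the pooling or the exact shape of the two final fully connected layers, so the global average pooling and the two parallel heads below are assumptions):

import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)

class ITNBackbone(nn.Module):
    def __init__(self, in_ch=1, n_classes=10):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(4):                      # four 5x5 convs, 64 channels, stride 2
            layers += [nn.Conv2d(ch, 64, 5, stride=2, padding=2),
                       nn.BatchNorm2d(64), Swish()]
            ch = 64
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.logits = nn.Linear(64, n_classes)  # classification head
        self.critic = nn.Linear(64, 1)          # Wasserstein-distance head

    def forward(self, x):
        h = self.features(x)
        return self.logits(h), self.critic(h)

A 28×28 MNIST input is downsampled through the four stride-2 convolutions to 2×2 before pooling; the two heads play the roles of $f(\cdot; \theta)$ and the $\omega$-parametrized critic from Section 3.2.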
The overallimprovements can be explained by the fact that our method generates novel and reliable negativeimages (shown in Figure 1) that effectively strengthen the classifiers. The images we generateare different from the previous ones, but can still be recognized as the same class. The boostedperformance in value on MNIST dataset is marginal perhaps because the performance on the MNISTdataset is close to saturation. The difference between training and testing split in MNIST dataset is6Under review as a conference paper at ICLR 2019CNN WGAN-GP WINN ITNw/o DA 72.06% 70.60% 70.36% 34.29%w/DA 40.74% 36.29% 33.53% 21.31%Table 2: testing errors of the classification experiments described in Section 4.2.1, where w/DA andw/o DA indicates whether data augmentation is applied.also very small compared to other datasets. Moreover, the amount of improvements increases as thedataset becomes complicated. Based on the observation of the results, we conclude that our methodhas stronger ability in resisting unseen variations especially when the dataset is complicated. On theother hand, we can clearly observe that our method outperforms the standard data augmentation onall datasets. This result confirms the effectiveness and the advantages of our approach. Additionally,ITN does not contradict with data augmentation since ITN shows even greater performance whenintegrating with data augmentation techniques. The possible reason for this observation is that theexplored space between ITN and data augmentation is not overlapped. Therefore, the algorithmachieves greater performance when combining two methods together since more unseen variationsare discovered in this case.Figure 1: Images generated by our method on MNIST, affNIST, SVHN and CIFAR-10 dataset. Ineach sector, the top row is the original images and the bottom row is our generated images.4.2 Q UANTITATIVE ANALYSIS4.2.1 C ROSS DATASET GENERALIZATIONWe have shown the substantial performance improvements of ITN against other baselines on severalbenchmark datasets. In this section, we want to further explore the ability of our method in resistinglarge variations. We design a challenging cross dataset classification task between two significantlydifferent datasets (cross dataset generalization). The training set in this experiment is the MNISTdataset while the testing set is the affNIST dataset. The difficulty of this classification tasks is clearlyhow to overcome such huge data discrepancy between training and testing set since the testing setincludes much more variations. Another reason why we pick these two datasets as training andtesting set is that they share the same categories, which ensures the challenge is only about resistinglarge data variations.As shown in Table 2, ITN has clear improvements over CNN, WGAN-GP and WINN. The amountof improvement is much larger than on the regular training and testing splits shown in Section 4.1.More importantly, our performance in this challenging task is still better than CNN with data aug-mentation. This encouraging result further verifies the efficiency and effectiveness of ITN comparedwith data augmentation. It’s not surprising that data augmentation improves the performance by asignificant margin since the space of unseen variations is huge. 
Data augmentation increases theclassification performance by enumerating a large number of unseen samples, however, this brute-force searching inevitably lacks efficiency and precision.4.2.2 R ESISTING ABILITY UNDER DATA PAUCITYAnother way to evaluate the ability of resisting variations is to reduce the amount of training samples.Intuitively, the discrepancy between the training and testing sets increases when the number ofsamples in the training set shrinks. The purpose of this experiment is to demonstrate the potentialof ITN in resisting unseen variations from a different perspective. We design the experiments wherethe training set is the MNIST dataset with only 0.1%, 1%, 10% and 25% of the whole training setwhile the testing set is the whole MNIST testing set. Each sample is randomly selected from thepool while keeps the number of samples per class same. Similarly, we repeat the same experimentson the CIFAR-10 dataset to further verify the results on a more complicated dataset. As shown inTable 3, our method has better results on all tasks. This result is consistent with Section 4.2.1 andSection 4.1, which undoubtedly illustrate the strong ability of ITN in resisting unseen variations inthe testing set. The constant superior performance over data augmentation also proves the efficiencyof ITN.7Under review as a conference paper at ICLR 2019w/o DA w/ DAMethod CNN WGAN-GP WINN ITN CNN WGAN-GP WINN ITN0.1%(M) 36.50% 29.43% 27.18% 16.47 % 18.07% 15.35% 14.46% 12.67%1%(M) 7.66% 6.86% 5.10% 3.48 % 4.48% 3.98% 3.66% 2.82%10%(M) 2.02% 1.63% 1.49% 0.98 % 1.24% 1.18% 1.00% 0.92%25%(M) 1.29% 1.13% 1.00% 0.78 % 0.83% 0.79% 0.77% 0.66%0.1%(C) 81.99% 80.92% 78.24% 72.50 % 79.04% 78.75% 76.97% 70.43%1%(C) 72.31% 71.34% 69.79% 61.48 % 65.23% 64.84% 63.26% 58.07%10%(C) 59.02% 57.37% 56.02% 45.06 % 47.75% 46.86% 46.04% 42.62%25%(C) 51.35% 49.01% 48.43% 34.56 % 36.50% 35.46% 34.29% 30.60%Table 3: testing errors of the classification tasks described in Section 4.2.2, where M and C repre-sents the experiments conducted on the MNIST dataset and CIFAR-10 dataset, respectively.4.3 B EYOND SPATIAL TRANSFORMEREven though we utilize STs to demonstrate our ability in resisting data variations, our method ac-tually has the ability to generalize to other types of transformations. Our algorithm can take othertypes of differentiable transformation functions and strengthen the discriminators in a similar man-ner. Moreover, our algorithm can utilize multiple types of transformations at the same time andprovide even stronger ability in resisting variations. To verify this, we introduce another recentlyproposed work, Deep Diffeomorphic Transformer (DDT) Networks (Detlefsen et al., 2018). DDTsare similar to STs in a way that both of them can be optimized through standard backpropagation.We replace the ST modules with the DDT modules and check whether our algorithm can resist suchtype of transformation. Then, we include both STs and DDTs in our model and verify the perfor-mance again. Let MNIST dataset be the training set of the experiments while the testing sets are theMNIST dataset with different types of transformation applied. We introduce two types of testingsets in this section. The first one is the normal testing set with random DDT transformation only.The second one is similar to the first one but includes both random DDT and affine transformations.The DDT transformation parameters are drawn from N(0;0:7Id)as suggest in (Detlefsen et al.,2018), whereIdrepresents the ddimensional identity matrix. 
The transformed images are then randomly placed on a 42×42 canvas. We replicate the same experiment on the CIFAR-10 dataset.

Table 4: testing errors of the cross-dataset classification experiments, where CNN (w/ DA) represents the CNNs with data augmentation.
               | MNIST: DDT     DDT + ST | CIFAR-10: DDT     DDT + ST
CNN            |        17.75%  55.11%   |           76.14%  78.01%
WGAN-GP        |        17.53%  53.24%   |           75.93%  77.02%
WINN           |        17.20%  52.43%   |           75.43%  76.92%
ITN (DDT)      |        12.85%  40.60%   |           53.62%  63.56%
ITN (DDT + ST) |        9.41%   34.37%   |           45.26%  56.95%

We can make several interesting observations from Table 4. First, ITN can flexibly integrate with DDT or DDT + ST to resist the corresponding variations. Second, ITN can resist part of the unseen variations arising from a mixture of transformations in the testing data. More importantly, the performance of ITN does not degrade when the model uses transformation functions that do not match the type of variations in the testing data, e.g., ITN (DDT + ST) on testing data with DDT only. This observation allows us to apply multiple transformation functions in ITN without knowing the types of variations in the testing data while still maintaining good performance.

5 CONCLUSION
We proposed a principled approach that endows classifiers with the ability to resist large variations between training and testing data. Our method, ITN, strengthens classifiers by generating unseen variations with various learned transformations. Experimental results show consistent performance improvements not only on standard classification tasks but also on more challenging ones, such as cross-dataset generalization. Moreover, ITN demonstrates its advantages over data augmentation in both effectiveness and efficiency. Future work includes applying our approach to large-scale datasets and extending it to generate samples with more types of variations.
SJlQKiZGT7
Good discussion, improved comparisons needed
5: Marginally below acceptance threshold
This paper suggests the use of learned transformation networks, embedded within introspective networks, to improve classification performance with synthesized examples. The authors cite a number of related works, and give a good introduction to both introspective learning and the particular usage for large variation resistance. This discussion forms the majority of the paper, and while the explanation seems clear it would be nice to have a stronger dedicated section on exact relations to both GAN and potentially VAE mathematically. Any kind of discussion which could tie the thorough derivation to some of the contemporaries in generative modeling would help the accessibility of the paper. My primary concerns are from the experimental sections of the paper. The setup of the problem seems such that any strong baseline could be directly tested, since the final CNN is ultimately trained in a two stage setup on the aggregated dataset (as per subsection Baselines). Here it is also worth mentioning DAgger https://ri.cmu.edu/pub_files/2010/5/Ross-AIStats10-paper.pdf / https://www.cs.cmu.edu/~sross1/publications/Ross-AIStats11-NoRegret.pdf which is used in another context but in a similar way for improving imitation and online learning. Many of the baseline methods for Table 1 seem far from what I would consider "strong" baselines. Given that the core proposal of the paper is improving classifiers, the importance of having high quality baselines cannot be overstated. Particularly, baseline CNN numbers for MNIST, SVHN, and CIFAR-10 are far from what has been seen in simple papers such as Wide ResNet https://arxiv.org/abs/1605.07146, ResNet https://arxiv.org/abs/1512.03385, Universum Prescription (which bears some resemblance to this work in high level concept) https://arxiv.org/abs/1511.03719, or even older work such as Maxout Networks http://proceedings.mlr.press/v28/goodfellow13.pdf. Particularly, these papers show examples of simple CNNs which outscore the best values reported in this table, on the same datasets. Running the same setup but with these improved classifiers as baselines would make much stronger support for the core hypothesis that ITN can be used to improve strong classifiers. Table 2 seems to me an improper comparison. Methods such as zero-shot or meta-learning type approaches seem much more appropriate for testing cross-generalization improvement. In some sense, none of the tested methods besides ITN should even be able to cross-generalize well, so the fact that ITN does better here is not surprising to me. While this is a benefit of ITN as stated, seeing a comparison to methods designed for cross-generalization as well would make the results of Table 2 much stronger. Table 3 also seems to have improper comparisons, in that there are a large number of works using semi-supervised generative models (Improved Techniques for Training GANs, which is already cited, SS-VAE https://arxiv.org/abs/1406.5298, Temporal Ensembling https://arxiv.org/abs/1610.02242, VAT https://ieeexplore.ieee.org/abstract/document/8417973/, Ladder Networks http://papers.nips.cc/paper/5947-semi-supervised-learning-with-ladder-networks, Auxiliary Deep Generative Models https://arxiv.org/abs/1602.05473, Manifold Tangent Classifier https://papers.nips.cc/paper/4409-the-manifold-tangent-classifier) for improved classification in low data domains.
Adopting the same settings and comparing directly to these methods would greatly strengthen this section as well, as simple classifiers (as shown in many of these previous papers) are not generally great baselines for semi-supervised modeling. In addition, there should be a direct case where becoming robust to these kinds of transformations fails. For example, if my classification task is the rotation/position of a repositioned MNIST digit, becoming robust to these types of transformations may be harmful. An experiment or discussion about when robustness to large data variation might be harmful would be a good inclusion as well. As a more general comment, this method seems applicable outside of image domains, and it would be interesting to see it applied in other settings though it is likely outside the scope of this particular paper. Formatting of the paper (specifically spacing between sections and subsections) seems a bit off in general. If the authors are applying \vspace tricks to shrink spaces in the format, I would recommend taking a closer look at how and where, to make spacing more consistent over the whole document. Comparing to past ICLR papers (best papers, or high rated from past conferences) to see how they approach formatting could improve the visual appeal of this paper. Overall, this paper has a lot in its favor. The experimental section is thorough, if not as strong as I would like. The derivation of the method and motivation is clear, and there are a lot of avenues explored. However, as it currently stands the experimental sections should be stronger to really prove out the core claim "Our method, ITN strengthens the classifiers by generating unseen variations with various learned transformations." compared to other methods using generative and semi-supervised methods in a similar vein. In addition, the clarity and approachability of the paper could be improved by drawing a relation to parallel related work such as GAN or VAE in more detail.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Towards Resisting Large Data Variations via Introspective Learning ### Paper Abstract Learning deep networks which can resist large variations between training andtesting data is essential to build accurate and robust image classifiers. Towardsthis end, a typical strategy is to apply data augmentation to enlarge the trainingset. However, standard data augmentation is essentially a brute-force strategywhich is inefficient, as it performs all the pre-defined transformations to everytraining sample. In this paper, we propose a principled approach to train networkswith significantly improved resistance to large variations between training andtesting data. This is achieved by embedding a learnable transformation moduleinto the introspective networks (Jin et al., 2017; Lazarow et al., 2017; Lee et al.,2018), which is a convolutional neural network (CNN) classifier empowered withgenerative capabilities. Our approach alternatively synthesizes pseudo-negativesamples with learned transformations and enhances the classifier by retraining itwith synthesized samples. Experimental results verify that our approach signif-icantly improves the ability of deep networks to resist large variations betweentraining and testing data and achieves classification accuracy improvements onseveral benchmark datasets, including MNIST, affNIST, SVHN and CIFAR-10. ### Paper Keywords ["Introspective learning", "Large variations resistance", "Image classification", "Generative models"] ### Paper Content ABSTRACTLearning deep networks which can resist large variations between training andtesting data is essential to build accurate and robust image classifiers. Towardsthis end, a typical strategy is to apply data augmentation to enlarge the trainingset. However, standard data augmentation is essentially a brute-force strategywhich is inefficient, as it performs all the pre-defined transformations to everytraining sample. In this paper, we propose a principled approach to train networkswith significantly improved resistance to large variations between training andtesting data. This is achieved by embedding a learnable transformation moduleinto the introspective networks (Jin et al., 2017; Lazarow et al., 2017; Lee et al.,2018), which is a convolutional neural network (CNN) classifier empowered withgenerative capabilities. Our approach alternatively synthesizes pseudo-negativesamples with learned transformations and enhances the classifier by retraining itwith synthesized samples. Experimental results verify that our approach signif-icantly improves the ability of deep networks to resist large variations betweentraining and testing data and achieves classification accuracy improvements onseveral benchmark datasets, including MNIST, affNIST, SVHN and CIFAR-10.1 I NTRODUCTIONClassification problems have rapidly progressed with advancements in convolutional neural net-works (CNNs) (LeCun et al., 1989; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedyet al., 2015; He et al., 2016; Huang et al., 2017). CNNs are able to produce promising performance,given sufficient training data. However, when the training data is limited and unable to cover all thedata variations in the testing data (e.g., the training set is MNIST, while the testing set is affNIST),the trained networks generalize poorly on the testing data. 
Consequently, how to learn deep net-works which can resist large variations between training and testing data is a significant challengefor building accurate and robust image classifiers.To address this issue, a typical strategy is to apply data augmentation to enlarging the training set,i.e., applying various transformations, including random translations, rotations and flips as well asGaussian noise injection, to the existing training data. This strategy is very effective in improvingthe performance, but it is essentially a brute-force strategy which is inefficient, as it exhaustivelyperforms all these transformations to every training samples. Neither is it theoretically formulated.Alternatively, we realize that we can synthesize extra training samples with generative models. But,the problem is how to generate synthetic samples which are able to improve the robustness of CNNsto large variations between training and testing data. In this paper, we achieve this by embed-ding a learnable transformation module into introspective networks (Jin et al., 2017; Lazarow et al.,2017), a CNN classifier empowered with generative capabilities. We name our approach intro-spective transformation network (ITN), which performs training by a reclassification-by-synthesisalgorithm. It alternatively synthesizes samples with learned transformations and enhances the clas-sifier by retraining it with synthesized samples. We use a min-max formulation to learn our ITN,where the transformation module transforms the synthesized pseudo-negative samples to maximizetheir variations to the original training samples and the CNN classifier is updated by minimizingthe classification loss of the transformed synthesized pseudo-negative samples. The transformationmodules are learned jointly with the CNN classifier, which augments training data in an intelligentmanner by narrowing down the search space for the variations.1Under review as a conference paper at ICLR 2019Our approach can work with any models that have generative and discriminative abilities, such asgenerative adversarial networks (GANs) and introspective networks. In this paper, we choose theintrospective networks to generate extra training samples rather than GANs, because introspectivenetworks have several advantages over GANs. Introspective learning framework maintains onesingle CNN discriminator that itself is also a generator while GANs have separate discriminatorsand generators. The generative and discriminative models are simultaneously refined over iterations.Additionally, Introspective networks are easier to train than GANs with gradient descent algorithmsby avoiding adversarial learning.The main contribution of the paper is that we propose a principled approach that endows classifierswith the ability to resist larger variations between training and testing data in an intelligent andefficient manner. Experimental results show that our approach achieves better performance thanstandard data augmentation on both classification and cross-dataset generalization. Furthermore,we also show that our approach has great abilities in resisting different types of variations betweentraining and testing data.2 R ELATED WORKIn recent years, a significant number of works have emerged focus on resisting large variationsbetween training and testing data. The most widely adopted approach is data augmentation thatapplies pre-defined transformations to the training data. 
Nevertheless, this method lacks efficiencyand stability since the users have to predict the types of transformations and manually applies themto the training set. Better methods have been proposed by building connections between generativemodels and discriminative classifiers (Friedman et al., 2001; Liang & Jordan, 2008; Tu et al., 2008;Jebara, 2012; Welling et al., 2003). This type of methods capture the underlying generation processof the entire dataset. The discrepancy between training and test data is reduced by generating moresamples from the data distribution.GANs (Goodfellow et al., 2014) have led a huge wave in exploring the generative adversarial struc-tures. Combining this structure with deep CNNs can produce models that have stronger generativeabilities. In GANs, generators and discriminators are trained simultaneously. Generators try to gen-erate fake images that fool the discriminators, while discriminators try to distinguish the real andfake images. Many variations of GANs have emerged in the past three years, like DCGAN (Radfordet al., 2015), WGAN (Arjovsky et al., 2017) and WGAN-GP (Gulrajani et al., 2017). These GANsvariations show stronger learning ability that enables generating complex images. Techniques havebeen proposed to improve adversarial learning for image generation (Salimans et al., 2016; Gulra-jani et al., 2017; Denton et al., 2015) as well as for training better image generative models (Radfordet al., 2015; Isola et al., 2017).Introspective networks (Tu, 2007; Lazarow et al., 2017; Jin et al., 2017; Lee et al., 2018) provide analternative approach to generate samples. Introspective networks are closely related to GANs sincethey both have generative and discriminative abilities but different in various ways. Introspectivenetworks maintain one single model that is both discriminative and generative at the same time whileGANs have distinct generators and discriminators. Introspective networks focus on introspectivelearning that synthesizes samples from its own classifier. On the other hand, GANs emphasizeadversarial learning that guides generators with separate discriminators. The generators in GANsare mappings from the features to the images. However, Introspective networks directly models theunderlying statistics of an image with an efficient sampling/inference process.3 M ETHODWe now describe the details of our approach in this section. We first briefly review the introspectivelearning framework proposed by (Tu, 2007). This is followed by our the detailed mathematicalexplanation of our approach. In particular, we focus on explaining how our model generates unseenexamples that complement the training dataset.3.1 I NTROSPECTIVE LEARNINGWe only briefly review introspective learning for binary-class problems, since the same idea can beeasily extended to multi-class problems. Let us denote x2Rdas a data sample and y2+1;12Under review as a conference paper at ICLR 2019as the corresponding label of x. The goal of introspective learning is to model positive samples bylearning the generative model p(xjy= +1) . Under Bayes rule, we havep(xjy= +1) =p(y= +1jx)p(y=1)p(y=1jx)p(y= +1)p(xjy=1); (1)wherep(yjx)is a discriminative model. For pedagogical simplicity, we assume p(y= 1) =p(y=1)and this equation can be further simplified as:p(xjy= +1) =p(y= +1jx)p(y=1jx)p(xjy=1): (2)The above equation suggests that a generative model for the positives p(xjy= +1) can be obtainedfrom the discriminative model p(yjx)and a generative model p(xjy=1)for the negatives. 
How-ever, to faithfully learn p(xjy= +1) , we need to have a representative p(xjy=1), which is verydifficult to obtain. A solution was provided in (Tu, 2007) which learns p(xjy=1)by using aniterative process starting from an initial reference distribution of the negatives p0(xjy=1), e.g.,p0(xjy=1) =U(x), a Gaussian distribution on the entire space Rd. This is updated bypt+1(xjy=1) =1Ztqt(y= +1jx)qt(y=1jx)pt(xjy=1); (3)whereqt(yjx)is a discriminative model learned on a given set of positives and a limited numberof pseudo-negatives sampled from pt(xjy=1)andZt=Rqt(y=+1jx)qt(y=1jx)pt(xjy=1)dxis thenormalizing factor. It has been proven that KL(p(xjy= +1)jjpt+1(xjy=1))KL(p(xjy=+1)jjpt(xjy=1))) (as long as each qt(yjx)makes a better-than-random prediction, the inequal-ity holds) in (Tu, 2007), where KL(jj)denotes the Kullback-Leibler divergences, which impliespt(xjy=1)t=1!p(xjy= +1) . Therefore, gradually learning pt(xjy=1)by following thisiterative process of Eqn.(3), the samples drawn from xpt(xjy=1)become indistinguishablefrom the given training samples.3.2 LARGE VARIATIONS RESISTANCE VIA INTROSPECTIVE LEARNINGIntrospective Convolutional Networks (ICN) (Jin et al., 2017) and Wasserstein Introspective NeuralNetworks (WINN) (Lee et al., 2018) adopt the introspective learning framework and strengthen theclassifiers by a reclassification-by-synthesis algorithm. However, both of them fail to capture largedata variations between the training and testing data, since most of the generated pseudo-negativesare very similar to the original samples. But in practice, it is very common that the test data containunseen variations that are not in training data, such as the same objects viewed from different anglesand suffered from shape deformation.To address this issue, we present our approach building upon the introspective learning frameworkto resist large data variations between training and test data. Arguably, even large training setscannot fully contains all the possible variations. Our goal is to quickly generate extra trainingsamples with beneficial unseen variations that is not covered by the training data to help classi-fiers become robust. We assume that we can generates such training samples by applying a trans-formation function T(;)parametrized by learnable parameters to the original training sam-ples. Let us denote g(; )as the function that maps the samples xto the transformation param-eters, where is the model parameter of the function g. The generated samples still belong tothe same category of the original samples, since the transformation function Tonly changes thehigh-level geometric properties of the samples. The outline of training procedures of ITN is pre-sented in Algorithm 1. We denote S+=f(x+i;+1);i= 1:::jS+jgas the positive sample set,Tt(S+) =f(xTi;+1);i= 1:::jS+j;xTi=T(x+i;t)gas the transformed positive sample set at tthiteration with transformation parameter tandSt=f(xi;1);i= 1:::jSjgas the set of pseudo-negatives drawn from pt(xjy=1). We then will describe the detail of the training procedure.Discriminative model We first demonstrate the approach of building robust classifiers with givent. 
For a binary classification problem, at tthiteration, the discriminative model is represented asqt(yjx;t) =11 +exp(yft(x;t))(4)3Under review as a conference paper at ICLR 2019Algorithm 1: Outline of ITN Training Algorithm1:Input: Positive sample set S+, initial reference distribution p0(xjy=1)and transformation function T2:Output: Parameters,!and 3: BuildS0by samplingjS+jpseudo-negatives samples from p0(xjy=1)4: initialize parameters ,!and , set t = 15:while not converge do6: foreachx+i2S+andxi2Stdo7: Compute transformation parameters i=g(x+i; )8: Choose iU(0;1)and compute ^xi=iT(x+i;i) + ( 1i)xi9: end for10: Compute ;! by Eqn.(6)11: Compute by Eqn.(8)12: Sample pseudo-negatives samples Zt=fzti;i= 1;:::;jS+jgfromp0(xjy=1)13: Update all samples in Ztby Eqn.(12)14: Augment pseudo-negatives sample set St=St1[f(zti;1);i= 1;:::;jS+jgand t = t + 115:end whilewheretrepresents the model parameters at iteration t, andft(x;t)represents the model outputattthiteration. Note that, qt(yjx;t)is trained on S+,T(S+;t)and pseudo-negatives drawnfrompt(xjy=1). In order to achieve stronger ability in resisting unseen variations, we want thedistribution ofT(S+;t)to be approximated by the distribution of pseudo negatives pt(xjy=1),which can be achieved by minimizing the following Wasserstein distance (Gulrajani et al., 2017):D(t;!t) =ExTT(S+;t)[ft(xT;t;!t)]ExSt[ft(x;t;!t)] +E^x^Xt[kr^xft(^x;t;!t)1k22];(5)where!tis the extra parameter together with ft(;t)to compute the Wasserstein distance. Each ^xin the set ^Xtis computed with the formula ^x=xT+ (1)x, wheresamples from uniformdistribution U(0;1),xT2T(S+;t)andx2St. The term (jjr^xft(^x;t)jj21)2is thegradient penalty that stabilizes the training procedure of the Wasserstein loss function.The goal of the discriminative model is to correctly classify any given x+,xTandx. Thus, theobjective function of learning the discriminative model at iteration tismint;!tJ(t) +D(t;!t);whereJ(t) =E(x;y)S+[St[T(S+;t)[logq(yjx;t)] (6)The classifiers obtain the strong ability in resisting unseen variations by training on the extra sampleswhile preserving the ability to correctly classify the original samples. We discussed the binaryclassification case above. When dealing with multi-class classification problems, it is needed toadapt the above reclassification-by-synthesis scheme to the multi-class case. We can directly followthe strategies proposed in (Jin et al., 2017) to extend ITN to deal with multi-class problems bylearning a series of one-vs-all classifiers or a single CNN classifier.Exploring variations. The previous section describes how to learn the robust classifiers when thetis given. However, tis unknown and there are huge number of possibilities to selecting t. Now,the problem becomes how do we learn the tin a principled manner and apply it towards buildingrobust classifiers? We solve this issue by forming a min-max problem upon the Eqn.(6):min;!maxJ(;) +D(;!; ); (7)Here, we rewrite J()andD(;!)in Eqn.(5) and Eqn.(6) as J(;)andD(;!; ), sinceisnow an unknown variable. We also subsequently drop the subscript tfor notational simplicity. Thisformulation gives us a unified perspective that encompasses some prior work on building robustclassifiers. The inner maximization part aims to find the transformation parameter that achievesthe high loss values. 
On the other hand, the goal of the outer minimization is expected to find the themodel parameters that enables discriminators to correctly classify xTand!allows the negativedistribution to well approximate the distribution of T(S+;t). However, direclty solving Eqn. 7 isdifficult. Thus, we break this learning process and first find a that satisfiesmaxE(xT;y)T(S+;)[log(q(yjxT))] +ExTT(S+;)[f(xT;;!)] +E^x^X[kr^xf(^x;;!)1k22](8)4Under review as a conference paper at ICLR 2019whereand!are fixed. Then, and!are learned with Eqn.(6) by keep =. Empirically, thefirst term in the Eqn. 8 dominates over other terms, therefore we can drop the second and third termsto focus on learning more robust classifiers. The purpose of empirical approximation is to find thethat makexThard to classify correctly. Instead of enumerating all possible examples in the dataaugmentation, Eqn.(8) efficiently and precisely finds a proper that increase the robustness of thecurrent classifiers.We useg(; )to learn, thus=g(x; )+, whereis random noise follows the standard normaldistribution. The function parameter is learned by Eqn.(8). Notably, following the standardbackpropagation procedure, we need to compute the derivative of the transformation function Tin each step. In other words, the transformation function T(;)need to be differentiable withrespect to the parameter to allow the gradients to flow through the transformation function Twhen learning by backpropagation.Generative model In the discriminative models, the updated discriminative model p(yjx)is learnedby Eqn.(6). The updated discriminative model is then used to compute the generative model bythe Eqn.(3) in section 3.1. The generative is learned by maximizing the likelihood function p(x).However, directly learning the generative model is cumbersome since we only need samples fromthe latest generative model.Let’s denote initial reference distribution as p0(x)andpn(xjy=1)aspn(x)for simplicity.Following standard introspective learning, we can approximate samples drawn from latest negativedistribution by first sampling from p0(x)and iteratively update them to approach desired samples.Withp0and Eqn.(3), we havepn(x) = (n1Yt=11Ztqt(y= +1jx)qt(y=1jx))p0(x); (9)whereZtindicates the normalizing factor at tthiteration. The random samples xare updated byincreasing maximize the log likelihood of pn(x). Note that maximizing logpn(x)can be simplifiedas maximizingQn1t=1qt(y=+1jx)qt(y=1jx)sinceZtandp0are fixed in Eqn.(9). From this observation, wecan directly learn a model ht(x)such thatht(x) =qt(y= +1jx)qt(y=1jx)= exp(ft(x;t)) (10)Taking natural logarithm on both side of the equation above, we can get lnht(x) =ft(x;t).Therefore, logpn(x)can be rewritten aslogpn(x) = log(n1Yt=11Ztqt(y= +1jx)qt(y=1jx))p0(x) =Cn1Xt=1ft(x;t)p0(x); (11)whereCis the constant computed with normalizing factors Zt. This conversion allows us to max-imize logpn(x)by maximizingPn1t=1ft(x;t). By taking the derivative of logpn(x), the updatesteprxis:rx=r(n1Xt=1ft(x;t)) +; (12)whereN(0;1)is the random Gaussian noise and is the step size that is annealed in thesampling process. In practice, we update from the samples generated from previous iterations toreduce time and memory complexity. An update threshold is introduced to guarantee the generatednegative images are above certain criteria, which ensures the quality of negative samples. 
We modifythe update threshold proposed in (Lee et al., 2018) and keep track of the ft(x;t)in every iteration.In particular, we build a set Dby recording E[ft(x;t)], wherex2S+in every iteration. We forma normal distribution N(a;b), whereaandbrepresents mean and standard deviation computed fromsetD. The stop threshold is set to be a random number sampled from this normal distribution. Thereason behind this threshold is to make sure the generated negative images are close to the majorityof transformed positive images in the feature space.4 E XPERIMENTSIn this section, we demonstrate the ability of our algorithm in resisting the large variations betweentraining and testing data through a series of experiments. First, we show the outstanding classifi-cation performance of ITN on several benchmark datasets. We also analyze the properties of the5Under review as a conference paper at ICLR 2019generated examples from different perspectives. We then further explore the ability of our algorithmin resisting large variations with two challenging classification tasks and show the consistently betterperformance. Finally, we illustrate the flexibility of our architecture in addressing different types ofunseen variations.Baselines We compare our method against CNNs, DCGAN (Radford et al., 2015), WGAN-GP(Gulrajani et al., 2017), ICN (Jin et al., 2017) and WINN (Lee et al., 2018). For generative modelsDCGAN and WGAN-GP, we adopt the evaluation metric proposed in (Jin et al., 2017). The trainingphase becomes a two-step implementation. We first generate negative samples with the originalimplementation. Then, the generated negative images are used to augment the original training set.We train a simple CNN that has the identical structure with our method on the augmented trainingset. All results reported in this section are the average of multiple repetitions.Experiment Setup All experiments are conducted with a simple CNN architecture (Lee et al., 2018)that contains 4 convolutional layers, each having a 55filter size with 64 channels and stride 2 inall layers. Each convolutional layer is followed by a batch normalization layer (Ioffe & Szegedy,2015) and a swish activation function (Ramachandran et al., 2018). The last convolutional layer isfollowed by two consecutive fully connected layers to compute logits and Wasserstein distances.The training epochs are 200 for both our method and all other baselines. The optimizer used isthe Adam optimizer (Kingma & Ba, 2014) with parameters 1= 0 and2= 0:9. Our methodrelies on the transformation function T()to convert the original samples to the unseen variations.In the following experiments, we demonstrate the ability of ITN in resisting large variations withspatial transformers (STs) (Jaderberg et al., 2015) as our transformation function unless specified.Theoretically, STs can represent all affine transformations, which endows more flexible ability inresisting unseen variations. More importantly, STs are fully differentiable, which allows the learningprocedure through standard backpropagation.4.1 C LASSIFICATIONTo demonstrate the effectiveness of ITN, we first evaluate our algorithm on 4 benchmark datasets,MNIST (LeCun et al., 1998), affNIST (Tieleman, 2013), SVHN (Netzer et al., 2011) and CIFAR-10(Krizhevsky & Hinton, 2009). The MNIST dataset includes 55000, 5000 and 10000 handwrittendigits in the training, validation and testing set, respectively. 
The affNIST dataset is a variant of the MNIST dataset, built by applying various affine transformations to the samples in MNIST. To accord with the MNIST dataset, and for the purposes of the following experiments, we reduce the sizes of the training, validation and testing sets to 55000, 5000 and 10000, respectively. SVHN is a real-world dataset that contains house-number images from Google Street View, and CIFAR-10 contains 60000 images of ten different objects from natural scenes. The purpose of introducing these two datasets is to further verify the performance of ITN on real-world data. The data augmentation applied in the following experiments is standard data augmentation, which includes affine transformations such as rotation, translation, scaling and shear.

Table 1: testing errors of the classification experiments discussed in Section 4.1, where w/ DA and w/o DA indicate whether data augmentation is applied.

                         w/o DA                                    w/ DA
Method     MNIST    affNIST   SVHN     CIFAR-10     MNIST    affNIST   SVHN     CIFAR-10
CNN        0.89%    2.82%     9.86%    31.31%       0.57%    1.65%     7.01%    24.35%
DCGAN      0.79%    2.78%     9.78%    31.22%       0.57%    1.63%     6.98%    24.18%
WGAN-GP    0.74%    2.76%     9.73%    31.08%       0.56%    1.56%     6.73%    24.07%
ICN        0.72%    2.97%     9.72%    32.34%       0.56%    1.54%     6.81%    24.98%
WINN       0.67%    2.56%     9.84%    30.72%       0.52%    1.48%     6.44%    23.74%
ITN        0.49%    1.52%     6.73%    21.93%       0.47%    1.09%     5.92%    20.65%

As shown in Table 1, our method achieves the best performance on all four datasets. The overall improvements can be explained by the fact that our method generates novel and reliable negative images (shown in Figure 1) that effectively strengthen the classifiers. The images we generate are different from the originals, but can still be recognized as the same class. The improvement on the MNIST dataset is marginal, perhaps because performance on MNIST is close to saturation; the difference between the training and testing splits of MNIST is also very small compared to the other datasets. Moreover, the amount of improvement increases as the dataset becomes more complicated. Based on these results, we conclude that our method has a stronger ability to resist unseen variations, especially when the dataset is complicated. On the other hand, we can clearly observe that our method outperforms standard data augmentation on all datasets. This result confirms the effectiveness and advantages of our approach. Additionally, ITN does not conflict with data augmentation, since ITN shows even greater performance when integrated with data augmentation techniques. A possible reason is that the spaces explored by ITN and by data augmentation do not overlap, so combining the two methods discovers more unseen variations and achieves greater performance.

Table 2: testing errors of the classification experiments described in Section 4.2.1, where w/ DA and w/o DA indicate whether data augmentation is applied.

           CNN        WGAN-GP    WINN       ITN
w/o DA     72.06%     70.60%     70.36%     34.29%
w/ DA      40.74%     36.29%     33.53%     21.31%

Figure 1: Images generated by our method on the MNIST, affNIST, SVHN and CIFAR-10 datasets. In each sector, the top row shows the original images and the bottom row shows our generated images.

4.2 QUANTITATIVE ANALYSIS
4.2.1 CROSS DATASET GENERALIZATION
We have shown the substantial performance improvements of ITN over other baselines on several benchmark datasets.
In this section, we further explore the ability of our method to resist large variations. We design a challenging cross-dataset classification task between two significantly different datasets (cross-dataset generalization). The training set in this experiment is the MNIST dataset, while the testing set is the affNIST dataset. The difficulty of this classification task is clearly how to overcome the huge data discrepancy between the training and testing sets, since the testing set includes many more variations. Another reason why we pick these two datasets is that they share the same categories, which ensures the challenge is only about resisting large data variations.

As shown in Table 2, ITN has clear improvements over CNN, WGAN-GP and WINN. The amount of improvement is much larger than on the regular training and testing splits shown in Section 4.1. More importantly, our performance on this challenging task is still better than that of a CNN with data augmentation. This encouraging result further verifies the efficiency and effectiveness of ITN compared with data augmentation. It is not surprising that data augmentation improves the performance by a significant margin, since the space of unseen variations is huge. Data augmentation increases classification performance by enumerating a large number of unseen samples; however, this brute-force search inevitably lacks efficiency and precision.

4.2.2 RESISTING ABILITY UNDER DATA PAUCITY
Another way to evaluate the ability to resist variations is to reduce the number of training samples. Intuitively, the discrepancy between the training and testing sets increases as the number of samples in the training set shrinks. The purpose of this experiment is to demonstrate the potential of ITN in resisting unseen variations from a different perspective. We design experiments where the training set is the MNIST dataset with only 0.1%, 1%, 10% or 25% of the whole training set, while the testing set is the whole MNIST testing set. Each sample is randomly selected from the pool while keeping the number of samples per class the same. Similarly, we repeat the same experiments on the CIFAR-10 dataset to further verify the results on a more complicated dataset. As shown in Table 3, our method obtains better results on all tasks. This result is consistent with Sections 4.2.1 and 4.1, illustrating the strong ability of ITN to resist unseen variations in the testing set. The consistently superior performance over data augmentation also demonstrates the efficiency of ITN.

Table 3: testing errors of the classification tasks described in Section 4.2.2, where M and C denote experiments conducted on the MNIST and CIFAR-10 datasets, respectively.

                          w/o DA                                 w/ DA
Method      CNN       WGAN-GP   WINN      ITN        CNN       WGAN-GP   WINN      ITN
0.1% (M)    36.50%    29.43%    27.18%    16.47%     18.07%    15.35%    14.46%    12.67%
1% (M)      7.66%     6.86%     5.10%     3.48%      4.48%     3.98%     3.66%     2.82%
10% (M)     2.02%     1.63%     1.49%     0.98%      1.24%     1.18%     1.00%     0.92%
25% (M)     1.29%     1.13%     1.00%     0.78%      0.83%     0.79%     0.77%     0.66%
0.1% (C)    81.99%    80.92%    78.24%    72.50%     79.04%    78.75%    76.97%    70.43%
1% (C)      72.31%    71.34%    69.79%    61.48%     65.23%    64.84%    63.26%    58.07%
10% (C)     59.02%    57.37%    56.02%    45.06%     47.75%    46.86%    46.04%    42.62%
25% (C)     51.35%    49.01%    48.43%    34.56%     36.50%    35.46%    34.29%    30.60%

4.3 BEYOND SPATIAL TRANSFORMERS
Even though we use STs to demonstrate the ability of our method to resist data variations, our method can in fact generalize to other types of transformations.
Our algorithm can take other types of differentiable transformation functions and strengthen the discriminators in a similar manner. Moreover, our algorithm can utilize multiple types of transformations at the same time and provide an even stronger ability to resist variations. To verify this, we introduce another recently proposed approach, Deep Diffeomorphic Transformer (DDT) Networks (Detlefsen et al., 2018). DDTs are similar to STs in that both can be optimized through standard backpropagation. We replace the ST modules with DDT modules and check whether our algorithm can resist this type of transformation. Then, we include both STs and DDTs in our model and verify the performance again. We let the MNIST dataset be the training set, while the testing sets are the MNIST dataset with different types of transformations applied. We introduce two types of testing sets in this section. The first is the normal testing set with random DDT transformations only. The second is similar to the first but includes both random DDT and affine transformations. The DDT transformation parameters are drawn from N(0, 0.7·I_d), as suggested in (Detlefsen et al., 2018), where I_d represents the d-dimensional identity matrix. The transformed images are then randomly placed within 42×42 images. We replicate the same experiment on the CIFAR-10 dataset.

Table 4: testing errors under unseen DDT and DDT + ST variations, as described in Section 4.3.

                      MNIST                    CIFAR-10
Method           DDT       DDT + ST       DDT       DDT + ST
CNN              17.75%    55.11%         76.14%    78.01%
WGAN-GP          17.53%    53.24%         75.93%    77.02%
WINN             17.20%    52.43%         75.43%    76.92%
ITN (DDT)        12.85%    40.60%         53.62%    63.56%
ITN (DDT + ST)   9.41%     34.37%         45.26%    56.95%

We can make some interesting observations from Table 4. First, ITN integrates flexibly with DDT or DDT + ST to resist the corresponding variations. Second, ITN can resist part of the unseen variations out of a mixture of transformations in the testing data. More importantly, the performance of ITN does not degrade when the model includes transformation functions that do not match the type of variations in the testing data, e.g., ITN (DDT + ST) on testing data with DDT only. This observation allows us to apply multiple transformation functions in ITN without knowing the types of variations in the testing data and still maintain good performance.

5 CONCLUSION
We proposed a principled approach that endows classifiers with the ability to resist larger variations between training and testing data. Our method, ITN, strengthens the classifiers by generating unseen variations with various learned transformations. Experimental results show consistent performance improvements not only on standard classification tasks but also on more challenging ones, such as cross-dataset generalization. Moreover, ITN demonstrates its advantages in both effectiveness and efficiency over data augmentation. Our future work includes applying our approach to large-scale datasets and extending it to generate samples with more types of variations.
<|im_end|> <|im_start|>assistant
### Review Title
Good discussion, improved comparisons needed
### Review Text
This paper suggests the use of learned transformation networks, embedded within introspective networks, to improve classification performance with synthesized examples.
The authors cite a number of related works and give a good introduction both to introspective learning and to its particular usage for large-variation resistance. This discussion forms the majority of the paper, and while the explanation seems clear, it would be nice to have a stronger dedicated section on the exact relations to both GAN and potentially VAE mathematically. Any kind of discussion which could tie the thorough derivation to some of the contemporaries in generative modeling would help the accessibility of the paper.

My primary concerns are with the experimental sections of the paper. The setup of the problem seems such that any strong baseline could be directly tested, since the final CNN is ultimately trained in a two-stage setup on the aggregated dataset (as per the Baselines subsection). Here it is also worth mentioning DAgger (https://ri.cmu.edu/pub_files/2010/5/Ross-AIStats10-paper.pdf / https://www.cs.cmu.edu/~sross1/publications/Ross-AIStats11-NoRegret.pdf), which is used in another context but in a similar way for improving imitation and online learning.

Many of the baseline methods in Table 1 seem far from what I would consider "strong" baselines. Given that the core proposal of the paper is improving classifiers, the importance of having high-quality baselines cannot be overstated. In particular, the baseline CNN numbers for MNIST, SVHN, and CIFAR-10 are far from what has been reported in simple papers such as Wide ResNet (https://arxiv.org/abs/1605.07146), ResNet (https://arxiv.org/abs/1512.03385), Universum Prescription (which bears some resemblance to this work in high-level concept, https://arxiv.org/abs/1511.03719), or even older work such as Maxout Networks (http://proceedings.mlr.press/v28/goodfellow13.pdf). These papers show examples of simple CNNs which outscore the best values reported in this table, on the same datasets. Running the same setup but with these improved classifiers as baselines would make a much stronger case for the core hypothesis that ITN can be used to improve strong classifiers.

Table 2 seems to me an improper comparison. Methods such as zero-shot or meta-learning type approaches seem much more appropriate for testing cross-generalization improvement. In some sense, none of the tested methods besides ITN should even be able to cross-generalize well, so the fact that ITN does better here is not surprising to me. While this is a benefit of ITN as stated, a comparison to methods designed for cross-generalization would make the results of Table 2 much stronger.

Table 3 also seems to have improper comparisons, in that there are a large number of works using semi-supervised generative models for improved classification in low-data domains (Improved Techniques for Training GANs, which is already cited; SS-VAE, https://arxiv.org/abs/1406.5298; Temporal Ensembling, https://arxiv.org/abs/1610.02242; VAT, https://ieeexplore.ieee.org/abstract/document/8417973/; Ladder Networks, http://papers.nips.cc/paper/5947-semi-supervised-learning-with-ladder-networks; Auxiliary Deep Generative Models, https://arxiv.org/abs/1602.05473; the Manifold Tangent Classifier, https://papers.nips.cc/paper/4409-the-manifold-tangent-classifier). Adopting the same settings and comparing directly to these methods would greatly strengthen this section as well, as simple classifiers (as shown in many of these previous papers) are not generally great baselines for semi-supervised modeling.
In addition, there should be a direct case where becoming robust to these kinds of transformations fails. For example, if my classification task is the rotation/position of a repositioned MNIST digit, becoming robust to these types of transformations may be harmful. An experiment or discussion about when robustness to large data variation might be harmful would be a good inclusion as well.

As a more general comment, this method seems applicable outside of image domains, and it would be interesting to see it applied in other settings, though that is likely outside the scope of this particular paper.

Formatting of the paper (specifically spacing between sections and subsections) seems a bit off in general. If the authors are applying \vspace tricks to shrink spaces, I would recommend taking a closer look at how and where, to make spacing more consistent over the whole document. Comparing to past ICLR papers (best papers, or highly rated ones from past conferences) to see how they approach formatting could improve the visual appeal of this paper.

Overall, this paper has a lot in its favor. The experimental section is thorough, if not as strong as I would like. The derivation of the method and motivation is clear, and there are a lot of avenues explored. However, as it currently stands, the experimental sections should be stronger to really prove out the core claim "Our method, ITN strengthens the classifiers by generating unseen variations with various learned transformations." compared to other methods using generative and semi-supervised methods in a similar vein. In addition, the clarity and approachability of the paper could be improved by drawing a relation to parallel related work such as GAN or VAE in more detail.
### Review Rating
5: Marginally below acceptance threshold
### Review Confidence
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_end|> <|im_end|>
Byg5ZANtvH
ICLR.cc/2020/Conference
2020
Short and Sparse Deconvolution --- A Geometric Approach
["Yenson Lau", "Qing Qu", "Han-Wen Kuo", "Pengcheng Zhou", "Yuqian Zhang", "John Wright"]
Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances \citep{zhang2017global,kuo2019geometry}, which characterize the optimization landscape of a particular nonconvex formulation of SaSD. This is used to derive a provable algorithm that exactly solves certain non-practical instances of the SaSD problem. We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method.
["short", "sparse deconvolution", "geometric", "sasd", "problem", "theory", "deconvolution", "motifs", "signals", "spatial"]
ABSTRACT
Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances (Zhang et al., 2017; Kuo et al., 2019), which characterize the optimization landscape of a particular nonconvex formulation of SaSD and give a provable algorithm which exactly solves certain non-practical instances of the SaSD problem. We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems, and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method.

1 INTRODUCTION
Many signals arising in science and engineering can be modeled as superpositions of basic, recurring motifs, which encode critical information about a physical process of interest. Signals of this type can be modeled as the convolution of a zero-padded short kernel a₀ ∈ R^{p₀} (the motif) with a longer sparse signal x₀ ∈ R^m (m ≫ p₀), which encodes the locations of the motifs in the sample:

y = ι(a₀) ⊛ x₀.   (1)

We term this a short-and-sparse (SaS) model. Since often only y is observed, short-and-sparse deconvolution (SaSD) is the problem of recovering both a₀ and x₀ from y. Variants of SaSD arise in areas such as microscopy (Cheung et al., 2018), astronomy (Briers et al., 2013), and neuroscience (Song et al., 2018). SaSD is a challenging inverse problem in both theory and practice. Natural formulations are nonconvex, and until recently very little algorithmic theory was available. Moreover, practical instances are often ill-conditioned, due to the spectral decay of the kernel a₀ (Cheung et al., 2018).

[*] YL and QQ contributed equally to this work. The full version of this work can be found at https://arxiv.org/abs/1908.10959.
[1] For simplicity, (1) uses cyclic convolution; algorithms and results also apply to linear convolution with minor modifications. Here ι denotes the zero-padding operator.

This paper is motivated by recent theoretical advances in nonconvex optimization and, in particular, on the geometry of SaSD. Zhang et al. (2017) and Kuo et al. (2019) study particular optimization formulations for SaSD and show that the landscape is largely driven by the problem symmetries of SaSD. They derive provable methods for idealized problem instances, which exactly recover (a₀, x₀) up to trivial ambiguities. While inspiring, these methods are not practical and perform poorly on real problem instances. Where the emphasis of Zhang et al. (2017) and Kuo et al. (2019) is on theoretical guarantees, here we focus on practical computation. We show how to combine ideas from this theory with heuristics that better address the properties of practical deconvolution problems, to build a novel method that performs well on data arising in a range of application areas.
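As a concrete illustration of the SaS model (1), the following NumPy sketch synthesizes an observation y from a unit-norm kernel and a Bernoulli-Rademacher sparse signal. The function name and the choice of spike distribution are illustrative assumptions, not prescribed by the model.

```python
import numpy as np

def sas_observation(a0, m, theta, seed=0):
    """Synthesize y = iota(a0) (cyclic conv) x0 from the SaS model (1)."""
    rng = np.random.default_rng(seed)
    a0 = a0 / np.linalg.norm(a0)                    # w.l.o.g. ||a0||_2 = 1
    x0 = np.sign(rng.standard_normal(m)) * (rng.random(m) < theta)  # sparse spikes, rate theta
    a0_pad = np.zeros(m)
    a0_pad[:len(a0)] = a0                           # the zero-padding operator iota
    y = np.real(np.fft.ifft(np.fft.fft(a0_pad) * np.fft.fft(x0)))   # cyclic convolution
    return y, x0
```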
A critical issue in moving from theory to practice is the poor conditioning of naturally-occurring deconvolution problems: we show how to address this with a combination of ideas from sparse optimization, such as momentum, continuation, and reweighting. The end result is a general-purpose method, which we demonstrate on data from neural spike sorting, calcium imaging and fluorescence microscopy.

Notation. The zero-padding operator is denoted by ι : R^p → R^m. Projection of a vector v ∈ R^p onto the sphere is denoted by P_{S^{p−1}}(v) := v/‖v‖₂, and P_{z⊥}(v) := v − ⟨v, z⟩z denotes projection onto the tangent space at z ∈ S^{p−1}. The Riemannian gradient of a function f : S^{p−1} → R at a point z on the sphere is given by grad f(z) := P_{z⊥}(∇f(z)).

Reproducible research. The code for implementations of our algorithms can be found online: https://github.com/qingqu06/sparse_deconvolution. For more details of our work on SaSD, we refer interested readers to our project website https://deconvlab.github.io/.

2 SYMMETRY AND GEOMETRY IN SASD
In this section, we begin by describing two intrinsic properties of SaSD. Later, we show how these play an important role in the geometry of optimization and the design of efficient methods.

An important observation about the SaSD problem is that it admits multiple equivalent solutions. This is purely due to the cyclic convolution between a₀ and x₀, which exhibits the trivial ambiguity[2]

y = ι(a₀) ⊛ x₀ = ι(α s_ℓ[a₀]) ⊛ (α⁻¹ s_{−ℓ}[x₀]),

for any nonzero scalar α and cyclic shift s_ℓ[·]. These scale and shift symmetries create several acceptable candidates for a₀ and x₀, and in the absence of further information we only expect to recover a₀ and x₀ up to symmetry. Furthermore, they largely drive the behavior of certain nonconvex optimization problems formulated for SaSD. Since the success of SaSD requires distinguishing between overlapping copies of a₀, its difficulty also depends highly on the "similarity" of a₀ to its shifts. Here we capture this notion using the shift-coherence of a₀,

μ(a₀) := max_{ℓ ≠ 0} |⟨a₀, s_ℓ[a₀]⟩| ∈ [0, 1].   (2)

Intuitively, the shifts of a₀ become closer together as μ(a₀) increases (Figure 10), making objective landscapes for optimization less favorable for recovering any specific shift of a₀.

[2] We therefore assume w.l.o.g. that ‖a₀‖₂ = 1 in this paper.

2.1 LANDSCAPE GEOMETRY UNDER SHIFT-INCOHERENCE
A natural approach to solving SaSD is to formulate it as a suitable optimization problem. In this paper we will focus on the Bilinear Lasso (BL) problem, which minimizes the squared error between the observation y and its reconstruction ι(a) ⊛ x, plus an ℓ¹-norm sparsity penalty on x,

min_{a ∈ S^{p−1}, x ∈ R^m}  Ψ_BL(a, x) := (1/2) ‖y − ι(a) ⊛ x‖₂² + λ ‖x‖₁.   (3)

Later in this section, we will see that the kernel length p should be set slightly larger than p₀. The Bilinear Lasso is a nonconvex optimization problem, as the shift symmetries of SaSD create discrete local minimizers in the objective landscape.

Figure 1: Geometry of φ_ABL near superpositions of shifts of a₀ (Kuo et al., 2019). (a) Regions near single shifts are strongly convex. (b) Regions between two shifts contain a saddle-point, with negative curvature towards each shift and positive curvature orthogonally. (c) The span of three shifts. For each figure, the top shows the function value in height, and the bottom shows the function value over the sphere. (d, e) When μ(a₀) ≈ 0, the Bilinear Lasso φ_BL(a) := min_x Ψ_BL(a, x) and the ABL φ_ABL(a) are empirically similar in the span of three shifts.
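The two quantities just defined — the shift-coherence (2) and the Bilinear Lasso objective (3) — and the trivial ambiguity can all be computed or checked with a few lines of FFT-based NumPy; a minimal sketch (function names are illustrative) follows.

```python
import numpy as np

def cconv(u, v):
    """Cyclic convolution of equal-length real signals via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

def shift_coherence(a):
    """mu(a) = max_{l != 0} |<a, s_l[a]>| over all cyclic shifts (Eqn. 2)."""
    a = a / np.linalg.norm(a)
    fa = np.fft.fft(a)
    autocorr = np.real(np.fft.ifft(fa * np.conj(fa)))  # <a, s_l[a]> for every l
    autocorr[0] = 0.0                                  # exclude the zero shift
    return np.max(np.abs(autocorr))

def bilinear_lasso(a, x, y, lam):
    """Psi_BL(a, x) = 0.5 * ||y - iota(a) conv x||_2^2 + lam * ||x||_1 (Eqn. 3)."""
    a_pad = np.zeros(len(y))
    a_pad[:len(a)] = a
    resid = y - cconv(a_pad, x)
    return 0.5 * np.dot(resid, resid) + lam * np.sum(np.abs(x))

# Numerical check of the scale-shift symmetry: both pairs give the same y.
rng = np.random.default_rng(0)
m, p0, alpha, ell = 256, 16, 3.0, 5
a0 = np.zeros(m); a0[:p0] = rng.standard_normal(p0)
x0 = np.zeros(m); x0[rng.integers(0, m, 10)] = 1.0
assert np.allclose(cconv(a0, x0),
                   cconv(alpha * np.roll(a0, ell), np.roll(x0, -ell) / alpha))
```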
The regularization created by problem symmetries in nonconvex inverse problems is a fairly general phenomenon (Sun et al., 2015) and, as Kuo et al. (2019) show, its influence in SaSD extends beyond the neighborhoods of these local minimizers. Kuo et al. analyzed an Approximate Bilinear Lasso (ABL) objective[3] Ψ_ABL, which satisfies

Ψ_ABL(a, x) ≈ Ψ_BL(a, x),  when μ(a) ≈ 0.

This non-practical objective serves as a valid simplification of the Bilinear Lasso for analysis when the true kernel is itself incoherent, i.e. μ(a₀) ≈ 0 (Figures 1d and 1e). Under its marginalization[4]

φ_ABL(a) := min_{x ∈ R^m} Ψ_ABL(a, x),   (4)

certain crucial properties regarding its curvature can be characterized for generic choices of x. The reason we choose to partially minimize over x instead of a is that (i) problem (4) is convex w.r.t. x, and (ii) the dimension of the subspace of a is significantly smaller than that of x (i.e., p ≪ m), which is where the measure concentrates.

[3] As the intention here is to apply some key intuition from the ABL objective towards the Bilinear Lasso itself, we intentionally omit the concrete form of Ψ_ABL. Readers may refer to Appendix A for more details.
[4] Minimizing φ_ABL is equivalent to minimizing Ψ_ABL, as x can be recovered via convex optimization.

Curvature in the span of a few shifts. Suppose we set p > p₀, which ensures that we can find an a = α₁ s_{ℓ₁}[a₀] + α₂ s_{ℓ₂}[a₀] ∈ S^{p−1} that lies near the span of two shifts of a₀. If α₁ ≈ 1 (or α₂ ≈ 0) then, under suitable conditions on a₀ and x₀, Kuo et al. (2019) assert that a lies in a strongly convex region of φ_ABL, containing a single minimizer near s_{ℓ₁}[a₀] (Figure 1a); the converse is also true. A saddle-point exists nearby when α₁ ≈ α₂ is balanced, characterized by large negative curvature along the two shifts and positive curvature in orthogonal directions (Figure 1b). Interpolating between these two cases, large negative gradients point towards individual shifts.

The behavior of φ_ABL between two shifts of a₀ — strong convexity near single shifts, and saddle-points near balanced points — extends to regions of the sphere spanned by several shifts (Figure 1c); we elaborate on this further in Appendix A.1.
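The inner problem of a marginalization like (4) is a convex lasso in x and can be solved by standard proximal methods. A minimal ISTA sketch follows; since the concrete form of Ψ_ABL is omitted here, the sketch uses the Bilinear Lasso data term, purely for illustration.

```python
import numpy as np

def solve_x(y, a_pad, lam, n_iter=300):
    """Solve min_x 0.5*||y - a_pad conv x||_2^2 + lam*||x||_1 by ISTA,
    i.e. the convex inner problem of a marginalization such as (4)."""
    fa = np.fft.fft(a_pad)
    t = 1.0 / np.max(np.abs(fa)) ** 2      # step 1/L, where L bounds the gradient Lipschitz constant
    x = np.zeros(len(y))
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft(fa * np.fft.fft(x))) - y
        z = x - t * np.real(np.fft.ifft(np.conj(fa) * np.fft.fft(resid)))  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)              # soft-thresholding
    return x
```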
Therefore, one initialization strategy is to randomly choose alength-p0windowryi:ryiyi1:::yip01sTfrom the observation and setap0q:PSp1r0p01;ryi;0p01s: (5)This bringsap0qsuitably close to the sum of a few shifts of a0(Appendix A.2); any truncation effectsare absorbed by padding the ends of ryi, which also sets the length for ato bep3p02.Implications for practical computation. The (regionally) benign optimization landscape of 'ABLguarantees that efficient recovery is possible for SaSD when a0is incoherent. Applications of sparsedeconvolution, however, are often motivated by sharpening or resolution tasks (Huang et al., 2009;Candès & Fernandez-Granda, 2014; Campisi & Egiazarian, 2016) where the motif a0is smooth andcoherent (i.e. pa0qis large). The ABL objective is a poor approximation of the Bilinear Lasso insuch cases and fails to yield practical algorithms, so we should optimize the Bilinear Lasso directly.From Figures 1d and 1e, we can see that low-dimensional subspheres spanned by shifts of a0areempirically similar when a0is incoherent. Although this breaks down in the coherent case, as weillustrate in Appendix A.3, the symmetry breaking properties of 'BLremain present. This allows usto apply the geometric intuition discussed here to create an optimization method that, with the help ofa number of computational heuristics, performs well in for SaSD even in general problem instances.Algorithm 1 Inertial Alternating Descent Method (iADM)Input: Initializations ap0qPSp1,xPRm; observation yPRm; penalty¥0; momentum Pr0;1q.Output:papkq;xpkqq, a local minimizer of BL.Initialize ap1qap0q,xp1qxp0q.fork1;2;::: until converged doUpdate xwith accelerated proximal gradient step:wpkqÐxpkqxpkqxpk1qxpk1qÐsofttkwpkqtkrx apkq;wpkq;where soft pvq:signpvqdmaxp|v|;0qdenotes the soft-thresholding operator.Update awith accelerated Riemannian gradient step:zpkqÐPSp1apkqxapkq;apk1qyPapk1qapkqapk1qÐPSp1zpkqkgrada zpkq;xpk1q:end for(a)Gradient descent (b)GD with momentumFigure 2: Momentum acceleration. a) Iterates of gradient descent oscillate on ill-conditioned functions; eachmarker denotes one iteration. b) Momentum dampens oscillation and speeds up convergence.3 D ESIGNING A PRACTICAL SASD ALGORITHMSeveral algorithms for SaSD-type problems have been developed for specific applications, such asimage deblurring (Levin et al., 2011; Briers et al., 2013; Campisi & Egiazarian, 2016), neuroscience(Rey et al., 2015; Friedrich et al., 2017; Song et al., 2018), and image super-resolution (Baker &Kanade, 2002; Shtengel et al., 2009; Yang et al., 2010), or are augmented with additional structure(Wipf & Zhang, 2014; Ling & Strohmer, 2017; Walk et al., 2017).Here, we instead leverage the theory from Section 2 to build an algorithm for general practicalsettings. In addition to applying an appropriate initialization scheme (5)and optimizing on the sphere,we minimize the Bilinear Lasso (3)instead of the ABL (4)to more accurately account for interactionsbetween shifts of a0in highly shift-coherent settings. Furthermore, we also address the negativeeffects of large coherence using a number of heuristics, leading to an efficient algorithm for SaSD.4Published as a conference paper at ICLR 2020Momentum acceleration. In shift-coherent settings, the Hessian of BLbecomes ill-conditioned5near shifts ofa0, a situation known to cause slow convergence for first-order methods (Nesterov,2013). 
Momentum acceleration. In shift-coherent settings, the Hessian of Ψ_BL becomes ill-conditioned[5] near shifts of a₀, a situation known to cause slow convergence for first-order methods (Nesterov, 2013). A remedy is to add momentum (Polyak, 1964; Beck & Teboulle, 2009) to first-order iterations, for instance, by augmenting gradient descent on some smooth f(z) with stepsize τ with the extrapolation term w,

w^{(k)} ← z^{(k)} + β (z^{(k)} − z^{(k−1)}),   (6)
z^{(k+1)} ← w^{(k)} − τ ∇f(w^{(k)}).   (7)

Here, β controls the momentum added.[6] As illustrated in Figure 2, this additional term improves convergence by reducing oscillations of the iterates for ill-conditioned problems. Momentum has been shown to improve convergence for nonconvex and nonsmooth problems (Pock & Sabach, 2016; Jin et al., 2018). Here we provide an inertial alternating descent method (iADM) for finding local minimizers of Ψ_BL (Algorithm 1), which modifies iPALM (Pock & Sabach, 2016) to perform updates on a via retraction on the sphere (Absil et al., 2009).[7]

[5] This is because the circulant matrix C_{a₀} is ill-conditioned.
[6] Setting β = 0 removes momentum and reverts to standard gradient descent.
[7] The stepsizes t_k and τ_k are obtained by backtracking (Nocedal & Wright, 2006; Pock & Sabach, 2016) to ensure sufficient decrease of Ψ_BL for the x-update and the a-update, respectively.

Homotopy continuation. It is also possible to improve optimization by modifying the objective Ψ_BL directly through the sparsity penalty λ. Variations of this idea appear in both Zhang et al. (2017) and Kuo et al. (2019), and can also help to mitigate the effects of large shift-coherence.

When solving (3) in the noise-free case, it is clear that larger choices of λ encourage sparser solutions for x. Conversely, smaller choices of λ place local minimizers of the marginal objective φ_BL(a) := min_x Ψ_BL(a, x) closer to signed shifts of a₀ by emphasizing reconstruction quality. When μ(a₀) is large, however, φ_BL becomes ill-conditioned as λ → 0 due to the poor spectral conditioning of a₀, leading to severe flatness near local minimizers and the creation of spurious local minimizers when noise is present (Figure 3). Conversely, larger values of λ limit x to a small set of support patterns and simplify the landscape of φ_BL, at the expense of precision.

It is therefore important, both for fast convergence and accurate recovery, for λ to be chosen appropriately. When problem parameters — such as noise level, p₀, or θ — are not known a priori, a homotopy continuation method (Hale et al., 2008; Wright et al., 2009; Xiao & Zhang, 2013) can be used to obtain a range of solutions for SaSD.

Algorithm 2: SaS-BD with homotopy continuation
  Input: observation y ∈ R^m, motif size p₀; momentum β ∈ [0, 1); initial penalty λ^{(1)} and final penalty λ*; penalty decrease η ∈ (0, 1); precision factor δ ∈ (0, 1).
  Output: solution path { (â^{(n)}, x̂^{(n)}, λ^{(n)}) } for SaSD.
  Set the number of stages N ← ⌈ log(λ*/λ^{(1)}) / log η ⌉.
  Initialize â^{(0)} ∈ R^{3p₀−2} using (5), and x̂^{(0)} = 0 ∈ R^m.
  For n = 1, …, N:
    Minimize Ψ_{λ^{(n)}} to precision ε^{(n)} with Algorithm 1:
      (â^{(n)}, x̂^{(n)}) ← iADM( â^{(n−1)}, x̂^{(n−1)}, y, λ^{(n)}, β ),
    and update λ^{(n+1)} ← η λ^{(n)}.

Figure 3: Bilinear Lasso objective φ_BL on the sphere S^{p−1}, for p = 3 and varying λ: (a) λ = 5×10⁻¹, (b) λ = 5×10⁻², (c) λ = 5×10⁻³; brighter colors indicate higher values. The function landscape of φ_BL flattens as the sparsity penalty λ decreases from left to right.

Using initialization (5), a rough estimate (â^{(1)}, x̂^{(1)}) is obtained by solving (3) with iADM using a large choice for λ^{(1)}. This estimate is refined via a solution path { (â^{(n)}, x̂^{(n)}, λ^{(n)}) } by gradually decreasing λ^{(n)}. By ensuring that x remains sparse along the solution path, the objective Ψ_BL enjoys restricted strong convexity w.r.t. both a and x throughout optimization (Agarwal et al., 2010). As a result, homotopy achieves linear convergence for SaSD where sublinear convergence is expected otherwise (Figures 4c and 4d).
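A sketch of the homotopy outer loop in Algorithm 2 follows, building on the `iadm` sketch above; for brevity only a is warm-started across stages here, and all defaults are illustrative.

```python
import numpy as np

def data_driven_init(y, p0, seed=0):
    """Initialization (5): a random length-p0 window of y, zero-padded to p = 3*p0 - 2."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(y) - p0)
    a = np.concatenate([np.zeros(p0 - 1), y[i:i + p0], np.zeros(p0 - 1)])
    return a / np.linalg.norm(a)

def homotopy_sasd(y, p0, lam_init, lam_final, eta=0.8, beta=0.3, iters_per_stage=200):
    """Sketch of Algorithm 2: a sequence of Bilinear Lasso solves with a
    geometrically decreasing penalty, warm-starting each stage."""
    n_stages = int(np.ceil(np.log(lam_final / lam_init) / np.log(eta))) + 1
    a = data_driven_init(y, p0)
    path, lam = [], lam_init
    for _ in range(n_stages):
        a, x = iadm(y, a, lam, beta=beta, n_iter=iters_per_stage)  # iadm sketched earlier
        path.append((a.copy(), x.copy(), lam))
        lam = max(eta * lam, lam_final)
    return path
```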
We provide a complete algorithm for SaSD, combining the Bilinear Lasso and homotopy continuation, in Algorithm 2.

4 EXPERIMENTS
4.1 SYNTHETIC EXPERIMENTS
Here we perform SaSD in simulations in both coherent and incoherent settings. Coherent kernels are discretized from a Gaussian window function, a₀ = g_{p₀,0.5}, where g_{p,σ} := P_{S^{p−1}}( [exp(−(i−1)² / (2σ²(p−1)²))]_{i=1}^{p} ). Incoherent kernels a₀ ∼ Unif(S^{p₀−1}) are sampled uniformly on the sphere.

Figure 4: Synthetic experiments for the Bilinear Lasso. Success probability (a, b): with x₀ ∼ᵢ.ᵢ.d. BR(θ), the success probability of SaS-BD by solving (3), shown by increasing brightness, is large when the sparsity rate θ is sufficiently small compared to the length of a₀, and vice versa. Success with a fixed sparsity rate is more likely when a₀ is incoherent. Algorithmic convergence (c, d): iterate convergence for iADM with β_k = (k−1)/(k+2) vs. β_k ≡ 0 (ADM), with and without homotopy. Homotopy significantly improves the convergence rate, and momentum improves convergence when a₀ is coherent.

Figure 5: Deconvolution for calcium imaging using Algorithm 2 with iADM and with reweighting (Appendix B). Simulated data: (a) recovered AR2 kernel; (b) estimate of the spike train. Real data: (c) reconstructed calcium signal; (d) estimate of the spike train. Reweighting improves estimation quality in each case.

Recovery performance. We test the recovery probability for varying kernel lengths p₀ and sparsity rates θ. To ensure the problem size is sufficiently large, we set m = 100 p₀. For each p₀ and θ, we randomly generate[8] x₀ ∼ᵢ.ᵢ.d. BR(θ) for both coherent and incoherent a₀. We solve ten trials of (3) on clean observation data y = ι(a₀) ⊛ x₀ using iADM with λ = 10⁻²/√(p₀θ). The probability of recovering a signed shift of a₀ is shown in Figure 4. Recovery is likely when the sparsity rate is low compared to the kernel length. The coherent problem setting has a smaller success region compared to the incoherent setting.

[8] BR(θ) denotes the Bernoulli-Rademacher distribution, which takes values ±1 each with probability θ/2 and zero with probability 1 − θ.

Momentum and homotopy. Next, we test the performance of Algorithm 1 with momentum (β_k = (k−1)/(k+2); see Pock & Sabach (2016)) and without (β = 0). This is done by minimizing Ψ_BL with initialization (5), using clean observations with p₀ = 10², m = 10⁴, and θ = p₀^{−3/4}, for coherent and incoherent a₀. We also apply homotopy (Algorithm 2) with λ^{(1)} = max_ℓ |⟨s_ℓ[ι(a^{(0)})], y⟩| — see Xiao & Zhang (2013) — λ* = 0.3/√(p₀θ), η = 0.8, and δ = 0.1. The final solve of (3) uses precision ε = 10⁻⁶, regardless of method. Figures 4c and 4d show the comparison results in coherent problem settings.

Comparison to existing methods. Finally, we compare iADM, and iADM with homotopy, against a number of existing methods for minimizing φ_BL. The first is alternating minimization (Kuo et al., 2019), which at each iteration k minimizes over a with x^{(k)} fixed using accelerated (Riemannian) gradient descent with backtracking, and vice versa. The next is the popular alternating direction method of multipliers (ADMM) (Boyd et al., 2011). Finally, we compare against iPALM (Pock & Sabach, 2016) with backtracking, using the unit-ball constraint on a instead of the unit sphere.

For each method, we deconvolve signals with p₀ = 50, m = 100 p₀, and θ = p₀^{−3/4}, for both coherent and incoherent a₀. For iADM, iADM with homotopy, and iPALM we set β = 0.3. For homotopy, we set λ^{(1)} = max_ℓ |⟨s_ℓ[ι(a^{(0)})], y⟩|, λ* = 0.3/√(p₀θ), and δ = 0.5. Furthermore, we set η = 0.5 or 0.8 and, for ADMM, we set the slack parameter to 0.7 or 0.5 for incoherent and coherent a₀, respectively.
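The success criterion used in Figure 4 — recovery of a signed shift of a₀ — can be checked with a small FFT-based helper; a sketch with an illustrative name follows.

```python
import numpy as np

def shift_recovery_error(a_hat, a0):
    """Distance from a_hat to the nearest signed cyclic shift of a0, measured as
    1 - max_l |<a_hat, s_l[a0]>| after normalization; near 0 indicates success."""
    m = max(len(a_hat), len(a0))
    u = np.zeros(m); u[:len(a_hat)] = a_hat / np.linalg.norm(a_hat)
    v = np.zeros(m); v[:len(a0)] = a0 / np.linalg.norm(a0)
    corr = np.real(np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(v))))  # <u, s_l[v]> for all l
    return 1.0 - np.max(np.abs(corr))
```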
From Figure 6, we can see that ADMM performs better than iADM in the incoherent case, but becomes less reliable in the coherent case. In both cases, iADM with homotopy is the best performer. Finally, we observe roughly equal performance between iPALM and iADM.

Figure 6: Algorithmic comparison. (a) Convergence of various methods minimizing Ψ_BL with incoherent a₀, over the number of FFT operations used (for computing convolutions). The y-axis denotes the log of the angle between a^{(k)} and the nearest shift of a₀, and each marker denotes five iterations. (b) Convergence for coherent a₀, and (c) with an AR2 kernel for modeling calcium signals.

4.2 IMAGING APPLICATIONS
Here we demonstrate the performance and generality of the proposed method. We begin with calcium fluorescence imaging, a popular modality for studying spiking activity in large neuronal populations (Grienberger & Konnerth, 2012), followed by stochastic optical reconstruction microscopy (STORM) (Rust et al., 2006; Huang et al., 2008; 2010), a superresolution technique for in vivo microscopy.[9]

Sparse deconvolution of calcium signals. Neural spike trains are created by action potentials, each inducing a transient response in the calcium concentration of the surrounding environment. The aggregate signal can be modeled as a convolution between the transient a₀ and the spike train x₀. Whilst a₀ and x₀ both encode valuable information, neither is perfectly known ahead of time. Here, we first test our method on synthetic data generated using an AR2 model for a₀, a shift-coherent kernel that is challenging for deconvolution; see e.g. Friedrich et al. (2017). We set x₀ ∼ᵢ.ᵢ.d. Bernoulli(p₀^{−4/5}) ∈ R^{10⁴}, with additive noise n ∼ᵢ.ᵢ.d. N(0, 5×10⁻²). Figures 5a and 5b demonstrate accurate recovery of a₀ and x₀ in this synthetic setting. Next, we test our method on real data[10]; Figures 5c and 5d demonstrate recovery of spike locations.

[9] Other superresolution methods for microscopy include photoactivated localization microscopy (PALM) (Betzig et al., 2006) and fluorescence photoactivation localization microscopy (fPALM) (Hess et al., 2006).
[10] Obtained at http://spikefinder.codeneuro.org.
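A sketch of this synthetic calcium setup follows: an AR(2) impulse response as the coherent kernel a₀, convolved with a Bernoulli spike train plus noise. The AR coefficients below are illustrative choices (a stable double pole), not the paper's settings, and the noise level is interpreted as a standard deviation.

```python
import numpy as np

def ar2_kernel(p0, g1=1.6, g2=-0.64):
    """Length-p0 calcium transient as a unit-norm AR(2) impulse response."""
    h = np.zeros(p0)
    h[0] = 1.0
    for i in range(1, p0):
        h[i] = g1 * h[i - 1] + (g2 * h[i - 2] if i >= 2 else 0.0)
    return h / np.linalg.norm(h)

# Synthetic calcium trace: AR2 transient convolved with Bernoulli spikes plus noise.
rng = np.random.default_rng(0)
m, p0 = 10_000, 100
a0 = np.zeros(m); a0[:p0] = ar2_kernel(p0)
x0 = (rng.random(m) < p0 ** (-4 / 5)).astype(float)   # Bernoulli(p0^{-4/5}) spikes
y = np.real(np.fft.ifft(np.fft.fft(a0) * np.fft.fft(x0))) + 5e-2 * rng.standard_normal(m)
```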
Fluorescence microscopy is often spatially limitedby the diffraction of light; its wavelength (several hundred nanometers) is often larger than typicalmolecular length-scales in cells, preventing a detailed characterization of subcellular structures. TheSTORM technique overcomes this resolution limit by using photoswitchable fluorescent probesto multiplex the image into multiple frames, each containing a subset of the molecules present(Figure 7). If the location of these molecules can be precisely determined for each frame, synthesizingall deconvolved frames will produce a super-resolution microscopy image with nanoscale resolution.For each image frame, the localization task can be formulated via the SaS modelYtloomoonSTORM frameA0loomoonpoint spread functionfX0;tloomoonsparse point sourcesNtloomoonnoise; (8)wherefdenotes 2D convolution. Here we will solve this task on the single-molecule localizationmicroscopy (SMLM) benchmarking dataset11via SaSD, recovering both the PSF A0and the pointsource mapsX0;tsimultaneously. We apply iADM with reweighting (Appendix B) on frames of size128128from the video sequence “Tubulin”; each pixel is of 100nm2resolution12, the fluorescencewavelength is 690nm , and the framerate is f25Hz . Figure 7 shows examples of recoveredactivation maps, and the aggregated super-resolution image from all 500 frames, accurately predictingthe PSF (see Appendix D) and the activation map for each video frame to produce higher resolutionmicroscopy images.Localization in calcium images. Our methods are easily extended to handle superpositions ofmultiple SaS signals. In calcium imaging, this can potentially be used to track the neurons invideo sequences, a challenging task due to (non-) rigid motion, overlapping sources, and irregular11Data can be accessed at http://bigwww.epfl.ch/smlm/datasets/index.html .12Here we solve SaSD on the same 128128grid. In practice, the localization problem is solved on a finergrid, so that the resulting resolution can reach 2030nm.8Published as a conference paper at ICLR 2020background noise Pnevmatikakis et al. (2016); Giovannucci et al. (2019). We consider frames videoobtained via the two-photon calcium microscopy dataset from the Allen Institute for Brain Science13,shown in Figure 8. Each frame contains the cross section of several neurons and dendrites, whichhave distinct sizes. We model this as the SaS signal YtA1fX1;tA2fX2;t, where eachsummand consists of neurons or dendrites exclusively. By extending Algorithm 2 to recover each ofthe kernelsAkand mapsXk, we can solve this convolutional dictionary learning (SaS-CDL; seeAppendix C) problem which allows us to separate the dendritic and neuronal components from thisimage for localization of firing activity, etc. As a result, the application of SaS-CDL as a denoising oranalysis tool for calcium imaging videos provides a very promising direction for future research.5 D ISCUSSIONMany nonconvex inverse problems, such as SaSD, are strongly regulated by their problem symmetries.Understanding this regularity and when or how it breaks down is important for developing effectivealgorithms. We illustrate this by combining geometric intuition with practical heuristics, motivatedby common challenges in real deconvolution, to produce an efficient and general purpose method thatperforms well on data arising from a range of application areas. 
Our approach, therefore, can serveas a general baseline for studying and developing extensions to SaSD, such as SaS-CDL (Bristow& Lucey, 2014; Chun & Fessler, 2017; Garcia-Cardona & Wohlberg, 2018), Bayesian approaches(Babacan et al., 2008; Wipf & Zhang, 2014), and hierarchical SaS models (Chen et al., 2013).ACKNOWLEDGMENTSThis work was funded by NSF 1343282, NSF CCF 1527809, and NSF IIS 1546411. QQ alsoacknowledges supports from Microsoft PhD fellowship and the Moore-Sloan fellowship. We wouldlike to thank Gongguo Tang, Shuyang Ling, Carlos Fernandez-Granda, Ruoxi Sun, and Liam Paninskifor fruitful discussions.
rJlh3raLqH
Official Blind Review #2
3: Weak Reject
This paper analyzes some of the optimization challenges presented by a particular formulation of "short-and-sparse deconvolution" and proposes a new general-purpose algorithm to try to alleviate them. The paper is reasonably clearly written and the experimental results seem impressive (to a non-specialist in this area). However, the experimental investigation on real data does not compare to a baseline, which seems like an essential part of investigating the proposed method. If this were corrected I would recommend an accept. I would like to make it clear that I have little experience in this area and am not familiar with relevant previous literature, so I was only able to roughly judge the novelty of the ideas and how the experimental results might compare with existing approaches.

Major comments:

For the experimental investigations on real data there was no baseline presented. Are there no task-specific algorithms that have previously been developed for deconvolution of calcium signals, fluorescence microscopy and calcium imaging? At the very least it would be helpful to compare to one of the other general-purpose methods used for Figure 6.

It seemed like a lot of the paper is taken up with a recap of the results presented in Kuo et al. (2019), and it didn't always seem clear exactly what the relevance of these results was to the present paper. In particular, it seemed strange to me to devote so much space in Section 2 to the ABL objective given that it is not used (as far as I can tell) for the proposed algorithm.

Minor comments:

The abstract says "We leverage... sphere constraints, data-driven initialization". Is the effect of these actually investigated experimentally?

In the abstract, "This is used to derive a provable algorithm" makes it sound to me like this will be done in the current paper.

In the abstract, a reference for the "due to the spectral decay of the kernel a_0" claim would be helpful.

In the notation section, the definition of the Riemannian gradient seems a little sloppy mathematically: strictly, f needs to be defined on $\reals^p$ (or an open subset of $\reals^p$ containing $S^{p - 1}$) in order for the right side of the gradient expression to be defined.

At the start of Section 2, it would be helpful to give a one-sentence description of the unifying theme of the section.

It might be helpful to explicitly state that "coherence" is related to the strength of temporal / spatial correlations, to aid in developing the reader's intuition for the meaning of $\mu(a)$.

Throughout the paper, the problems caused by multiple equivalent local optima (due to shift symmetry) come up repeatedly. What happens if this symmetry is removed straightforwardly by adding a term to the cost function to encourage a particular value of $a$ to be the largest?

In Section 2.1, for "It analyzes an ABL...", it's not completely clear what "It" refers to (I presume Kuo et al. (2019) from context).

Marginalization in my experience refers to a "partial summing" operation, whereas just above (4) it is used to refer to a "partial minimization" operation. This seems non-standard to me, but is this usage standard in this field?

I didn't understand the relevance of the sentence "Under its marginalization... smaller dimension p << m." to the present paper. The sentence also seemed vague and difficult to understand if you were not already familiar with this result. It should also have a reference to justify this claim.
It wasn't clear to me whether the panels of Figure 1 are schematic "rough intuition" diagrams intended merely to be suggestive, or provably show the type of behavior that happens in all cases. Also, what are the axes — generic "parameter space", I guess? I also didn't follow why (a) appears projected onto a plane while (b) and (c) appear projected onto a sphere(?)

In Section 3, under "momentum acceleration", it would be helpful to justify the claim that "In shift-coherent settings, the Hessian... ill-conditioned...".

In Figure 4, the axis labels are much too small to read.

In Figure 5(b), the red poorer results obscure the green better results.

In Figure 4(d), is it fair to say that the main convergence speed improvement for homotopy-iADM is in going faster from "quite near" the optimum to "really near" it, rather than getting "quite near" it in the first place? If so, isn't the latter more often what's relevant in practical applications?

In Figure 5, it's a little confusing to switch color meanings between (a) and (b).

Reweighting seems to have a dramatic positive effect in Figure 5. Is it worth investigating its effect in the other experiments as well, for example in Figure 4?

In Figure 6(b), is there any reason not to compare against standard black-box optimizers like vanilla SGD, ADAM, etc. (possibly with projection to satisfy the sphere constraint if necessary)? Would they perform very badly?

Typo: "Whilst $a_0$ nor $x_0$" should be "Whilst $a_0$ and $x_0$".

What does the square-boxed convolution operator in (8) mean?
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Short and Sparse Deconvolution --- A Geometric Approach ### Paper Abstract Short-and-sparse deconvolution (SaSD) is the problem of extracting localized, recurring motifs in signals with spatial or temporal structure. Variants of this problem arise in applications such as image deblurring, microscopy, neural spike sorting, and more. The problem is challenging in both theory and practice, as natural optimization formulations are nonconvex. Moreover, practical deconvolution problems involve smooth motifs (kernels) whose spectra decay rapidly, resulting in poor conditioning and numerical challenges. This paper is motivated by recent theoretical advances \citep{zhang2017global,kuo2019geometry}, which characterize the optimization landscape of a particular nonconvex formulation of SaSD. This is used to derive a provable algorithm that exactly solves certain non-practical instances of the SaSD problem. We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on data arising from a range of application areas. We highlight key additional challenges posed by the ill-conditioning of real SaSD problems and suggest heuristics (acceleration, continuation, reweighting) to mitigate them. Experiments demonstrate the performance and generality of the proposed method. ### Paper Keywords ["short", "sparse deconvolution", "geometric", "sasd", "problem", "theory", "deconvolution", "motifs", "signals", "spatial"] ### Paper Content ABSTRACTShort-and-sparse deconvolution (SaSD) is the problem of extracting localized,recurring motifs in signals with spatial or temporal structure. Variants of thisproblem arise in applications such as image deblurring, microscopy, neural spikesorting, and more. The problem is challenging in both theory and practice, as natu-ral optimization formulations are nonconvex. Moreover, practical deconvolutionproblems involve smooth motifs (kernels) whose spectra decay rapidly, resultingin poor conditioning and numerical challenges. This paper is motivated by recenttheoretical advances (Zhang et al., 2017; Kuo et al., 2019), which characterize theoptimization landscape of a particular nonconvex formulation of SaSD and giveaprovable algorithm which exactly solves certain non-practical instances of theSaSD problem. We leverage the key ideas from this theory (sphere constraints, data-driven initialization) to develop a practical algorithm, which performs well on dataarising from a range of application areas. We highlight key additional challengesposed by the ill-conditioning of real SaSD problems, and suggest heuristics (accel-eration, continuation, reweighting) to mitigate them. Experiments demonstrate theperformance and generality of the proposed method.1 I NTRODUCTIONMany signals arising in science and engineering can be modeled as superpositions of basic, recurringmotifs, which encode critical information about a physical process of interest. Signals of this typecan be modeled as the convolution of a zero-padded short kernel a0PRp0(the motif) with a longersparse signalx0PRm(m"p0) which encodes the locations of the motifs in the sample1:ya0fx0: (1)We term this a short-and-sparse (SaS) model. Since often only yis observed, short-and-sparsedeconvolution (SaSD) is the problem of recovering both a0andx0fromy. 
Variants of SaSD arise inareas such as microscopy (Cheung et al., 2018), astronomy (Briers et al., 2013), and neuroscience(Song et al., 2018). SaSD is a challenging inverse problem in both theory and practice. Naturalformulations are nonconvex, and very little algorithmic theory was available. Moreover, practicalinstances are often ill-conditioned, due to the spectral decay of the kernel a0(Cheung et al., 2018).This paper is motivated by recent theoretical advances in nonconvex optimization and, in particular,on the geometry of SaSD. Zhang et al. (2017) and Kuo et al. (2019) study particular optimizationYL and QQ contributed equally to this work. The full version of this work can be found at https://arxiv.org/abs/1908.10959 .1For simplicity, (1)uses cyclic convolution; algorithms are results also apply to linear convolution with minormodifications. Here denotes the zero padding operator.1Published as a conference paper at ICLR 2020formulations for SaSD and show that the landscape is largely driven by the problem symmetries ofSaSD. They derive provable methods for idealized problem instances, which exactly recover pa0;x0qup to trivial ambiguities. While inspiring, these methods are not practical and perform poorly on realproblem instances. Where the emphasis of Zhang et al. (2017) and Kuo et al. (2019) is on theoreticalguarantees, here we focus on practical computation. We show how to combine ideas from this theorywith heuristics that better address the properties of practical deconvolution problems, to build a novelmethod that performs well on data arising in a range of application areas. A critical issue in movingfrom theory to practice is the poor conditioning of naturally-occurring deconvolution problems: weshow how to address this with a combination of ideas from sparse optimization, such as momentum,continuation, and reweighting. The end result is a general purpose method, which we demonstrate ondata from neural spike sorting, calcium imaging and fluorescence microscopy.Notation. The zero-padding operator is denoted by :RpÑRm. Projection of a vector vPRponto the sphere is denoted by PSp1pvq:v{}v}2, andPzpvq:vxv;zyzdenotes projectiononto the tangent space of zPSp1. The Riemannian gradient of a function f:Sp1ÑRat pointzon the sphere is given by gradfpzq:Pzprfpzqq.Reproducible research. The code for implementations of our algorithms can be found online:https://github.com/qingqu06/sparse_deconvolution .For more details of our work on SaSD, we refer interested readers to our project websitehttps://deconvlab.github.io/ .2 S YMMETRY AND GEOMETRY IN SASDIn this section, we begin by describing two intrinsic properties for SaSD. Later, we show how theseplay an important role in the geometry of optimization and the design of efficient methods.An important observation of the SaSD problem is that it admits multiple equivalent solutions. This ispurely due to the cyclic convolution between a0andx0, which exhibits the trivial ambiguity2ya0fx0ps`ra0sq f1s`rx0s;for any nonzero scalar and cyclic shift s`rs. These scale and shift symmetries create severalacceptable candidates for a0andx0, and in the absence of further information we only expect torecovera0andx0up to symmetry. Furthermore, they largely drive the behavior of certain nonconvexoptimization problems formulated for SaSD. Since the success of SaSD requires distinguishingbetween overlapping copies of a0, its difficulty also depends highly on the “similarity” of the a0toits shifts. 
Here we capture this notion using the shift-coherence of a_0,
    μ(a_0) := max_{ℓ ≠ 0} |⟨a_0, s_ℓ[a_0]⟩| ∈ [0, 1].    (2)
Intuitively, the shifts of a_0 become closer together as μ(a_0) increases (Figure 10), making objective landscapes for optimization less favorable for recovering any specific shift of a_0.
2.1 LANDSCAPE GEOMETRY UNDER SHIFT-INCOHERENCE
A natural approach to solving SaSD is to formulate it as a suitable optimization problem. In this paper we will focus on the Bilinear Lasso (BL) problem, which minimizes the squared error between the observation y and its reconstruction a ⊛ x, plus an ℓ1-norm sparsity penalty on x,
    min_{a ∈ S^{p-1}, x ∈ R^m} Ψ_BL(a, x) := (1/2) ||y − a ⊛ x||_2^2 + λ ||x||_1.    (3)
Later in this section, we will see that the kernel length p should be set slightly larger than p_0.
The Bilinear Lasso is a nonconvex optimization problem, as the shift symmetries of SaSD create discrete local minimizers in the objective landscape. The regularization created by problem symmetries in nonconvex inverse problems is a fairly general phenomenon (Sun et al., 2015) and, as Kuo et al. (2019) show, its influence in SaSD extends beyond the neighborhoods of these local minimizers.
[2] We therefore assume w.l.o.g. that ||a_0||_2 = 1 in this paper.
Figure 1: Geometry of φ_ABL near superpositions of shifts of a_0 (Kuo et al., 2019). (a) Regions near single shifts are strongly convex. (b) Regions between two shifts contain a saddle-point, with negative curvature towards each shift and positive curvature orthogonally. (c) The span of three shifts. For each figure, the top shows the function value as height, and the bottom shows the function value over the sphere. (d, e) When μ(a_0) ≈ 0, the Bilinear Lasso φ_BL(a) := min_x Ψ_BL(a, x) and the ABL marginalization φ_ABL(a) are empirically similar in the span of three shifts.
Kuo et al. analyzed an Approximate Bilinear Lasso (ABL) objective [3] Ψ_ABL, which satisfies
    Ψ_ABL(a, x) ≈ Ψ_BL(a, x)  when μ(a) ≈ 0.
This non-practical objective serves as a valid simplification of the Bilinear Lasso for analysis when the true kernel is itself incoherent, i.e. μ(a_0) ≈ 0 (Figures 1d and 1e). Under its marginalization [4]
    φ_ABL(a) := min_{x ∈ R^m} Ψ_ABL(a, x),    (4)
certain crucial properties regarding its curvature can be characterized for generic choices of x. The reason we choose to partially minimize over x instead of a is because (i) problem (4) is convex w.r.t. x, and (ii) the dimension of the subspace of a is significantly smaller than that of x (i.e., p << m), which is where the measure concentrates.
Curvature in the span of a few shifts. Suppose we set p > p_0, which ensures that we can find an a = α_1 s_{ℓ1}[a_0] + α_2 s_{ℓ2}[a_0] ∈ S^{p-1} that lies near the span of two shifts of a_0. If α_1 ≈ 1 (or α_2 ≈ 0) then, under suitable conditions on a_0 and x_0, Kuo et al. (2019) assert that a lies in a strongly convex region of φ_ABL, containing a single minimizer near s_{ℓ1}[a_0] (Figure 1a); the converse is also true. A saddle-point exists nearby when α_1 ≈ α_2 is balanced, characterized by large negative curvature along the two shifts and positive curvature in orthogonal directions (Figure 1b). Interpolating between these two cases, large negative gradients point towards individual shifts.
The behavior of φ_ABL between two shifts of a_0 (strong convexity near single shifts, and saddle-points near balanced points) extends to regions of the sphere spanned by several shifts (Figure 1c); we elaborate on this further in Appendix A.1.
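As a quick illustration of definition (2), here is a small NumPy sketch (ours, for intuition only) that estimates the shift-coherence of a kernel by scanning over cyclic shifts of a zero-padded copy:

```python
import numpy as np

def shift_coherence(a):
    """mu(a) = max_{l != 0} |<a, s_l[a]>|, with shifts taken over a
    zero-padded copy so that shifted kernels do not wrap onto themselves."""
    a = np.asarray(a, dtype=float)
    a = a / np.linalg.norm(a)
    p = len(a)
    pad = np.concatenate([a, np.zeros(p)])
    return max(abs(pad @ np.roll(pad, l)) for l in range(1, 2 * p))

# A smooth (e.g., Gaussian-like) kernel has mu close to 1, while a random
# unit vector has mu on the order of 1/sqrt(p).
```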
This regional landscape guarantees that a_0 can be efficiently recovered up to a signed shift using methods for first- and second-order descent, as soon as a can be brought sufficiently close to the span of a few shifts.
Optimization over the sphere. For both the Bilinear Lasso and ABL, a unit-norm constraint on a is enforced to break the scaling symmetry between a_0 and x_0. Choosing the ℓ2-norm, however, has surprisingly strong implications for optimization. The ABL objective, for example, is piecewise concave whenever a is sufficiently far away from any shift of a_0, but the sphere induces positive curvature near individual shifts to create strong convexity. These two properties combine to ensure recoverability of a_0. In contrast, enforcing ℓ1-norm constraints often leads to spurious minimizers for deconvolution problems (Levin et al., 2011; Benichoux et al., 2013; Zhang et al., 2017).
Initializing near a few shifts. The landscape of φ_ABL makes single shifts of a_0 easy to locate if a is initialized near a span of a few shifts. Fortunately, this is a relatively simple matter in SaSD, as y is itself a sparse superposition of shifts. Therefore, one initialization strategy is to randomly choose a length-p_0 window ỹ_i := [y_i, y_{i+1}, ..., y_{i+p_0-1}]^T from the observation and set
    a^(0) := P_{S^{p-1}}([0_{p_0-1}; ỹ_i; 0_{p_0-1}]).    (5)
This brings a^(0) suitably close to the sum of a few shifts of a_0 (Appendix A.2); any truncation effects are absorbed by padding the ends of ỹ_i, which also sets the length of a to be p = 3 p_0 − 2.
[3] As the intention here is to apply some key intuition from the ABL objective towards the Bilinear Lasso itself, we intentionally omit the concrete form of Ψ_ABL. Readers may refer to Appendix A for more details.
[4] Minimizing φ_ABL is equivalent to minimizing Ψ_ABL, as x can be recovered via convex optimization.
Implications for practical computation. The (regionally) benign optimization landscape of φ_ABL guarantees that efficient recovery is possible for SaSD when a_0 is incoherent. Applications of sparse deconvolution, however, are often motivated by sharpening or resolution tasks (Huang et al., 2009; Candès & Fernandez-Granda, 2014; Campisi & Egiazarian, 2016) where the motif a_0 is smooth and coherent (i.e. μ(a_0) is large). The ABL objective is a poor approximation of the Bilinear Lasso in such cases and fails to yield practical algorithms, so we should optimize the Bilinear Lasso directly. From Figures 1d and 1e, we can see that low-dimensional subspheres spanned by shifts of a_0 are empirically similar when a_0 is incoherent. Although this breaks down in the coherent case, as we illustrate in Appendix A.3, the symmetry-breaking properties of φ_BL remain present. This allows us to apply the geometric intuition discussed here to create an optimization method that, with the help of a number of computational heuristics, performs well for SaSD even in general problem instances.
Algorithm 1 Inertial Alternating Descent Method (iADM)
Input: initializations a^(0) ∈ S^{p-1}, x^(0) ∈ R^m; observation y ∈ R^m; penalty λ ≥ 0; momentum β ∈ [0, 1).
Output: (a^(k), x^(k)), a local minimizer of Ψ_BL.
Initialize a^(1) ← a^(0), x^(1) ← x^(0).
for k = 1, 2, ... until converged do
    Update x with an accelerated proximal gradient step:
        w^(k) ← x^(k) + β (x^(k) − x^(k-1))
        x^(k+1) ← soft_{λ t_k}( w^(k) − t_k ∇_x Ψ(a^(k), w^(k)) ),
    where soft_λ(v) := sign(v) ⊙ max(|v| − λ, 0) denotes the soft-thresholding operator.
    Update a with an accelerated Riemannian gradient step:
        z^(k) ← P_{S^{p-1}}( a^(k) + β ⟨a^(k), a^(k-1)⟩ P_{a^(k-1)}(a^(k) − a^(k-1)) )
        a^(k+1) ← P_{S^{p-1}}( z^(k) − τ_k grad_a Ψ(z^(k), x^(k+1)) ).
end for
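To make Algorithm 1 concrete, the following is a minimal NumPy sketch of a single iADM iteration (our paraphrase: it uses fixed step sizes t and tau rather than the backtracking line search of the paper, and treats a at the full signal length m to keep the convolution code short):

```python
import numpy as np

def cconv(u, v):
    return np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)))

def ccorr(u, v):
    # adjoint of cyclic convolution with u
    return np.real(np.fft.ifft(np.conj(np.fft.fft(u)) * np.fft.fft(v)))

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def iadm_step(a, a_prev, x, x_prev, y, lam, beta=0.3, t=0.1, tau=0.1):
    """One inertial step on Psi_BL(a, x) = 0.5*||y - a*x||^2 + lam*||x||_1."""
    # x update: accelerated proximal gradient
    w = x + beta * (x - x_prev)
    x_new = soft(w - t * ccorr(a, cconv(a, w) - y), t * lam)
    # a update: accelerated Riemannian gradient on the sphere
    z = a + beta * (a - a_prev)
    z /= np.linalg.norm(z)                 # retract the extrapolated point
    g = ccorr(x_new, cconv(z, x_new) - y)  # Euclidean gradient in a
    rg = g - (g @ z) * z                   # project onto the tangent space
    a_new = z - tau * rg
    a_new /= np.linalg.norm(a_new)         # retraction back to the sphere
    return a_new, a, x_new, x
```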
Figure 2: Momentum acceleration. (a) Gradient descent: iterates oscillate on ill-conditioned functions; each marker denotes one iteration. (b) GD with momentum: momentum dampens oscillation and speeds up convergence.
3 DESIGNING A PRACTICAL SASD ALGORITHM
Several algorithms for SaSD-type problems have been developed for specific applications, such as image deblurring (Levin et al., 2011; Briers et al., 2013; Campisi & Egiazarian, 2016), neuroscience (Rey et al., 2015; Friedrich et al., 2017; Song et al., 2018), and image super-resolution (Baker & Kanade, 2002; Shtengel et al., 2009; Yang et al., 2010), or are augmented with additional structure (Wipf & Zhang, 2014; Ling & Strohmer, 2017; Walk et al., 2017).
Here, we instead leverage the theory from Section 2 to build an algorithm for general practical settings. In addition to applying an appropriate initialization scheme (5) and optimizing on the sphere, we minimize the Bilinear Lasso (3) instead of the ABL (4) to more accurately account for interactions between shifts of a_0 in highly shift-coherent settings. Furthermore, we also address the negative effects of large coherence using a number of heuristics, leading to an efficient algorithm for SaSD.
Momentum acceleration. In shift-coherent settings, the Hessian of Ψ_BL becomes ill-conditioned [5] near shifts of a_0, a situation known to cause slow convergence for first-order methods (Nesterov, 2013). A remedy is to add momentum (Polyak, 1964; Beck & Teboulle, 2009) to first-order iterations, for instance, by augmenting gradient descent on some smooth f(z) with step size τ with the term w,
    w^(k) ← z^(k) + β (z^(k) − z^(k-1)),    (6)
    z^(k+1) ← w^(k) − τ ∇f(w^(k)).    (7)
Here, β controls the momentum added [6]. As illustrated in Figure 2, this additional term improves convergence by reducing oscillations of the iterates for ill-conditioned problems. Momentum has been shown to improve convergence for nonconvex and nonsmooth problems (Pock & Sabach, 2016; Jin et al., 2018). Here we provide an inertial alternating descent method (iADM) for finding local minimizers of Ψ_BL (Algorithm 1), which modifies iPALM (Pock & Sabach, 2016) to perform updates on a via retraction on the sphere (Absil et al., 2009) [7].
Algorithm 2 SaS-BD with homotopy continuation
Input: observation y ∈ R^m, motif size p_0; momentum β ∈ [0, 1); initial penalty λ^(1) and final penalty λ*; penalty decrease η ∈ (0, 1); precision factor δ ∈ (0, 1).
Output: solution path {(â^(n), x̂^(n), λ^(n))}_n for SaSD.
Set the number of iterations N ← ⌈ log(λ*/λ^(1)) / log η ⌉.
Initialize â^(0) ∈ R^{3 p_0 − 2} using (5), x̂^(0) ← 0 ∈ R^m.
for n = 1, ..., N do
    Minimize Ψ_{λ^(n)} to precision ε^(n) ∝ δ λ^(n) with Algorithm 1:
        (â^(n), x̂^(n)) ← iADM(â^(n-1), x̂^(n-1); y, λ^(n), β).
    Update λ^(n+1) ← η λ^(n).
end for
Figure 3: Bilinear-lasso objective φ on the sphere S^{p-1}, for p = 3 and varying λ ((a) λ = 5·10^{-1}, (b) λ = 5·10^{-2}, (c) λ = 5·10^{-3}); brighter colors indicate higher values. The function landscape of φ flattens as the sparsity penalty λ decreases from left to right.
Homotopy continuation. It is also possible to improve optimization by modifying the objective Ψ_BL directly through the sparsity penalty λ. Variations of this idea appear in both Zhang et al. (2017) and Kuo et al. (2019), and can also help to mitigate the effects of large shift-coherence.
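Concretely, the outer loop of Algorithm 2 can be sketched as follows (ours; `iadm` stands in for a full Algorithm 1 solver, and the per-stage stopping precision delta*lambda is our reading of the precision factor):

```python
import numpy as np

def sas_bd_homotopy(y, a_init, iadm, lam1, lam_star, eta=0.8, delta=0.1, beta=0.3):
    """Anneal the penalty geometrically, warm-starting each solve."""
    n_stages = int(np.ceil(np.log(lam_star / lam1) / np.log(eta)))
    a, x, lam = a_init.copy(), np.zeros_like(y), lam1
    path = []
    for _ in range(n_stages):
        a, x = iadm(a, x, y, lam, beta=beta, tol=delta * lam)  # stage-n solve
        path.append((a.copy(), x.copy(), lam))
        lam *= eta                                             # decrease penalty
    return path
```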
When solving (3) in the noise-free case, it is clear that larger choices of λ encourage sparser solutions for x. Conversely, smaller choices of λ place local minimizers of the marginal objective φ_BL(a) := min_x Ψ_BL(a, x) closer to signed shifts of a_0 by emphasizing reconstruction quality. When μ(a_0) is large, however, φ_BL becomes ill-conditioned as λ → 0 due to the poor spectral conditioning of a_0, leading to severe flatness near local minimizers and the creation of spurious local minimizers when noise is present (Figure 3). Conversely, larger values of λ limit x to a small set of support patterns and simplify the landscape of φ_BL, at the expense of precision.
It is therefore important, both for fast convergence and accurate recovery, that λ be chosen appropriately. When problem parameters (such as the noise level, p_0, or θ) are not known a priori, a homotopy continuation method (Hale et al., 2008; Wright et al., 2009; Xiao & Zhang, 2013) can be used to obtain a range of solutions for SaSD. Using initialization (5), a rough estimate (â^(1), x̂^(1)) is obtained by solving (3) with iADM using a large choice for λ^(1). This estimate is refined via a solution path {(â^(n), x̂^(n), λ^(n))}_n by gradually decreasing λ^(n). By ensuring that x remains sparse along the solution path, the objective Ψ_BL enjoys restricted strong convexity w.r.t. both a and x throughout optimization (Agarwal et al., 2010). As a result, homotopy achieves linear convergence for SaSD where sublinear convergence is expected otherwise (Figures 4c and 4d). We provide a complete algorithm for SaSD combining the Bilinear Lasso and homotopy continuation in Algorithm 2.
[5] This is because the circulant matrix C_{a_0} is ill-conditioned.
[6] Setting β = 0 removes momentum and reverts to standard gradient descent.
[7] The step sizes t_k and τ_k are obtained by backtracking (Nocedal & Wright, 2006; Pock & Sabach, 2016) to ensure sufficient decrease of Ψ_BL when moving from (a^(k), w^(k)) to (a^(k), x^(k+1)), and vice versa.
4 EXPERIMENTS
4.1 SYNTHETIC EXPERIMENTS
Here we perform SaSD in simulations in both coherent and incoherent settings. Coherent kernels are discretized from the Gaussian window function, a_0 = g_{p_0, 0.5}, where g_{p,σ} := P_{S^{p-1}}( [ exp( −(2i − p − 1)^2 / (2 σ^2 (p − 1)^2) ) ]_{i=1}^{p} ). Incoherent kernels a_0 ~ Unif(S^{p_0 − 1}) are sampled uniformly on the sphere.
Figure 4: Synthetic experiments for the Bilinear Lasso. Success probability (a: incoherent a_0, b: coherent a_0): with x_0 ~_iid BR(θ), the success probability of SaS-BD by solving (3), shown by increasing brightness, is large when the sparsity rate θ is sufficiently small compared to the length of a_0, and vice versa. Success with a fixed sparsity rate is more likely when a_0 is incoherent. Algorithmic convergence (c: incoherent a_0, d: coherent a_0): iterate convergence for iADM with β_k = (k−1)/(k+2) vs. β_k = 0 (ADM); with and without homotopy. Homotopy significantly improves the convergence rate, and momentum improves convergence when a_0 is coherent.
Figure 5: Deconvolution for calcium imaging using Algorithm 2 with iADM and with reweighting (Appendix B). Simulated data: (a) recovered AR2 kernel; (b) estimate of the spike train. Real data: (c) reconstructed calcium signal; (d) estimate of the spike train. Reweighting improves estimation quality in each case.
Recovery performance. We test recovery probability for varying kernel lengths p_0 and sparsity rates θ. To ensure the problem size is sufficiently large, we set m = 100 p_0. For each p_0 and θ, we randomly generate [8] x_0 ~_iid BR(θ) for both coherent and incoherent a_0. We solve ten trials of (3) on clean observation data a_0 ⊛ x_0 using iADM with λ = 10^{-2} √p_0.
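For reference, the synthetic setup above can be reproduced along the following lines (a sketch under our reading of the definitions of g_{p,σ} and BR(θ); the constants follow the text):

```python
import numpy as np

def gaussian_kernel(p, sigma=0.5):
    """Coherent kernel g_{p,sigma}: a normalized discrete Gaussian window."""
    i = np.arange(1, p + 1)
    g = np.exp(-(2 * i - p - 1) ** 2 / (2 * sigma ** 2 * (p - 1) ** 2))
    return g / np.linalg.norm(g)

def bernoulli_rademacher(m, theta, rng):
    """BR(theta): entries are +/-1 w.p. theta/2 each and 0 w.p. 1 - theta."""
    return rng.choice([-1.0, 1.0], size=m) * (rng.random(m) < theta)

rng = np.random.default_rng(0)
p0 = 50
m = 100 * p0
theta = p0 ** (-3 / 4)
a0_coherent = gaussian_kernel(p0)
a0_incoherent = rng.standard_normal(p0)
a0_incoherent /= np.linalg.norm(a0_incoherent)   # ~ Unif(S^{p0-1})
x0 = bernoulli_rademacher(m, theta, rng)
```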
The probability of recovering a signed shift of a_0 is shown in Figure 4. Recovery is likely when sparsity is low compared to the kernel length. The coherent problem setting has a smaller success region compared to the incoherent setting.
[8] BR(θ) denotes the Bernoulli-Rademacher distribution, which takes values ±1 w.p. θ/2 each and zero w.p. 1 − θ.
Momentum and homotopy. Next, we test the performance of Algorithm 1 with momentum (β_k = (k−1)/(k+2); see Pock & Sabach (2016)) and without (β = 0). This is done by minimizing Ψ_BL with initialization (5), using clean observations with p_0 = 10^2, m = 10^4, and θ = p_0^{-3/4} for coherent and incoherent a_0. We also apply homotopy (Algorithm 2) with λ^(1) = max_ℓ |⟨s_ℓ[a^(0)], y⟩| (see Xiao & Zhang (2013)), λ* = 0.3 √p_0, η = 0.8, and δ = 0.1. The final solve of (3) uses precision ε = 10^{-6}, regardless of method. Figures 4c and 4d show the comparison results on coherent problem settings.
Comparison to existing methods. Finally, we compare iADM, and iADM with homotopy, against a number of existing methods for minimizing φ_BL. The first is alternating minimization (Kuo et al., 2019), which at each iteration k minimizes over a^(k) with x^(k) fixed using accelerated (Riemannian) gradient descent with backtracking, and vice versa. The next method is the popular alternating direction method of multipliers (Boyd et al., 2011). Finally, we compare against iPALM (Pock & Sabach, 2016) with backtracking, using the unit-ball constraint on a_0 instead of the unit sphere. For each method, we deconvolve signals with p_0 = 50, m = 100 p_0, and θ = p_0^{-3/4} for both coherent and incoherent a_0. For iADM, iADM with homotopy, and iPALM we set β = 0.3. For homotopy, we set λ^(1) = max_ℓ |⟨s_ℓ[a^(0)], y⟩|, λ* = 0.3 √p_0, and δ = 0.5. Furthermore, we set η = 0.5 or 0.8, and for ADMM we set the slack parameter to 0.7 or 0.5 for incoherent and coherent a_0, respectively. From Figure 6, we can see that ADMM performs better than iADM in the incoherent case, but becomes less reliable in the coherent case. In both cases, iADM with homotopy is the best performer. Finally, we observe roughly equal performance between iPALM and iADM.
Figure 6: Algorithmic comparison. (a) Convergence of various methods minimizing Ψ_BL with incoherent a_0, over FFT operations used (for computing convolutions); the y-axis denotes the log of the angle between a^(k) and the nearest shift of a_0, and each marker denotes five iterations. (b) Convergence for coherent a_0, and (c) with an AR2 kernel for modeling calcium signals.
4.2 IMAGING APPLICATIONS
Here we demonstrate the performance and generality of the proposed method. We begin with calcium fluorescence imaging, a popular modality for studying spiking activity in large neuronal populations (Grienberger & Konnerth, 2012), followed by stochastic optical reconstruction microscopy (STORM) (Rust et al., 2006; Huang et al., 2008; 2010), a superresolution technique for in vivo microscopy [9].
Sparse deconvolution of calcium signals. Neural spike trains are created by action potentials, each inducing a transient response in the calcium concentration of the surrounding environment. The aggregate signal can be modeled as a convolution between the transient a_0 and the spike train x_0. Whilst a_0 and x_0 both encode valuable information, neither is perfectly known ahead of time. Here, we first test our method on synthetic data generated using an AR2 model for a_0, a shift-coherent kernel that is challenging for deconvolution; see e.g. Friedrich et al. (2017). We set x_0 ~_iid Bernoulli(p_0^{-4/5}) ∈ R^{10^4}, with additive noise n ~_iid N(0, 5·10^{-2}).
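An AR2 calcium transient of this kind can be simulated as the impulse response of a second-order autoregressive filter; the following sketch is ours, and the AR coefficients are illustrative placeholders rather than values from the paper:

```python
import numpy as np

def ar2_kernel(p0=100, g1=1.7, g2=-0.712):
    """Impulse response h[n] = g1*h[n-1] + g2*h[n-2]: a smooth, slowly
    decaying (hence shift-coherent) calcium-like transient."""
    h = np.zeros(p0)
    h[0] = 1.0
    h[1] = g1
    for n in range(2, p0):
        h[n] = g1 * h[n - 1] + g2 * h[n - 2]
    return h / np.linalg.norm(h)

rng = np.random.default_rng(0)
m, p0 = 10_000, 100
a0 = ar2_kernel(p0)
x0 = (rng.random(m) < p0 ** (-4 / 5)).astype(float)  # Bernoulli spike train
noise = np.sqrt(5e-2) * rng.standard_normal(m)       # treating 5e-2 as the variance
```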
Figures 5a and 5b demonstrate accurate recovery of a_0 and x_0 in this synthetic setting. Next, we test our method on real data [10]; Figures 5c and 5d demonstrate recovery of spike locations. Although iADM provides decent performance, in the presence of large noise estimation quality can be improved by stronger sparsification methods, such as the reweighting technique of Candes et al. (2008), which we elaborate on in Appendix B. Additionally, Figure 6c shows that the proposed method converges to higher precision in comparison with state-of-the-art methods.
[9] Other superresolution methods for microscopy include photoactivated localization microscopy (PALM) (Betzig et al., 2006), and fluorescence photoactivation localization microscopy (fPALM) (Hess et al., 2006).
[10] Obtained at http://spikefinder.codeneuro.org .
Figure 7: SaSD for STORM imaging. (a) Frame 100, time = 4s, and (b) frame 200, time = 8s: individual frames (left) and predicted point process maps using SaSD (right). (c, d) The original microscopy image and the super-resolved image obtained by our method.
Figure 8: Classification of calcium images. (a) Original calcium image Y; (b) respective kernel estimates A_k; (c) reconstructed images A_k ⊛ X_k (k = 1, 2) with the (left) neuron and (right) dendrite kernels; (d) respective occurrence map estimates X_k.
Super-resolution for fluorescence microscopy. Fluorescence microscopy is often spatially limited by the diffraction of light; its wavelength (several hundred nanometers) is often larger than typical molecular length-scales in cells, preventing a detailed characterization of subcellular structures. The STORM technique overcomes this resolution limit by using photoswitchable fluorescent probes to multiplex the image into multiple frames, each containing a subset of the molecules present (Figure 7). If the location of these molecules can be precisely determined for each frame, synthesizing all deconvolved frames will produce a super-resolution microscopy image with nanoscale resolution. For each image frame, the localization task can be formulated via the SaS model
    Y_t = A_0 ⊛ X_{0,t} + N_t,    (8)
where Y_t is the STORM frame, A_0 the point spread function, X_{0,t} the sparse point sources, N_t the noise, and ⊛ denotes 2D convolution. Here we will solve this task on the single-molecule localization microscopy (SMLM) benchmarking dataset [11] via SaSD, recovering both the PSF A_0 and the point source maps X_{0,t} simultaneously. We apply iADM with reweighting (Appendix B) on frames of size 128×128 from the video sequence "Tubulin"; each pixel is of 100 nm^2 resolution [12], the fluorescence wavelength is 690 nm, and the framerate is f = 25 Hz. Figure 7 shows examples of recovered activation maps, and the aggregated super-resolution image from all 500 frames, accurately predicting the PSF (see Appendix D) and the activation map for each video frame to produce higher resolution microscopy images.
[11] Data can be accessed at http://bigwww.epfl.ch/smlm/datasets/index.html .
[12] Here we solve SaSD on the same 128×128 grid. In practice, the localization problem is solved on a finer grid, so that the resulting resolution can reach 20–30 nm.
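Returning to the frame model (8), a synthetic frame can be generated with a 2D analogue of the earlier 1D code (our sketch; the PSF and source statistics are placeholders):

```python
import numpy as np

def storm_frame(A0, X0t, noise_std=0.0, rng=None):
    """Y_t = A0 (2D cyclic conv) X0_t + N_t, as in the SaS model (8)."""
    m1, m2 = X0t.shape
    p1, p2 = A0.shape
    A_pad = np.zeros((m1, m2))
    A_pad[:p1, :p2] = A0                      # zero-pad the PSF to frame size
    Yt = np.real(np.fft.ifft2(np.fft.fft2(A_pad) * np.fft.fft2(X0t)))
    if noise_std > 0:
        rng = rng or np.random.default_rng()
        Yt += noise_std * rng.standard_normal(Yt.shape)
    return Yt
```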
Localization in calcium images. Our methods are easily extended to handle superpositions of multiple SaS signals. In calcium imaging, this can potentially be used to track the neurons in video sequences, a challenging task due to (non-)rigid motion, overlapping sources, and irregular background noise (Pnevmatikakis et al., 2016; Giovannucci et al., 2019). We consider frames of a video obtained via the two-photon calcium microscopy dataset from the Allen Institute for Brain Science [13], shown in Figure 8. Each frame contains the cross-section of several neurons and dendrites, which have distinct sizes. We model this as the SaS signal Y_t = A_1 ⊛ X_{1,t} + A_2 ⊛ X_{2,t}, where each summand consists of neurons or dendrites exclusively. By extending Algorithm 2 to recover each of the kernels A_k and maps X_k, we can solve this convolutional dictionary learning (SaS-CDL; see Appendix C) problem, which allows us to separate the dendritic and neuronal components from this image for localization of firing activity, etc. As a result, the application of SaS-CDL as a denoising or analysis tool for calcium imaging videos provides a very promising direction for future research.
5 DISCUSSION
Many nonconvex inverse problems, such as SaSD, are strongly regulated by their problem symmetries. Understanding this regularity and when or how it breaks down is important for developing effective algorithms. We illustrate this by combining geometric intuition with practical heuristics, motivated by common challenges in real deconvolution, to produce an efficient and general-purpose method that performs well on data arising from a range of application areas. Our approach, therefore, can serve as a general baseline for studying and developing extensions to SaSD, such as SaS-CDL (Bristow & Lucey, 2014; Chun & Fessler, 2017; Garcia-Cardona & Wohlberg, 2018), Bayesian approaches (Babacan et al., 2008; Wipf & Zhang, 2014), and hierarchical SaS models (Chen et al., 2013).
ACKNOWLEDGMENTS
This work was funded by NSF 1343282, NSF CCF 1527809, and NSF IIS 1546411. QQ also acknowledges support from the Microsoft PhD fellowship and the Moore-Sloan fellowship. We would like to thank Gongguo Tang, Shuyang Ling, Carlos Fernandez-Granda, Ruoxi Sun, and Liam Paninski for fruitful discussions.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #2 ### Review Text This paper analyzes some of the optimization challenges presented by a particular formulation of "short-and-sparse deconvolution" and proposes a new general purpose algorithm to try to alleviate them. The paper is reasonably clearly written and the experimental results seem impressive (as a non-specialist in this area). However the experimental investigation on real data does not compare to a baseline, which seems like an essential part of investigating the proposed method. If this were corrected I would recommend an accept. I would like to make it clear that I have little experience in this area and am not familiar with relevant previous literature, so I was only able to roughly judge novelty of the ideas and how the experimental results might compare with existing approaches. Major comments: For the experimental investigations on real data there was no baseline presented. Are there no task-specific algorithms that have previously been developed for deconvolution of calcium signals, fluorescence microscopy and calcium imaging? At the very least it would be helpful to compare to one of the other general purpose methods used for figure 6. It seemed like a lot of the paper is taken up with a recap of the results presented in Kuo et al.
(2019), and it didn't always seem clear exactly what the relevance of these results was to the present paper. In particular it seemed strange to me to devote so much space in section 2 to the ABL objective given that it is not used (as far as I can tell) for the proposed algorithm. Minor comments: The abstract says "We leverage... sphere constraints, data-driven initialization". Is the effect of these actually investigated experimentally? In the abstract, "This is used to derive a provable algorithm" makes it sound to me like this will be done in the current paper. In the abstract, a reference for the "due to the spectral decay of the kernel a_0" claim would be helpful. In the notation section, the definition of the Riemannian gradient seems a little sloppy mathematically: strictly f needs to be defined on $\reals^p$ (or an open subset of $\reals^p$ containing $S^{p - 1}$) in order for the right side of the gradient expression to be defined. At the start of section 2, it would be helpful to give a one-sentence description of the unifying theme of the section. It might be helpful to explicitly state that "coherence" is related to the strength of temporal / spatial correlations to aid developing the reader's intuition for the meaning of $\mu(a)$. Throughout the paper the problems that multiple equivalent local optima (due to shift symmetry) cause come up repeatedly. What happens if this symmetry is removed straightforwardly by adding a term to the cost function to encourage a particular value of $a$ to be the largest? In section 2.1, for "It analyzes an ABL...", it's not completely clear what "It" refers to (I presume Kuo et al. (2019) from context). Marginalization in my experience refers to a "partial summing" operation, whereas just above (4) it is used to refer to a "partial maximization" operation. This seems non-standard to me, but is this usage standard in this field? I didn't understand the relevance of the sentence "Under its marginalization... smaller dimension p << m." to the present paper. The sentence also seemed vague and difficult to understand if you were not already familiar with this result. It should also have a reference to justify this claim. It wasn't clear to me whether the figure 1 plots were schematic "rough intuition" diagrams intended merely to be suggestive, or provably showing the type of behavior that happens in all cases. Also, what are the axes, generic "parameter space", I guess? I also didn't follow why (a) appears projected on to a plane while (b) and (c) appear projected on to a sphere(?) In section 3, under "momentum acceleration", it would be helpful to justify the claim that "In shift-coherent settings, the Hessian... ill-conditioned...". In figure 4, the axis labels are much too small to read. In figure 5 (b), the red poorer results obscure the green better results. In figure 4 (d), is it fair to say that the main convergence speed improvement for homotopy-iADM is in going faster from "quite near" the optimum to "really near" it, rather than getting "quite near" it in the first place? If so, isn't the latter more often what's relevant in practical applications? In figure 5, it's a little confusing to switch color meanings between (a) and (b). Reweighting seems to have a dramatic positive effect in figure 5. Is it worth investigating its effect in the other experiments as well, for example in figure 4?
In figure 6 (b), is there any reason not to compare against standard black box optimizers like vanilla SGD, ADAM, etc. (possibly with projection to satisfy the sphere constraint if necessary)? Would they perform very badly? Typo: "Whilst $a_0$ nor $x_0$" should be "Whilst $a_0$ and $x_0$". What does the square-boxed convolution operator in (8) mean? ### Review Rating 3: Weak Reject ### Review Confidence <|im_end|>
YmA86Zo-P_t
ICLR.cc/2021/Conference
2021
What they do when in doubt: a study of inductive biases in seq2seq learners
["Eugene Kharitonov", "Rahma Chaabouni"]
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners' preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff's theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
["inductive biases", "description length", "sequence-to-sequence models"]
ABSTRACT
Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners' preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff's theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases.
1 INTRODUCTION
Sequence-to-sequence (seq2seq) learners (Sutskever et al., 2014) demonstrated remarkable performance in machine translation, story generation, and open-domain dialog (Sutskever et al., 2014; Fan et al., 2018; Adiwardana et al., 2020). Yet, these models have been criticized for requiring a tremendous amount of data and being unable to generalize systematically (Dupoux, 2018; Loula et al., 2018; Lake & Baroni, 2017; Bastings et al., 2018). In contrast, humans rely on their inductive biases to generalize from a limited amount of data (Chomsky, 1965; Lake et al., 2019). Due to the centrality of humans' biases in language learning, several works have studied inductive biases of seq2seq models and connected their poor generalization to the lack of the "right" biases (Lake & Baroni, 2017; Lake et al., 2019).
In this work, we focus on studying inductive biases of seq2seq models. We start from an observation that, generally, multiple explanations can be consistent with a limited training set, each leading to different predictions on unseen data. A learner might prefer one type of explanation over another in a systematic way, as a result of its inductive biases (Ritter et al., 2017; Feinman & Lake, 2018).
To illustrate the setup we work in, consider a quiz-like question: if f(3) maps to 6, what does f(4) map to? The "training" example is consistent with the following answers: 6 (f(x) ≡ 6); 7 (f(x) = x + 3); 8 (f(x) = 2x); or any number z, since we can always construct a function such that f(3) = 6 and f(4) = z. By analyzing the learner's output on this new input, we can infer its biases. This example demonstrates how biases of learners are studied through the lens of the poverty of the stimulus principle (Chomsky, 1965; 1980): if nothing in the training data indicates that a learner should generalize in a certain way, but it does nonetheless, then this is due to the biases of the learner. Inspired by the work of Zhang et al. (2019) in the image domain, we take this principle to the extreme and study biases of seq2seq learners in the regime of very few training examples, often as little as one.
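The ambiguity in the quiz above can be spelled out in a few lines of Python (our illustration of the three named candidates, with the constant function written out explicitly):

```python
# Three of the infinitely many rules consistent with the single example f(3) = 6.
candidates = {
    "mem": lambda x: 6,        # constant function: memorize the training output
    "add": lambda x: x + 3,    # addition-based rule
    "mul": lambda x: 2 * x,    # multiplication-based rule
}
assert all(f(3) == 6 for f in candidates.values())   # all fit the training data
print({name: f(4) for name, f in candidates.items()})
# -> {'mem': 6, 'add': 7, 'mul': 8}: the rules diverge on the unseen input
```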
Under this setup, we propose four new synthetic tasks that probe seq2seq learners' preferences toward memorization-, arithmetic-, hierarchical- and compositional-based "reasoning".
[*] Equal contribution.
Next, we connect to the ideas of Solomonoff's theory of induction (Solomonoff, 1964) and Minimal Description Length (Rissanen, 1978; Grunwald, 2004) and propose to use description length, under a learner, as a principled measure of its inductive biases.
Our experimental study [1] shows that the standard seq2seq learners have strikingly different inductive biases. We find that LSTM-based learners can learn non-trivial counting-, multiplication-, and addition-based rules from as little as one example. CNN-based seq2seq learners would prefer linear over hierarchical generalizations, while LSTM-based ones and Transformers would do just the opposite. When investigating compositional reasoning, description length proved to be a sensitive measure. Equipped with it, we found that CNN-, and, to a lesser degree, LSTM-based learners prefer compositional generalization over memorization when provided with enough composite examples. In turn, Transformers show a strong bias toward memorization.
2 SEARCHING FOR INDUCTIVE BIASES
To formalize the way we look for inductive biases of a learner M, we consider a training dataset of input/output pairs, T = {x_i, y_i}_{i=1}^{n}, and a hold-out set of inputs, H = {x_i}_{i=n+1}^{k}. W.l.o.g., we assume that there are two candidate "rules" that explain the training data, but do not coincide on the hold-out data: C_1(x_i) = C_2(x_i) = y_i for 1 ≤ i ≤ n, and ∃i: C_1(x_i) ≠ C_2(x_i) for n+1 ≤ i ≤ k.
To compare preferences of a learner M toward those two rules, we fit the learner on the training data T and then compare its predictions on the hold-out data H to the outputs of the rules. We refer to this approach as "intuitive". Usually, the measures of similarity between the outputs are task-specific: McCoy et al. (2020) used accuracy of the first term, Zhang et al. (2019) used correlation and MSE, and Lake & Baroni (2017) used accuracy calculated on the entire output sequence.
We too start with an accuracy-based measure. We define the fraction of perfect agreement (FPA) between a learner M and a candidate generalization rule C as the fraction of seeds that generalize perfectly in agreement with that rule on the hold-out set H. The larger the FPA of M is w.r.t. C, the more biased M is toward C. However, FPA does not account for imperfect generalization, nor does it allow direct comparison between two candidate rules when both are dominated by a third candidate rule. Hence, below we propose a principled approach based on the description length.
Description Length and Inductive Biases. At the core of the theory of induction (Solomonoff, 1964) is the question of continuation of a finite string, which is very similar to our setup. Indeed, we can easily re-formulate our motivating example as a string continuation problem: "3 → 6; 4 → ". The solution proposed by Solomonoff (1964) is to select the continuation that admits "the simplest explanation" of the entire string, i.e. that is produced by programs of the shortest length (description length).
Our intuition is that when a continuation is "simple" for a learner, then this learner is biased toward it. We consider a learner M to be biased toward C_1 over C_2 if the training set extended according to C_1 has a shorter description length (for M) than the same set extended according to C_2.
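Before formalizing that, note that the accuracy-based FPA measure above reduces to a few lines (our sketch; the `predict` interface is a hypothetical stand-in for greedy decoding with a trained seq2seq model):

```python
def fpa(trained_learners, rule, holdout_inputs):
    """Fraction of random seeds whose learner agrees with `rule` on
    every hold-out input (perfect agreement)."""
    def agrees_perfectly(learner):
        return all(learner.predict(x) == rule(x) for x in holdout_inputs)
    n_agree = sum(agrees_perfectly(m) for m in trained_learners)
    return n_agree / len(trained_learners)
```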
Denoting the description length of a dataset D under the learner M as L_M(D), we hypothesise that if L_M({C_1(x_i)}_{i=1}^{k}) < L_M({C_2(x_i)}_{i=1}^{k}), then M is biased toward C_1.
Calculating Description Length. To find the description length of data under a fixed learner, we use the online (prequential) code (Rissanen, 1978; Grunwald, 2004; Blier & Ollivier, 2018).
The problem of calculating L_M(D), D = {x_i, y_i}_{i=1}^{k}, is considered as a problem of transferring outputs y_i one-by-one, in a compressed form, between two parties, Alice (sender) and Bob (receiver). Alice has the entire dataset {x_i, y_i}, while Bob only has the inputs {x_i}. Before the transmission starts, both parties agree on the initialization of the model M, the order of the inputs {x}, random seeds, and the details of the learning procedure. Outputs {y_i} are sequences of tokens from a vocabulary V. W.l.o.g., we fix some order over {x}. We assume that, given x, the learner M produces a probability distribution over the space of the outputs y, p_M(y|x).
[1] Code used in the experiments can be found at https://github.com/facebookresearch/FIND .
The very first output y_1 can be sent by using not more than c = |y_1| log|V| nats, using a naïve encoding.[2] After that, both Alice and Bob update their learners using the example (x_1, y_1), available to both of them, and get identical instances of M_1.
Further transfer is done iteratively under the invariant that both Alice and Bob start every step t with exactly the same learners M_{t-1} and finish with identical M_t. At step t, Alice uses M_{t-1} to encode the next output y_t. This can be done using −log p_{M_{t-1}}(y_t|x_t) nats (MacKay, 2003). Since Bob has exactly the same model, he can decode the message to obtain y_t and use the new pair (x_t, y_t) to update his model and get M_t. Alice also updates her model, and proceeds to sending the next y_{t+1} (if any), encoding it with the help of M_t. The cumulative number of nats transmitted is:
    L_M(D) = − Σ_{t=2}^{k} log p_{M_{t-1}}(y_t | x_t) + c.    (1)
The obtained code length of Eq. 1 depends on the order in which the y are transmitted and the procedure we use to update M. To account for that, we average out the data order by training with multiple random seeds. Further, for larger datasets, full re-training after adding a new example is impractical and, in such cases, examples can be transmitted in blocks.
If we measure the description length of the training data T shuffled with the hold-out data H, both datasets would have symmetric roles. However, there is a certain asymmetry in the extrapolation problem: we are looking for an extrapolation from T, not vice-versa. To break this symmetry, we always transmit outputs for the entire training data as the first block.
While L_M(D) is seemingly different from the "intuitive" measures introduced before, we can illustrate their connection as follows. Consider a case where we first transmit the training outputs as the first block and all of the hold-out data outputs under C, C(H), as the second block. Then the description length is equal to the cross-entropy of the trained learner on the hold-out data, recovering a process akin to the "intuitive" measuring of inductive biases. With smaller blocks, the description length also captures whether a learner is capable of finding regularity in the data fast, with few data points; hence it also encodes the speed-of-learning ideas for measuring inductive biases (Chaabouni et al., 2019).
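A compact sketch of this online (prequential) code (ours; `make_learner`, `fit`, and `log_prob` form a hypothetical interface, and outputs are transmitted in blocks as described above):

```python
def description_length(make_learner, blocks):
    """Sum of -log p(y|x) (in nats) of each block under a model re-trained,
    from the same initialization, on everything transmitted so far (Eq. 1).
    blocks[0] is the training set; its cost is the additive constant c,
    which the experiments subtract out."""
    total_nats, transmitted = 0.0, list(blocks[0])
    for block in blocks[1:]:
        model = make_learner()       # same init and seeds, agreed in advance
        model.fit(transmitted)
        total_nats += sum(-model.log_prob(y, given=x) for x, y in block)
        transmitted.extend(block)
    return total_nats
```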
Finally, the description length has three more attractive properties when measuring inductive biases: (a) it is domain-independent, i.e. it can be applied, for instance, in the image domain; (b) it allows comparisons across models that account for model complexity; and (c) it enables direct comparison between two candidate rules (as we will show in Section 5).
3 TASKS
We describe four tasks that we use to study inductive biases of seq2seq learners. We select these tasks to cover different aspects of learners' behavior. Each task has a highly ambiguous training set which is consistent with an infinite number of generalizations. We pre-select several candidate rules highlighting biases that are useful for language processing and known to exist in humans, or are otherwise reasonable. Our experiments show that these rules cover many cases of the learners' behavior.
The first two tasks study biases in arithmetic reasoning: Count-or-Memorization quantifies learners' preference for counting vs. simple memorization, and Add-or-Multiply further probes the learners' preferences in arithmetic operations. We believe these tasks are interesting, as counting is needed in some NLP problems like processing linearized parse trees (Weiss et al., 2018). The third task, Hierarchical-or-Linear, contrasts hierarchical and linear inductive reasoning. The hierarchical reasoning bias is believed to be fundamental in learning some syntactic rules in human acquisition of syntax (Chomsky, 1965; 1980). Finally, with the Composition-or-Memorization task, we investigate biases for systematic compositionality, which is central to human capabilities in language. Figure 1 illustrates these four tasks.
[2] As we are interested in comparing candidate generalization rules, the value of the additive constant c is not important, as it is learner- and candidate-independent. In experiments, we subtract it from all measurements.
Figure 1: Illustration of the tasks (one panel per task: Count-or-Memorization, Add-or-Multiply, Hierarchical-or-Linear, and Composition-or-Memorization; each panel shows train examples, a test example, and the candidate generalizations, e.g. training on aaa → bbb and testing on aa, which maps to bb under count and to bbb under mem). After training on the train examples (green blocks), learners are tested on held-out examples (red blocks). In pink blocks are generalizations according to the candidate rules.
By a^k we denote a sequence that contains the token a repeated k times. For training, we represent sequences in a standard way: the tokens are one-hot-encoded separately, and we append a special end-of-sequence token to each sequence. Input and output vocabularies are disjoint.
Count-or-Memorization: In this task, we contrast learners' preferences for counting vs. memorization. We train models to fit a single training example with input a^l and output b^l (i.e., to perform the mapping a^l → b^l) and test them on a^m with m ∈ [l−10, l+10]. If a learner learns the constant function, outputting b^l independently of its inputs, then it follows the mem strategy. On the other hand, if it generalizes to the a^m → b^m mapping, then the learner is biased toward the count strategy.
Add-or-Multiply: This task is akin to the motivating example in Section 1. The single training example represents a mapping of an input string a^l to an output string b^{2l}. As test inputs, we generate a^m for m in the interval [l−3, l+3]. We consider the learned rule to be consistent with mul if, for all m, the input/output pairs are consistent with a^m → b^{2m}. Similarly, if they are consistent with a^m → b^{m+l}, we say that the learner follows the addition rule, add. Finally, the learner can learn a constant mapping a^m → b^{2l} for any m. Again, we call this rule mem.
Hierarchical-or-Linear: For a fixed depth d, we train learners on four training examples x^d y x^d → y, where x, y ∈ {a, b}.[3] Each training example has a nested structure, where d defines its depth. A learner with a hierarchical bias (hierar) would output the middle symbol. We also consider the linear rule (linear), in which the learner outputs the (d+1)-th symbol of its input.
To probe learners' biases, we test them on inputs with different depths, m ∈ [d−2, d+2]. Note that to examine the linear rule (i.e. whether the learner outputs the (d+1)-th symbol of any test input of depth m), we need m ≥ d/2. Similar to the previous tasks, there is no vocabulary sharing between a model's inputs and outputs (the input and output tokens a and b are different).
Composition-or-Memorization: We take inspiration from SCAN (Lake & Baroni, 2017), a benchmark used for studying systematic generalization of seq2seq learners.[4] The input vocabulary has N symbols a_i that are one-to-one mapped into N output symbols b_i (i.e., a_i → b_i). In addition, there is a modifier token thrice: when thrice precedes an input symbol a_i, the corresponding output is repeated three times: thrice a_i → b_i b_i b_i.
We train a learner on all non-compositional examples (a_i → b_i) and M (M < N) compositional examples (thrice a_i → b_i b_i b_i). At test time, we feed the learner the remaining compositional examples (thrice a_i, i > M). If the learner generalizes to the mapping thrice a_i → b_i b_i b_i for i > M, we consider it to be biased toward compositional reasoning (comp). As an alternative generalization, we consider a mapping where all inputs containing a_i are mapped into b_i: thrice a_i → b_i (i > M). We call this generalization memorization (mem).
[3] This mapping then consists of four combinations: a^d b a^d → b; a^d a a^d → a; b^d a b^d → a; and b^d b b^d → b.
[4] In the Appendix, we report a study of inductive biases on the SCAN data.
4 METHODOLOGY
4.1 SEQUENCE-TO-SEQUENCE LEARNERS
We experiment with three standard seq2seq models: LSTM-based seq2seq (LSTM-s2s) (Sutskever et al., 2014), CNN-based seq2seq (CNN-s2s) (Gehring et al., 2017), and Transformer (Vaswani et al., 2017). All share a similar Encoder/Decoder architecture (Sutskever et al., 2014).
LSTM-s2s: Both Encoder and Decoder are implemented as LSTM cells (Hochreiter & Schmidhuber, 1997). The Encoder encodes its inputs incrementally from left to right. We experiment with architectures without (LSTM-s2s no att.) and with (LSTM-s2s att.) an attention mechanism (Bahdanau et al., 2014). For the first three tasks, both Encoder and Decoder are single-layer LSTM cells with a hidden size of 512 and embeddings of dimension 16.
CNN-s2s: Encoder and Decoder are convolutional networks (LeCun et al., 1990), followed by GLU non-linearities (Dauphin et al., 2017) and an attention layer. To represent positions of input tokens, CNN-s2s uses learned positional embeddings. Encoder and Decoder networks have one layer with 512 filters and a kernel width of 3. We set the embedding size to 16.
Transformer: Encoder and Decoder are implemented as a sequence of (self-)attention and feed-forward layers. We use sinusoidal position embeddings. Both Encoder and Decoder contain one transformer layer. The attention modules have 8 heads, feed-forward layers have a dimension of 512, and the embedding is of dimension 16.
In the Appendix, we report experiments where we vary hyperparameters of the learners.
4.2 TRAINING AND EVALUATION
For all tasks, we follow the same training procedure. We train with the Adam optimizer (Kingma & Ba, 2014) for 3000 epochs. The learning rate starts at 10^-5 and increases for the first 1000 warm-up updates until reaching 10^-3. We include all available examples in a single batch. We use teacher forcing (Goodfellow et al., 2016) and set the dropout probability to 0.5. For each learner, we perform training and evaluation 100 times, changing random seeds. At generation, to calculate FPA, we select the next token greedily. We use the model implementations from fairseq (Ott et al., 2019).
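The warm-up schedule described above amounts to the following (our reading; keeping the rate constant at 10^-3 after warm-up is an assumption):

```python
def learning_rate(step, warmup_updates=1000, lr_start=1e-5, lr_max=1e-3):
    """Linear warm-up from 1e-5 to 1e-3 over the first 1000 updates."""
    if step >= warmup_updates:
        return lr_max
    return lr_start + (lr_max - lr_start) * step / warmup_updates
```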
5 EXPERIMENTS

Count-or-Memorization: We investigate here learners' biases toward the count and mem rules. We provide a single example a^l → b^l as the training set, varying l ∈ {10, 20, 30, 40}. We report the learners' performances in Table 1a. We observe that, independently of the length of the training example l, CNN-s2s and Transformer learners inferred perfectly the mem rule with FPA-mem > 0.90 (i.e., more than 90% of the random seeds output b^l for any given input a^m).

(a) Count-or-Memorization
                                FPA ↑              L, nats ↓
                   length l    count     mem      count       mem
  LSTM-s2s no att.    40        1.00     0.00      0.01*     97.51
                      30        0.97     0.00      0.01*     72.67
                      20        0.07     0.00      2.49*     55.67
                      10        0.00     0.00     88.27      48.67*
  LSTM-s2s att.       40        0.99     0.00      7.84*    121.48
                      30        0.96     0.02      1.14*     83.48
                      20        0.70     0.16      5.73*     49.33
                      10        0.00     0.20     98.12       8.46*
  CNN-s2s       {10,20,30,40}   0.00    >0.90   >592.92      <1.31*
  Transformer   {10,20,30,40}   0.00    >0.97   >113.30     <11.14*

(b) Add-or-Multiply
                                  FPA ↑                    L, nats ↓
                   length l    add    mul    mem       add       mul      mem
  LSTM-s2s no att.    20       0.00   0.94   0.00     25.42      0.31*   57.32
                      15       0.07   0.65   0.00     19.24      4.67*   43.65
                      10       0.95   0.01   0.00      0.68*    26.58    25.15
                       5       0.04   0.00   0.00     17.12*    50.83    18.60
  LSTM-s2s att.       20       0.00   0.98   0.00     30.26      1.40*   58.84
                      15       0.15   0.83   0.00     20.18      4.07*   46.36
                      10       0.40   0.28   0.18     13.69*    18.16    26.44
                       5       0.00   0.00   0.97     45.88     77.86     0.01*
  CNN-s2s       {5,10,15,20}   0.00   0.00   1.0    >318.12   >346.19     0.00*
  Transformer   {5,10,15,20}   0.00   0.00   1.0     >38.77    >50.64    <3.50*

(c) Hierarchical-or-Linear with depth d = 4
                            FPA ↑                L, nats ↓
                      hierar   linear       hierar    linear
  LSTM-s2s no att.     0.05     0.00         31.04*    61.84
  LSTM-s2s att.        0.30     0.00         26.32*    57.2
  CNN-s2s              0.00     1.00        202.64      0.00*
  Transformer          0.69     0.00          4.84*    35.04

(d) Composition-or-Memorization
                                 FPA ↑               L, nats ↓
                 M, examples   comp    mem        comp       mem
  LSTM-s2s no att.    36       0.00    0.00       42.65     38.55*
                      24       0.00    0.00      238.54     89.36*
                       6       0.00    0.00      656.93    157.55*
  LSTM-s2s att.       36       0.00    0.00       62.34*    70.92
                      24       0.00    0.00      263.33    157.82*
                       6       0.00    0.00      659.85    164.43*
  CNN-s2s             36       0.75    0.00        1.44*    49.92
                      24       0.13    0.00       13.75*    84.55
                       6       0.00    0.00      131.63     29.66*
  Transformer         36       0.00    0.82      147.83      6.36*
                      24       0.00    0.35      586.22     26.46*
                       6       0.00    0.00     1235.01     53.91*

Table 1: FPA measures the fraction of seeds that generalize according to a particular rule. Description length L is averaged across examples and seeds. The lowest L per row is marked with *; in the original typesetting these are set in bold, with statistically significant differences in L additionally marked (p < 10^-3, paired t-test).
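For completeness, FPA as reported in Table 1 reduces to a short computation; `greedy_decode` is a hypothetical helper returning a model's greedy output as a token list:

```python
def fraction_of_perfect_agreement(models, test_inputs, rule, greedy_decode):
    """Fraction of seeds whose greedy decoding matches `rule` on every
    held-out input. `models` holds one trained model per random seed
    (100 in the paper); `rule` maps a test input to the output that the
    candidate rule predicts."""
    perfect = sum(
        all(greedy_decode(model, x) == rule(x) for x in test_inputs)
        for model in models
    )
    return perfect / len(models)
```

Note that FPA is all-or-nothing per seed, which is why description length remains informative in rows of Table 1 where FPA is zero for every candidate rule.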
However, LSTM-based learners demonstrate a more complex behavior. With l = 10, both learners (with and without attention) exhibit a preference for mem. Indeed, while these learners rarely generalize perfectly to any of the hypotheses (0.0 FPA (no att.), 0.2/0.0 FPA for mem/count (att.)), they have significantly lower L-mem. As l increases, LSTM-based learners become more biased toward count. Surprisingly, for l ≥ 30, most learner instances show sufficiently strong inductive biases to infer perfectly the non-trivial count hypothesis. With l = 40, 99% of random seeds of LSTM-s2s att. and all (100%) of LSTM-s2s no att. seeds generalized perfectly to count.

Further, we see that while L shows similar trends, it has a higher sensitivity. For example, while both LSTM-based learners have a similar FPA with l = 40, L demonstrates that LSTM-s2s no att. has a stronger count bias.

Add-or-Multiply: In this task, we examine learners' generalization after training on the single example a^l → b^{2l}. We vary l ∈ {5, 10, 15, 20}. In Table 1b, we report FPA and L for the three generalization hypotheses, add, mul, and mem. We observe, similarly to the previous task, that CNN-s2s and Transformer learners always converge perfectly to memorization.

In contrast, LSTM-based learners show non-trivial generalizations. Examining first LSTM-s2s att., when l = 5, we note that mem has a high FPA and an L considerably lower than the others. This is consistent with the learner's behavior in the Count-or-Memorization task. As we increase l, more interesting behavior emerges. First, L-mem increases as l increases. Second, mul-type preference increases with l. Finally, L-add presents a U-shaped function of l. That is, for the medium example length l, the majority of learners switch to approximating the add rule (for l = 10). However, when l grows further, a considerable fraction of these learners start to follow a mul-type rule. Strikingly, 98% of LSTM-s2s att. seeds generalized perfectly to the non-trivial mul rule. As for LSTM-s2s no att., we do not observe a strong bias to infer any of the rules when l = 5. However, when increasing l, the LSTM-s2s no att. behaves similarly to LSTM-s2s att.: at first it has a preference for add (FPA-add = 0.95 for l = 10), then for mul (e.g., FPA-mul = 0.94 for l = 20).

Hierarchical-or-Linear: We look now at learners' preference for either hierar or linear generalizations. The architectures we use were only able to consistently learn the training examples with the depth d not higher than 4. Hence, in this experiment, we set d to 4.

We report in Table 1c the learners' FPA and L. We observe that CNN-s2s exhibits a strikingly different bias compared to all other learners, with a perfect agreement with the linear rule. In contrast, Transformer learners show a clear preference for hierar with a high FPA (0.69) and a low L (1.21). Surprisingly, this preference increases with the embedding size, and Transformers with embedding size 64 admit an FPA-hierar of 1.00 (see Appendix for more details). LSTM-s2s att. learners also demonstrate a similar preference for hierar, with an FPA of 0.30 and an L-hierar considerably lower than L-linear. Finally, while only 5% of LSTM-s2s no att. instances generalized to perfect hierar (and none to linear), L confirms their preference for the hierar hypothesis.

Composition-or-Memorization: In this task, we set the number of primitives N = 40 and vary the number of compositional examples M ∈ {6, 24, 36} seen during training. Results are reported in Table 1d. First, we observe that FPA is only informative for CNN-s2s when trained with a large M. Indeed, for M = 6, CNN-s2s does not infer any of the candidate rules. However, according to description length L, we note a significant preference for mem over comp. The more compositional examples CNN-based learners see at training, the more biased they become toward comp. The remaining learners have zero FPA for both candidate rules.
However, according to description length, LSTM-based learners have preferences similar to CNN-s2s, although weaker. That is, they show a preference for mem for low M that declines in favor of comp as M increases. In contrast, Transformers show a strong bias for mem with all tested M.

Overall, across all the above experiments, we see that seq2seq learners demonstrate strikingly different biases. In many cases, these biases lead to non-trivial generalizations when facing ambiguity in the training data. This spans tasks that probe for memorization, arithmetic, hierarchical, and compositional reasoning. We found that a single example is sufficient for LSTM-based learners to learn counting, addition, and multiplication. Moreover, within the same task, they can switch from one explanation to another depending on the training example length, with Add-or-Multiply being the task where this switch happens twice. In contrast, CNN-s2s and Transformers show a strong bias toward memorization. Furthermore, all learners except for CNN-s2s demonstrate a strong bias toward the hierarchical behavior. In the task of compositional generalization, CNN-s2s shows a strong bias toward compositional reasoning that appears after a few compositional training examples. On the other hand, Transformers show a preference for memorization over compositional generalization.

We see that the conclusions derived from comparing the description length of the candidate rules are in agreement with the results under accuracy-based metrics, but provide a more nuanced picture.

Robustness to hyper-parameters: We observe that learners' biases depend, in many cases, on the length/number of the input examples. In Appendix, we examine the impact of other hyper-parameters. In particular, we study the impact of (1) learners' architecture size, by varying the number of layers, hidden and embedding sizes, and (2) dropout probability. Our results show that in some cases a learner's architecture size can influence the strength of inductive biases, but rarely modifies them: among the 136 tested settings, we observe only 3 cases of switching the preferences. We also found, in line with Arpit et al. (2017), that large dropout probabilities can prevent mem-type generalization.

Finally, in Appendix we show that a variant of Transformer learners, namely the joint source-target self-attention learner (He et al., 2018; Fonollosa et al., 2019), displays the same preferences as the standard Transformer learners. This variant resembles the "decoder-only" architecture used in language modeling (Radford et al., 2019; Brown et al., 2020). This result demonstrates that our tasks and bias measures could be applied for studying inductive biases of language model architectures.

6 RELATED WORK

Dessì & Baroni (2019) found that, unlike LSTM-s2s learners, CNN-s2s can perform compositional generalization on SCAN. Our experiments indicate that this only happens when enough compositional examples are provided in the training. Moreover, in such a case, attention-enabled LSTM-s2s also start to prefer compositional generalization over memorization.

McCoy et al. (2020) studied inductive biases of recurrent neural architectures in two synthetic tasks, English question formation and tense inflection. They found that only tree-based architectures show a robust preference for hierarchical reasoning, in contrast to LSTM-s2s learners that generalized linearly.
Our experiments on the hyperparameter robustness, reported in Appendix, indicate that the preferences over linear/hierarchical reasoning are strongly affected by the dropout probability, with learners shifting to linear behavior at low probabilities. As McCoy et al. (2020) experimented with a low dropout probability of 0.1, we believe this explains the misalignment of the conclusions.

Overall, our study shows that inductive biases are more complicated than they seemed in these prior works, and a more careful analysis is crucial. We believe that our extremely controlled setup with very few confounds is a good addition to those studies.

Another line of research investigates learners' capabilities theoretically, that is, the classes of hypotheses that a learner can discover (Siegelmann & Sontag, 1992; Suzgun et al., 2019; Merrill et al., 2020). For example, Weiss et al. (2018) demonstrated that LSTM cells can count. In turn, we demonstrate that LSTM-s2s learners are not only capable of, but also biased toward, arithmetic behavior.

7 DISCUSSION AND CONCLUSION

In this work, we studied inductive biases of standard seq2seq learners: Transformer-, LSTM-, and CNN-based. To do so, we introduced four new tasks, which allowed us to cover an interesting spectrum of behaviors useful for language learning. In particular, we considered arithmetic, hierarchical, and compositional "reasoning". Next, we connected the problem of finding and measuring inductive biases to Solomonoff's theory of induction and proposed to use a dataset's description length under a learner as a tool for sensitive measurement of inductive biases.

In our experiments, we found that the seq2seq learners have strikingly different inductive biases and some of them generalize non-trivially when facing ambiguity. For instance, a single training example is sufficient for LSTM-based learners to learn perfectly how to count, to add, and to multiply by a constant. Transformers and, to a lesser degree, LSTM-s2s demonstrated preferences for the hierarchical bias, a bias that has been argued to govern children's acquisition of syntax. Interestingly, such biases arose with no explicit wiring for them. Our results thus support Elman et al. (1998)'s theory, which states that humans' inductive biases can arise from low-level architectural constraints in the brain with no need for an explicit encoding of linguistic structure. However, how the brain, or, more generally, a learner is wired to admit a specific inductive bias is still an important open question.

Across our experiments, we also observed that description length is consistent with "intuitive" measurements of inductive biases and, at the same time, turned out to be more sensitive. This also indicates that, in the presence of ambiguity in the training data, a learner is more likely to follow the alternative with the shorter description length (i.e., the simplest one) when applied on unseen data, showing consistency with the prescriptions of the theory of induction (Solomonoff, 1964). A similar simplicity preference is argued to play a role in human language acquisition (Perfors et al., 2011).

Our work provides simple tools to investigate learners' biases. We first show that FPA is an intuitive measure to study biases when provided with simple tasks. Second, we present description length as a robust measure to fairly compare learners' biases. This metric accounts for learners' size and their ease of learning, as opposed to accuracy-based metrics.
Besides, it is a model- and task-agnostic measure that succeeds in unveiling learners' biases even when presented with more complex tasks with spurious correlations.

Our findings can guide architecture selection in low-data regimes, where inductive biases might have a higher influence on a model's generalization performance. Large sparse datasets can also benefit from predictable behavior in few-shot scenarios akin to what we consider.

Finally, our results demonstrate that relatively large deep learning models can generalize non-trivially from as little as one example – as long as the task is aligned with their inductive biases. We believe this should reinforce interest in future work on injecting useful inductive biases in our learners and, we hope, our findings and setup can provide a fertile ground for such work.

ACKNOWLEDGEMENTS

The authors are grateful to Marco Baroni, Emmanuel Dupoux, Emmanuel Chemla and participants of the EViL seminar for their feedback on our work.
BaEccdWKLSt
Why is this important?
4: Ok but not good enough - rejection
The paper introduces a series of new datasets and tasks and investigates the inductive bias of seq2seq models. For each dataset, (at least) two hidden hypotheses could explain the data. The tasks investigated are count-or-memorization, add-or-multiply, hierarchical-or-linear, and composition-or-memorization. The datasets consist of one sample with varying length (amount of input/output pairs), which is used to compute the description length. The models are evaluated on accuracy and a log-loss. An LSTM, a CNN, and a Transformer are all trained on these datasets. Multiple seeds are used for significance testing. The results suggest that the LSTM is better at counting when provided with a longer sequence, while the CNN and Transformer memorize the data but are better at handling hierarchical data.

What this paper excels at is a thorough description of the experimental section and its approach of designing datasets specifically for testing inductive bias, which I have not previously seen and must thus assume is a novel contribution. However, I lean to reject this paper for the following reasons:
- The paper tries to fit into the emerging field of formal language datasets for evaluating the capacity of deep learning methods. However, it does not build on any of the recent papers in the field. A new dataset, especially a synthetic one, should be well motivated by shortcomings of previous datasets and tasks in the field. I find the motivation and related works section lacking in that sense.
- We already know that LSTMs can count https://arxiv.org/abs/1906.03648 and that Transformers cannot https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00306
- It is not clear to me why these results are important. Who will benefit from this analysis? Why are the current a^n b^n c^n and Dyck languages that formal language people work with insufficient?
- LSTMs do not have the capacity to perform multiplication. I don't know why your results suggest otherwise. You would need to incorporate special units that can handle multiplications in the LSTM, such as https://arxiv.org/abs/2001.05016

Update: First, I'd like to thank the authors for their detailed rebuttal. I have upgraded my recommendation from 3 to 4. As mentioned in my review, I believe this approach is interesting. However, as pointed out by reviewer 2, the experimental section lacks completeness. I think this experimental section would be suitable for a workshop, but not a conference. I am excited to hear you are considering using this method as an inspiration for real problems. I'd like to see the paper resubmitted when you have obtained such results.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title What they do when in doubt: a study of inductive biases in seq2seq learners ### Paper Abstract Sequence-to-sequence (seq2seq) learners are widely used, but we still have only limited knowledge about what inductive biases shape the way they generalize. We address that by investigating how popular seq2seq learners generalize in tasks that have high ambiguity in the training data. We use four new tasks to study learners' preferences for memorization, arithmetic, hierarchical, and compositional reasoning. Further, we connect to Solomonoff's theory of induction and propose to use description length as a principled and sensitive measure of inductive biases. In our experimental study, we find that LSTM-based learners can learn to perform counting, addition, and multiplication by a constant from a single training example. Furthermore, Transformer and LSTM-based learners show a bias toward the hierarchical induction over the linear one, while CNN-based learners prefer the opposite. The latter also show a bias toward a compositional generalization over memorization. Finally, across all our experiments, description length proved to be a sensitive measure of inductive biases. ### Paper Keywords ["inductive biases", "description length", "sequence-to-sequence models"] ### Paper Content ABSTRACTSequence-to-sequence (seq2seq) learners are widely used, but we still have onlylimited knowledge about what inductive biases shape the way they generalize. Weaddress that by investigating how popular seq2seq learners generalize in tasksthat have high ambiguity in the training data. We use four new tasks to studylearners’ preferences for memorization, arithmetic, hierarchical, and compositionalreasoning. Further, we connect to Solomonoff’s theory of induction and propose touse description length as a principled and sensitive measure of inductive biases.In our experimental study, we find that LSTM-based learners can learn to performcounting, addition, and multiplication by a constant from a single training exam-ple. Furthermore, Transformer and LSTM-based learners show a bias toward thehierarchical induction over the linear one, while CNN-based learners prefer theopposite. The latter also show a bias toward a compositional generalization overmemorization. Finally, across all our experiments, description length proved to bea sensitive measure of inductive biases.1 I NTRODUCTIONSequence-to-sequence (seq2seq) learners (Sutskever et al., 2014) demonstrated remarkable perfor-mance in machine translation, story generation, and open-domain dialog (Sutskever et al., 2014;Fan et al., 2018; Adiwardana et al., 2020). Yet, these models have been criticized for requiring atremendous amount of data and being unable to generalize systematically (Dupoux, 2018; Loulaet al., 2018; Lake & Baroni, 2017; Bastings et al., 2018). In contrast, humans rely on their inductivebiases to generalize from a limited amount of data (Chomsky, 1965; Lake et al., 2019). Due to thecentrality of humans’ biases in language learning, several works have studied inductive biases ofseq2seq models and connected their poor generalization to the lack of the “right” biases (Lake &Baroni, 2017; Lake et al., 2019).In this work, we focus on studying inductive biases of seq2seq models. 
We start from an observationthat, generally, multiple explanations can be consistent with a limited training set, each leading todifferent predictions on unseen data. A learner might prefer one type of explanations over another ina systematic way, as a result of its inductive biases (Ritter et al., 2017; Feinman & Lake, 2018).To illustrate the setup we work in, consider a quiz-like question: if f(3)maps to 6, what doesf(4)map to? The “training” example is consistent with the following answers: 6 (f(x)6);7(f(x) =x+ 3) ;8 (f(x) = 2x); any number z, since we always can construct a function such thatf(3) = 6 andf(4) = z. By analyzing the learner’s output on this new input, we can infer its biases.This example demonstrates how biases of learners are studied through the lenses of the poverty ofthe stimulus principle (Chomsky, 1965; 1980): if nothing in the training data indicates that a learnershould generalize in a certain way, but it does nonetheless, then this is due to the biases of the learner.Inspired by the work of Zhang et al. (2019) in the image domain, we take this principle to the extremeand study biases of seq2seq learners in the regime of very few training examples, often as little asone. Under this setup, we propose four new synthetic tasks that probe seq2seq learners’ preferencesto memorization-, arithmetic-, hierarchical- and compositional-based “reasoning”.Equal contribution.1Published as a conference paper at ICLR 2021Next, we connect to the ideas of Solomonoff’s theory of induction (Solomonoff, 1964) and MinimalDescription Length (Rissanen, 1978; Grunwald, 2004) and propose to use description length, under alearner, as a principled measure of its inductive biases.Our experimental study1shows that the standard seq2seq learners have strikingly different inductivebiases. We find that LSTM-based learners can learn non-trivial counting-, multiplication-, andaddition-based rules from as little as one example. CNN-based seq2seq learners would prefer linearover hierarchical generalizations, while LSTM-based ones and Transformers would do just theopposite. When investigating the compositional reasoning, description length proved to be a sensitivemeasure. Equipped with it, we found that CNN-, and, to a lesser degree, LSTM-based learners prefercompositional generalization over memorization when provided with enough composite examples. Inturn, Transformers show a strong bias toward memorization.2 S EARCHING FOR INDUCTIVE BIASESTo formalize the way we look for inductive biases of a learner M, we consider a training datasetof input/output pairs, T=fxi; yigni=1, and a hold-out set of inputs, H=fxigki=n+1. W.l.o.g, weassume that there are two candidate “rules” that explain the training data, but do not coincide on thehold-out data:C1(xi) =C2(xi) =yi;1inand9i:C1(xi)6=C2(xi); n+ 1ik.To compare preferences of a learner Mtoward those two rules, we fit the learner on the training dataTand then compare its predictions on the hold-out data Hto the outputs of the rules. We refer to thisapproach as “intuitive”. Usually, the measures of similarity between the outputs are task-specific:McCoy et al. (2020) used accuracy of the first term, Zhang et al. (2019) used correlation and MSE,and Lake & Baroni (2017) used accuracy calculated on the entire output sequence.We too start with an accuracy-based measure. 
We define the fraction of perfect agreement (FPA)between a learnerMand a candidate generalization rule Cas the fraction of seeds that generalizeperfectly in agreement with that rule on the hold-out set H. Larger FPA ofMis w.r.t.C, morebiasedMis towardC. However, FPA does not account for imperfect generalization nor allows directcomparison between two candidate rules when both are dominated by a third candidate rule. Hence,below we propose a principled approach based on the description length.Description Length and Inductive Biases At the core of the theory of induction (Solomonoff, 1964)is the question of continuation of a finite string that is very similar to our setup. Indeed, we can easilyre-formulate our motivating example as a string continuation problem: “ 3!6; 4!”. The solutionproposed by Solomonoff (1964) is to select the continuation that admits “the simplest explanation” ofthe entire string, i.e. that is produced by programs of the shortest length (description length).Our intuition is that when a continuation is “simple” for a learner, then this learner is biased towardit. We consider a learner Mto be biased toward C1overC2if the training set and its extensionaccording toC1has a shorter description length (for M) compared to that of C2. Denoting descriptionlength of a dataset Dunder the learnerMasLM(D), we hypothesise that if LM(fC1(xi)gki=1)<LM(fC2(xi)gki=1), thenMis biased towardC1.Calculating Description Length To find the description length of data under a fixed learner, we usethe online (prequential) code (Rissanen, 1978; Grunwald, 2004; Blier & Ollivier, 2018).The problem of calculating LM(D),D=fxi; yigki=1is considered as a problem of transferringoutputs yione-by-one, in a compressed form, between two parties, Alice (sender) and Bob (receiver).Alice has the entire dataset fxi; yig, while Bob only has inputs fxig. Before the transmission starts,both parties agreed on the initialization of the model M, order of the inputs fxg, random seeds, andthe details of the learning procedure. Outputs fyigare sequences of tokens from a vocabulary V.W.l.o.g. we fix some order over fxg. We assume that, given x, the learnerMproduces a probabilitydistribution over the space of the outputs y,pM(yjx).1Code used in the experiments can be found at https://github.com/facebookresearch/FIND .2Published as a conference paper at ICLR 2021The very first output y1can be sent by using not more than c=jy1jlogjVjnats, using a naïveencoding.2After that, both Alice and Bob update their learners using the example (x1; y1), availableto both of them, and get identical instances of M1.Further transfer is done iteratively under the invariant that both Alice and Bob start every step twithexactly the same learners Mt1and finish with identical Mt. At step tAlice would useMt1toencode the next output yt. This can be done usinglogpMt1(ytjxt)nats (MacKay, 2003). SinceBob has exactly the same model, he can decode the message to obtain ytand use the new pair (xt; yt)to update his model and get Mt. Alice also updates her model, and proceeds to sending the next yt+1(if any), encoding it with the help of Mt. The cumulative number of nats transmitted is:LM(D) =kXt=2logpMt1(ytjxt) +c: (1)The obtained code length of Eq. 1 depends on the order in which yare transmitted and the procedurewe use to updateM:To account for that, we average out the data order by training with multiplerandom seeds. 
Further, for larger datasets, full re-training after adding a new example is impracticaland, in such cases, examples can be transmitted in blocks.If we measure the description length of the training data Tshuffled with the hold-out data H, bothdatasets would have symmetric roles. However, there is a certain asymmetry in the extrapolationproblem: we are looking for an extrapolation from T, not vice-versa. To break this symmetry, wealways transmit outputs for the entire training data as the first block.While LM(D)is seemingly different from the “intuitive” measures introduced before, we canillustrate their connection as follows. Consider a case where we first transmit the training outputsas the first block and allof the hold-out data outputs under C,C(H), as the second block. Then thedescription length is equal to cross-entropy of the trained learner on the hold-out data, recovering aprocess akin to the “intuitive” measuring of inductive biases. With smaller blocks, the descriptionlength also catches whether a learner is capable of finding regularity in the data fast, with few datapoints; hence it also encodes the speed-of-learning ideas for measuring inductive biases (Chaabouniet al., 2019).Finally, the description length has three more attractive properties when measuring inductive biases:(a) it is domain-independent, i.e. can be applied for instance in the image domain, (b) it allowscomparisons across models that account for model complexity, and (c) it enables direct comparisonbetween two candidate rules (as we will show in the Section 5).3 T ASKSWe describe four tasks that we use to study inductive biases of seq2seq learners. We select thosetasks to cover different aspects of learners’ behavior. Each task has a highly ambiguous trainingset which is consistent with infinite number of generalizations. We pre-select several candidaterules highlighting biases that are useful for language processing and known to exist in humans, orare otherwise reasonable. Our experiments show that these rules cover many cases of the learners’behavior.The first two tasks study biases in arithmetic reasoning: Count-or-Memorization quantifies learners’preference for counting vs. a simple memorization and Add-or-Multiply further probes the learners’preferences in arithmetic operations. We believe these tasks are interesting, as counting is neededin some NLP problems like processing linearized parse trees (Weiss et al., 2018). The third task,Hierarchical-or-Linear , contrasts hierarchical and linear inductive reasoning. The hierarchicalreasoning bias is believed to be fundamental in learning some syntactical rules in human acquisitionof syntax (Chomsky, 1965; 1980). Finally, with the Composition-or-Memorization task, we investigatebiases for systematic compositionality, which are central for human capabilities in language. Figure 1illustrates these four tasks.2As we are interested in comparing candidate generalization rules, the value of the additive constant cis notimportant, as it is learner- and candidate-independent. 
In experiments, we subtract it from all measurements.3Published as a conference paper at ICLR 2021aaa!bbb<latexit sha1_base64="2aP1qHe7cCIfZJZf006ToyPFAlM=">AAAB/HicbVBNSwMxEM3Wr1q/Vnv0EiyCp7IrBfVW9OKxgv2AdimTNNuGZrNLklWWUv+KFw+KePWHePPfmLZ70NYHA4/3ZpiZRxLBtfG8b6ewtr6xuVXcLu3s7u0fuIdHLR2nirImjUWsOgQ0E1yypuFGsE6iGEREsDYZ38z89gNTmsfy3mQJCyIYSh5yCsZKfbcMALin+HBkQKn4ERNC+m7Fq3pz4FXi56SCcjT67ldvENM0YtJQAVp3fS8xwQSU4VSwaamXapYAHcOQdS2VEDEdTObHT/GpVQY4jJUtafBc/T0xgUjrLCK2MwIz0sveTPzP66YmvAwmXCapYZIuFoWpwCbGsyTwgCtGjcgsAaq4vRXTESigxuZVsiH4yy+vktZ51a9Vr+5qlfp1HkcRHaMTdIZ8dIHq6BY1UBNRlKFn9IrenCfnxXl3PhatBSefKaM/cD5/AGgPlKM=</latexit>test exampleaa!?<latexit sha1_base64="zC7LQM7YZYuoBe5SPPwHFlyMIso=">AAACB3icbVDLSsNAFJ34rPUVdSnIYBFchUQq6sqiG5cV7AOaUG6mk2bo5MHMRCmhOzf+ihsXirj1F9z5N07bLLT1wIXDOfdy7z1+yplUtv1tLCwuLa+sltbK6xubW9vmzm5TJpkgtEESnoi2D5JyFtOGYorTdiooRD6nLX9wPfZb91RIlsR3aphSL4J+zAJGQGmpax4AYFewfqhAiOQBu6FMgdDcsk9JNLrsmhXbsifA88QpSAUVqHfNL7eXkCyisSIcpOw4dqq8HIRihNNR2c0k1QsG0KcdTWOIqPTyyR8jfKSVHg4SoStWeKL+nsghknIY+bozAhXKWW8s/ud1MhWcezmL00zRmEwXBRnHKsHjUHCPCUoUH2oCRDB9KyYhCCBKR1fWITizL8+T5onlVK2L22qldlXEUUL76BAdIwedoRq6QXXUQAQ9omf0it6MJ+PFeDc+pq0LRjGzh/7A+PwBSTCY9A==</latexit>bbb<latexit sha1_base64="OTSroX6SlAdC/21642o1h79Iv24=">AAAB6nicbVBNSwMxEJ2tX7V+VT16CRbBU9mVgnorevFY0X5Au5Qkzbah2WRJskJZ+hO8eFDEq7/Im//GtN2Dtj4YeLw3w8w8kghurO9/e4W19Y3NreJ2aWd3b/+gfHjUMirVlDWpEkp3CDZMcMmallvBOolmOCaCtcn4dua3n5g2XMlHO0lYGOOh5BGn2DrpgRDSL1f8qj8HWiVBTiqQo9Evf/UGiqYxk5YKbEw38BMbZlhbTgWblnqpYQmmYzxkXUcljpkJs/mpU3TmlAGKlHYlLZqrvycyHBsziYnrjLEdmWVvJv7ndVMbXYUZl0lqmaSLRVEqkFVo9jcacM2oFRNHMNXc3YroCGtMrUun5EIIll9eJa2LalCrXt/XKvWbPI4inMApnEMAl1CHO2hAEygM4Rle4c0T3ov37n0sWgtePnMMf+B9/gA8343K</latexit>bb<latexit sha1_base64="fz5x21CLleIxoxCf0rdZAKFE5ls=">AAAB6XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoN6KXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj25nffkKleSwfzSRBP6JDyUPOqLHSQxD0yxW36s5BVomXkwrkaPTLX71BzNIIpWGCat313MT4GVWGM4HTUi/VmFA2pkPsWipphNrP5pdOyZlVBiSMlS1pyFz9PZHRSOtJFNjOiJqRXvZm4n9eNzXhlZ9xmaQGJVssClNBTExmb5MBV8iMmFhCmeL2VsJGVFFmbDglG4K3/PIqaV1UvVr1+r5Wqd/kcRThBE7hHDy4hDrcQQOawCCEZ3iFN2fsvDjvzseiteDkM8fwB87nD4KejV4=</latexit>(count)(mem)(linear)(hierar)aabaa!b<latexit sha1_base64="/YFAJknAn2U8M0GNie5D5uxl4Tw=">AAAB/HicbVBNSwMxEM3Wr1q/Vnv0EiyCp7IrBfVW9OKxgv2AdimzabYNzSZLklWWUv+KFw+KePWHePPfmLZ70NYHA4/3ZpiZFyacaeN5305hbX1jc6u4XdrZ3ds/cA+PWlqmitAmkVyqTgiaciZo0zDDaSdRFOKQ03Y4vpn57QeqNJPi3mQJDWIYChYxAsZKfbcMEALgnmLDkQGl5CMO+27Fq3pz4FXi56SCcjT67ldvIEkaU2EIB627vpeYYALKMMLptNRLNU2AjGFIu5YKiKkOJvPjp/jUKgMcSWVLGDxXf09MINY6i0PbGYMZ6WVvJv7ndVMTXQYTJpLUUEEWi6KUYyPxLAk8YIoSwzNLgChmb8VkBAqIsXmVbAj+8surpHVe9WvVq7tapX6dx1FEx+gEnSEfXaA6ukUN1EQEZegZvaI358l5cd6dj0VrwclnyugPnM8fZqiUog==</latexit>aba!?<latexit sha1_base64="4YyIpL4gWY0aiFz+Nr1J+DsIshc=">AAACCXicbVC7SgNBFJ2Nrxhfq5Y2g0GwWnYlolYGbSwjmAckS7g7mSRDZmaXmVklLGlt/BUbC0Vs/QM7/8bJo9DEAxcO59zLvfdECWfa+P63k1taXlldy68XNja3tnfc3b2ajlNFaJXEPFaNCDTlTNKqYYbTRqIoiIjTejS4Hvv1e6o0i+WdGSY0FNCTrMsIGCu1XQwR4JZivb4BpeIH3OrrBAjNfO9UiBG+bLtF3/MnwIskmJEimqHSdr9anZikgkpDOGjdDPzEhBkowwino0Ir1dRuGECPNi2VIKgOs8knI3xklQ7uxsqWNHii/p7IQGg9FJHtFGD6et4bi/95zdR0z8OMySQ1VJLpom7KsYnxOBbcYYoSw4eWAFHM3opJHxQQY8Mr2BCC+ZcXSe3EC0rexW2pWL6axZFHB+gQHaMAnaEyukEVVEUEPaJn9IrenCfnxXl3PqatOWc2s4/+wPn8AX7kmZQ=</latexit>a<latexit 
sha1_base64="bihfQSgohUCaiZLH8E/K/puHk0I=">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoN6KXjy2YGuhDWWznbRrN5uwuxFK6C/w4kERr/4kb/4bt20O2vpg4PHeDDPzgkRwbVz32ymsrW9sbhW3Szu7e/sH5cOjto5TxbDFYhGrTkA1Ci6xZbgR2EkU0igQ+BCMb2f+wxMqzWN5byYJ+hEdSh5yRo2VmrRfrrhVdw6ySrycVCBHo1/+6g1ilkYoDRNU667nJsbPqDKcCZyWeqnGhLIxHWLXUkkj1H42P3RKzqwyIGGsbElD5urviYxGWk+iwHZG1Iz0sjcT//O6qQmv/IzLJDUo2WJRmApiYjL7mgy4QmbExBLKFLe3EjaiijJjsynZELzll1dJ+6Lq1arXzVqlfpPHUYQTOIVz8OAS6nAHDWgBA4RneIU359F5cd6dj0VrwclnjuEPnM8fxz2M8Q==</latexit>b<latexit sha1_base64="9Y1xYV9hakFu8xV0cEnjrRUGU9E=">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoN6KXjy2YGuhDWWznbRrN5uwuxFK6C/w4kERr/4kb/4bt20O2vpg4PHeDDPzgkRwbVz32ymsrW9sbhW3Szu7e/sH5cOjto5TxbDFYhGrTkA1Ci6xZbgR2EkU0igQ+BCMb2f+wxMqzWN5byYJ+hEdSh5yRo2VmkG/XHGr7hxklXg5qUCORr/81RvELI1QGiao1l3PTYyfUWU4Ezgt9VKNCWVjOsSupZJGqP1sfuiUnFllQMJY2ZKGzNXfExmNtJ5Ege2MqBnpZW8m/ud1UxNe+RmXSWpQssWiMBXExGT2NRlwhcyIiSWUKW5vJWxEFWXGZlOyIXjLL6+S9kXVq1Wvm7VK/SaPowgncArn4MEl1OEOGtACBgjP8ApvzqPz4rw7H4vWgpPPHMMfOJ8/yMGM8g==</latexit>(comp)(mem)b<latexit sha1_base64="9Y1xYV9hakFu8xV0cEnjrRUGU9E=">AAAB6HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoN6KXjy2YGuhDWWznbRrN5uwuxFK6C/w4kERr/4kb/4bt20O2vpg4PHeDDPzgkRwbVz32ymsrW9sbhW3Szu7e/sH5cOjto5TxbDFYhGrTkA1Ci6xZbgR2EkU0igQ+BCMb2f+wxMqzWN5byYJ+hEdSh5yRo2VmkG/XHGr7hxklXg5qUCORr/81RvELI1QGiao1l3PTYyfUWU4Ezgt9VKNCWVjOsSupZJGqP1sfuiUnFllQMJY2ZKGzNXfExmNtJ5Ege2MqBnpZW8m/ud1UxNe+RmXSWpQssWiMBXExGT2NRlwhcyIiSWUKW5vJWxEFWXGZlOyIXjLL6+S9kXVq1Wvm7VK/SaPowgncArn4MEl1OEOGtACBgjP8ApvzqPz4rw7H4vWgpPPHMMfOJ8/yMGM8g==</latexit>a!a<latexit sha1_base64="jnXbe6btQmwXgibKAN2VhSH1B5w=">AAAB+HicbVBNSwMxEM3Wr1o/uurRS7AInsquFNRb0YvHCvYD2qXMptk2NJssSVapS3+JFw+KePWnePPfmLZ70NYHA4/3ZpiZFyacaeN5305hbX1jc6u4XdrZ3dsvuweHLS1TRWiTSC5VJwRNORO0aZjhtJMoCnHIaTsc38z89gNVmklxbyYJDWIYChYxAsZKfbcMuKfYcGRAKfmIoe9WvKo3B14lfk4qKEej7371BpKkMRWGcNC663uJCTJQhhFOp6VeqmkCZAxD2rVUQEx1kM0Pn+JTqwxwJJUtYfBc/T2RQaz1JA5tZwxmpJe9mfif101NdBlkTCSpoYIsFkUpx0biWQp4wBQlhk8sAaKYvRWTESggxmZVsiH4yy+vktZ51a9Vr+5qlfp1HkcRHaMTdIZ8dIHq6BY1UBMRlKJn9IrenCfnxXl3PhatBSefOUJ/4Hz+AGnJkvQ=</latexit>b!b<latexit sha1_base64="y/odGoA3GMubqSuaA2hvPYvW6gw=">AAAB+HicbVBNSwMxEM3Wr1o/uurRS7AInsquFNRb0YvHCvYD2qVk02wbmk2WZFapS3+JFw+KePWnePPfmLZ70NYHA4/3ZpiZFyaCG/C8b6ewtr6xuVXcLu3s7u2X3YPDllGppqxJlVC6ExLDBJesCRwE6ySakTgUrB2Ob2Z++4Fpw5W8h0nCgpgMJY84JWClvlsOcU/z4QiI1uoRh3234lW9OfAq8XNSQTkafferN1A0jZkEKogxXd9LIMiIBk4Fm5Z6qWEJoWMyZF1LJYmZCbL54VN8apUBjpS2JQHP1d8TGYmNmcSh7YwJjMyyNxP/87opRJdBxmWSApN0sShKBQaFZyngAdeMgphYQqjm9lZMR0QTCjarkg3BX355lbTOq36tenVXq9Sv8ziK6BidoDPkowtUR7eogZqIohQ9o1f05jw5L86787FoLTj5zBH6A+fzB2zfkvY=</latexit>bbb<latexit sha1_base64="OTSroX6SlAdC/21642o1h79Iv24=">AAAB6nicbVBNSwMxEJ2tX7V+VT16CRbBU9mVgnorevFY0X5Au5Qkzbah2WRJskJZ+hO8eFDEq7/Im//GtN2Dtj4YeLw3w8w8kghurO9/e4W19Y3NreJ2aWd3b/+gfHjUMirVlDWpEkp3CDZMcMmallvBOolmOCaCtcn4dua3n5g2XMlHO0lYGOOh5BGn2DrpgRDSL1f8qj8HWiVBTiqQo9Evf/UGiqYxk5YKbEw38BMbZlhbTgWblnqpYQmmYzxkXUcljpkJs/mpU3TmlAGKlHYlLZqrvycyHBsziYnrjLEdmWVvJv7ndVMbXYUZl0lqmaSLRVEqkFVo9jcacM2oFRNHMNXc3YroCGtMrUun5EIIll9eJa2LalCrXt/XKvWbPI4inMApnEMAl1CHO2hAEygM4Rle4c0T3ov37n0sWgtePnMMf+B9/gA8343K</latexit>Count-or-Memorization<latexit 
sha1_base64="dl6D95tdDzd51TNAft5O5SVEngo=">AAACEXicbVC7SgNBFJ31GeMrammzGAQbw64E1C6YxkaIYB6QhDA7uUkG57HM3BXjkl+w8VdsLBSxtbPzb9xsUmjigWEO55zLzD1BKLhFz/t2FhaXlldWM2vZ9Y3Nre3czm7N6sgwqDIttGkE1ILgCqrIUUAjNEBlIKAe3JbHfv0OjOVa3eAwhLakfcV7nFFMpE7uqIVwj0bG6c0xLutI4bE2x1cgteEPaW40ynZyea/gpXDniT8leTJFpZP7anU1iyQoZIJa2/S9ENsxNciZgFG2FVkIKbulfWgmVFEJth2nG43cw0Tpuj1tkqPQTdXfEzGV1g5lkCQlxYGd9cbif14zwt5ZO+YqjBAUmzzUi4SL2h3X43a5AYZimBDKDE/+6rIBNZRhUuK4BH925XlSOyn4xcL5dTFfupjWkSH75IAcEZ+ckhK5JBVSJYw8kmfySt6cJ+fFeXc+JtEFZzqzR/7A+fwBMTKegQ==</latexit>Composition-or-Memorization<latexit sha1_base64="AU3h0Xycm3+2dblNcl19kE3syhg=">AAACF3icbVC7SgNBFJ31GeMrammzGASbhF0JqF0wjY0QwTwgCWF2cpMMmdlZZu6Kcclf2PgrNhaK2Grn37i7SaGJB4Y5nHMul3u8QHCDjvNtLS2vrK6tZzaym1vbO7u5vf26UaFmUGNKKN30qAHBfaghRwHNQAOVnoCGN6okfuMOtOHKv8VxAB1JBz7vc0Yxlrq5YhvhHrWM0p9jVFEyUIYnbkHpwjVIpflDmp5Mst1c3ik6KexF4s5InsxQ7ea+2j3FQgk+MkGNablOgJ2IauRMwCTbDg0ElI3oAFox9akE04nSuyb2caz07L7S8fPRTtXfExGVxoylFyclxaGZ9xLxP68VYv+8E3E/CBF8Nl3UD4WNyk5KsntcA0MxjgllOi6D2WxINWUYV5mU4M6fvEjqp0W3VLy4KeXLl7M6MuSQHJET4pIzUiZXpEpqhJFH8kxeyZv1ZL1Y79bHNLpkzWYOyB9Ynz9YnKFI</latexit>Hierarchical-or-Linear<latexit sha1_base64="z+j/XVE4K8yKa/1sdHAWANdl7aI=">AAACEnicbVC7TgJBFJ31ifhatbTZSEy0gOwaErUj2lBYYCKPBAiZHS4wYWZ3M3PXSDZ8g42/YmOhMbZWdv6Ns0Ch4Ekmc3LOvXfmHj8SXKPrfltLyyura+uZjezm1vbOrr23X9NhrBhUWShC1fCpBsEDqCJHAY1IAZW+gLo/vE79+j0ozcPgDkcRtCXtB7zHGUUjdezTFsIDKplMbo5JmYOiig1MhciHKn9jBlM1Hmc7ds4tuBM4i8SbkRyZodKxv1rdkMUSAmSCat303AjbCVXImYBxthVriCgb0j40DQ2oBN1OJiuNnWOjdJ1eqMwJ0JmovzsSKrUeSd9USooDPe+l4n9eM8beRTvhQRQjBGz6UC8WDoZOmo/T5QoYipEhlClu/uqwAVWUoUkxDcGbX3mR1M4KXrFweVvMla5mcWTIITkiJ8Qj56REyqRCqoSRR/JMXsmb9WS9WO/Wx7R0yZr1HJA/sD5/AILSnp4=</latexit>Add-or-Multiply<latexit sha1_base64="Qjee5BVEq+vKsWYHnWbnBQlIpYA=">AAACC3icbVDLSsNAFJ34rPUVdekmtAhuWhIpqLuqGzdCBfuANpTJZNIOnUnCzI0YQvdu/BU3LhRx6w+4829M0iy09cDlHs65l5l7nJAzBab5rS0tr6yurZc2yptb2zu7+t5+RwWRJLRNAh7InoMV5cynbWDAaS+UFAuH064zucr87j2VigX+HcQhtQUe+cxjBEMqDfXKAOgDSJHknUFy4bq1QNZuIg4s5PF0Wh7qVbNu5jAWiVWQKirQGupfAzcgkaA+EI6V6ltmCHaCJTDC6bQ8iBQNMZngEe2n1MeCKjvJb5kaR6niGl4g0/LByNXfGwkWSsXCSScFhrGa9zLxP68fgXdmJ8wPI6A+mT3kRdyAwMiCMVwmKQEepwQTydK/GmSMJSaQxpeFYM2fvEg6J3WrUT+/bVSbl0UcJXSIKugYWegUNdE1aqE2IugRPaNX9KY9aS/au/YxG13Sip0D9Afa5w/83pun</latexit>(add)(mem)train exampletrain exampletest exampletrain exampletest exampletrain exampletest exampleaa!bbbb<latexit sha1_base64="a4iN37u8WAC9fwuLOaFSCqDHf/w=">AAAB/HicbVBNSwMxEM3Wr1q/Vnv0EiyCp7IrBfVW9OKxgv2AdimzabYNzSZLklWWUv+KFw+KePWHePPfmLZ70NYHA4/3ZpiZFyacaeN5305hbX1jc6u4XdrZ3ds/cA+PWlqmitAmkVyqTgiaciZo0zDDaSdRFOKQ03Y4vpn57QeqNJPi3mQJDWIYChYxAsZKfbcMgHuKDUcGlJKPOLTouxWv6s2BV4mfkwrK0ei7X72BJGlMhSEctO76XmKCCSjDCKfTUi/VNAEyhiHtWiogpjqYzI+f4lOrDHAklS1h8Fz9PTGBWOssDm1nDGakl72Z+J/XTU10GUyYSFJDBVksilKOjcSzJPCAKUoMzywBopi9FZMRKCDG5lWyIfjLL6+S1nnVr1Wv7mqV+nUeRxEdoxN0hnx0geroFjVQExGUoWf0it6cJ+fFeXc+Fq0FJ58poz9wPn8AaY6UpA==</latexit>a!?<latexit sha1_base64="MdWGCyoEtQk4iPFMQu2DAITPSsU=">AAACBnicbVDLSsNAFJ34rPUVdSnCYBFchUQq6sqiG5cV7AOaUCbTSTN0ZhJmJkoJXbnxV9y4UMSt3+DOv3HaZqGtBy4czrmXe+8JU0aVdt1va2FxaXlltbRWXt/Y3Nq2d3abKskkJg2csES2Q6QIo4I0NNWMtFNJEA8ZaYWD67HfuidS0UTc6WFKAo76gkYUI22krn2AoC9pP9ZIyuQB+rFKESa5455iPrrs2hXXcSeA88QrSAUUqHftL7+X4IwToTFDSnU8N9VBjqSmmJFR2c8UMQsGqE86hgrEiQryyRsjeGSUHowSaUpoOFF/T+SIKzXkoenkSMdq1huL/3mdTEfnQU5Fmmki8HRRlDGoEzjOBPaoJFizoSEIS2puhThGEmFtkiubELzZl+dJ88Txqs7FbbVSuyriKIF9cAiOgQfOQA3cgDpoAAwewTN4BW/Wk/VivVsf09YFq5jZA39gff4AheWYiQ==</latexit>(mul)bbbb<latexit 
sha1_base64="Skuwggtv7n9iurfu6b6dnycPw7E=">AAAB63icbVBNSwMxEJ3Ur1q/qh69BIvgqeyKoN6KXjxWsB/QLiWbZtvQJLskWaEs/QtePCji1T/kzX9jtt2Dtj4YeLw3w8y8MBHcWM/7RqW19Y3NrfJ2ZWd3b/+genjUNnGqKWvRWMS6GxLDBFesZbkVrJtoRmQoWCec3OV+54lpw2P1aKcJCyQZKR5xSmwuhQ6Das2re3PgVeIXpAYFmoPqV38Y01QyZakgxvR8L7FBRrTlVLBZpZ8alhA6ISPWc1QRyUyQzW+d4TOnDHEUa1fK4rn6eyIj0pipDF2nJHZslr1c/M/rpTa6DjKuktQyRReLolRgG+P8cTzkmlErpo4Qqrm7FdMx0YRaF0/FheAvv7xK2hd1/7J+83BZa9wWcZThBE7hHHy4ggbcQxNaQGEMz/AKb0iiF/SOPhatJVTMHMMfoM8f93WONg==</latexit>bb<latexit sha1_base64="fz5x21CLleIxoxCf0rdZAKFE5ls=">AAAB6XicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkoN6KXjxWsR/QhrLZTtqlm03Y3Qgl9B948aCIV/+RN/+N2zYHbX0w8Hhvhpl5QSK4Nq777RTW1jc2t4rbpZ3dvf2D8uFRS8epYthksYhVJ6AaBZfYNNwI7CQKaRQIbAfj25nffkKleSwfzSRBP6JDyUPOqLHSQxD0yxW36s5BVomXkwrkaPTLX71BzNIIpWGCat313MT4GVWGM4HTUi/VmFA2pkPsWipphNrP5pdOyZlVBiSMlS1pyFz9PZHRSOtJFNjOiJqRXvZm4n9eNzXhlZ9xmaQGJVssClNBTExmb5MBV8iMmFhCmeL2VsJGVFFmbDglG4K3/PIqaV1UvVr1+r5Wqd/kcRThBE7hHDy4hDrcQQOawCCEZ3iFN2fsvDjvzseiteDkM8fwB87nD4KejV4=</latexit>bbb<latexit sha1_base64="OTSroX6SlAdC/21642o1h79Iv24=">AAAB6nicbVBNSwMxEJ2tX7V+VT16CRbBU9mVgnorevFY0X5Au5Qkzbah2WRJskJZ+hO8eFDEq7/Im//GtN2Dtj4YeLw3w8w8kghurO9/e4W19Y3NreJ2aWd3b/+gfHjUMirVlDWpEkp3CDZMcMmallvBOolmOCaCtcn4dua3n5g2XMlHO0lYGOOh5BGn2DrpgRDSL1f8qj8HWiVBTiqQo9Evf/UGiqYxk5YKbEw38BMbZlhbTgWblnqpYQmmYzxkXUcljpkJs/mpU3TmlAGKlHYlLZqrvycyHBsziYnrjLEdmWVvJv7ndVMbXYUZl0lqmaSLRVEqkFVo9jcacM2oFRNHMNXc3YroCGtMrUun5EIIll9eJa2LalCrXt/XKvWbPI4inMApnEMAl1CHO2hAEygM4Rle4c0T3ov37n0sWgtePnMMf+B9/gA8343K</latexit>thrice a!aaa<latexit sha1_base64="qnFGHcCsleGiOXNIMFDZUtpqE2A=">AAACHnicbVDJSgNBEO1xN25Rj14ag+DFYcYF9SZ68ahgVEhCqOlUMo3dM0N3jRKGfIkXf8WLB0UET/o3dmIEt9c0vHpVRVW9KFPSUhC8eyOjY+MTk1PTpZnZufmF8uLSuU1zI7AqUpWaywgsKplglSQpvMwMgo4UXkRXR/38xTUaK9PkjLoZNjR0EtmWAshJzfIOxUYKrMc2A4FF4O9o3YOvcCPwt1xcN7ITExiT3nDov2a5EvjBAPwvCYekwoY4aZZf661U5BoTEgqsrYVBRo0CDEmhsFeq5xbdxCvoYM3RBDTaRjE4r8fXnNLi7dS4nxAfqN87CtDWdnXkKjVQbH/n+uJ/uVpO7b1GIZMsJ0zE56B2rjilvO8Vb0mDglTXERBGul25iMGAIOdoyZkQ/j75Lznf9MNtf/90u3JwOLRjiq2wVbbOQrbLDtgxO2FVJtgtu2eP7Mm78x68Z+/ls3TEG/Yssx/w3j4AA+KiZg==</latexit>thrice b!?<latexit sha1_base64="gbmyyfTl9bxXUBY/8c/CKSNznlI=">AAACKXicbVBNS8NAEN34WetX1KOXxSJ4sSRSUU8WvXhUsCq0pWy202ZxNwm7E6WE/h0v/hUvCop69Y+4TSP49WDhzXszzM4LEikMet6bMzE5NT0zW5orzy8sLi27K6sXJk41hwaPZayvAmZAiggaKFDCVaKBqUDCZXB9PPIvb0AbEUfnOEigrVg/Ej3BGVqp49Yx1IJDKzQJ45B51V2lhsFXuT2uW1r0Q2Rax7f0y/Jzhx523IpX9XLQv8QvSIUUOO24T61uzFMFEXLJjGn6XoLtjGkUXMKw3EoN2A3XrA9NSyOmwLSz/NIh3bRKl/ZibV+ENFe/T2RMGTNQge1UDEPz2xuJ/3nNFHv77UxESYoQ8fGiXiopxnQUG+0KDRzlwBLGtbB/pTxkmnG04ZZtCP7vk/+Si52qX6senNUq9aMijhJZJxtki/hkj9TJCTklDcLJHXkgz+TFuXcenVfnfdw64RQza+QHnI9P8/inCQ==</latexit>Figure 1: Illustration of the tasks. After training on the train examples (green blocks), learnersare tested on held-out examples (red blocks). In pink blocks are generalizations according to thecandidate rules.Byakwe denote a sequence that contains token arepeated ktimes. For training, we representsequences in a standard way: the tokens are one-hot-encoded separately, and we append a specialend-of-sequence token to each sequence. Input and output vocabularies are disjoint.Count-or-Memorization : In this task, we contrast learners’ preferences for counting vs. memoriza-tion. We train models to fit a single training example with input aland output bl(i.e., to perform themapping al!bl) and test it on amwithm2[l10; l+10] . If a learner learns the constant function,outputting blindependently of its inputs, then it follows the mem strategy. 
On the other hand, if itgeneralizes to the am!bmmapping, then the learner is biased toward the count strategy.Add-or-Multiply : This task is akin to the motivating example in Section 1. The single trainingexample represents a mapping of an input string alto an output string b2l. As test inputs, we generateamformin the interval [l3; l+ 3]. We consider the learned rule to be consistent with mul if forallm, the input/output pairs are consistent with am!b2m. Similarly, if they are consistent witham!bm+l, we say that the learner follows the addition rule, add. Finally, the learner can learn aconstant mapping am!b2lfor any m. Again, we call this rule mem.Hierarchical-or-Linear : For a fixed depth d, we train learners on four training examples xdyxd!ywhere x; y2fa; bg.3Each training example has a nested structure, where ddefines its depth. Alearner with a hierarchical bias ( hierar ), would output the middle symbol. We also consider thelinear rule ( linear ) in which the learner outputs the (d+1)thsymbol of its input.To probe learners’ biases, we test them on inputs with different depths m2[d2; d+ 2]. Note thatto examine the linear rule (i.e. if the learner outputs the (d+1)thsymbol of any test input of depthm), we need md2. Similar to the previous tasks, there is no vocabulary sharing between a model’sinputs and outputs (input and output tokens aandbare different).Composition-or-Memorization : We take inspiration from SCAN (Lake & Baroni, 2017), a bench-mark used for studying systematic generalization of seq2seq learners.4The input vocabulary has Nsymbols aithat are one-to-one mapped into Noutput symbols bi(i.e.,ai!bi). In addition, thereis a modifier token thrice : when thrice precedes an input symbol ai, the corresponding output isrepeated three times: thrice a i!bibibi.We train a learner on all non-compositional examples ( ai!bi) and M(M < N ) compositionalexamples ( thrice a i!bibibi). At test time, we feed the learner with the remaining compositionalexamples ( thrice a i,i > M ). If the learner generalizes to the mapping thrice a i!bibibifori > M ,we consider it to be biased toward a compositional reasoning ( comp ). As an alternative generalization,3This mapping consist then on four combinations adbad!b;adaad!a;bdabd!aandbdbbd!b.4In Appendix, we report a study of inductive biases on the SCAN data.4Published as a conference paper at ICLR 2021we consider a mapping where all inputs containing aiare mapped into bi:thrice a i!bi(i > M ).We call this generalization memorization ( mem).4 M ETHODOLOGY4.1 S EQUENCE -TO-SEQUENCE LEARNERSWe experiment with three standard seq2seq models: LSTM-based seq2seq (LSTM-s2s) (Sutskeveret al., 2014), CNN-based seq2seq (CNN-s2s) (Gehring et al., 2017), and Transformer (Vaswani et al.,2017). All share a similar Encoder/Decoder architecture (Sutskever et al., 2014).LSTM-s2s Both Encoder and Decoder are implemented as LSTM cells (Hochreiter & Schmidhuber,1997). Encoder encodes its inputs incrementally from left to right. We experiment with architectureswithout (LSTM-s2s no att.) and with (LSTM-s2s att.) an attention mechanism (Bahdanau et al.,2014). For the first three tasks, both Encoder and Decoder are single-layer LSTM cells with hiddensize of 512 and embedding of dimension 16.CNN-s2s Encoder and Decoder are convolutional networks (LeCun et al., 1990), followed by GLUnon-linearities (Dauphin et al., 2017) and an attention layer. To represent positions of input tokens,CNN-s2s uses learned positional embeddings. 
Encoder and Decoder networks have one layer with512 filters and a kernel width of 3. We set the embedding size to 16.Transformer Encoder and Decoder are implemented as a sequence of (self-)attention and feed-forward layers. We use sinusoidal position embeddings. Both Encoder and Decoder contain onetransformer layer. The attention modules have 8 heads, feed-forward layers have dimension of 512and the embedding is of dimension 16.In Appendix we report experiments where we vary hyperparameters of the learners.4.2 T RAINING AND EVALUATIONFor all tasks, we follow the same training procedure. We train with Adam optimizer (Kingma & Ba,2014) for 3000 epochs. The learning rate starts at 105and increases for the first 1000 warm-upupdates till reaching 103. We include all available examples in a single batch. We use teacherforcing (Goodfellow et al., 2016) and set the dropout probability to 0.5. For each learner, we performtraining and evaluation 100 times, changing random seeds. At generation, to calculate FPA, we selectthe next token greedily. We use the model implementations from fairseq (Ott et al., 2019).As discussed in Section 2, when calculating L, we use the training examples as the first transmittedblock at t= 1. InCount-or-Memorization andAdd-or-Multiply this block contains one example,inHierarchical-or-Linear it has 4 examples, and in Composition-or-Memorization it has N+Mexamples. Next, we transmit examples obtained from the candidate rules in a randomized order,by blocks of size 1, 1, 1, and 4 for Count-or-Memorization ,Add-or-Multiply ,Composition-or-Memorization , and Hierarchical-or-Linear respectively. At each step, the learner is re-trained fromthe same initialization, using the procedure and hyper-parameters as discussed above.Our training setup is typical for seq2seq training. It does not include any additional pressure towardsdescription length minimization. Description length is solely used as a measure of inductive biases.5 E XPERIMENTSCount-or-Memorization We investigate here learners’ biases toward count andmem rules. Weprovide a single example al!blas the training set, varying l2f10;20;30;40g. We report thelearners’ performances in Table 1a. We observe that, independently of the length of the trainingexample l, CNN-s2s and Transformer learners inferred perfectly the mem rule with FPA- mem > 0.90(i.e. more than 90% of the random seeds output blfor any given input am).However, LSTM-based learners demonstrate a more complex behavior. With l= 10 , both learners(with and without attention) exhibit a preference for mem. Indeed, while these learners rarelygeneralize perfectly to any of the hypothesis ( 0:0FPA (no att.), 0:2/0:0FPA for mem/count (att.)),they have significantly lower L-mem. Aslincreases, LSTM-based learners become more biased5Published as a conference paper at ICLR 2021FPA" L, nats#length l count mem count memLSTM-s2s no att. 40 1 :00 0 :00 0.0197:5130 0 :97 0 :00 0.0172:6720 0 :07 0 :00 2.4955:6710 0 :00 0 :00 88 :27 48.67LSTM-s2s att. 40 0 :99 0 :00 7.84121:4830 0 :96 0 :02 1.1483:4820 0 :70 0 :16 5.7349:3310 0 :00 0 :20 98 :12 8.46CNN-s2s f10;20;30;40g 0:00 >0:90>592:92 <1.31Transformer f10;20;30;40g 0:00 >0:97>113:30 <11.14(a) Count-or-MemorizationFPA" L, nats#length l add mul mem add mul memLSTM-s2s no att. 20 0 :00 0 :94 0 :00 25 :42 0.3157:3215 0 :07 0 :65 0 :00 19 :24 4.6743:6510 0 :95 0 :01 0 :00 0.6826:58 25 :155 0 :04 0 :00 0 :00 17.12 50:83 18 :60LSTM-s2s att. 
20 0 :00 0 :98 0 :00 30 :26 1.4058:8415 0 :15 0 :83 0 :00 20 :18 4.0746:3610 0 :40 0 :28 0 :18 13.69 18:16 26 :445 0 :00 0 :00 0 :97 45 :88 77 :86 0.01CNN-s2s f5;10;15;20g0:00 0 :00 1 :0>318:12>346:19 0.00Transformer f5;10;15;20g0:00 0 :00 1 :0>38:77 >50:64 <3.50(b) Add-or-MultiplyFPA" L, nats#hierar linear hierar linearLSTM-s2s no att. 0:05 0 :00 31.0461:84LSTM-s2s att. 0:30 0 :00 26.3257:2CNN-s2s 0:00 1 :00 202 :64 0.00Transformer 0:69 0 :00 4.8435:04(c) Hierarchical-or-Linear with depth d= 4FPA" L, nats#M, examples comp mem comp memLSTM-s2s no att. 36 0 :00 0 :00 42 :65 38.5524 0 :00 0 :00 238 :54 89.366 0 :00 0 :00 656 :93 157.55LSTM-s2s att. 36 0 :00 0 :00 62.3470:9224 0 :00 0 :00 263 :33 157.826 0 :00 0 :00 659 :85 164.43CNN-s2s 36 0 :75 0 :00 1.4449:9224 0 :13 0 :00 13.7584:556 0 :00 0 :00 131 :63 29.66Transformer 36 0 :00 0 :82 147 :83 6.3624 0 :00 0 :35 586 :22 26.466 0 :00 0 :00 1235 :01 53.91(d) Composition-or-MemorizationTable 1: FPA measures the fraction of seeds that generalize according to a particular rule. Descriptionlength Lis averaged across examples and seeds. The lowest Lare in bold anddenotes stat. sig.difference in L(p <103, paired t-test).6Published as a conference paper at ICLR 2021toward count . Surprisingly, for l30, most learner instances show sufficiently strong inductivebiases to infer perfectly the non-trivial count hypothesis. With l= 40 ,99% of random seeds ofLSTM-s2s att. and all(100%) of LSTM-s2s no att. seeds generalized perfectly to count .Further, we see that if Lshows similar trends, it has a higher sensitivity. For example, while bothLSTM-based learners have a similar FPA with l= 40 ,Ldemonstrates that LSTM-s2s no att. has astronger count bias.Add-or-Multiply In this task, we examine learners’ generalization after training on the singleexample al!b2l. We vary l2f5;10;15;20g. In Table 1b, we report FPA and Lfor the threegeneralization hypotheses, add,mul, and mem. We observe, similarly to the previous task, thatCNN-s2s and Transformer learners always converge perfectly to memorization.In contrast, LSTM-based learners show non-trivial generalizations. Examining first LSTM-s2s att.,when l=5, we note that mem has a high FPA and an Lconsiderably lower than others. This isconsistent with the learner’s behavior in the Count-or-Memorization task. As we increase l, moreinteresting behavior emerges. First, L-mem decreases as lincreases. Second, mul-type preferenceincreases with l. Finally, L-add presents a U-shaped function of l. That is, for the medium examplelength l, the majority of learners switch to approximating the add rule (for l= 10 ). However, whenlgrows further, a considerable fraction of these learners start to follow a mul-type rule. Strikingly,98% of LSTM-s2s att. seeds generalized perfectly to the non-trivial mul rule. As for LSTM-s2s noatt., we do not observe a strong bias to infer any of the rules when l=5. However, when increasingl, the LSTM-s2s no att. behaves similarly to LSTM-s2s att.: at first it has a preference for add(FPA- add=0.95, for l=10) then for mul (e.g. FPA- mul=0.94, for l=20).Hierarchical-or-Linear We look now at learners’ preference for either hierar orlinear gener-alizations. The architectures we use were only able to consistently learn the training examples withthe depth dnot higher than 4. Hence, in this experiment, we set dto4.We report in Table 1c the learners’ FPA and L. We observe that CNN-s2s exhibits a strikinglydifferent bias compared to all other learners with a perfect agreement with the linear rule. 
Incontrast, Transformer learners show a clear preference for hierar with a high FPA (0.69) and alowL(1.21). Surprisingly, this preference increases with the embedding size and Transformers withembedding size64admit an FPA- hierar of 1.00 (see Appendix for more details). LSTM-s2s att.learners demonstrate also a similar preference for hierar with an FPA of 0.30 and a considerablylower LthanL-hierar . Finally, while only 5%of LSTM-s2s no att. instances generalized toperfect hierar (and none to linear ),Lconfirms their preference for the hierar hypothesis.Composition-or-Memorization In this task, we set the number of primitives N= 40 and vary thenumber of compositional examples M2f6;24;36gseen during training . Results are reported inTable 1d. First, we observe that FPA is only informative for CNN-s2s when trained with a largeM. Indeed, for M= 6, CNN-s2s does not infer any of the candidate rules. However, accordingto description length L, we note a significant preference for mem overcomp . More compositionalexamples CNN-based learners see at training, more biased they become toward comp . The remaininglearners have zero FPA for both candidate rules. However, according to description length, LSTM-based learners have preferences similar to CNN-s2s, although weaker. That is, they show a preferenceformem for low M, that declines in favor of comp asMincreases. In contrast, Transformers show astrong bias for mem with all tested M.Overall, across all the above experiments, we see that seq2seq learners demonstrate strikinglydifferent biases. In many cases, these biases lead to non-trivial generalizations when facing ambiguityin the training data. This spans tasks that probe for memorization, arithmetic, hierarchical, andcompositional reasoning. We found that a single example is sufficient for LSTM-based learners tolearn counting, addition, and multiplication. Moreover, within the same task, they can switch fromone explanation to another, depending on the training example length, with Addition-or-Multiplicationbeing the task where this switch happens twice. In contrast, CNN-s2s and Transformers show a strongbias toward memorization. Furthermore, all learners except for CNN-s2s demonstrate a strong biastoward the hierarchical behavior. In the task of compositional generalization, CNN-s2s shows a strongbias toward compositional reasoning that appears after a few compositional training examples. Onthe other hand, Transformers show a preference for memorization over compositional generalization.7Published as a conference paper at ICLR 2021We see that the conclusions derived from comparing the description length of the candidate rules arein agreement with the results under accuracy-based metrics, but provide a more nuanced picture.Robustness to hyper-parameters We observe that learners’ biases depend, in many cases, on thelength/number of the input examples. In Appendix, we examine impact of other hyper-parameters. Inparticular, we study impact of (1) learners’ architecture size, by varying the number of layers, hiddenand embedding sizes, and (2) dropout probability. Our results show that in some cases a learner’sarchitecture size can influence the strength of inductive biases, but rarely modify them: among the136tested settings, we observe only 3cases of switching the preferences. 
We also found, in line with Arpit et al. (2017), that large dropout probabilities can prevent mem-type generalization. Finally, in the Appendix we show that a variant of Transformer learners, namely the joint source-target self-attention learner (He et al., 2018; Fonollosa et al., 2019), displays the same preferences as the standard Transformer learners. This variant resembles the "decoder-only" architecture used in language modeling (Radford et al., 2019; Brown et al., 2020). This result demonstrates that our tasks and bias measures could be applied to studying the inductive biases of language model architectures.

6 RELATED WORK
Dessì & Baroni (2019) found that, unlike LSTM-s2s learners, CNN-s2s can perform compositional generalization on SCAN. Our experiments indicate that this only happens when enough compositional examples are provided during training. Moreover, in such a case, attention-enabled LSTM-s2s also starts to prefer compositional generalization over memorization.

McCoy et al. (2020) studied inductive biases of recurrent neural architectures in two synthetic tasks, English question formation and tense inflection. They found that only tree-based architectures show a robust preference for hierarchical reasoning, in contrast to LSTM-s2s learners, which generalized linearly. Our experiments on hyperparameter robustness, reported in the Appendix, indicate that the preferences over linear/hierarchical reasoning are strongly affected by the dropout probability, with learners shifting to linear behavior at low probabilities. As McCoy et al. (2020) experimented with a low dropout probability of 0.1, we believe this explains the misalignment of the conclusions.

Overall, our study shows that inductive biases are more complicated than they seemed in these prior works, and a more careful analysis is crucial. We believe that our extremely controlled setup with very few confounds is a good addition to those studies.

Another line of research theoretically investigates learners' capabilities, that is, the classes of hypotheses that a learner can discover (Siegelmann & Sontag, 1992; Suzgun et al., 2019; Merrill et al., 2020). For example, Weiss et al. (2018) demonstrated that LSTM cells can count. In turn, we demonstrate that LSTM-s2s learners are not only capable of but also biased toward arithmetic behavior.

7 DISCUSSION AND CONCLUSION
In this work, we studied inductive biases of standard seq2seq learners, Transformer-, LSTM-, and CNN-based. To do so, we introduced four new tasks, which allowed us to cover an interesting spectrum of behaviors useful for language learning. In particular, we considered arithmetic, hierarchical, and compositional "reasoning". Next, we connected the problem of finding and measuring inductive biases to Solomonoff's theory of induction and proposed to use a dataset's description length under a learner as a tool for sensitive measurement of inductive biases.

In our experiments, we found that the seq2seq learners have strikingly different inductive biases and some of them generalize non-trivially when facing ambiguity. For instance, a single training example is sufficient for LSTM-based learners to learn perfectly how to count, to add, and to multiply by a constant. Transformers and, to a lesser degree, LSTM-s2s demonstrated preferences for the hierarchical bias, a bias that has been argued to govern children's acquisition of syntax. Interestingly, such biases arose with no explicit wiring for them. Our results thus support Elman et al.
(1998)'s theory, which states that humans' inductive biases can arise from low-level architectural constraints in the brain, with no need for an explicit encoding of linguistic structure. However, how the brain or, more generally, a learner is wired to admit a specific inductive bias is still an important open question.

Across our experiments, we also observed that description length is consistent with "intuitive" measurements of inductive biases and, at the same time, turned out to be more sensitive. This also indicates that, in the presence of ambiguity in the training data, a learner is more likely to follow the alternative with the shorter description length (i.e., the simplest one) when applied to unseen data, showing consistency with the prescriptions of the theory of induction (Solomonoff, 1964). A similar simplicity preference is argued to play a role in human language acquisition (Perfors et al., 2011).

Our work provides simple tools to investigate learners' biases. We first show that FPA is an intuitive measure for studying biases on simple tasks. Second, we present description length as a robust measure for fairly comparing learners' biases. This metric takes learners' size and ease of learning into account, as opposed to accuracy-based metrics. Besides, it is a model- and task-agnostic measure that succeeds in unveiling learners' biases even when presented with more complex tasks with spurious correlations.

Our findings can guide architecture selection in low-data regimes, where inductive biases might have a higher influence on a model's generalization performance. Large sparse datasets can also benefit from predictable behavior in few-shot scenarios akin to the ones we consider.

Finally, our results demonstrate that relatively large deep learning models can generalize non-trivially from as little as one example, as long as the task is aligned with their inductive biases. We believe this should reinforce interest in future work on injecting useful inductive biases into our learners and, we hope, our findings and setup can provide a fertile ground for such work.

ACKNOWLEDGEMENTS
The authors are grateful to Marco Baroni, Emmanuel Dupoux, Emmanuel Chemla and participants of the EViL seminar for their feedback on our work.

### Review Title
Why is this important?
### Review Text
The paper introduces a series of new datasets and tasks and investigates the inductive biases of seq2seq models. For each dataset, (at least) two hidden hypotheses could explain the data. The tasks investigated are count-vs-memorization, add-or-multiply, hierarchical-or-linear, and composition-or-memorization. The datasets consist of one sample with varying length (number of input/output pairs), which is denoted as description length. The models are evaluated on accuracy and a log loss. An LSTM, a CNN, and a Transformer are all trained on these datasets. Multiple seeds are used for significance testing. The results suggest that the LSTM is better at counting when provided with a longer sequence, while the CNN and Transformer memorize the data but are better at handling hierarchical data. What this paper excels at is a thorough description of its experimental section and its approach of designing datasets specifically for testing inductive biases, which I have not previously seen and must thus assume is a novel contribution.
However, I lean toward rejecting this paper for the following reasons:
- The paper tries to fit into the emerging field of formal language datasets for evaluating the capacity of deep learning methods. However, it does not build on any of the recent papers in the field. A new dataset, especially a synthetic one, should be well motivated by shortcomings of previous datasets and tasks in the field. I find the motivation and related works section lacking in that sense.
- We already know that LSTMs can count (https://arxiv.org/abs/1906.03648) and that Transformers cannot (https://www.mitpressjournals.org/doi/full/10.1162/tacl_a_00306).
- It is not clear to me why these results are important. Who will benefit from this analysis? Why are the current AnBnCn and Dyck languages that formal language people work with insufficient?
- LSTMs do not have the capacity to perform multiplication. I don't know why your results suggest otherwise. You would need to incorporate special units that can handle multiplication in the LSTM, such as https://arxiv.org/abs/2001.05016

Update:
First, I'd like to thank the authors for their detailed rebuttal. I have upgraded my recommendation from 3 to 4. As mentioned in my review, I believe this approach is interesting. However, as pointed out by Reviewer 2, the experimental section lacks completeness. I think this experimental section would be suitable for a workshop, but not a conference. I am excited to hear you are considering using this method as an inspiration for real problems. I'd like to see the paper resubmitted when you have obtained such results.
### Review Rating
4: Ok but not good enough - rejection
### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct
tf8a4jDRFCv
ICLR.cc/2021/Conference
2021
Learning Aggregation Functions
["Giovanni Pellegrini", "Alessandro Tibo", "Paolo Frasconi", "Andrea Passerini", "Manfred Jaeger"]
Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed by using fixed aggregation functions such as sum or maximum. However, recent results showed that universal function representation by sum- (or max-) decomposition requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several extensively used aggregators (such as average, sum, maximum) as well as more complex functions (e.g. variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-based architectures like Principal Neighborhood Aggregation.
["Deep learning", "Neural networks", "Relational and structured data", "Aggregation functions"]
ABSTRACT
Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed by using fixed aggregation functions such as sum or maximum. However, recent results showed that universal function representation by sum- (or max-) decomposition requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several extensively used aggregators (such as average, sum, maximum) as well as more complex functions (e.g. variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-based architectures like Principal Neighborhood Aggregation.

1 INTRODUCTION
The need to aggregate representations is ubiquitous in deep learning. Some recent examples include max-over-time pooling used in convolutional networks for sequence classification (Kim, 2014), average pooling of neighbors in graph convolutional networks (Kipf & Welling, 2017), max-pooling in Deep Sets (Zaheer et al., 2017), in (generalized) multi-instance learning (Tibo et al., 2017) and in GraphSAGE (Hamilton et al., 2017). In all the above cases (with the exception of LSTM-pooling in GraphSAGE) the aggregation function is predefined, i.e., not tunable, which may in general be a disadvantage (Ilse et al., 2018). Sum-based aggregation has been advocated based on theoretical findings showing that permutation-invariant functions can be sum-decomposed (Zaheer et al., 2017; Xu et al., 2019). However, recent results (Wagstaff et al., 2019) showed that this universal function representation guarantee requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. This suggests that learning set functions that are accurate on sets of large cardinality is difficult.

Inspired by previous work on learning uninorms (Melnikov & Hüllermeier, 2016), we propose a new parametric family of aggregation functions that we call LAF, for learning aggregation functions. A single LAF unit can approximate standard aggregators like sum, max or mean, as well as model intermediate behaviours (possibly different in different areas of the space). In addition, LAF layers with multiple aggregation units can approximate higher-order moments of distributions like variance, skewness or kurtosis. In contrast, other authors (Corso et al., 2020) suggest to employ a predefined library of elementary aggregators to be combined. Since LAF can represent sums, it can be seen as a smooth version of the class of functions that are shown in Zaheer et al. (2017) to enjoy universality results in representing set functions.
The hope is that being smoother, LAF is more easily learnable. Our empirical findings show that this can actually be the case, especially when asking the model to generalize over large sets.

In particular, in this paper we offer an extensive experimental analysis showing that:
- LAF layers can learn a wide range of aggregators (including higher-order moments) on sets of scalars, without background knowledge on the nature of the aggregation task;
- LAF layers on top of traditional layers can learn the same wide range of aggregators on sets of high-dimensional vectors (MNIST images);
- LAF outperforms state-of-the-art set learning methods such as DeepSets and PNA on real-world problems involving point clouds and text concept set retrieval;
- LAF performs comparably to PNA on random graph generation tasks, outperforming several graph neural network architectures including GAT (Veličković et al., 2018) and GIN (Xu et al., 2019).

Name                       Definition               (a,b)    (c,d)    (e,f)    (g,h)    α   β   γ   δ   limits
constant c ∈ R             c                        (0,1)    -        (0,1)    -        c   0   1   0
max                        max_i x_i                (1/r,r)  -        (0,1)    -        1   0   1   0   r → ∞
min                        min_i x_i                (0,1)    (1/r,r)  (0,1)    -        1   -1  1   0   r → ∞
sum                        Σ_i x_i                  (1,1)    -        (0,1)    -        1   0   1   0
nonzero count              |{i : x_i ≠ 0}|          (1,0)    -        (0,1)    -        1   0   1   0
mean                       (1/N) Σ_i x_i            (1,1)    -        (1,0)    -        1   0   1   0
kth moment                 (1/N) Σ_i x_i^k          (1,k)    -        (1,0)    -        1   0   1   0
lth power of kth moment    ((1/N) Σ_i x_i^k)^l      (l,k)    -        (l,0)    -        1   0   1   0
min/max                    min_i x_i / max_i x_i    (0,1)    (1/r,r)  (1/s,s)  -        1   -1  1   0   r,s → ∞
max/min                    max_i x_i / min_i x_i    (1/r,r)  -        (0,1)    (1/s,s)  1   0   1   -1  r,s → ∞

Table 1: Different functions achievable by varying the parameters in the formulation in Eq. 2.

The rest of this work is structured as follows. In Section 2 we define the LAF framework and show how appropriate parametrizations of LAF allow us to represent a wide range of popular aggregation functions. In Section 3 we discuss some relevant related work. Section 4 reports synthetic and real-world experiments showing the advantages of LAF over (sets of) predefined aggregators. Finally, conclusions and pointers to future work are discussed in Section 5.

2 THE LEARNING AGGREGATION FUNCTION FRAMEWORK
We use $\mathbf{x} = \{x_1, \ldots, x_N\}$ to denote finite multisets of real numbers $x_i \in \mathbb{R}$. Note that directly taking $\mathbf{x}$ to be a multiset, not a vector, means that there is no need to define properties like exchangeability or permutation equivariance for operations on $\mathbf{x}$. An aggregation function agg is any function that returns, for any multiset $\mathbf{x}$ of arbitrary cardinality $N \in \mathbb{N}$, a value $\mathrm{agg}(\mathbf{x}) \in \mathbb{R}$.

Standard aggregation functions like mean and max can be understood as (normalized) $L_p$-norms. We therefore build our parametric LAF aggregator around generalized $L_p$-norms of the form

$$L_{a,b}(\mathbf{x}) := \Big(\sum_i x_i^b\Big)^a \qquad (a, b \geq 0). \qquad (1)$$

$L_{a,b}$ is invariant under the addition of zeros: $L_{a,b}(\mathbf{x}) = L_{a,b}(\mathbf{x} \cup \mathbf{0})$, where $\mathbf{0}$ is a multiset of zeros of arbitrary cardinality. In order to also enable aggregations that can represent conjunctive behavior such as min, we make symmetric use of aggregators of the multisets $1 - \mathbf{x} := \{1 - x_i \mid x_i \in \mathbf{x}\}$. For $L_{a,b}(1 - \mathbf{x})$ to be a well-behaved, dual version of $L_{a,b}(\mathbf{x})$, the values in $\mathbf{x}$ need to lie in the range $[0, 1]$. We therefore restrict the following definition of our learnable aggregation function to sets $\mathbf{x}$ whose elements are in $[0, 1]$:

$$\mathrm{LAF}(\mathbf{x}) := \frac{\alpha L_{a,b}(\mathbf{x}) + \beta L_{c,d}(1 - \mathbf{x})}{\gamma L_{e,f}(\mathbf{x}) + \delta L_{g,h}(1 - \mathbf{x})} \qquad (2)$$

defined by tunable parameters $a, \ldots, h \geq 0$ and $\alpha, \beta, \gamma, \delta \in \mathbb{R}$. In cases where sets need to be aggregated whose elements are not already bounded by $[0, 1]$, we apply a sigmoid function to the set elements prior to aggregation.

Table 1 shows how a number of important aggregation functions are special cases of LAF (for values in $[0, 1]$). We make repeated use of the fact that $L_{0,1}$ returns the constant 1.
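As a concrete illustration of Eq. 2 and of the parameterizations in Table 1, the following is a minimal sketch of a single LAF unit. This is our own illustration, assuming PyTorch and elements already in [0, 1]; the names `L` and `laf` are ours, not from the paper's released code. The two calls at the end recover the mean row of Table 1 exactly and approximate the max row with a finite r.

```python
import torch

def L(x, a, b):
    # Generalized norm of Eq. 1: (sum_i x_i^b)^a, for a multiset x with values in [0, 1].
    return x.pow(b).sum(dim=-1).pow(a)

def laf(x, a, b, c, d, e, f, g, h, alpha, beta, gamma, delta, eps=1e-8):
    # Eq. 2: ratio of two linear combinations of generalized norms of x and of 1 - x.
    num = alpha * L(x, a, b) + beta * L(1.0 - x, c, d)
    den = gamma * L(x, e, f) + delta * L(1.0 - x, g, h)
    return num / (den + eps)  # eps only guards against division by zero

x = torch.tensor([0.2, 0.5, 0.9])

# "mean" row of Table 1: (a,b) = (1,1), (e,f) = (1,0), alpha = gamma = 1, beta = delta = 0.
print(laf(x, 1, 1, 0, 1, 1, 0, 0, 1, 1.0, 0.0, 1.0, 0.0))  # ~0.5333 = (0.2 + 0.5 + 0.9) / 3

# "max" row with a finite exponent: (a,b) = (1/r, r) approaches the maximum as r grows.
r = 40.0
print(laf(x, 1.0 / r, r, 0, 1, 0, 1, 0, 1, 1.0, 0.0, 1.0, 0.0))  # ~0.9
```

With r = 40 the max approximation is already accurate to roughly three decimal places on this example; larger r sharpens it further, at the cost of a wider numerical range in the intermediate powers.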
For max and min, LAF only provides an asymptotic approximation in the limit of specific function parameters (as indicated in the limits column of Table 1). In most cases, the parameterization of LAF for the functions in Table 1 will not be unique. Being able to encode the powers of moments implies that, e.g., the variance of $\mathbf{x}$ can be expressed as the difference $\frac{1}{N}\sum_i x_i^2 - \big(\frac{1}{N}\sum_i x_i\big)^2$ of two LAF aggregators.

Since LAF includes sum-aggregation, we can adapt the results of Zaheer et al. (2017) and Wagstaff et al. (2019) on the theoretical universality of sum-aggregation as follows.

Proposition 1. Let $X \subseteq \mathbb{R}$ be countable, and $f$ a function defined on finite multisets with elements from $X$. Then there exist functions $\phi : X \to [0,1]$, $\rho : \mathbb{R} \to \mathbb{R}$, and a parameterization of LAF, such that $f(\mathbf{x}) = \rho(\mathrm{LAF}(\phi\mathbf{x}; \alpha, \beta, \gamma, \delta, a, b, c, d))$, where $\phi\mathbf{x}$ is the multiset $\{\phi(x) \mid x \in \mathbf{x}\}$.

A proof in Wagstaff et al. (2019) for a very similar proposition used a mapping from $X$ into the reals. Our requirement that LAF inputs must be in $[0,1]$ requires a modification of the proof (contained in the supplementary material), which for the definition of $\phi$ relies on a randomized construction. Proposition 1 shows that we retain the theoretical universality guarantees of Zaheer et al. (2017), while enabling a wider range of solutions based on continuous encoding and decoding functions.

Figure 1: LAF functions with randomly generated parameters.

It should be emphasized at this point that the primary purpose of LAF is not to provide a uniform representation of the different standard aggregators displayed in Table 1, but to enable a continuum of intermediate and hybrid aggregators. Figure 1 shows the graphs of four different randomly generated LAF functions over the unit square $[0,1] \times [0,1]$, i.e., evaluated over sets of size 2. The parameters $\alpha, \ldots, \delta$ were randomly sampled in the interval $[0,1]$; the parameters $b, d, f, h$ were randomly sampled from the integers $0, \ldots, 5$, and $a, c, e, g$ were obtained as $1/i$ with $i$ a random integer from $0, \ldots, 5$. The figure illustrates the rich repertoire of aggregation functions with different qualitative behaviors already for non-extreme parameter values.

2.1 LAF ARCHITECTURE
LAF can easily be used as a module of a larger architecture suitable for learning on sets. Several LAF units can be combined as shown in Figure 2 to capture different aspects of the input set, which can in general be a set of vectors $\mathbf{x} = \{x_1, \ldots, x_N\}$ where $x_i \in \mathbb{R}^d$. Note that multiple aggregators are also used in related frameworks such as DeepSets (Zaheer et al., 2017) or Graph Neural Networks (Veličković et al., 2018; Corso et al., 2020). A module with $r$ LAF units takes as input $d$-dimensional vectors and produces a vector of size $r \cdot d$ as output. Each LAF unit performs an element-wise aggregation of the vectors in the set such that $L_{k,j} = \mathrm{LAF}(\{x_{1,j}, \ldots, x_{N,j}\}; \alpha_k, \beta_k, \gamma_k, \delta_k, a_k, b_k, c_k, d_k)$ for $k = 1, \ldots, r$ and $j = 1, \ldots, d$. The output vector can then be fed into the next layer.

Figure 2: End-to-end LAF architecture.
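As a sketch of how such a module might be implemented, the following vectorizes all r units over the d input features at once. This again assumes PyTorch; the class name, tensor layout, and the in-forward clamp are our own choices (the paper instead enforces positivity of a, ..., h by projection during optimization, as described in Section 4.1).

```python
import torch
from torch import nn

class LAFLayer(nn.Module):
    # r LAF units applied element-wise to a set of N d-dimensional vectors,
    # producing an (r * d)-dimensional output, as described in Section 2.1.
    def __init__(self, r: int):
        super().__init__()
        self.exps = nn.Parameter(torch.rand(r, 8))           # a..h, must stay >= 0
        self.coefs = nn.Parameter(0.01 * torch.randn(r, 4))  # alpha, beta, gamma, delta

    @staticmethod
    def _L(t: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Generalized norm of Eq. 1 per unit and per feature: (sum_i t_i^b)^a.
        # t: (N, d); a, b: (r,); result: (r, d).
        return t.unsqueeze(0).pow(b.view(-1, 1, 1)).sum(dim=1).pow(a.view(-1, 1))

    def forward(self, x: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        x = torch.sigmoid(x)  # squash set elements into [0, 1], as required by Eq. 2
        a, b, c, d, e, f, g, h = self.exps.clamp(min=0.0).unbind(dim=1)
        alpha, beta, gamma, delta = self.coefs.unbind(dim=1)
        num = alpha.view(-1, 1) * self._L(x, a, b) + beta.view(-1, 1) * self._L(1 - x, c, d)
        den = gamma.view(-1, 1) * self._L(x, e, f) + delta.view(-1, 1) * self._L(1 - x, g, h)
        return (num / (den + eps)).reshape(-1)  # shape (r * d,)

layer = LAFLayer(r=9)
out = layer(torch.rand(5, 16))  # a set of 5 vectors in R^16 -> tensor of shape (144,)
```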
3 RELATED WORK
Several studies address the problem of aggregating data over sets. Sum-decomposition strategies have been used in Zaheer et al. (2017) for point cloud classification and set expansion tasks, and in Santoro et al. (2017) for question answering and dynamic physical systems computation. Max, sum and average are standard aggregation functions for node neighborhoods in graph neural networks (Hamilton et al., 2017; Kipf & Welling, 2017; Xu et al., 2019; Veličković et al., 2018). Zaheer et al. (2017) first proved universal representation results for these standard aggregators when combined with learned mappings over inputs and results of the aggregation. However, Wagstaff et al. (2019) showed that these universality results are of little practical use, as they either require highly discontinuous mappings that would be extremely difficult to learn, or a latent dimension that is at least the size of the maximum number of input elements.

Uninorms (Yager & Rybalov, 1996) are a class of aggregation functions in fuzzy logic that can behave in a conjunctive, disjunctive or averaging manner depending on a parameter called the neutral element. Melnikov & Hüllermeier (2016) proposed to learn fuzzy aggregators by adjusting these learnable parameters, showing promising results on combining reviewers' scores on papers into an overall decision of acceptance or rejection. Despite the advantage of incorporating different behaviours in one single function, uninorms present discontinuities in the regions between aggregators, making them not amenable to use in fully differentiable frameworks. Furthermore, the range of possible behaviours is restricted to those commonly used in the context of fuzzy logic.

The need for considering multiple candidate aggregators is advocated in a very recent work that was developed in parallel with our framework (Corso et al., 2020). The resulting architecture, termed Principal Neighborhood Aggregation (PNA), combines multiple standard aggregators, including most of the ones we consider in the LAF framework, adjusting their outputs with degree scalers. However, the underlying philosophy is rather different. PNA aims at learning to select the appropriate aggregator(s) from a pool of candidates, while LAF explores a continuous space of aggregators that includes standard ones as extreme cases. Our experimental evaluation shows that PNA has trouble learning aggregators that generalize over set sizes, despite having them in its pool of candidates, likely because of the quasi-combinatorial structure of its search space. On the other hand, LAF can successfully learn even the higher-moment aggregators and consistently outperforms PNA.

Closely connected, but somewhat complementary to aggregation operators, are attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017). They have been explored to manipulate set data in Lee et al. (2019) and in the context of multi-instance learning (Ilse et al., 2018). Attention operates at the level of set elements, and aims at a transformation (weighting) of their representations so as to optimize a subsequent weighted sum-aggregation. While the objectives of attention-based frameworks and LAF partially overlap, they are functionally quite different. Exploring combinations of LAF with attention mechanisms is a possible subject of future work.

4 EXPERIMENTS
In this section, we present and discuss experimental results showing the potential of the LAF framework on both synthetic and real-world tasks¹. Synthetic experiments are aimed at showing the ability of LAF to learn a wide range of aggregators and its ability to generalize over set sizes (i.e., test sets containing sets whose cardinality exceeds the cardinality of the sets seen during training), something that alternative architectures based on predefined aggregators fail to achieve. We use DeepSets, PNA, and LSTM as representatives of these architectures.
The LSTM architecture corresponds to a version of DeepSets where the aggregation function is replaced by an LSTM layer. Experiments on diverse tasks, including point cloud classification, text concept set retrieval and graph property prediction, are aimed at showing the potential of the framework on real-world applications.

¹ The source code is now available in the supplementary material.

4.1 EXPERIMENTS ON SCALARS

[Figure omitted from extraction; twelve panels (Count, Sum, Max, Mean, Min, Max - Min, Inverse Count, Median, Min/Max, Variance, Skewness, Kurtosis) plotting mean absolute error against maximum test set cardinality for LAF, DeepSets, PNA and LSTM.]
Figure 3: Test performances for the synthetic experiment with integer scalars on increasing test set size. The x axis of the figures represents the maximum test set cardinality, whereas the y axis depicts the MAE. The dot, star, diamond and triangle symbols denote LAF, DeepSets, PNA, and LSTM respectively.

This section shows the capacity of the LAF framework to learn simple and complex aggregation functions where the constituents of the sets are simple numerical values. In this setting we consider sets made of scalar integer values. The training set is constructed as follows: for each set, we initially sample its cardinality K from a uniform distribution taking values in {2, ..., M}, and then we uniformly sample K integers in 0, ..., 9. For the training set we use M = 10. We construct several test sets for different values of M (M = 5, 10, 15, 20, 25, 30, 35, 40, 45, 50). This implies that models need to generalize to larger set sizes. Contrarily to the training set, each test set is constructed so as to diversify the target labels it contains, in order to avoid degenerate behaviours for large set sizes (e.g., the maximum being constantly equal to 9). Each synthetic dataset is composed of 100,000 sets for training, 20,000 sets for validation and 100,000 for testing.

The number of aggregation units is set as follows. The model contains nine LAF (Equation 2) units, whose parameters {a_k, ..., h_k}, k = 1, ..., 9, are initialized by uniform sampling in [0, 1], as those parameters must be positive, whereas the coefficients {α_k, ..., δ_k} are initialized with a Gaussian distribution with zero mean and standard deviation 0.01, so as to also cover negative values. The positivity constraint for the parameters {a, b, ..., h} is enforced by projection during the optimization process. The remaining parameters can take on negative values. DeepSets also uses nine units: three max units, three sum units, and three mean units, while PNA uses seven units: mean, max, sum, standard deviation, variance, skewness and kurtosis. Preliminary experiments showed that expanding the set of aggregators for PNA with higher-order moments only leads to worse performance. Each set of integers is fed into an embedding layer (followed by a sigmoid) before performing the aggregation function. DeepSets and PNA do need an embedding layer (otherwise they would have no parameters to be tuned). Although LAF does not need an embedding layer, we used it in all models to make the comparison more uniform. The architecture details are reported in the supplementary material.
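For illustration, the initialization and projection just described might look as follows. This is a hedged sketch under our own naming, assuming PyTorch; the choice of Adam and the learning rate, and the model producing `loss`, are assumptions, not details from the paper.

```python
import torch

# Nine LAF units: exponents {a_k, ..., h_k} uniform in [0, 1], coefficients
# {alpha_k, ..., delta_k} ~ N(0, 0.01^2), as described above.
exps = torch.nn.Parameter(torch.rand(9, 8))
coefs = torch.nn.Parameter(0.01 * torch.randn(9, 4))
opt = torch.optim.Adam([exps, coefs], lr=1e-3)

def training_step(loss: torch.Tensor) -> None:
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Projection step: after each update, map the exponents back onto the
    # feasible region a, ..., h >= 0; alpha, ..., delta stay unconstrained.
    with torch.no_grad():
        exps.clamp_(min=0.0)
```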
We use the Mean Absolute Error (MAE) as the loss function to compute the prediction error. Figure 3 shows the trend of the MAE for the different methods on increasing test set sizes, for different types of target aggregators. As expected, DeepSets manages to learn the identity function and thus correctly models aggregators like sum, max and mean. Even if LAF needs to adjust its parameters in order to properly aggregate the data, its performance is competitive with that of DeepSets. When moving to more complex aggregators like inverse count, median or moments of different orders, DeepSets fails to learn the latent representation. On the other hand, the performance of LAF is very stable for growing set sizes. While having in principle at its disposal most of the target aggregators (including higher-order moments), PNA badly overfits the cardinality of the sets in the training set in all cases (remember that the training set contains sets of cardinality at most 10). The reason why LAF substantially outperforms PNA on large set sizes could be explained in terms of a greater flexibility to adapt to the learnt representation. Indeed, LAF parameters can adjust the LAF function to be compliant with the latent representation even if the input mapping fails to learn the identity. On the other hand, having a bunch of fixed, hard-coded aggregators, PNA needs to be able to both learn the identity mapping and select the correct aggregator among the candidates. Finally, LSTM exhibits generally poor results when compared to the other methods, particularly in the case of the count and the sum.

4.2 MNIST DIGITS
In this section, we modify the previous experimental setting to process MNIST images of digits. The dataset is the same as in the experiment on scalars, but the integers are replaced by randomly sampled MNIST images of the same digits. Instances for the training and test sets are drawn from the MNIST training and test sets, respectively. This experiment aims to demonstrate the ability of LAF to learn from more complex representations of the data by plugging it into end-to-end differentiable architectures. Contrarily to the model of the previous section, here we use three dense layers for learning picture representations before performing the aggregation function. The architecture details are reported in the supplementary material. Figure 4 shows the comparison of LAF, DeepSets, PNA, and LSTM in this setting.

[Figure omitted from extraction; twelve panels (Count, Sum, Max, Mean, Min, Max - Min, Inverse Count, Median, Min/Max, Variance, Skewness, Kurtosis) plotting mean absolute error against maximum test set cardinality for LAF, DeepSets, PNA and LSTM.]
Figure 4: Test performances for the synthetic experiment on MNIST digits on increasing test set size. The x axis of the figures represents the maximum test set cardinality, whereas the y axis depicts the MAE. The dot, star, diamond and triangle symbols denote LAF, DeepSets, PNA and LSTM respectively.
Results are quite similar to those achieved in the scalar setting, indicating that LAF is capable of effectively backpropagating information so as to drive the learning of an appropriate latent representation, while DeepSets, PNA, and LSTM suffer from the same problems seen in aggregating scalars.

Furthermore, Figure 5 provides a qualitative evaluation of the predictions of the LAF, DeepSets, and PNA methods on a representative subset of the target aggregators. The images illustrate the correlation between the true labels and the predictions. LAF predictions are distributed over the diagonal line, with no clear bias. On the other hand, DeepSets and PNA perform generally worse than LAF, exhibiting higher variances. In particular, for inverse count and kurtosis, DeepSets and PNA predictions are condensed in a specific area, suggesting overfitting on the training set.

Figure 5: Scatter plots of the MNIST experiment comparing true (x axis) and predicted (y axis) values with 50 as the maximum test set size. The target aggregations are max (top-left), inverse count (top-right), median (bottom-left) and kurtosis (bottom-right).

4.3 POINT CLOUD
In order to evaluate LAF on a real-world dataset, we consider point cloud classification, a prototype task for set-wise prediction. We therefore run experimental comparisons on the ModelNet40 (Wu et al., 2015) dataset, which consists of 9,843 training and 2,468 test point clouds of objects distributed over 40 classes. The dataset is preprocessed following the same procedure described by Zaheer et al. (2017). We create point clouds of 100 and 1,000 three-dimensional points by adopting the point-cloud library's sampling routine developed by Rusu & Cousins (2011) and normalizing each set of points to have zero mean (along each axis) and unit (global) variance. We refer to the two datasets as P100 and P1000. For all the settings, we consider the same architecture and hyper-parameters of the DeepSets permutation-invariant model described by Zaheer et al. (2017). For LAF, we replace the original aggregation function (max) used in DeepSets with 10 LAF units, while for PNA we use the concatenation of max, min, mean, and standard deviation, as proposed by the authors. For PNA we do not consider any scaler, as the cardinalities of the sets are fixed.

Results in Table 2 show that LAF produces an advantage on the lower-resolution dataset (i.e., on P100), while it obtains comparable (and slightly more stable) performances on the higher-resolution one (i.e., on P1000). These results suggest that having predefined aggregators is not necessarily an optimal choice in real-world cases, and that the flexibility of LAF in modeling diverse aggregation functions can boost performance and stability.

Table 2: Results on the point cloud classification task. Accuracies with standard deviations (calculated on 5 runs) for the ModelNet40 dataset.

METHOD     P100           P1000
DeepSets   82.0 ± 2.0%    87.0 ± 1.0%
PNA        82.9 ± 0.7%    86.4 ± 0.6%
LSTM       78.7 ± 1.1%    82.2 ± 1.7%
LAF        84.0 ± 0.6%    87.0 ± 0.5%

4.4 SET EXPANSION
Following the experimental setup of DeepSets, we also considered the set expansion task. In this task the aim is to augment a set of objects of the same class with other similar objects, as explained in Zaheer et al. (2017). The model learns to predict a score for an object given a query set, and to decide whether to add the object to the existing set. Specifically, Zaheer et al. (2017) consider the specific application of set expansion to text concept retrieval.
The idea is to retrieve words that belong to a particular concept, giving as input a set of words having the same concept. We employ the same model and hyper-parameters of the original publication, where we replace the sum-decomposition aggregation with LAF units for our method, and with the min, max, mean, and standard deviation aggregators for PNA.

We trained our model on sets constructed from vocabularies of different sizes, namely LDA-1k, LDA-3k and LDA-5k. Table 3 shows the results of LAF, DeepSets and PNA on different evaluation metrics. We report the retrieval metrics recall@K, median rank and mean reciprocal rank. We also report the results of the other methods the authors compared to in the original paper.

Table 3: Results on Text Concept Set Retrieval on LDA-1k, LDA-3k, and LDA-5k. Recall is given in %. Bold values denote the best performance for each metric.

             LDA-1k (vocab = 17k)               LDA-3k (vocab = 38k)               LDA-5k (vocab = 61k)
METHOD       R@10  R@100  R@1K  MRR    Med.     R@10  R@100  R@1K  MRR    Med.     R@10  R@100  R@1K  MRR    Med.
Random       0.06  0.6    5.9   0.001  8520     0.02  0.2    2.6   0.000  28635    0.01  0.2    1.6   0.000  30600
Bayes Set    1.69  11.9   37.2  0.007  2848     2.01  14.5   36.5  0.008  3234     1.75  12.5   34.5  0.007  3590
w2v-Near     6.00  28.1   54.7  0.021  641      4.80  21.2   43.2  0.016  2054     4.03  16.7   35.2  0.013  6900
NN-max       4.78  22.5   53.1  0.023  779      5.30  24.9   54.8  0.025  672      4.72  21.4   47.0  0.022  1320
NN-sum-con   4.58  19.8   48.5  0.021  1110     5.81  27.2   60.0  0.027  453      4.87  23.5   53.9  0.022  731
NN-max-con   3.36  16.9   46.6  0.018  1250     5.61  25.7   57.5  0.026  570      4.72  22.0   51.8  0.022  877
DeepSets     5.53  24.2   54.3  0.025  696      6.04  28.5   60.7  0.027  426      5.54  26.1   55.5  0.026  616
DeepSets*    5.89  26.0   55.3  0.026  619      7.56  28.5   64.0  0.035  349      6.49  27.9   56.9  0.030  536
PNA          5.56  24.7   53.2  0.027  753      7.04  27.2   58.7  0.028  502      5.47  23.8   52.4  0.025  807
LSTM         4.29  21.5   52.6  0.022  690      5.56  25.7   58.8  0.026  830      4.87  23.8   55.0  0.022  672
LAF          6.51  26.6   54.5  0.030  650      8.14  32.3   62.8  0.037  339      6.71  28.3   56.9  0.031  523

More details on the other methods in the table can be found in the original publication. Briefly, Random samples a word uniformly from the vocabulary; Bayes Set (Ghahramani & Heller, 2006); w2v-Near computes the nearest neighbors in the word2vec (Mikolov et al., 2013) space; NN-max uses a similar architecture to our DeepSets but uses max pooling to compute the set feature, as opposed to sum pooling; NN-max-con uses max pooling on set elements but concatenates this pooled representation with that of the query for the final set feature; NN-sum-con is similar to NN-max-con but uses sum pooling followed by concatenation with the query representation. For the sake of fairness, we have rerun DeepSets using the current implementation from the authors (indicated as DeepSets* in Table 3), which exhibits better results than the ones reported in the original paper. Nonetheless, LAF outperforms all other methods in most cases, especially on LDA-3k and LDA-5k.

4.5 MULTI-TASK GRAPH PROPERTIES
Corso et al. (2020) define a benchmark consisting of six classical graph theory tasks on artificially generated graphs from a wide range of popular graph types like Erdos-Renyi, Barabasi-Albert or star-shaped graphs. Three of the tasks are defined for nodes, while the other three are for whole graphs. The node tasks are the single-source shortest-path lengths (N1), the eccentricity (N2) and the Laplacian features (N3). The graph tasks are graph connectivity (G1), diameter (G2), and the spectral radius (G3). For more details about the experimental settings please refer to Corso et al.
(2020).

Table 4: Results on the multi-task graph properties prediction benchmark. Results are expressed as log10 of the mean squared error.

METHOD             N1      N2      N3      G1      G2      G3
Baseline          -1.87   -1.50   -1.60   -0.62   -1.30   -1.41
GIN               -2.00   -1.90   -1.60   -1.61   -2.17   -2.66
GCN               -2.16   -1.89   -1.60   -1.69   -2.14   -2.79
GAT               -2.34   -2.09   -1.60   -2.44   -2.40   -2.70
MPNN (max)        -2.33   -2.26   -2.37   -1.82   -2.69   -3.52
MPNN (sum)        -2.36   -2.16   -2.59   -2.54   -2.67   -2.87
PNA (no scalers)  -2.54   -2.42   -2.94   -2.61   -2.82   -3.29
PNA               -2.89   -2.89   -3.77   -2.61   -3.04   -3.57
LAF               -2.13   -2.20   -1.67   -2.35   -2.77   -3.63

We compare LAF against PNA by simply replacing the original PNA aggregators and scalers with 100 LAF units (see Equation 2). Table 4 shows that, although these datasets were designed to highlight the features of the PNA architecture, which outperforms a wide range of alternative graph neural network approaches, LAF produces competitive results, outperforming state-of-the-art GNN approaches like GIN (Xu et al., 2019), GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018), and even improving over PNA on spectral radius prediction.

5 CONCLUSIONS
The theoretical underpinnings for sum aggregation as a universal framework for defining set functions do not necessarily provide a template for practical solutions. We therefore introduced LAF, a framework for learning aggregation functions that makes use of a parametric aggregator to effectively explore a rich space of possible aggregations. LAF defines a new class of aggregation functions, which includes widely used aggregators as special cases, and also has the ability to learn complex functions such as higher-order moments. We empirically showed the generalization ability of our method on synthetic settings as well as real-world datasets, providing comparisons with state-of-the-art sum-decomposition approaches and recently introduced techniques. The flexibility of our model is a crucial aspect for potential practical use in many deep learning architectures, due to its ability to be easily plugged into and learned in end-to-end architectures. The portability of LAF opens a new range of possible applications for aggregation functions in machine learning methods, and future research in this direction can enhance the expressivity of many architectures and models that deal with unstructured data.
ukmCfRiCNgL
A novel method for aggregating the information from sets
6: Marginally above acceptance threshold
Summary: The universal function representation guarantee requires either highly discontinuous mappings or a high-dimensional latent space. For this reason the authors propose a new parametric family of aggregation functions, called LAF (for learning aggregation functions). It can be seen as a smooth version of the class of functions considered in DeepSets. The LAF aggregator can learn all standard aggregation functions. Moreover, in experiments the authors show that LAF surpasses other aggregation methods.
=============================================================================
Pros:
1. The authors show that all standard aggregation functions are achievable by varying the parameters in the formulation of LAF. Moreover, LAF enables a neural network to use a continuum of intermediate and hybrid aggregators.
2. A comprehensive ablation study that compares LAF to DeepSets and PNA on digits and MNIST images. In the study the goal is to learn different types of target aggregation. The results show that LAF can learn all the given types of aggregation, and that it generalizes well to the size of the test set (and thus does not overfit to the size of the training set like the other methods).
3. The authors provide an extensive set of experiments on a wide range of datasets, including point clouds, set expansion and graph properties. On most of the given tasks LAF is superior to the other methods.
=============================================================================
Cons: Not all the details about LAF aggregation are clear to me. The authors should consider rewriting Section 2 (with a description of the aggregation), considering the points I list below.
1. In the manuscript the authors state that LAF uses the tunable parameters a,...,h and alpha,...,delta; however, they do not show how to initialize these parameters and do not say whether the model is sensitive to these values.
2. The authors state that the tunable parameters a,...,h are greater than or equal to zero. However, they do not show how this condition is achieved: whether they use an exponent, a non-linearity, or do it in another way.
3. In the definition of the LAF aggregator, the authors state that x should be a real number; however, looking at the experiments, it seems to me that it could also be applied to vectors. Please correct me if I'm wrong, but if I am right, then please answer the question whether in this situation a,...,h, alpha,...,delta are still scalars or whether they are vectors.
4. A more detailed description of the built networks should be included (even in the appendix). It is not clear to me how the authors built their networks. E.g., in Section 4.1 they state that 'The LAF model contains nine LAF(x) aggregation functions': does this mean that in the final aggregation layer you create 9 independently working LAF aggregators, then concatenate them and pass them to the final prediction layer? If yes, then what about the situation where information from vectors is aggregated? Do you still perform concatenation?
=============================================================================
Questions during rebuttal period:
1. Why is Section 5 (Multi-task graph properties) not a sub-section of Section 4 (Experiments)?
2. Using a sigmoid can usually disturb the training of a neural network. Could you create an experiment (on a real dataset) where you remove the sigmoid as well as the parts with the 1-x form from the LAF aggregator?
3. Could you create some experiments with LAF as the final aggregator for neural networks?
More specifically, for example, could you use LAF instead of mean/average pooling in image classification using a ResNet, or in some text classification task?
=============================================================================
=============================================================================
Reasons for score: I vote for accepting this paper. The idea proposed by the authors is novel and elegant. Moreover, the experiments show that the proposed model is superior to the other models with which it has been compared. My major concern is about the clarity of the paper. Hopefully the authors can address my concern in the rebuttal period.
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Learning Aggregation Functions ### Paper Abstract Learning on sets is increasingly gaining attention in the machine learning community, due to its widespread applicability. Typically, representations over sets are computed by using fixed aggregation functions such as sum or maximum. However, recent results showed that universal function representation by sum- (or max-) decomposition requires either highly discontinuous (and thus poorly learnable) mappings, or a latent dimension equal to the maximum number of elements in the set. To mitigate this problem, we introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality. LAF can approximate several extensively used aggregators (such as average, sum, maximum) as well as more complex functions (e.g. variance and skewness). We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-based architectures like Principal Neighborhood Aggregation. ### Paper Keywords ["Deep learning", "Neural networks", "Relational and structured data", "Aggregation functions"] ### Paper Content ABSTRACTLearning on sets is increasingly gaining attention in the machine learning com-munity, due to its widespread applicability. Typically, representations over setsare computed by using fixed aggregation functions such as sum or maximum.However, recent results showed that universal function representation by sum- (ormax-) decomposition requires either highly discontinuous (and thus poorly learn-able) mappings, or a latent dimension equal to the maximum number of elementsin the set. To mitigate this problem, we introduce LAF (Learning AggregationFunctions), a learnable aggregator for sets of arbitrary cardinality. LAF can ap-proximate several extensively used aggregators (such as average, sum, maximum)as well as more complex functions (e.g. variance and skewness). We report experi-ments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures such as DeepSets and library-basedarchitectures like Principal Neighborhood Aggregation.1 I NTRODUCTIONThe need to aggregate representations is ubiquitous in deep learning. Some recent examples includemax-over-time pooling used in convolutional networks for sequence classification (Kim, 2014), av-erage pooling of neighbors in graph convolutional networks (Kipf & Welling, 2017), max-poolingin Deep Sets (Zaheer et al., 2017), in (generalized) multi-instance learning (Tibo et al., 2017) and inGraphSAGE (Hamilton et al., 2017). In all the above cases (with the exception of LSTM-poolingin GraphSAGE) the aggregation function is predefined, i.e., not tunable, which may be in generala disadvantage (Ilse et al., 2018). Sum-based aggregation has been advocated based on theoreticalfindings showing the permutation invariant functions can be sum-decomposed (Zaheer et al., 2017;Xu et al., 2019). However, recent results (Wagstaff et al., 2019) showed that this universal functionrepresentation guarantee requires either highly discontinuous (and thus poorly learnable) mappings,or a latent dimension equal to the maximum number of elements in the set. 
This suggests thatlearning set functions that are accurate on sets of large cardinality is difficult.Inspired by previous work on learning uninorms (Melnikov & Hüllermeier, 2016), we propose a newparametric family of aggregation functions that we call LAF, for learning aggregation functions . Asingle LAF unit can approximate standard aggregators like sum, max or mean as well as modelintermediate behaviours (possibly different in different areas of the space). In addition, LAF layerswith multiple aggregation units can approximate higher order moments of distributions like variance,skewness or kurtosis. In contrast, other authors (Corso et al., 2020) suggest to employ a predefinedlibrary of elementary aggregators to be combined. Since LAF can represent sums, it can be seen asa smooth version of the class of functions that are shown in Zaheer et al. (2017) to enjoy universalityresults in representing set functions. The hope is that being smoother, LAF is more easily learnable.Our empirical findings show that this can be actually the case, especially when asking the model togeneralize over large sets.In particular, in this paper we offer an extensive experimental analysis showing that:LAF layers can learn a wide range of aggregators (including higher-order moments) on setsof scalars without background knowledge on the nature of the aggregation taskLAF layers on the top of traditional layers can learn the same wide range of aggregators onsets of high dimensional vectors (MNIST images)LAF outperforms state-of-the-art set learning methods such as DeepSets and PNA on real-world problems involving point clouds and text concept set retrieval.1Under review as a conference paper at ICLR 2021Name Definition a bc de fg h limitsconstant c2R 0 1 - - 0 1 - -c0 1 0max maxixi 1=rr - - 0 1 - - 1 0 1 0r!1min minixi 0 11=rr 0 1 - - 1 -1 1 0r!1sumPixi 1 1 - - 0 1 - - 1 0 1 0nonzero count jfi:xi6= 0gj 1 0 - - 0 1 - - 1 0 1 0mean 1=NPixi 1 1 - - 1 0 - - 1 0 1 0kth moment 1=NPixki 1k- - 1 0 - - 1 0 1 0lth power ofkth moment (1=NPixki)ll k - -l0- - 1 0 1 0min/max minixi=maxixi0 11=rr1=ss - - 1 1 1 0r;s!1max/min maxixi=minixi1=rr - - 0 11=ss 1 0 1 1r;s!1Table 1: Different functions achievable by varying the parameters in the formulation in Eq. 2LAF performs comparably to PNA on random graph generation tasks, outperformingseveral graph neural networks architectures including GAT (Veli ˇckovi ́c et al., 2018) andGIN (Xu et al., 2019)The rest of this work is structured as follows. In Section 2 we define the LAF framework and showhow appropriate parametrizations of LAF allow to represent a wide range of popular aggregationfunctions. In Section 3 we discuss some relevant related work. Section 4 reports synthetic and real-world experiments showing the advantages of LAF over (sets of) predifined aggregators. Finally,conclusions and pointers to future work are discussed in Section 5.2 T HELEARNING AGGREGATION FUNCTION FRAMEWORKWe use x=fx1;:::;x Ngto denote finite multisets of real numbers xi2R. Note that directlytaking xto be a multiset, not a vector, means that there is no need to define properties like ex-changeability or permutation equivariance for operations on x. 
An aggregation function aggis anyfunction that returns for any multiset xof arbitrary cardinality N2Na value agg(x)2R.Standard aggregation functions like mean andmax can be understood as (normalized) Lp-norms.We therefore build our parametric LAF aggregator around generalized Lp-norms of the formLa;b(x) := Xixbi!a(a;b0): (1)La;bis invariant under the addition of zeros: La;b(x) =La;b(x[0)where 0is a multiset of zerosof arbitrary cardinality. In order to also enable aggregations that can represent conjunctive behaviorsuch as min, we make symmetric use of aggregators of the multisets 1x:=f1xijxi2xg. ForLa;b(1x)to be a well-behaved, dual version of La;b(x), the values in xneed to lie in the range[0;1]. We therefore restrict the following definition of our learnable aggregation function to sets xwhose elements are in [0;1]:LAF(x) :=La;b(x) +Lc;d(1x)Le;f(x) +Lg;h(1x)(2)defined by tunable parameters a;:::;h0, and;:::;2R. In cases where sets need to beaggregated whose elements are not already bounded by 0;1, we apply a sigmoid function to the setelements prior to aggregation.Table 1 shows how a number of important aggregation functions are special cases of LAF (forvalues in [0;1]). We make repeated use of the fact that L0;1returns the constant 1. For max andmin LAF only provides an asymptotic approximation in the limit of specific function parameters(as indicated in the limits column of Table 1). In most cases, the parameterization of LAF for thefunctions in Table 1 will not be unique. Being able to encode the powers of moments implies thate.g. the variance of xcan be expressed as the difference 1=NPix2i(1=NPixi)2of two LAFaggregators.Since LAF includes sum-aggregation, we can adapt the results of Zaheer et al. (2017) and Wagstaffet al. (2019) on the theoretical universality of sum-aggregation as follows.2Under review as a conference paper at ICLR 2021Proposition 1 LetXRbe countable, and fa function defined on finite multisets with elementsfromX. Then there exist functions :X ! [0;1],:R!R, and a parameterization of LAF ,such thatf(x) =(LAF (x);;;;;a;b;c;d ), wherexis the multisetf(x)jx2xg.A proof in Wagstaff et al. (2019) for a very similar proposition used a mapping from Xinto the reals.Our requirement that LAF inputs must be in [0;1]requires a modification of the proof (containedin the supplementary material), which for the definition of relies on a randomized construction.Proposition 1 shows that we retain the theoretical universality guarantees of Zaheer et al. (2017),while enabling a wider range of solutions based on continuous encoding and decoding functions.Figure 1: LAF functions with randomly generated parametersIt should be emphasized at this point that the primary purpose of LAF is not to provide a uniformrepresentation of different standard aggregators as displayed in Table 1, but to enable a continuum ofintermediate and hybrid aggregators. Figure 1 shows the graphs of 4 different randomly generatedLAF functions over the unit square [0;1][0;1], i.e., evaluated over sets of size 2. Parameters;:::; were randomly sampled in the interval [0;1]; parameters b;d;f;h are randomly sampledfrom the integers 0;:::; 5, anda;c;e;g are obtained as 1=iwithia random integer from 0;:::; 5.The figure illustrates the rich repertoire of aggregation functions with different qualitative behaviorsalready for non-extreme parameter values.2.1 LAF A RCHITECTURELAF can be easily used as a module of a larger architecture suitable for learning on sets. 
Sev-eral LAF units can be combined as shown in Figure 2, to capture different aspects of the in-put set, which can be in general a set of vectors x=fx1;:::;x Ngwherexi2Rd. Notethat multiple aggregators are also used in related frameworks such as DeepSets (Zaheer et al.,2017) or Graph Neural Networks (Veli ˇckovi ́c et al., 2018; Corso et al., 2020). A module withrLAF units takes as input d-dimensional vectors and produces a vector of size rdas out-put. Each LAF unit performs an element-wise aggregation of the vectors in the set such thatLk;j= LAF(fxi;j;:::;x N;jg;k;k;k;k;ak;bk;ck;dk)fork= 1;:::;r andj= 1;:::;d .The output vector can be then fed into the next layer.3 R ELATED WORKSeveral studies address the problem of aggregating data over sets. Sum-decomposition strategieshave been used in (Zaheer et al., 2017) for points cloud classification and set expansion tasks andin (Santoro et al., 2017) for question answering and dynamic physical systems computation. Max,sum and average are standard aggregation functions for node neighborhoods in graph neural net-works (Hamilton et al., 2017; Kipf & Welling, 2017; Xu et al., 2019; Veli ˇckovi ́c et al., 2018). Zaheeret al. (2017) first proved universal representation results for these standard aggregators when com-bined with learned mappings over inputs and results of the aggregation. However, Wagstaff et al.(2019) showed that these universality results are of little practical use, as they either require highlydiscontinuous mappings that would be extremely difficult to learn, or a latent dimension that is atleast the size of the maximum number of input elements.Uninorms (Yager & Rybalov, 1996) are a class of aggregation functions in fuzzy logic that canbehave in a conjunctive ,disjunctive oraveraging manner depending on a parameter called neutralelement . Melnikov & Hüllermeier (2016) proposed to learn fuzzy aggregators by adjusting theselearnable parameters, showing promising results on combining reviewers scores on papers into an3Under review as a conference paper at ICLR 2021Figure 2: End-to-end LAF architecture.overall decision of acceptance or reject. Despite the advantage of incorporating different behavioursin one single function, uninorms present discontinuities in the regions between aggregators, mak-ing them not amenable to be utilized in fully differentiable frameworks. Furthermore the range ofpossible behaviours is restricted to those commonly used in the context of fuzzy-logic.The need for considering multiple candidate aggregators is advocated in a very recent work thatwas developed in parallel with our framework (Corso et al., 2020). The resulting architecture,termed Principal Neighborhood Aggregation (PNA) combines multiple standard aggregators, in-cluding most of the ones we consider in the LAF framework, adjusting their outputs with degreescalers. However, the underlying philosophy is rather different. PNA aims at learning to select theappropriate aggregator(s) from a pool of candidates, while LAF explores a continuous space of ag-gregators that includes standard ones as extreme cases. Our experimental evaluation shows that PNAhas troubles in learning aggregators that generalize over set sizes, despite having them in the poolof candidates, likely because of the quasi-combinatorial structure of its search space. 
On the other hand, LAF can successfully learn even the higher-moment aggregators, and consistently outperforms PNA.

Closely connected, but somewhat complementary to aggregation operators, are attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017). They have been explored to manipulate set data in Lee et al. (2019) and in the context of multi-instance learning (Ilse et al., 2018). Attention operates at the level of set elements, and aims at a transformation (weighting) of their representations so as to optimize a subsequent weighted sum-aggregation. While the objectives of attention-based frameworks and LAF partially overlap, they are functionally quite different. Exploring combinations of LAF with attention mechanisms is a possible subject of future work.

4 EXPERIMENTS

In this section, we present and discuss experimental results showing the potential of the LAF framework on both synthetic and real-world tasks.¹ The synthetic experiments are aimed at showing the ability of LAF to learn a wide range of aggregators and to generalize over set sizes (i.e., having test sets whose cardinality exceeds the cardinality of the training sets), something that alternative architectures based on predefined aggregators fail to achieve. We use DeepSets, PNA, and LSTM as representatives of these architectures; the LSTM architecture corresponds to a version of DeepSets where the aggregation function is replaced by an LSTM layer. Experiments on diverse tasks, including point cloud classification, text concept set retrieval and graph property prediction, are aimed at showing the potential of the framework in real-world applications.

¹The source code is now available in the supplementary material.

4.1 EXPERIMENTS ON SCALARS

[Figure 3: Test performances for the synthetic experiment with integer scalars on increasing test set size. The x axis of each panel is the maximum test set cardinality; the y axis is the MAE. The dot, star, diamond and triangle symbols denote LAF, DeepSets, PNA, and LSTM respectively. Panels: count, sum, max, mean, min, max − min, inverse count, median, min/max, variance, skewness, kurtosis.]

This section shows the capacity of the LAF framework to learn simple and complex aggregation functions when the constituents of the sets are simple numerical values. In this setting we consider sets of scalar integer values. The training set is constructed as follows: for each set, we first sample its cardinality K uniformly from {2, ..., M}, and then uniformly sample K integers in 0, ..., 9. For the training set we use M = 10. We construct several test sets for different values of M (M = 5, 10, 15, 20, 25, 30, 35, 40, 45, 50), which means that models need to generalize to larger set sizes. Contrarily to the training set, each test set is constructed so as to diversify the target labels it contains, to avoid degenerate behaviours for large set sizes (e.g., the maximum constantly equal to 9); a small generation sketch follows.
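As a purely illustrative sketch of this sampling procedure (the function names and the use of Python's random module are our own assumptions):

```python
import random

def make_set(max_card=10):
    """One example: K ~ Uniform{2, ..., max_card}, then K digits in 0..9."""
    k = random.randint(2, max_card)
    return [random.randint(0, 9) for _ in range(k)]

def targets(values):
    """A few of the target aggregators used in the experiments."""
    n = len(values)
    mean = sum(values) / n
    return {
        "count": n,
        "sum": sum(values),
        "max": max(values),
        "inverse count": 1.0 / n,
        "variance": sum(v * v for v in values) / n - mean ** 2,
    }
```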
Each synthetic dataset is composed of 100,000 sets for training, 20,000 sets for validation and 100,000 for testing. The number of aggregation units is set as follows. The model contains nine LAF units (Equation 2), whose parameters {a_k, ..., h_k}, k = 1, ..., 9, are initialized by uniform sampling in [0, 1], as those parameters must be positive, whereas the coefficients {α_k, ..., δ_k} are initialized from a Gaussian distribution with zero mean and standard deviation 0.01, so as to also cover negative values. The positivity constraint on the parameters {a, b, ..., h} is enforced by projection during the optimization process; the remaining parameters can take on negative values. DeepSets also uses nine units: three max units, three sum units, and three mean units. PNA uses seven units: mean, max, sum, standard deviation, variance, skewness and kurtosis. Preliminary experiments showed that expanding the set of aggregators for PNA with higher-order moments only leads to worse performance. Each set of integers is fed into an embedding layer (followed by a sigmoid) before the aggregation function is applied. DeepSets and PNA do need an embedding layer (otherwise they would have no parameters to tune); although LAF does not need one, we used it in all models to make the comparison more uniform. The architecture details are reported in the supplementary material. We use the Mean Absolute Error (MAE) as the loss function to measure the prediction error.

Figure 3 shows the trend of the MAE for the methods on increasing test set sizes, for different types of target aggregators. As expected, DeepSets manages to learn the identity function and thus correctly models aggregators like sum, max and mean. Even if LAF needs to adjust its parameters in order to properly aggregate the data, its performance is competitive with that of DeepSets. When moving to more complex aggregators like inverse count, median or moments of different orders, DeepSets fails to learn the latent representation, whereas the performance of LAF is very stable for growing set sizes. While having in principle most of the target aggregators at its disposal (including higher-order moments), PNA badly overfits the cardinality of the sets in the training set in all cases (recall that the training set contains sets of cardinality at most 10). The reason why LAF substantially outperforms PNA on large set sizes could be explained in terms of a greater flexibility to adapt to the learnt representation. Indeed, the LAF parameters can adjust the LAF function to be compliant with the latent representation even if the input mapping fails to learn the identity.
On the other hand, having a bunch of fixed, hard-coded aggregators, PNA needsto be able to both learn the identity mapping and select the correct aggregator among the candidates.Finally, LSTM exhibits generally poor results when compared to the other methods, particularly inthe case of the count and the sum.4.2 MNIST DIGITSIn this section, we modify the previous experimental setting to process MNIST images of digits. Thedataset is the same as in the experiment on scalars, but integers are replaced by randomly samplingMNIST images for the same digits. Instances for the training and test sets are drawn from theMNIST training and test sets, respectively. This experiment aims to demonstrate the ability of LAFto learn from more complex representations of the data by plugging it into end-to-end differentiablearchitectures. Contrarily to the model of the previous section, here we use three dense layers forlearning picture representations before performing the aggregation function. The architecture detailsare reported in the supplementary material.Figure 4 shows the comparison of LAF, DeepSets, PNA, and LSTM in this setting. Results arequite similar to those achieved in the scalar setting, indicating that LAF is capable of effectivelybackpropagating information so as to drive the learning of an appropriate latent representation, whileDeepSets, PNA, and LSTM suffer from the same problems seen in aggregating scalars.Furthermore, Figure 5 provides a qualitative evaluation of the predictions of the LAF, DeepSets,and PNA methods on a representative subset of the target aggregators. The images illustrate thecorrelation between the true labels and the predictions. LAF predictions are distributed over thediagonal line, with no clear bias. On the other hand, DeepSets and PNA perform generally worsethan LAF, exhibiting higher variances. In particular, for inverse count and kurtosis, DeepSets andPNA predictions are condensed in a specific area, suggesting an overfitting on the training set.4.3 P OINT CLOUDIn order to evaluate LAF on real-world dataset, we consider point cloud classification, a prototypetask for set-wise prediction. Therefore, we run experimental comparisons on the ModelNet40 (Wuet al., 2015) dataset, which consists of 9,843 training and 2,468 test point clouds of objects dis-tributed over 40 classes. The dataset is preprocessed following the same procedure described by Za-heer et al. (2017). We create point clouds of 100 and 1,000 three-dimensional points by adoptingthe point-cloud library’s sampling routine developed by Rusu & Cousins (2011) and normalizingeach set of points to have zero mean (along each axis) and unit (global) variance. We refer with6Under review as a conference paper at ICLR 2021Figure 5: Scatter plots of the MNIST experiment comparing true (x axis) and predicted (y axis)values with 50 as maximum test set size. The target aggregations are max (up-left), inverse count(up-right), median (bottom-left) and kurtosis (bottom-right).P100 and P1000 to the two datasets. For all the settings, we consider the same architecture andhyper-parameters of the DeepSets permutation invariant model described by Zaheer et al. (2017).For LAF, we replace the original aggregation function (max) used in DeepSets with 10 LAF units,while for PNA we use the concatenation of max, min, mean, and standard deviation, as proposed bythe authors. 
For PNA we do not consider any scaler, as the cardinalities of the sets are fixed. Results in Table 2 show that LAF produces an advantage on the lower-resolution dataset (P100), while it obtains comparable (and slightly more stable) performance on the higher-resolution one (P1000). These results suggest that having predefined aggregators is not necessarily an optimal choice in real-world cases, and that the flexibility of LAF in modeling diverse aggregation functions can boost performance and stability.

Table 2: Results on the point cloud classification task. Accuracies with standard deviations (calculated on 5 runs) for the ModelNet40 dataset.

METHOD      P100            P1000
DEEPSETS    82.0 ± 2.0%     87.0 ± 1.0%
PNA         82.9 ± 0.7%     86.4 ± 0.6%
LSTM        78.7 ± 1.1%     82.2 ± 1.7%
LAF         84.0 ± 0.6%     87.0 ± 0.5%

4.4 SET EXPANSION

Following the experimental setup of DeepSets, we also consider the set expansion task, in which the aim is to augment a set of objects of the same class with other similar objects, as explained in (Zaheer et al., 2017). The model learns to predict a score for an object given a query set, and decides whether to add the object to the existing set. Specifically, Zaheer et al. (2017) consider the application of set expansion to text concept retrieval: the idea is to retrieve words that belong to a particular concept, given as input a set of words sharing that concept. We employ the same model and hyper-parameters as the original publication, replacing the sum-decomposition aggregation with LAF units for our method, and with the min, max, mean, and standard deviation aggregators for PNA.

We trained our model on sets constructed from vocabularies of different sizes, namely LDA-1K, LDA-3K and LDA-5K. Table 3 reports the results of LAF, DeepSets and PNA on the retrieval metrics recall@K, median rank and mean reciprocal rank, together with the results of the other methods the authors compared to in the original paper.

Table 3: Results on text concept set retrieval on LDA-1k (vocab = 17k), LDA-3k (vocab = 38k) and LDA-5k (vocab = 61k). For each dataset, the columns are recall@10 / @100 / @1K (%), MRR, and median rank.

METHOD       LDA-1k                          LDA-3k                          LDA-5k
RANDOM       0.06 / 0.6 / 5.9, 0.001, 8520   0.02 / 0.2 / 2.6, 0.000, 28635  0.01 / 0.2 / 1.6, 0.000, 30600
BAYES SET    1.69 / 11.9 / 37.2, 0.007, 2848 2.01 / 14.5 / 36.5, 0.008, 3234 1.75 / 12.5 / 34.5, 0.007, 3590
W2V-NEAR     6.00 / 28.1 / 54.7, 0.021, 641  4.80 / 21.2 / 43.2, 0.016, 2054 4.03 / 16.7 / 35.2, 0.013, 6900
NN-MAX       4.78 / 22.5 / 53.1, 0.023, 779  5.30 / 24.9 / 54.8, 0.025, 672  4.72 / 21.4 / 47.0, 0.022, 1320
NN-SUM-CON   4.58 / 19.8 / 48.5, 0.021, 1110 5.81 / 27.2 / 60.0, 0.027, 453  4.87 / 23.5 / 53.9, 0.022, 731
NN-MAX-CON   3.36 / 16.9 / 46.6, 0.018, 1250 5.61 / 25.7 / 57.5, 0.026, 570  4.72 / 22.0 / 51.8, 0.022, 877
DEEPSETS     5.53 / 24.2 / 54.3, 0.025, 696  6.04 / 28.5 / 60.7, 0.027, 426  5.54 / 26.1 / 55.5, 0.026, 616
DEEPSETS*    5.89 / 26.0 / 55.3, 0.026, 619  7.56 / 28.5 / 64.0, 0.035, 349  6.49 / 27.9 / 56.9, 0.030, 536
PNA          5.56 / 24.7 / 53.2, 0.027, 753  7.04 / 27.2 / 58.7, 0.028, 502  5.47 / 23.8 / 52.4, 0.025, 807
LSTM         4.29 / 21.5 / 52.6, 0.022, 690  5.56 / 25.7 / 58.8, 0.026, 830  4.87 / 23.8 / 55.0, 0.022, 672
LAF          6.51 / 26.6 / 54.5, 0.030, 650  8.14 / 32.3 / 62.8, 0.037, 339  6.71 / 28.3 / 56.9, 0.031, 523

More details on the other methods in the table can be found in the original publication.
Briefly: Random samples a word uniformly from the vocabulary; Bayes Set is the method of Ghahramani & Heller (2006); w2v-Near computes the nearest neighbors in the word2vec (Mikolov et al., 2013) space; NN-max uses an architecture similar to our DeepSets baseline but uses max pooling instead of sum pooling to compute the set feature; NN-max-con uses max pooling on the set elements but concatenates this pooled representation with that of the query to obtain the final set feature; NN-sum-con is similar to NN-max-con but uses sum pooling followed by concatenation with the query representation. For the sake of fairness, we have rerun DeepSets using the current implementation from the authors (indicated as DeepSets* in Table 3), which exhibits better results than the ones reported in the original paper. Nonetheless, LAF outperforms all other methods in most cases, especially on LDA-3K and LDA-5K.

4.5 MULTI-TASK GRAPH PROPERTIES

Corso et al. (2020) define a benchmark consisting of 6 classical graph-theory tasks on artificially generated graphs from a wide range of popular graph types, like Erdős-Rényi, Barabási-Albert or star-shaped graphs. Three of the tasks are defined on nodes, the other three on whole graphs. The node tasks are the single-source shortest-path lengths (N1), the eccentricity (N2) and the Laplacian features (N3); the graph tasks are graph connectivity (G1), diameter (G2), and spectral radius (G3). For more details about the experimental settings, please refer to Corso et al. (2020).

Table 4: Results on the multi-task graph property prediction benchmark, expressed as log10 of the mean squared error.

METHOD             N1      N2      N3      G1      G2      G3
BASELINE          -1.87   -1.50   -1.60   -0.62   -1.30   -1.41
GIN               -2.00   -1.90   -1.60   -1.61   -2.17   -2.66
GCN               -2.16   -1.89   -1.60   -1.69   -2.14   -2.79
GAT               -2.34   -2.09   -1.60   -2.44   -2.40   -2.70
MPNN (MAX)        -2.33   -2.26   -2.37   -1.82   -2.69   -3.52
MPNN (SUM)        -2.36   -2.16   -2.59   -2.54   -2.67   -2.87
PNA (NO SCALERS)  -2.54   -2.42   -2.94   -2.61   -2.82   -3.29
PNA               -2.89   -2.89   -3.77   -2.61   -3.04   -3.57
LAF               -2.13   -2.20   -1.67   -2.35   -2.77   -3.63

We compare LAF against PNA by simply replacing the original PNA aggregators and scalers with 100 LAF units (see Equation 2). Table 4 shows that, although these datasets were designed to highlight the features of the PNA architecture, which outperforms a wide range of alternative graph neural network approaches on them, LAF produces competitive results: it outperforms state-of-the-art GNN approaches like GIN (Xu et al., 2019), GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018), and even improves over PNA on spectral radius prediction.

5 CONCLUSIONS

The theoretical underpinnings of sum-aggregation as a universal framework for defining set functions do not necessarily provide a template for practical solutions. We therefore introduced LAF, a framework for learning aggregation functions that makes use of a parametric aggregator to effectively explore a rich space of possible aggregations. LAF defines a new class of aggregation functions, which includes widely used aggregators as special cases, and has the ability to learn complex functions such as higher-order moments. We empirically showed the generalization ability of our method in synthetic settings as well as on real-world datasets, providing comparisons with state-of-the-art sum-decomposition approaches and recently introduced techniques.
The flexibility of our model is a crucial aspect for potential practical use in many deep learning architectures, due to its ability to be easily plugged into and learned in end-to-end architectures. The portability of LAF opens a new range of possible applications for aggregation functions in machine learning methods, and future research in this direction can enhance the expressivity of many architectures and models that deal with unstructured data.<|im_end|> <|im_start|>assistant ### Review Title A novel method for aggregating the information from sets ### Review Text Summary: Universal function representation guarantees require either highly discontinuous mappings or a high-dimensional latent space. For this reason the authors propose a new parametric family of aggregation functions, called LAF (for learning aggregation functions). It can be seen as a smooth version of the class of functions shown in DeepSets. The LAF aggregator can learn all standard aggregation functions. Moreover, in experiments the authors show that LAF surpasses other aggregation methods. ============================================================================= Pros: 1. The authors show that all standard aggregation functions are achievable by varying the parameters in the formulation of LAF. Moreover, LAF enables a neural network to use a continuum of intermediate and hybrid aggregators. 2. A comprehensive ablation study compares LAF to DeepSets and PNA on digits and MNIST images, where the goal is to learn different types of target aggregation. The results show that LAF can learn all the given types of aggregation and generalizes well to the test set size (and thus does not overfit to the size of the training sets, as the other methods do). 3. The authors provide an extensive set of experiments on a wide range of datasets, including point clouds, set expansion and graph properties. On most of the given tasks LAF is superior to the other methods. ============================================================================= Cons: Not all the details about the LAF aggregation are clear to me. The authors should consider rewriting Section 2 (the description of the aggregation), considering the points I list below. 1. The manuscript states that LAF uses the tunable parameters a,...,h and alpha,...,delta; however, it does not show how to initialize these parameters, nor whether the model is sensitive to these values. 2. The authors state that the tunable parameters a,...,h are greater than or equal to zero. However, they do not show how this condition is achieved: whether they use an exponent, a non-linearity, or another mechanism. 3. In the definition of the LAF aggregator, the authors state that x should be a real number; however, looking at the experiments, it seems that it can also be applied to vectors. Please correct me if I am wrong, but if so, are a,...,h and alpha,...,delta still scalars in this situation, or are they vectors? 4. A more detailed description of the constructed networks should be included (even in the appendix). It is not clear to me how the authors build their networks. E.g., in Section 4.1 they state that 'The LAF model contains nine LAF(x) aggregation functions' - does this mean that in the final aggregation layer you create 9 independently working LAF aggregators and then concatenate them and pass the result to a final prediction layer?
If yes, then what about the situation where the information is aggregated from vectors? Do you still use concatenation? ============================================================================= Questions during the rebuttal period: 1. Why is Section 5 (Multi-task graph properties) not a sub-section of Section 4 (Experiments)? 2. Using a sigmoid can often disturb the training of a neural network. Could you run an experiment (on a real dataset) in which you remove the sigmoid as well as the parts with the 1−x form from the LAF aggregator? 3. Could you run some experiments with LAF as the final aggregator of a neural network? More specifically, could you, for example, use LAF instead of mean average pooling in image classification with a ResNet, or in some text classification task? ============================================================================= Reasons for score: I vote for accepting this paper. The idea proposed by the authors is novel and elegant. Moreover, experiments show that the proposed model is superior to the other models it has been compared to. My major concern is the clarity of the paper. Hopefully the authors can address my concern during the rebuttal period. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
0BaWDGvCa5p
ICLR.cc/2021/Conference
2021
A Provably Convergent and Practical Algorithm for Min-Max Optimization with Applications to GANs
["Oren Mangoubi", "Sushant Sachdeva", "Nisheeth K Vishnoi"]
We present a first-order algorithm for nonconvex-nonconcave min-max optimization problems such as those that arise in training GANs. Our algorithm provably converges in $\mathrm{poly}(d,L, b)$ steps for any loss function $f:\mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ which is $b$-bounded with ${L}$-Lipschitz gradient. To achieve convergence, we 1) give a novel approximation to the global strategy of the max-player based on first-order algorithms such as gradient ascent, and 2) empower the min-player to look ahead and simulate the max-player’s response for arbitrarily many steps, but restrict the min-player to move according to updates sampled from a stochastic gradient oracle. Our algorithm, when used to train GANs on synthetic and real-world datasets, does not cycle, results in GANs that seem to avoid mode collapse, and achieves a training time per iteration and memory requirement similar to gradient descent-ascent.
["min-max optimization", "GANs"]
ABSTRACT

We present a first-order algorithm for nonconvex-nonconcave min-max optimization problems such as those that arise in training GANs. Our algorithm provably converges in poly(d, L, b) steps for any loss function f: ℝ^d × ℝ^d → ℝ which is b-bounded with L-Lipschitz gradient. To achieve convergence, we 1) give a novel approximation to the global strategy of the max-player based on first-order algorithms such as gradient ascent, and 2) empower the min-player to look ahead and simulate the max-player's response for arbitrarily many steps, but restrict the min-player to move according to updates sampled from a stochastic gradient oracle. Our algorithm, when used to train GANs on synthetic and real-world datasets, does not cycle, results in GANs that seem to avoid mode collapse, and achieves a training time per iteration and memory requirement similar to gradient descent-ascent.

1 INTRODUCTION

We consider the problem of min-max optimization min_{x∈ℝ^d} max_{y∈ℝ^d} f(x, y), where the loss function f may be nonconvex in x and nonconcave in y. Min-max optimization of such loss functions has many applications to machine learning, including to GANs (Goodfellow et al., 2014) and adversarial training (Madry et al., 2018). In particular, following Goodfellow et al. (2014), GAN training can be formulated as a min-max optimization problem where x encodes the parameters of a "generator" network, and y encodes the parameters of a "discriminator" network. Unlike standard minimization problems, the min-max nature of GANs makes them particularly difficult to train (Goodfellow, 2017), and this has received wide attention. A common algorithm to solve these min-max optimization problems, gradient descent ascent (GDA), alternates between stochastic gradient descent steps for x and ascent steps for y.¹ The advantage of GDA is that it requires only first-order access to f, and each iteration is efficient in terms of memory and time, making it quite practical. However, as many works have observed, GDA can suffer from issues such as cycling (Arjovsky & Bottou, 2017) and "mode collapse" (Dumoulin et al., 2017; Che et al., 2017; Santurkar et al., 2018).

Several recent works have focused on finding convergent first-order algorithms for min-max optimization (Rafique et al., 2018; Daskalakis et al., 2018; Liang & Stokes, 2019; Gidel et al., 2019b; Mertikopoulos et al., 2019; Nouiehed et al., 2019; Lu et al., 2020; Lin et al., 2020; Mokhtari et al., 2019; Thekumparampil et al., 2019; Mokhtari et al., 2020). However, these algorithms are also not guaranteed to converge for general nonconvex-nonconcave min-max problems. The challenge is that min-max optimization generalizes nonconvex minimization, which, in general, is intractable. Algorithms for nonconvex minimization resort to finding "local" optima or assume a starting point "close" to a global optimum. However, unlike minimization problems, where local notions of optima exist (Nesterov & Polyak, 2006), it has been challenging to define a notion of convergent points for min-max optimization, and most notions of local optima considered in previous works (Daskalakis & Panageas, 2018; Jin et al., 2020; Fiez et al., 2019) require significant restrictions for existence.

Our contributions.
Our main result is a new first-order algorithm for min-max optimization (Algorithm 1) that, for any ε > 0, any nonconvex-nonconcave loss function, and any starting point, converges in poly(d, L, b, 1/ε) steps if f is b-bounded with L-Lipschitz gradient (Theorem 2.3).

¹In practice, gradient steps are often replaced by ADAM steps; we ignore this distinction for this discussion.

A key ingredient in our result is an approximation to the global max function max_{z∈ℝ^d} f(x, z). Unlike GDA and related algorithms that alternate between updating the discriminator and generator in an incremental fashion, our algorithm lets the discriminator run a convergent algorithm (such as gradient ascent) until it reaches a first-order stationary point. We then empower the generator to simulate the discriminator's response for arbitrarily many gradient ascent updates. Roughly, at each iteration of our algorithm, the min-player proposes a stochastic (batch) gradient update for x and simulates the response of the max-player with gradient ascent steps for y until it reaches a first-order stationary point. If the resulting loss has decreased, the updates for x and y are accepted; otherwise they are only accepted with a small probability (à la simulated annealing).

The point (x*, y*) returned by our algorithm satisfies the following guarantee: if the min-player proposes a stochastic gradient descent update to x*, and the max-player is allowed to respond by updating y* using any "path" that increases the loss at a rate of at least ε, then with high probability the final loss cannot decrease by more than ε. See Section 2 for our convergence guarantees, Section 4 for the key ideas in our proof, and Appendix C for a comparison to previous notions of convergence.

Empirically, we apply our algorithm to training GANs (with the cross-entropy loss) on both synthetic (mixture of Gaussians) and real-world (MNIST and CIFAR-10) datasets (Section 3). We compare our algorithm's performance against two related algorithms: gradient/ADAM descent ascent (with one or multiple discriminator steps), and Unrolled GANs (Metz et al., 2017). Our simulations with MNIST (Figure 1) and the mixture of Gaussians (Figure 2) indicate that training GANs using our algorithm can avoid mode collapse and cycling. For instance, on the Gaussian mixture dataset, we found that by around the 1500'th iteration GDA had learned only one mode in 100% of the runs, and cycled between multiple modes. In contrast, our algorithm learned all four modes in 68% of the runs, and three modes in 26% of the runs. On 0-1 MNIST, we found that GDA tends to briefly generate shapes that look like a combination of 0's and 1's, then switches between generating only 1's and only 0's. In contrast, our algorithm seems to learn to generate both 0's and 1's early on and does not stop generating either digit. GANs trained using our algorithm generated both digits by the 1000'th iteration in 86% of the runs, while those trained using GDA only did so in 23% of the runs. Our CIFAR-10 simulations (Figure 3) indicate that our algorithm trains more stably, resulting in a lower mean and standard deviation of FID scores compared to GDA. Furthermore, the per-step computational and memory cost of our algorithm is similar to GDA, indicating that our algorithm can scale to larger datasets.

Related work

Guaranteed convergence for min-max optimization.
Several works have studied GDA dynamics in GANs (Nagarajan & Kolter, 2017; Mescheder et al., 2017; Li et al., 2018; Balduzzi et al., 2018; Daskalakis & Panageas, 2018; Jin et al., 2020) and established that GDA suffers from severe limitations: GDA can exhibit rotation around some points, or otherwise fail to converge. Thus, we cannot expect global convergence guarantees for GDA. To address these convergence issues, multiple works have proposed algorithms based on Optimistic Mirror Descent (OMD), the extra-gradient method, or similar approaches (Gidel et al., 2019b; Daskalakis et al., 2018; Liang & Stokes, 2019; Daskalakis & Panageas, 2019; Mokhtari et al., 2019; 2020). These algorithms avoid some of the pathological behaviors of GDA and achieve guaranteed convergence in poly(κ, log(1/ε)) iterations, where κ is the condition number of f. However, all these results either require convexity/concavity assumptions on f, which usually do not hold for GANs, or require that the starting point lie in a small region around an equilibrium point, and hence provide no guarantees for an arbitrary initialization. Some works also provide convergence guarantees for min-max optimization (Nemirovski & Yudin, 1978; Kinderlehrer & Stampacchia, 1980; Nemirovski, 2004; Rafique et al., 2018; Lu et al., 2020; Lin et al., 2020; Nouiehed et al., 2019; Thekumparampil et al., 2019), but they require f to be concave in y, again limiting their applicability.

As for nonconvex-nonconcave min-max optimization, Heusel et al. (2017) prove convergence of finite-step GDA under the assumption that the underlying continuous dynamics converge to a local min-max optimum (an assumption that may not even hold for f that is bi-linear). Jin et al. (2020) present a version of GDA for min-max optimization (generalized by Fiez et al. (2019)) such that, if the algorithm converges, the convergence point is a local min-max optimum. Both these results require that the min-player use a vanishingly small step size relative to the max-player, resulting in slow convergence. Wang et al. (2020) present an algorithm that can converge for nonconvex-nonconcave functions, but requires the initial point to lie in a region close to a local min-max optimum (such optima are not guaranteed to exist). In contrast to the above works, our algorithm is guaranteed to converge for any nonconvex-nonconcave loss, from any starting point, in poly(d, L, b, 1/ε) steps, if f is b-bounded with L-Lipschitz gradient.

Greedy paths. The paths along which the max-player is allowed to make updates in our equilibrium definition are inspired by the work of Mangoubi & Vishnoi (2020), which gives a second-order algorithm for min-max optimization. The "greedy paths" considered in their work are defined such that, at every point along these paths, f is non-decreasing, and the first derivative of f is at least ε or the second derivative is at least √ε. In contrast, we just require a condition on the first derivative of f along the path. This distinction gives rise to a different notion of equilibrium than the one presented in their work. The first-order condition on the paths crucially also makes our algorithm applicable to machine learning settings where only first-order oracles are available, because, unlike Mangoubi & Vishnoi (2020), traversing such a path only requires first-order access to f.

Training GANs. Starting with Goodfellow et al. (2014), there has been considerable work on developing algorithms to train GANs.
One line of work focuses on modifying the loss to improve convergence (Arjovsky et al., 2017; Bellemare et al., 2017; Lim & Ye, 2017; Mao et al., 2017; Salimans et al., 2018; Metz et al., 2017). Another line of work regularizes the discriminator using gradient penalties or spectral normalization (Gulrajani et al., 2017; Kodali et al., 2017; Miyato et al., 2018). Metz et al. (2017) introduced Unrolled GANs, where the generator optimizes an "unrolled" loss function that allows it to simulate a fixed number of discriminator updates. While this has some similarity to our algorithm, there are two important distinctions: 1) the discriminator in Unrolled GANs may not reach a first-order stationary point, and hence their algorithm does not come with any convergence guarantees, and 2) unlike our algorithm, the implementation of the generator in Unrolled GANs requires memory that grows with the number of discriminator steps, limiting its scalability. We observe that our algorithm, applied to training GANs, trains stably and avoids mode collapse, while achieving a training time per iteration and memory requirements that are similar to GDA, and much lower than Unrolled GANs (Metz et al., 2017) (see also the discussion in Section 5).

Remark 1.1 (Loss functions in GANs). Loss functions f: ℝ^d × ℝ^d → ℝ which take bounded values on ℝ^d × ℝ^d arise in many GAN applications. For instance, GANs with the mean-squared error loss (Mao et al., 2017) have uniformly bounded f. GANs with the cross-entropy loss (Goodfellow et al., 2014) have f uniformly bounded above, and Wasserstein GANs (Arjovsky et al., 2017) have a loss function f(x, y) which is bounded above as a function of y and is uniformly bounded below.

2 THEORETICAL RESULTS

We consider the problem min_x max_y f(x, y), where x, y ∈ ℝ^d and f is a function ℝ^d × ℝ^d → ℝ. We consider f that is an empirical risk loss over m training examples; thus f = (1/m) Σ_{i∈[m]} f_i, and we are given access to f via a randomized oracle F such that E[F] = f. We call such an oracle a stochastic zeroth-order oracle for f. We are also given randomized oracles G_x and G_y for ∇_x f and ∇_y f, such that E[G_x] = ∇_x f and E[G_y] = ∇_y f; we call these stochastic gradient oracles for f. In practice, these oracles are computed by randomly sampling a "batch" B ⊆ [m] and returning F = (1/|B|) Σ_{i∈B} f_i, G_x = (1/|B|) Σ_{i∈B} ∇_x f_i, and G_y = (1/|B|) Σ_{i∈B} ∇_y f_i.

For our convergence guarantees, we require bounds on standard smoothness parameters of the functions f_i: a bound b such that |f_i(x, y)| ≤ b for all i and all x, y, and a bound L such that ||∇f_i(x, y) − ∇f_i(x', y')||₂ ≤ L||x − x'||₂ + L||y − y'||₂. Such smoothness/Lipschitz bounds are standard in convergence guarantees for optimization algorithms (Bubeck, 2017; Nesterov & Polyak, 2006; Ge et al., 2015), and imply that f is also continuous, b-bounded, and L-gradient-Lipschitz.

Our algorithm is described informally in Algorithm 1 and formally as Algorithm 2 in the Appendix.

Intuition for the algorithm. To solve the min-max problem, the max-player would ideally find the global maximum max_z f(x, z). However, since f may be nonconcave in y, finding the global maximum may be computationally intractable. To get around this problem, roughly speaking, in our algorithm the max-player computes its update by running gradient ascent until it reaches a first-order ε-stationary point y', that is, a point where ||∇_y f(x, y')|| ≤ ε. This allows our algorithm to compute an approximation L_ε(x, y) = f(x, y') to the global maximum.
(Note that even though max_z f(x, z) is only a function of x, L_ε may depend on both x and the initial point y.)

Algorithm 1: Algorithm for min-max optimization
input: a stochastic zeroth-order oracle F for the loss function f: ℝ^d × ℝ^d → ℝ, stochastic gradient oracles G_x for ∇_x f and G_y for ∇_y f, an initial point (x, y), and an error parameter ε.
output: a point (x*, y*).
hyperparameters: r_max (maximum number of rejections); β₁ (annealing hyperparameter); ADAM hyperparameters.
  Set r ← 0, i ← 0.
  while r ≤ r_max do
    f_old ← F(x, y); i ← i + 1
    G_x ← G_x(x, y)   {compute a stochastic gradient}
    Use the stochastic gradient G_x to compute a one-step ADAM update Δ for x.
    Set x' ← x + Δ   {the proposed update for the min-player}
    Starting at y, run multiple ADAM steps in the y-variable using stochastic gradients G_y(x', ·), until a point y' is reached with ||G_y(x', y')||_∞ ≤ ε   {simulate the max-player's update}
    Set f_new ← F(x', y')   {compute the new loss value}
    Set Accept ← True; if f_new > f_old − ε/2, set Accept ← False with probability max(0, 1 − e^{−i/β₁})   {accept or reject}
    if Accept = True then set x ← x', y ← y', r ← 0   {accept the updates}
    else set r ← r + 1   {reject the updates, and track how many successive steps were rejected}
  return (x, y)

We would like the min-player to minimize L_ε(x, y). Ideally, the min-player would make updates in the direction −∇_x L_ε. However, L_ε(x, y) may not be differentiable, and may even be discontinuous in x (see Section 2.2 for an example), making it challenging to optimize. Moreover, even at points where L_ε is differentiable, computing ∇_x L_ε may require memory proportional to the number of max-player steps used to compute L_ε (for instance, this is the case for Unrolled GANs (Metz et al., 2017)). For this reason, we only provide our min-player with access to the value of L_ε.

One approach to minimizing L_ε would be a zeroth-order optimization procedure in which the min-player proposes a random update Δ to x, and the update is accepted only if it results in a decrease in L_ε. At each iteration of our algorithm, the min-player proposes an update Δ roughly in the direction of −∇_x f(x, y). To motivate this choice, note that once the min-player proposes an update Δ to x, the max-player's updates will only increase f, i.e., L_ε(x + Δ, y) ≥ f(x + Δ, y). Moreover, since y is a first-order stationary point of f(x, ·) (because y was computed using gradient ascent in the previous iteration), we also have L_ε(x, y) = f(x, y). Therefore, we want an update Δ such that

f(x + Δ, y) ≤ L_ε(x + Δ, y) ≤ L_ε(x, y) = f(x, y),  (1)

which implies that any proposed step which decreases L_ε must also decrease f (although the converse is not true). This motivates proposing steps in the direction of the (negative) gradient of f with respect to x.

Unfortunately, such updates do not necessarily decrease L_ε. Our algorithm instead has the min-player perform a random search by proposing a stochastic update in the direction of a batch gradient with mean −∇_x f(x, y) (or, more precisely, the ADAM batch gradients), and accepts this update only if L_ε decreases by some fixed amount. We show empirically that these directions allow the algorithm to rapidly decrease the simulated loss. The fact that L_ε decreases whenever the min-player takes a step allows us to guarantee that our algorithm eventually converges.
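For concreteness, here is a minimal Python sketch of this procedure, covering both the inner ascent subroutine and the outer annealed accept/reject loop of Algorithm 1. Plain gradient ascent and gradient descent stand in for the ADAM steps of the paper; every name, step size and iteration cap below is an illustrative assumption, and f is assumed to be a scalar-valued, differentiable torch function.

```python
import math
import random
import torch

def ascend_to_stationary(f, x, y0, eps, lr=1e-3, max_steps=100_000):
    """Max-player subroutine: gradient ascent on f(x, .) from y0 until an
    eps-stationary point (in the l_inf norm) is reached; returns y'."""
    y = y0.clone().detach().requires_grad_(True)
    for _ in range(max_steps):
        (grad,) = torch.autograd.grad(f(x, y), y)
        if grad.abs().max() <= eps:   # first-order stationarity check
            break
        with torch.no_grad():
            y += lr * grad            # ascent step (the paper uses ADAM)
    return y.detach()

def minmax(f, x, y, eps, lr_x=1e-3, r_max=50, beta1=100.0):
    """Outer loop of Algorithm 1 with the annealed accept/reject rule."""
    r, i = 0, 0
    while r <= r_max:
        i += 1
        f_old = f(x, y).item()
        x_req = x.clone().detach().requires_grad_(True)
        (gx,) = torch.autograd.grad(f(x_req, y), x_req)
        x_new = (x - lr_x * gx).detach()                 # min-player's proposal
        y_new = ascend_to_stationary(f, x_new, y, eps)   # simulate the max-player
        f_new = f(x_new, y_new).item()
        accept = True
        if f_new > f_old - eps / 2:
            # simulated annealing: reject with probability max(0, 1 - e^{-i/beta1})
            if random.random() < max(0.0, 1.0 - math.exp(-i / beta1)):
                accept = False
        if accept:
            x, y, r = x_new, y_new, 0
        else:
            r += 1
    return x, y
```

Note that the min-player moves against the batch gradient, while the max-player is simulated to (approximate) stationarity before the loss comparison is made.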
A final issue is that converging to a local minimum point does not guarantee that the point is desirable from an applications standpoint. To allow our algorithm to escape undesirable local minima of L_ε(·, y), we use a randomized accept-reject rule inspired by simulated annealing: if the resulting loss has decreased, the updates for x and y are accepted; otherwise they are only accepted with a small probability e^{−i/β₁}, where β₁ is a "temperature" parameter.

2.1 CONVERGENCE GUARANTEES

We first formally define the "simulated loss" and what it means for f to increase rapidly.

Definition 2.1. For any x, y and ε > 0, define E(ε, x, y) ⊆ ℝ^d to be the set of points w such that there is a continuous and (except at finitely many points) differentiable path γ(t) starting at y, ending at w,
The norm in the guarantees of the stationary point y?for ouralgorithm is `1since we use ADAM for updating y:A simpler version of this algorithm using SGDwould result in an `2-norm guarantee.Comparison to notions of local optimality. Definition 2.2 provides a new notion of equilibriumfor min-max optimization. Consider the problem minxmaxyf(x;y), but with constraints on theplayer’s updates to x;y— the max player is restricted to updating yvia a path which increasesf(x;y)at a rate of at least "?at every point on the path, and the min player proposes an update forx;sampled fromD. Then (x?;y?)satisfies Definition 2.2 if and only if (i) there is no update toy?that the max player can make which will increase the loss at a rate of at least "?(equation 34),and, (ii) with probability at least 1"?for a random step D proposed by the min player, theabove-constrained max player can update y?s.t. the overall decrease in the loss is at most "?from itsoriginal value f(x?;y?).2We use the`1-norm in place of the Euclidean `2-norm, as it is a more natural norm for ADAM gradients.3In this equation the derivativeddtis taken from the right.4Consider the example f(x;y) = min(x2y2;1):The simulated loss function for " > 0isL"(x;y) =f(x;y)if2x2y<"; and 1 otherwise. Thus L1=2is discontinuous at (1=2;1):5In particular, we set the ADAM hyperparameters 1;2to be1=2= 0.5Under review as a conference paper at ICLR 2021As one comparison to a previous notion of local optimality, any point which is a local optimum underthe definition used previously e.g. in Daskalakis & Panageas (2018), also satisfies our Definition 2.2for small enough "and distributionDcorresponding to small-enough step size. On the other hand,previous notions of local optima including the one in Daskalakis & Panageas (2018) are not guaranteedto exist in a general setting, unlike our definition. (See Appendix C for a detailed comparison of howDefinition 2.2 relates to previously proposed notions of local optimality).3 E MPIRICAL RESULTSWe seek to apply our min-max optimization algorithm for training GANs on both real-world andsynthetic datasets. Following Goodfellow et al. (2014), we formulate GAN training as a min-maxoptimization problem using the cross entropy loss, f(x;y) = log(Dy()) + log(1Dy(Gx()),wherex;yare the weights of the generator and discriminator networks GandDrespectively, issampled from the data, and N(0;Id). For this loss, the smoothness parameters b;Lmay not befinite. To adapt Alg. 1 to training GANs, we make the following simplifications in our simulations:(1)Temperature schedule: We use a fixed temperature ;constant with iteration i, making it simplerto choose a good temperature value rather than a temperature schedule. (2)Accept/reject rule: Wereplace the randomized acceptance rule with a deterministic rule: If fnewfoldwe accept theproposed step, and if fnew> foldwe only accept if iis a multiple of e1=, corresponding to anaverage acceptance rate of e1=.(3)Discriminator steps: We take a fixed number of discriminatorsteps at each iteration, instead of taking as many steps needed to achieve a small gradient.These simplifications do not seem to significantly affect our algorithm’s performance (see AppendixF.5 for simulations showing it effectively trains GANs without most of these simplifications). More-over, our simulations show a smaller number of discriminator steps kis usually sufficient in practice.Datasets and Metrics. We perform simulations on MNIST (LeCun et al., 2010) and CIFAR-10(Krizhevsky et al.) 
datasets to evaluate whether GANs trained using our algorithm converge, andwhether they are able to learn the target distribution. Convergence is evaluated by visual inspection(for MNIST and CIFAR), and by tracking the loss (for MNIST) and FID scores (for CIFAR).As noted by previous works (Borji, 2019; Metz et al., 2017; Srivastava et al., 2017), it is challengingto detect mode collapse on CIFAR and MNIST, visually or using standard quantitative metrics suchas FID scores, because CIFAR (and to some extent MNIST) do not have well-separated modes. Thus,we consider two datasets, one real and one synthetic, with well-separated modes, whence modecollapse can be clearly detected by visual inspection.For the real dataset we consider the 0-1 MNIST dataset (MNIST restricted to digits labeled 0 or1). The synthetic dataset consists of 512 points sampled from a mixture of four Gaussians in twodimensions with standard deviation 0.01 and means at (0;1),(1;0),(1;0)and(0;1):Hyperparameters and hardware. The details of the networks and hyperparameter choices aregiven in Appendix E. Simulations on MNIST and Gaussian datasets used four 3.0 GHz Intel ScalableCPUs, provided by AWS. On CIFAR-10, we used one High freq. Intel Xeon E5-2686 v4 GPU.3.1 E VALUATING THE PERFORMANCE OF OUR ALGORITHMWe compare our algorithm’s performance to both GDA and unrolled GANs. All algorithms areimplemented using ADAM (Kingma & Ba, 2015).MNIST. We trained a GAN on the full MNIST dataset using our algorithm for 39,000 iterations(withk= 1discriminator steps and acceptance rate e1==1=5). We ran this simulation five times;each time the GAN learned to generate all ten digits (see Appendix F.1 for generated images).0-1 MNIST. We trained a GAN using our algorithm on the 0-1 MNIST dataset for 30,000 iterationsand ploted a moving average of the loss values. We repeated this simulation five times; in each of thefive runs our algorithm learned to generate digits which look like both the “0” and “1” digits, and theloss seems to decrease and stabilize once our algorithm learns how to generate the two digits. (SeeAppendix F.3 for the generated images and loss value plots.)6Under review as a conference paper at ICLR 2021GDA3_Picky_MNIST2501,0005002,0003,0009_Vanilla_MNIST2505001,0002,0003,000 Our algorithm3_Picky_MNIST2501,0005002,0003,0009_Vanilla_MNIST2505001,0002,0003,000Figure 1: We trained a GAN using our algorithm on 0-1 MNIST for 30,000 iterations (with k= 1discriminatorsteps and acceptance rate e1==1=5). We repeated this experiment 22 times for our algorithm and 13 times forGDA. 
[Figure 1: We trained a GAN using our algorithm on 0-1 MNIST for 30,000 iterations (with k = 1 discriminator steps and acceptance rate e^{−1/β} = 1/5). We repeated this experiment 22 times for our algorithm and 13 times for GDA. Shown here are the images generated from one of these runs at various iterations, for our algorithm (right) and GDA (left); see also Appendix F.3 for images from other runs.]

[Figure 2: Our algorithm (bottom right), Unrolled GANs with k = 6 unrolling steps (top right), and GDA with k = 1 (top left) and k = 6 discriminator steps (bottom left), each trained on the 4-Gaussian mixture for 1500 iterations. Our algorithm used k = 6 discriminator steps and acceptance rate e^{−1/β} = 1/4. The plots show the points generated by each of these algorithms after the specified number of iterations.]

CIFAR-10. We ran our algorithm (with k = 1 discriminator steps and acceptance rate e^{−1/β} = 1/2) on CIFAR-10 for 50,000 iterations, and compare with GDA with k = 1 discriminator steps. We plotted the FID scores for both algorithms, and found that both have similar FID scores which decrease over time, and produce images of similar quality after 50,000 iterations (Figure 3).

Clock time per iteration. When training on the 0-1 MNIST dataset (with k = 1 discriminator steps per iteration), our algorithm took 1.4 seconds per iteration on the AWS CPU server; on the same machine, GDA took 0.85 seconds per iteration. When training on CIFAR-10, our algorithm and GDA both took the same amount of time per iteration, 0.08 seconds, on the AWS GPU server.

Mitigating mode collapse on 0-1 MNIST. We trained GANs using both GDA and our algorithm on the 0-1 MNIST dataset, running each algorithm for 3000 iterations (Figure 1). GDA seems to briefly generate shapes that look like a combination of 0's and 1's, then switches to generating only 1's, and then re-learns how to generate 0's. In contrast, our algorithm seems to learn how to generate both 0's and 1's early on and does not mode-collapse to either digit. We repeated this simulation 22 times for our algorithm and 13 times for GDA, and visually inspected the images at iteration 1000. GANs trained using our algorithm generated both digits by the 1000'th iteration in 86% of the runs, while those trained using GDA only did so in 23% of the runs (see Appendix F.4 for images from all runs).

Mitigating mode collapse on synthetic data. We trained GANs on the 4-Gaussian mixture dataset for 1500 iterations (Figure 2) using our algorithm, Unrolled GANs with k = 6 unrolling steps, and GDA with k = 1 and k = 6 discriminator steps. We repeated each simulation 10-20 times. By the 1500'th iteration, GDA with k = 1 discriminator steps had learned only one mode in 100% of the runs.
GDA with k = 6 discriminator steps learned two modes in 65% of the runs, one mode in 20%, and four modes in 15%. Unrolled GANs learned one mode in 75% of the runs, two modes in 15%, and three modes in 10%. In contrast, our algorithm learned all four modes in 68% of the runs, three modes in 26%, and two modes in 5% of the runs.

[Figure 3: GANs trained using our algorithm (with k = 1 discriminator steps and acceptance rate e^{−1/β} = 1/2) and GDA on CIFAR-10 for 50,000 iterations. The images generated from the resulting generator for both our algorithm (middle) and GDA (left). Over 9 runs, our algorithm achieves a very similar minimum FID score (33.8) compared to GDA (33.0), and a better average FID score (mean = 35.6, std. dev. = 1.1) compared to GDA (mean = 53.8, std. dev. = 53.9). Images are shown from one run each; see Appendix F.2 for full results.]

4 KEY IDEAS IN THE PROOF

For simplicity, assume b = L = β₁ = 1. There are two key pieces to proving Theorem 2.3. The first is to show that our algorithm converges to some point (x*, y*) in poly(d, 1/ε) gradient and function evaluations (Lemma D.7). The second is to show that y* is a first-order ε-stationary point of f(x*, ·) and that x* is, roughly, an ε-local minimum of the simulated loss function L_ε(·, y*) (Lemma D.9).

Step 1: Bounding the number of gradient evaluations. After Ω(log(1/ε)) steps, the decaying acceptance rate of the simulated annealing step ensures that our algorithm stops whenever r_max = O(1/ε) proposed steps are rejected in a row. Thus, for every O(r_max/ε²) iterations where the algorithm does not terminate, with probability at least 1 − ε the value of the loss decreases by more than ε. Since f is 1-bounded, this implies our algorithm terminates after roughly O(r_max/ε³) iterations of the minimization routine (Proposition D.6). Next, since f is 1-bounded with 1-Lipschitz gradient, in each iteration we require at most poly(d/ε) gradient ascent steps to reach an ε-stationary point. Since each step of the maximization subroutine requires one gradient evaluation, and each iteration of the minimization routine calls the maximization routine exactly once, the total number of gradient evaluations is poly(d, 1/ε).

Step 2: Show that x* is an ε-local minimum of L_ε(·, y*) and y* is an ε-stationary point. First, since our algorithm runs the gradient ascent maximization subroutine until it reaches an ε-stationary point, we have ||∇_y f(x*, y*)||_∞ ≤ ε. Our stopping condition implies that the last r_max updates proposed by the min-player were all rejected, and hence were sampled from the distribution D_{x*,y*} of the ADAM gradient at (x*, y*). Roughly, this implies

Pr_{Δ∼D_{x*,y*}}[f(x* + Δ, y') ≥ f(x*, y*) − ε] ≥ 1 − ε,  (5)

where the maximization subroutine computes y' by gradient ascent on f(x* + Δ, ·) initialized at y*. In other words, equation (5) says that at the point (x*, y*) where our algorithm stops, if the min-player samples an update Δ from the distribution D_{x*,y*}, followed by the max-player updating y* using gradient ascent, then with high probability the final loss value cannot decrease by more than ε.

To show that equation (4) holds, we need to replace f in the above equation with the simulated loss L_ε. We first show that the gradient ascent steps form an "ε-increasing" path, starting at y* with endpoint y', along which f increases at rate at least ε (Prop. D.8). This crucially exploits the fact that our algorithm restricts the max-player to only use such "ε-increasing" paths.
Since L_ε is the supremum of f over the endpoints of all such ε-increasing paths starting at y*, we get

f(x* + Δ, y') ≤ L_ε(x* + Δ, y*).  (6)

Finally, recall from Section 2 that ||∇_y f(x*, y*)||_∞ ≤ ε* implies L_ε(x*, y*) = f(x*, y*). Combining the above observations yields the ε-local minimum condition, equation (4).

Note that we could pick any distribution D for the updates and the proof would still hold; the distribution of ADAM gradients simply works well in practice. Also, we could replace simulated annealing with a deterministic rule, but such an algorithm often gets stuck at poor local equilibria in GAN training.

5 CONCLUSION

In this paper, we develop a convergent first-order algorithm for min-max optimization and show how it can lead to a stable and scalable algorithm for training GANs. We prove that our algorithm converges in time polynomial in the dimension and the smoothness parameters of the loss function. Our simulations show that a version of our algorithm can lead to more stable training of GANs on synthetic and real-world datasets, while the amount of memory and time required by each iteration is competitive with GDA. Our algorithm synthesizes a first-order approximation to the global strategy of the max-player, a look-ahead strategy based on batch gradients for the min-player, and simulated annealing. We believe that these ideas of imposing computational restrictions on the min- and max-players should be useful in obtaining convergent and practical algorithms for other applications of min-max optimization, such as adversarial learning.
TL5WmAu2YON
A Provably Convergent and Practical Algorithm for Min-Max Optimization with Applications to GANs
6: Marginally above acceptance threshold
This paper proposes a new stochastic gradient descent-ascent-based method to approximate a stationary point (or local min-max solution) of a nonconvex-nonconcave minimax problem, with application to GANs. The method is similar to the one in the original GAN paper, but the authors incorporate an acceptance rule and use a different model for the max problem. The algorithm also uses ADAM instead of standard SGD. The authors also provide a convergence guarantee to a local min-max point with polynomial-time complexity. Unfortunately, the reviewer was unable to verify the proof due to the time limit. The reviewer finds it really hard to understand the proof techniques, as well as the meaning of the local min-max points defined in this paper, especially via a neighborhood D_{x*,y*}. Many steps are explained in words, which makes some of the statements hard to verify. For example, Theorem 2.3 expresses the complexity as poly(...), which does not specify the maximum order of epsilon. The proofs are also broken into different pieces where many technical details related to high-probability statements are used. This adds further difficulty in checking correctness. To this end, the reviewer would like to raise the following questions: 1. First, since the problem is nonconvex-nonconcave, how can the algorithm guarantee that the min-player can always decrease the objective function by a certain fixed amount, as stated in (5)? 2. Second, it is known that ADAM is not convergent even on a convex problem (see https://openreview.net/pdf?id=ryQu7f-RZ); do the authors use a modified variant of ADAM, or what has been changed to guarantee its convergence? 3. Third, in Definition 2, does D_{x*,y*} form a "full" neighborhood of (x*,y*) in the feasible set of f(x, y)?
3: The reviewer is fairly confident that the evaluation is correct
mhEd8uOyNTI
ICLR.cc/2021/Conference
2021
Representational correlates of hierarchical phrase structure in deep language models
["Matteo Alleman", "Jonathan Mamou", "Miguel A Del Rio", "Hanlin Tang", "Yoon Kim", "SueYeon Chung"]
While contextual representations from Transformer-based architectures have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of input perturbation-based analyses of representations from Transformer networks pretrained on self-supervised objectives. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words.
["bertology", "interpretability", "computational neuroscience", "population coding"]
ABSTRACT

While contextual representations from pretrained Transformer models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from Transformer networks pretrained with self-supervision. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to the syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words.¹

¹Datasets, extracted features and code will be publicly available upon publication.

1 INTRODUCTION AND RELATED WORK

It is still unknown how distributed information processing systems encode and exploit complex relational structures in data. The fields of deep learning (Saxe et al., 2013; Hewitt & Manning, 2019), neuroscience (Sarafyazd & Jazayeri, 2019; Stachenfeld et al., 2017), and cognitive science (Elman, 1991; Kemp & Tenenbaum, 2008; Tervo et al., 2016) have given great attention to this question, including a productive focus on potential models, and their implementations, of hierarchical tasks such as predictive maps and graphs.

Natural (human) language provides a rich domain for studying how complex hierarchical structures are encoded in information processing systems. More so than other domains, human language is unique in that its underlying hierarchy has been extensively studied and theorized in linguistics, which provides a source of "ground truth" structures for stimulus data. Much prior work on characterizing the types of linguistic information encoded in computational models of language such as neural networks has focused on supervised readout probes, which train a classifier on top of pretrained models to predict a particular linguistic label (Belinkov & Glass, 2017; Liu et al., 2019a; Tenney et al., 2019). In particular, Hewitt & Manning (2019) apply probes to discover linear subspaces that encode tree distances as distances in the representational subspace, and Kim et al. (2020) show that these distances can be used even without any labeled information to induce hierarchical structure.
However, recent work has highlighted issues with correlating supervised probe performance with the amount of language structure encoded in such representations (Hewitt & Liang, 2019). Another popular approach to analyzing deep models is through the lens of geometry (Reif et al., 2019; Gigante et al., 2019). While geometric interpretations provide significant insights, they present another challenge in summarizing the structure in a quantifiable way. More recent techniques, such as the replica-based mean-field manifold analysis method (Chung et al., 2018; Cohen et al., 2019; Mamou et al., 2020), connect representation geometry with linear classification performance, but the method is limited to categorization tasks.

In this work, we make use of an experimental framework from cognitive science and neuroscience to probe for hierarchical structure in contextual representations from pretrained Transformer models (i.e., BERT (Devlin et al., 2018) and its variants). A popular technique in neuroscience involves measuring the change in population activity in response to controlled input perturbations (Mollica et al., 2020; Ding et al., 2016). We apply this approach to test the characteristic scale and the complexity (Fig. 1) of the hierarchical phrase structure encoded in deep contextual representations, and present several key findings:

1. Representations are distorted by shuffling small n-grams in early layers, while the distortion caused by shuffling large n-grams starts to occur in later layers, implying that the characteristic scale of context increases from input to downstream layers.
2. Representational distortion caused by swapping two constituent phrases is smaller than when control sequences of the same length are swapped, indicating that BERT representations are sensitive to hierarchical phrase structure.
3. Representational distortion caused by swapping adjacent words across a phrasal boundary is larger than when the swap is within a phrasal boundary; furthermore, the amount of distortion increases with the syntactic distance between the swapped words. The correlation between distortion and tree distance increases across the layers, suggesting that the characteristic complexity of phrasal subtrees increases across the layers.
4. Early layers pay more attention between syntactically closer adjacent pairs, and deeper layers pay more attention between syntactically distant adjacent pairs. The attention paid in each layer can explain some of the emergent sensitivity to phrasal structure across layers.

Our work demonstrates that interventional tools such as controlled input perturbations can be useful for analyzing deep networks, adding to the growing, interdisciplinary body of work which profitably adapts experimental techniques from cognitive neuroscience and psycholinguistics to analyze computational models of language (Futrell et al., 2018; Wilcox et al., 2019; Futrell et al., 2019; Ettinger, 2020).

2 METHODS

Eliciting changes in behavioral and neural responses through controlled input perturbations is a common experimental technique in cognitive neuroscience and psycholinguistics (Tsao & Livingstone, 2008; Mollica et al., 2020). Inspired by these approaches, we perturb input sentences and measure the discrepancy between the resulting, perturbed representation and the original. While conceptually simple, this approach allows for a targeted analysis of internal representations obtained from different layers of deep models, and can suggest partial mechanisms by which such models are able to encode linguistic structure. We note that sentence perturbations have primarily been utilized in NLP for representation learning (Hill et al., 2016; Artetxe et al., 2018; Lample et al., 2018), data augmentation (Wang et al., 2018; Andreas, 2020), and testing for model robustness (e.g., against adversarial examples) (Jia & Liang, 2017; Belinkov & Bisk, 2018). A methodological contribution of our work is to show that input perturbations can serve as a useful tool for analyzing representations learned by deep networks.
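A minimal sketch of this extract-and-compare setup, assuming the HuggingFace transformers package (our own code, not the authors' released pipeline; a faithful reproduction would also average BPE token vectors within each word, as described later):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
model.eval()

def layer_representations(sentence):
    """Return per-layer hidden states: a tuple of (1, T, 768) tensors
    (embedding layer plus 12 Transformer layers for bert-base)."""
    with torch.no_grad():
        out = model(**tok(sentence, return_tensors="pt"))
    return out.hidden_states

orig = layer_representations("the cat sat on the mat")
pert = layer_representations("on the mat the cat sat")  # span swap of the same words
print(len(orig), orig[1].shape)  # 13 layers; T includes [CLS] and [SEP]
```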
2.1 SENTENCE PERTURBATIONS

In this work we consider three different types of sentence perturbations designed to probe for different phenomena.

n-gram shuffling. In the n-gram shuffling experiments, we randomly shuffle the words of a sentence in units of n-grams, with n varying from 1 (i.e., individual words) to 7 (see Fig. 2a for an example). While the number of words which change absolute position is similar for different n, larger n will better preserve the local context (i.e., relative position) of more words. Thus, we reason that n-gram swaps affect representations that are selective to contexts of size n or larger within the sentence, and that lower n will result in greater distortion in sentence representations; a minimal implementation is sketched below.
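A rough sketch of this perturbation (our own code; the handling of a leftover chunk when the sentence length is not a multiple of n is our assumption, as the text does not specify it):

```python
import random

def ngram_shuffle(words, n, seed=None):
    """Shuffle a sentence in units of n-grams; n=1 shuffles individual words."""
    rng = random.Random(seed)
    chunks = [words[i:i + n] for i in range(0, len(words), n)]
    rng.shuffle(chunks)  # permute the chunks, keeping each chunk's interior order
    return [w for chunk in chunks for w in chunk]

sentence = "the quick brown fox jumps over the lazy dog".split()
print(ngram_shuffle(sentence, 1, seed=0))  # destroys all local context
print(ngram_shuffle(sentence, 3, seed=0))  # preserves contexts of size <= 3
```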
Here, we expect the amount of distortion to bepositively correlated with the syntactic distance of the words that are swapped.2.2 C ONTEXTUAL REPRESENTATIONS FROM TRANSFORMERSFor our sentence representation, we focus on the Transformer-family of models pretrained onlarge-scale language datasets (BERT and its variants). Given an input word embedding matrixX2RTdfor a sentence of length T, the Transformer applies self attention over the previous layer’srepresentation to produce a new representation,Xl=fl([Hl;1;:::;Hl;H]); Hl;i=Al;iXl1Vl;i;Al;i= softmax(Xl1Ql;i)(Xl1Kl;i)>pdk;(1)2We use constituency parse trees from the English Penn Treebank (Marcus et al., 1994).3Under review as a conference paper at ICLR 2021whereflis an MLP layer, His the number of heads, dH=dHis the head embedding dimension,andQl;i;Kl;i;Vl;i2Rddkare respectively the learned query, key, and value projection matrices atlayerlfor headi. The MLP layer consists of a residual layer followed by layer normalization and anonlinearity. The 0-th layer representation X0is obtained by adding the position embeddings and thesegment embeddings to the input token embeddings X, and passing it through normalization layer.3In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representa-tionsXl= [xl;1;:::;xl;T], where xl;t2Rdis the contextualized representation for word tat layerl.4We analyze the trend in distortion as a function of layer depth lfor the different perturbations.We also explore the different attention heads Hl;i2RTdHand the associated attention matrixAl;i2RTTto inspect whether certain attention heads specialize at encoding syntactic information.2.3 D ISTORTION METRICOur input manipulations allow us to specify the distortion at the input level, and we wish to measurethe corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism,a single vector in an intermediate layer is a function of the representations of (potentially all) theother tokens in the sentence. Therefore, the information about a particular word might be distributedamong the many feature vectors of a sentence, and we wish to consider all feature vectors together asa single sentence-level representation.We thus represent each sentence as a matrix and use the distance induced by matrix 2-norm. Specif-ically, let P2f0;1gTTbe the binary matrix representation of a permutation that perturbs theinput sentence, i.e., ~X=PX. Further let ~XlandXlbe the corresponding sentence representationsfor thel-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling,we also z-score each feature dimension of each layer (by subtracting the mean and dividing by thestandard deviation where these statistics are obtained from the full Penn Treebank corpus) to give~ZlandZl. Our distortion metric for layer lis then defined askZlP1~Zlk=pTd, wherekk isthe matrix 2-norm (i.e., Frobenius norm).5Importantly, we invert the permutation of the perturbedrepresentation to preserve the original ordering, which allows us to measure the distortion due tostructural change, rather than distortion due to simple differences in surface form. We divide bypTdto make the metric comparable between sentences (with different T) and networks (with different d).Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentencerepresentations, zl2RTd. 
Because each dimension is independently centered and standardized, themaximally unstructured distribution of zlis an isotropic Td-dimensional Gaussian. The expecteddistance between two such vectors is approximatelyp2Td. Therefore, we can interpret a distortionvalue approachingp2as comparable to if we had randomly redrawn the perturbed representation.3 E XPERIMENTAL SETUPWe apply our perturbation-based analysis on sentences from the English Penn Treebank (Marcus et al.,1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for theexact details). We analyze the distortion, as measured by length-normalized Frobenius norm betweenthe perturbed and original representations, as a function of layer depth. Layers that experience largedistortion when the syntactic structure is disrupted from the perturbation can be interpreted as beingmore sensitive to hierarchical syntactic structure.As we found the trend to be largely similar across the different models, in the following section,we primarily discuss results from BERT ( bert-base-cased ), which has 12 layers and hiddensize of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initializedTransformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019),DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2).3However, the exact specification for the MLP and X0may vary across different pretrained models.4BERT uses BPE tokenization (Sennrich et al., 2015), which means that some words are split into multipletokens. Since we wish to evaluate representations at word-level, if a word is split into multiple tokens, its wordrepresentation is computed as the average of all its token representations.5There are many possible ways of measuring distortion, such as the average cosine similarity or distancebetween corresponding feature vectors, as well as different matrix norms. We observed the results to bequalitatively similar for different measures, and hence we focus on the Frobenius norm in our main results. Weshow the results from additional distortion metrics in Sec. A.3.4Under review as a conference paper at ICLR 2021Figure 2: Swapping n-grams and phrases. ( a) Examples of basic n-gram shuffles, where colors indicate theunits of shuffling. ( b) Distortion metric computed at each layer, conditioned on n-gram size. Error bars hereafterrepresent standard error across 400 examples. ( c) An example parse tree, with phrase boundaries shown asgrey brackets, and two low-order phrases marked; and examples of a phrasal and control swap, with colorscorresponding to the phrases marked above. ( d) Distortion, computed at each layer, using either the full sentence,the subsentence of unswapped words, or the subsentence of swapped words, conditioned on swap type.4 R ESULTSWe summarize our findings for the different perturbations below. While not shown in the main results,we note that randomly-initialized (i.e. untrained) models (somewhat unsuprisingly) exhibit a flatdistortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed hereare due to the model’s structural knowledge acquired through training, and not simply due to theunderlying architecture.4.1 C HARACTERISTIC SCALE INCREASES ALONG BERT LAYERSWhen we shuffle in units of larger n-grams, it only introduces distortions in the deeper BERT layerscompared to smaller n-gram shuffles. 
The n-gram-sized shuffles break contexts larger than n, while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts. Larger n-gram shuffles remain minimally distorted for 'longer' (Fig. 2b, bottom curve), implying that long-range contexts play a larger role in deeper layer representations.

Phrasal boundaries matter. Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential.

If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact. We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 2c), where we control for the length of the n-grams that are swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the position of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary.

4.2 PHRASE HIERARCHY MATTERS

Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, which is a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a). (Footnote 6: Note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two.) This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words).

If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is a larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a positive rank correlation between distortion and tree distance (Fig. 3c). In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken.

Figure 3: Syntactic distance affects representational distortion. (a) An example of adjacent swaps which do and do not cross a phrase boundary, with low-order phrases colored. Phrase boundaries are drawn in red. (b) Distortion in each layer, but conditioned on the tree distance. (c) For each head (column) of each layer (row), the (Spearman) rank correlation between distortion and tree distance of the swapped words. Colors are such that red is positive, blue negative. (d) Histogram of PMI values, for pairs in the same phrase and not. (e) Similar to b, but averaging all out-of-phrase swaps, and separating pairs above ('high') or below ('low') the median PMI.
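The tree distance used for conditioning these swaps can be computed directly from a constituency parse. One possible implementation using NLTK trees (our sketch; the paper does not publish its tree-handling code; we measure distance between the words' preterminal (POS) nodes so that adjacent words sharing a parent phrase get distance 2, consistent with footnote 6) is:

from nltk import Tree

def tree_distance(tree, i, j):
    """Number of edges between the preterminal nodes of leaves i and j
    in a constituency tree; adjacent words under the same parent get 2."""
    pi = tree.leaf_treeposition(i)[:-1]   # path to word i's POS node
    pj = tree.leaf_treeposition(j)[:-1]
    k = 0
    while k < min(len(pi), len(pj)) and pi[k] == pj[k]:
        k += 1                            # length of shared path from root
    return (len(pi) - k) + (len(pj) - k)

t = Tree.fromstring("(S (NP (DT the) (NN dog)) (VP (VBZ barks)))")
print(tree_distance(t, 0, 1))  # 2: 'the' and 'dog' share the NP
print(tree_distance(t, 1, 2))  # 4: swapping 'dog'/'barks' breaks NP and VP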
Controlling for co-occurrence. Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute – indeed, it is used to train many non-contextual word embeddings (e.g., word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014)). So it is natural to ask whether BERT's resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not.

As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words $w$ and $v$, the PMI is $\log \frac{p(w,v)}{p(w)\,p(v)}$, which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for dependency on any smooth function of PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion. This indicates that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics.
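The PMI estimate can be obtained from simple corpus counts. A minimal sketch (ours; it assumes every queried pair was observed at least once – unseen pairs would need smoothing, which we ignore here) is:

import math
from collections import Counter

def make_pmi(sentences):
    """Estimate PMI(w, v) = log p(w, v) / (p(w) p(v)) for adjacent
    word pairs from a corpus of tokenized sentences."""
    unigrams, bigrams = Counter(), Counter()
    n_uni = n_bi = 0
    for sent in sentences:
        unigrams.update(sent)
        n_uni += len(sent)
        pairs = list(zip(sent, sent[1:]))   # adjacent word pairs
        bigrams.update(pairs)
        n_bi += len(pairs)
    def pmi(w, v):
        p_wv = bigrams[(w, v)] / n_bi
        return math.log(p_wv / ((unigrams[w] / n_uni) * (unigrams[v] / n_uni)))
    return pmi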
Effects on linguistic information. Do our input perturbations, and the resulting distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently in BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information.

Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part-of-speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and parse tree distance reconstruction (Hewitt & Manning, 2019). (Footnote 7: While the original paper predicted dependency tree distance, in this paper we instead predict the constituency tree distance.) The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint $x$ (i.e., BERT representations from different layers) to a conditional distribution of the labels, $p(y \,|\, \pi(x))$ (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is $\log p(y \,|\, \pi(x_i)) - \log p(y \,|\, \pi(\tilde{x}_i))$, where $\tilde{x}_i$ is the same datum as $x_i$ in the perturbed representation. (Footnote 8: POS- and GP-tag prediction outputs a sequence of labels for each sentence, while the distance probe outputs the constituency tree distance between each pair of words. Then $\log p(y \,|\, \pi(x_i))$ is simply the log probability of an individual label.) Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we will call "inference impairment".

Figure 4: Distortion and inference impairment for increasing linguistic complexity. In each plot, a point is the average (distortion, 'impairment') for a given layer and a given class of word swap distance. Points are connected by lines according to their swap type (i.e., tree distance). The circles are colored according to layer (see right for a legend). Averages are taken over 600 test sentences, with one of each swap type per sentence, and both distortion and log-likelihood are computed for every word in the sentence.

Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted-L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (Fig. 4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e., distance > 2). Meanwhile, impairment of the distance probe (Fig. 4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportional to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information.

4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION

In the Transformer architecture, contexts are built with the attention mechanism. Recall that attention is a mechanism for allowing input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion.

We find that in many Transformer heads, the attention – much like distortion – is proportional to the syntactic distance between two words. Fig. 5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs, and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize to syntactic phenomena.

Figure 5: Attention provides a possible explanation for the trends we observe in distortion. (a) An example of the attention matrices for all heads in a single layer (layer 8), given the above sentence as input. Phrases in the sentence are drawn as blocks in the matrix. (b) The rank correlation of attention vs. tree distance for all the heads/layers. (c) The rank correlation coefficients of distortion (y-axis) and attention (x-axis) against tree distance, colored by layer. Marker size is proportional to 1 minus the p-value of the distortion/distance correlation. (d) A comparison of p-values on the distortion vs. tree distance correlation without (x-axis) and with attention splines as covariates (y-axis).
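The head-level correlations in Fig. 3c and Fig. 5b reduce to a rank correlation between a per-pair quantity (attention or distortion) and tree distance. A small SciPy sketch of the attention version (ours; pooling the attention a head pays in both directions between each adjacent pair is our own convention) is:

import numpy as np
from scipy.stats import spearmanr

def adjacent_attention(A):
    """Attention exchanged between adjacent tokens for one head.
    A: (T, T) attention matrix. Returns a length T-1 array."""
    idx = np.arange(A.shape[0] - 1)
    return A[idx, idx + 1] + A[idx + 1, idx]

def head_syntax_correlation(A, tree_dists):
    """Spearman rank correlation between a head's adjacent-pair
    attention and the pairs' constituency-tree distance (cf. Fig. 5b)."""
    rho, pval = spearmanr(adjacent_attention(A), tree_dists)
    return rho, pval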
We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance.

5 DISCUSSION

In this paper, we used the representational change in response to perturbed input to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion and the magnitude of attention paid to words within and out of phrase boundaries as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems.

In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (i.e., edges) are combined to form larger receptive fields tuned to more complex stimuli (i.e., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity of phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Fig. 2-5 for BERT; see SM for other models).

Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, the representational geometry, and the model performance; (3) potential applications in network pruning studies; (4) an extension of the current work as a hypothesis generator in neuroscience to understand how neural populations implement tasks with an underlying compositional structure.

Figure 6: Additional experiments and clarifications (by reviewer).
WDOZdzxq6xj
Reveals correlations between hierarchical phrase structure and BERT's output, but lacks a theoretical explanation.
6: Marginally above acceptance threshold
--- Overall --- This paper provides some insights into the relation of BERT's output w.r.t. the parse tree (in terms of constituent phrases) of the input sentence. As some previous work has pointed out, the BERT model contains parsing information (e.g., Hewitt & Manning, NAACL'19). This work can be regarded as a moderate reification and improvement of that thought (but it is still limited to existing scopes and methodologies). The merits of this paper: (1) the paper reveals some interesting facts, such as that BERT is sensitive to phrasal hierarchy and that there are behavioural discrepancies between different layers; (2) the experiments are comprehensive, including both the distortion analysis and conventional probe approaches. In my point of view, the main issue of this paper is that, like many other works, there is no strong theoretical explanation for the phenomenon being investigated. In this sense, the novelty of this paper is not so strong. --- Major comments --- 1. In the experiments, it is not clear whether the randomness of BERT itself has been deducted. The randomness could be caused by the dropout operation, which may lead to discrepancies in the output even when using the same sentence. 2. In a future version, I recommend further providing two tests: (1) in the current settings, the distortions are shown at sentence level (e.g., by summing up all distortions within a sentence?). I would like to see a finer-grained test, i.e., whether the largest distortions are produced by the swapped parts; (2) the genre of the Penn Treebank dataset is limited to general texts or articles that may be seen in the training corpus of BERT. I would recommend testing on other domains (e.g., biomedical or academic) that BERT never saw before (or structure-less data that do not present a syntactic structure). 3. Do X~ and X in Section 2.3 use the same mean and deviation? 4. It seems that Fig. 2(c) takes into account both NP and VP; what if we constrain the phrases to be only NP? Will the distortion become large, since swapping subject and object will lead to totally different meanings? 5. There is a lack of explanation of the "all words", "swapped", and "unswapped" settings of Fig. 2(d). 6. Is there any intuition for the step back at layers 11 and 12 in Fig. 4(b) and 4(c)? --- Minor comments --- 1. Strictly speaking, the terminology "gram" in Fig. 2(a) should be called "chunk", as "grams" are usually related to a sliding window and thus overlap with each other. 2. "Fig. 3(a)" in the paragraph just above Section 4.2 should be "Fig. 2(c)".
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system
You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|>
<|im_start|>user
### Paper Title
Representational correlates of hierarchical phrase structure in deep language models
### Paper Abstract
While contextual representations from Transformer-based architectures have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of input perturbation-based analyses of representations from Transformer networks pretrained on self-supervised objectives. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original. We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words.
### Paper Keywords
["bertology", "interpretability", "computational neuroscience", "population coding"]
### Paper Content
ABSTRACT
While contextual representations from pretrained Transformer models have set a new standard for many NLP tasks, there is not yet a complete accounting of their inner workings. In particular, it is not entirely clear what aspects of sentence-level syntax are captured by these representations, nor how (if at all) they are built along the stacked layers of the network. In this paper, we aim to address such questions with a general class of interventional, input perturbation-based analyses of representations from Transformer networks pretrained with self-supervision. Importing from computational and cognitive neuroscience the notion of representational invariance, we perform a series of probes designed to test the sensitivity of Transformer representations to several kinds of structure in sentences. Each probe involves swapping words in a sentence and comparing the representations from perturbed sentences against the original.
We experiment with three different perturbations: (1) random permutations of n-grams of varying width, to test the scale at which a representation is sensitive to word position; (2) swapping of two spans which do or do not form a syntactic phrase, to test sensitivity to global phrase structure; and (3) swapping of two adjacent words which do or do not break apart a syntactic phrase, to test sensitivity to local phrase structure. We also connect our probe results to the Transformer architecture by relating the attention mechanism to syntactic distance between two words. Results from the three probes collectively suggest that Transformers build sensitivity to larger parts of the sentence along their layers, and that hierarchical phrase structure plays a role in this process. In particular, sensitivity to local phrase structure increases along deeper layers. Based on our analysis of attention, we show that this is at least partly explained by generally larger attention weights between syntactically distant words. (Footnote 1: Datasets, extracted features and code will be publicly available upon publication.)

1 INTRODUCTION AND RELATED WORK

It is still unknown how distributed information processing systems encode and exploit complex relational structures in data. The fields of deep learning (Saxe et al., 2013; Hewitt & Manning, 2019), neuroscience (Sarafyazd & Jazayeri, 2019; Stachenfeld et al., 2017), and cognitive science (Elman, 1991; Kemp & Tenenbaum, 2008; Tervo et al., 2016) have given great attention to this question, including a productive focus on the potential models and their implementations of hierarchical tasks, such as predictive maps and graphs.

Natural (human) language provides a rich domain for studying how complex hierarchical structures are encoded in information processing systems. More so than other domains, human language is unique in that its underlying hierarchy has been extensively studied and theorized in linguistics, which provides a source of "ground truth" structures for stimulus data. Much prior work on characterizing the types of linguistic information encoded in computational models of language such as neural networks has focused on supervised readout probes, which train a classifier on top of pretrained models to predict a particular linguistic label (Belinkov & Glass, 2017; Liu et al., 2019a; Tenney et al., 2019). In particular, Hewitt & Manning (2019) apply probes to discover linear subspaces that encode tree distances as distances in the representational subspace, and Kim et al. (2020) show that these distances can be used even without any labeled information to induce hierarchical structure. However, recent work has highlighted issues with correlating supervised probe performance with the amount of language structure encoded in such representations (Hewitt & Liang, 2019). Another popular approach to analyzing deep models is through the lens of geometry (Reif et al., 2019; Gigante et al., 2019). While geometric interpretations provide significant insights, they present another challenge in summarizing the structure in a quantifiable way.
More recent techniques, such as the replica-based mean-field manifold analysis method (Chung et al., 2018; Cohen et al., 2019; Mamou et al., 2020), connect representation geometry with linear classification performance, but the method is limited to categorization tasks.

In this work, we make use of an experimental framework from cognitive science and neuroscience to probe for hierarchical structure in contextual representations from pretrained Transformer models (i.e., BERT (Devlin et al., 2018) and its variants). A popular technique in neuroscience involves measuring change in the population activity in response to controlled input perturbations (Mollica et al., 2020; Ding et al., 2016). We apply this approach to test the characteristic scale and the complexity (Fig. 1) of hierarchical phrase structure encoded in deep contextual representations, and present several key findings:

1. Representations are distorted by shuffling small n-grams in early layers, while the distortion caused by shuffling large n-grams starts to occur in later layers, implying that the characteristic context length increases from input to downstream layers.
2. Representational distortion caused by swapping two constituent phrases is smaller than when control sequences of the same length are swapped, indicating that the BERT representations are sensitive to hierarchical phrase structure.
3. Representational distortion caused by swapping adjacent words across a phrasal boundary is larger than when the swap is within a phrasal boundary; furthermore, the amount of distortion increases with the syntactic distance between the swapped words. The correlation between distortion and tree distance increases across the layers, suggesting that the characteristic complexity of phrasal subtrees increases across the layers.
4. Early layers pay more attention between syntactically closer adjacent pairs and deeper layers pay more attention between syntactically distant adjacent pairs. The attention paid in each layer can explain some of the emergent sensitivity to phrasal structure across layers.

Our work demonstrates that interventional tools such as controlled input perturbations can be useful for analyzing deep networks, adding to the growing, interdisciplinary body of work which profitably adapts experimental techniques from cognitive neuroscience and psycholinguistics to analyze computational models of language (Futrell et al., 2018; Wilcox et al., 2019; Futrell et al., 2019; Ettinger, 2020).

2 METHODS

Eliciting changes in behavioral and neural responses through controlled input perturbations is a common experimental technique in cognitive neuroscience and psycholinguistics (Tsao & Livingstone, 2008; Mollica et al., 2020). Inspired by these approaches, we perturb input sentences and measure the discrepancy between the resulting, perturbed representation and the original. While conceptually simple, this approach allows for a targeted analysis of internal representations obtained from different layers of deep models, and can suggest partial mechanisms by which such models are able to encode linguistic structure. We note that sentence perturbations have been primarily utilized in NLP for representation learning (Hill et al., 2016; Artetxe et al., 2018; Lample et al., 2018), data augmentation (Wang et al., 2018; Andreas, 2020), and testing for model robustness (e.g., against adversarial examples) (Jia & Liang, 2017; Belinkov & Bisk, 2018).
A methodological contribution of our work is to show that input perturbations can serve as a useful tool for analyzing representations learned by deep networks.

2.1 SENTENCE PERTURBATIONS

In this work we consider three different types of sentence perturbations designed to probe for different phenomena.

n-gram shuffling. In the n-gram shuffling experiments, we randomly shuffle the words of a sentence in units of n-grams, with n varying from 1 (i.e., individual words) to 7 (see Fig. 2a for an example; a code sketch of this perturbation appears below).

Figure 1: Do Transformers build complexity along their layers? (a) The representation of a word is a function of its context, and this cartoon illustrates a hypothesis that deeper representations use larger contexts. (b) An example parse tree, illustrating our notion of phrase complexity. (c) Cartoon of the distortion metric, where vectors are the z-scored feature vectors z, and color maps vectors to words.

While the number of words which change absolute position is similar for different n, larger n will better preserve the local context (i.e., relative position) of more words. Thus, we reason that n-gram swaps affect the representations selective to contexts of size n or higher within the sentence, and that lower n will result in greater distortion in sentence representations.

Phrase swaps. The n-gram shuffling experiments probe for sensitivity of representations to local context without taking into account syntactic structure. In the phrase swap experiments, we perturb a sentence by swapping two randomly chosen spans. We explore two ways of swapping spans. In the first setting, the spans are chosen such that they are valid phrases according to the sentence's parse tree. In the second setting, the spans are chosen such that they are invalid phrases. Importantly, in the second, control setting, we fix the length of the spans such that the lengths of the spans chosen to be swapped are the same as in the first setting (see Fig. 3a for an example). We hypothesize that swapping invalid phrases will result in more distortion than swapping valid phrases, since invalid swaps will result in greater degradation of syntactic structure.

Adjacent word swaps. In the adjacent word swapping experiments, we swap two adjacent words in a sentence. We again experiment with two settings – in the first setting, the swapped words stay within the phrase boundary (i.e., the two words share the same parent), while in the second setting, the swapped words cross phrase boundaries. We also perform a more fine-grained analysis where we condition the swaps based on the "syntactic distance" between the swapped words, where syntactic distance is defined as the distance between the two words in the parse tree (see Fig. 4c). Since a phrase corresponds to a subtree of the parse tree, this distance also quantifies the number of nested phrase boundaries between two adjacent words. Here, we expect the amount of distortion to be positively correlated with the syntactic distance of the words that are swapped.
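As an illustration of the n-gram shuffling perturbation referenced above, the following sketch (ours, not from the paper's code release) shuffles a tokenized sentence in units of n-grams and also returns the induced word-level permutation, which is what the distortion metric later inverts:

import random

def ngram_shuffle(words, n, rng=random):
    """Split a sentence into consecutive n-grams and shuffle the units.

    words : list of tokens
    n     : size of the shuffling unit (n=1 shuffles individual words)
    Returns the shuffled sentence and a permutation perm such that
    output position i holds original word perm[i].
    """
    chunks = [list(range(i, min(i + n, len(words))))
              for i in range(0, len(words), n)]
    rng.shuffle(chunks)                      # shuffle whole n-gram units
    perm = [i for chunk in chunks for i in chunk]
    return [words[i] for i in perm], perm

shuffled, perm = ngram_shuffle("the quick brown fox jumps over it".split(), 2)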
2.2 CONTEXTUAL REPRESENTATIONS FROM TRANSFORMERS

For our sentence representation, we focus on the Transformer family of models pretrained on large-scale language datasets (BERT and its variants). Given an input word embedding matrix $X \in \mathbb{R}^{T \times d}$ for a sentence of length $T$, the Transformer applies self-attention over the previous layer's representation to produce a new representation,
$$X_l = f_l([H_{l,1}; \ldots; H_{l,H}]), \quad H_{l,i} = A_{l,i} X_{l-1} V_{l,i}, \quad A_{l,i} = \operatorname{softmax}\!\left(\frac{(X_{l-1} Q_{l,i})(X_{l-1} K_{l,i})^\top}{\sqrt{d_k}}\right), \qquad (1)$$
where $f_l$ is an MLP layer, $H$ is the number of heads, $d_H = d/H$ is the head embedding dimension, and $Q_{l,i}, K_{l,i}, V_{l,i} \in \mathbb{R}^{d \times d_k}$ are respectively the learned query, key, and value projection matrices at layer $l$ for head $i$. (Footnote 2: We use constituency parse trees from the English Penn Treebank (Marcus et al., 1994).) The MLP layer consists of a residual layer followed by layer normalization and a nonlinearity. The 0-th layer representation $X_0$ is obtained by adding the position embeddings and the segment embeddings to the input token embeddings $X$, and passing the result through a normalization layer. (Footnote 3: However, the exact specification for the MLP and $X_0$ may vary across different pretrained models.)

In this paper, we conduct our distortion analysis mainly on the intermediate Transformer representations $X_l = [x_{l,1}, \ldots, x_{l,T}]$, where $x_{l,t} \in \mathbb{R}^d$ is the contextualized representation for word $t$ at layer $l$. (Footnote 4: BERT uses BPE tokenization (Sennrich et al., 2015), which means that some words are split into multiple tokens. Since we wish to evaluate representations at word level, if a word is split into multiple tokens, its word representation is computed as the average of all its token representations.) We analyze the trend in distortion as a function of layer depth $l$ for the different perturbations. We also explore the different attention heads $H_{l,i} \in \mathbb{R}^{T \times d_H}$ and the associated attention matrix $A_{l,i} \in \mathbb{R}^{T \times T}$ to inspect whether certain attention heads specialize at encoding syntactic information.
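Eq. 1 translates directly into code. The following PyTorch-style sketch of a single layer (ours, and deliberately simplified: it omits masking, dropout, and the exact MLP/residual specification, which vary across the pretrained models) mirrors the notation above:

import torch
import torch.nn.functional as F

def transformer_layer(X_prev, Q, K, V, mlp):
    """One layer in the notation of Eq. 1.
    X_prev : (T, d) previous-layer representation X_{l-1}
    Q, K, V: lists of H projection matrices, each of shape (d, d_k)
    mlp    : callable f_l applied to the concatenated heads
    """
    heads = []
    for Qi, Ki, Vi in zip(Q, K, V):
        d_k = Qi.shape[1]
        A = F.softmax((X_prev @ Qi) @ (X_prev @ Ki).T / d_k ** 0.5, dim=-1)
        heads.append(A @ (X_prev @ Vi))   # H_{l,i} = A_{l,i} X_{l-1} V_{l,i}
    return mlp(torch.cat(heads, dim=-1))  # X_l = f_l([H_{l,1}; ...; H_{l,H}])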
2.3 DISTORTION METRIC

Our input manipulations allow us to specify the distortion at the input level, and we wish to measure the corresponding distortion in the representation space (Fig. 1). Due to the attention mechanism, a single vector in an intermediate layer is a function of the representations of (potentially all) the other tokens in the sentence. Therefore, the information about a particular word might be distributed among the many feature vectors of a sentence, and we wish to consider all feature vectors together as a single sentence-level representation.

We thus represent each sentence as a matrix and use the distance induced by the matrix 2-norm. Specifically, let $P \in \{0,1\}^{T \times T}$ be the binary matrix representation of a permutation that perturbs the input sentence, i.e., $\tilde{X} = PX$. Further let $\tilde{X}_l$ and $X_l$ be the corresponding sentence representations for the $l$-th layer for the perturbed and original sentences. To ignore uniform shifting and scaling, we also z-score each feature dimension of each layer (by subtracting the mean and dividing by the standard deviation, where these statistics are obtained from the full Penn Treebank corpus) to give $\tilde{Z}_l$ and $Z_l$. Our distortion metric for layer $l$ is then defined as $\|Z_l - P^{-1}\tilde{Z}_l\| / \sqrt{Td}$, where $\|\cdot\|$ is the matrix 2-norm (i.e., Frobenius norm). (Footnote 5: There are many possible ways of measuring distortion, such as the average cosine similarity or distance between corresponding feature vectors, as well as different matrix norms. We observed the results to be qualitatively similar for different measures, and hence we focus on the Frobenius norm in our main results. We show the results from additional distortion metrics in Sec. A.3.) Importantly, we invert the permutation of the perturbed representation to preserve the original ordering, which allows us to measure the distortion due to structural change, rather than distortion due to simple differences in surface form. We divide by $\sqrt{Td}$ to make the metric comparable between sentences (with different $T$) and networks (with different $d$).

Intuitively, our metric is the scaled Euclidean distance between the z-scored, flattened sentence representations, $z_l \in \mathbb{R}^{Td}$. Because each dimension is independently centered and standardized, the maximally unstructured distribution of $z_l$ is an isotropic $Td$-dimensional Gaussian. The expected distance between two such vectors is approximately $\sqrt{2Td}$. Therefore, we can interpret a distortion value approaching $\sqrt{2}$ as comparable to if we had randomly redrawn the perturbed representation.

3 EXPERIMENTAL SETUP

We apply our perturbation-based analysis on sentences from the English Penn Treebank (Marcus et al., 1994), where we average the distortion metric across randomly chosen sentences (see Sec. A.1 for the exact details). We analyze the distortion, as measured by the length-normalized Frobenius norm between the perturbed and original representations, as a function of layer depth. Layers that experience large distortion when the syntactic structure is disrupted by the perturbation can be interpreted as being more sensitive to hierarchical syntactic structure.

As we found the trend to be largely similar across the different models, in the following section we primarily discuss results from BERT (bert-base-cased), which has 12 layers and a hidden size of 768 (Devlin et al., 2018). We show results from other pretrained and randomly-initialized Transformer-based models, including RoBERTa (Liu et al., 2019b), ALBERT (Lan et al., 2019), DistilBERT (Sanh et al., 2019), and XLNet (Yang et al., 2019), in the appendix (Sec. A.2).

Figure 2: Swapping n-grams and phrases. (a) Examples of basic n-gram shuffles, where colors indicate the units of shuffling. (b) Distortion metric computed at each layer, conditioned on n-gram size. Error bars hereafter represent standard error across 400 examples. (c) An example parse tree, with phrase boundaries shown as grey brackets, and two low-order phrases marked; and examples of a phrasal and control swap, with colors corresponding to the phrases marked above. (d) Distortion, computed at each layer, using either the full sentence, the subsentence of unswapped words, or the subsentence of swapped words, conditioned on swap type.

4 RESULTS

We summarize our findings for the different perturbations below. While not shown in the main results, we note that randomly-initialized (i.e., untrained) models (somewhat unsurprisingly) exhibit a flat distortion trend for all perturbations (see Sec. A.2). This indicates that the patterns observed here are due to the model's structural knowledge acquired through training, and not simply due to the underlying architecture.

4.1 CHARACTERISTIC SCALE INCREASES ALONG BERT LAYERS

When we shuffle in units of larger n-grams, it only introduces distortions in the deeper BERT layers compared to smaller n-gram shuffles. The n-gram-sized shuffles break contexts larger than n, while preserving contexts of size n or smaller. Interestingly, smaller n-gram shuffles diverge from the original sentence in the early layers (Fig. 2b, top curve), implying that only in early layers are representations built from short-range contexts.
Larger n-gram shuffles remain minimally distorted for 'longer' (Fig. 2b, bottom curve), implying that long-range contexts play a larger role in deeper layer representations.

Phrasal boundaries matter. Since BERT seems to build larger contexts along its layers, we now ask whether those contexts are structures of some grammatical significance. A basic and important syntactic feature is the constituent phrase, which BERT has previously been shown to represent in some fashion (Goldberg, 2019; Kim et al., 2020). We applied two targeted probes of phrase structure in the BERT representation, and found that phrasal boundaries are indeed influential.

If we swap just two n-grams, the BERT representations are less affected when phrases are kept intact. We show this by swapping only two n-grams per sentence and comparing the distortion when those n-grams are phrases to when they cross phrase boundaries (Fig. 2c), where we control for the length of the n-grams that are swapped in both settings. There is less distortion when respecting phrase boundaries. Furthermore, the distortion is evident among all feature vectors, including those in the position of words which did not get swapped (Fig. 2d). The global contextual information, distributed across the sentence, is affected by the phrase boundary.

4.2 PHRASE HIERARCHY MATTERS

Having seen that representations are sensitive to phrase boundaries, we next explore whether that sensitivity is proportional to the number of phrase boundaries that are broken, which is a quantity related to the phrase hierarchy. Instead of swapping entire phrases, we swap two adjacent words and analyze the distortion based on how far apart the two words are in the constituency tree (Fig. 3a). (Footnote 6: Note that for adjacent words, the number of broken phrase boundaries equals the tree distance minus two.) This analysis varies the distance in the deeper tree structure while keeping the distance in surface form constant (since we always swap adjacent words).

If the hierarchical representations are indeed being gradually built up along the layers of these pretrained models, we expect distortion to be greater for word swaps that are further apart in tree distance. We indeed find that there is a larger distortion when swapping syntactically distant words (Fig. 3b). This distortion grows from earlier to later BERT layers. Furthermore, when looking at the per-head representations of each layer, we see that in deeper layers there are more heads showing a positive rank correlation between distortion and tree distance (Fig. 3c).

Figure 3: Syntactic distance affects representational distortion. (a) An example of adjacent swaps which do and do not cross a phrase boundary, with low-order phrases colored. Phrase boundaries are drawn in red. (b) Distortion in each layer, but conditioned on the tree distance. (c) For each head (column) of each layer (row), the (Spearman) rank correlation between distortion and tree distance of the swapped words. Colors are such that red is positive, blue negative. (d) Histogram of PMI values, for pairs in the same phrase and not. (e) Similar to b, but averaging all out-of-phrase swaps, and separating pairs above ('high') or below ('low') the median PMI.
In addition to a sensitivity to phrase boundaries, deeper BERT layers develop a sensitivity to the number of boundaries that are broken.

Controlling for co-occurrence. Since words in the same phrase may tend to occur together more often, co-occurrence is a potential confound when assessing the effects of adjacent word swaps. Co-occurrence is a simple statistic which does not require any notion of grammar to compute – indeed, it is used to train many non-contextual word embeddings (e.g., word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014)). So it is natural to ask whether BERT's resilience to syntactically closer swaps goes beyond simple co-occurrence statistics. For simplicity, let us focus on whether a swap occurs within a phrase (tree distance = 2) or not.

As an estimate of co-occurrence, we used the pointwise mutual information (PMI). Specifically, for two words $w$ and $v$, the PMI is $\log \frac{p(w,v)}{p(w)\,p(v)}$, which is estimated from the empirical probabilities. We confirm that adjacent words in the same phrase do indeed have a second mode at high PMI (Fig. 3d). Dividing the swaps into those whose words have high PMI (above the marginal median) and low PMI (below it), we can see visually that the difference between within-phrase swaps and out-of-phrase swaps persists in both groups (Fig. 3e). For a more careful statistical test, in the appendix we show results from running a linear regression between distortion and the phrase boundary which accounts for dependency on any smooth function of PMI (see details in A.4). Even when accounting for the effect of PMI, there is a significant correlation between the breaking of a phrase and the subsequent distortion. This indicates that the greater distortion for word swaps which cross phrase boundaries is not simply due to surface co-occurrence statistics.

Effects on linguistic information. Do our input perturbations, and the resulting distortions, reflect changes in the encoding of important linguistic information? One way to address this question, which is popular in computational neuroscience (DiCarlo & Cox, 2007) and more recently in BERTology (Liu et al., 2019a; Tenney et al., 2019), is to see how well a linear classifier trained on a linguistic task generalizes from the (representations of the) unperturbed sentences to the perturbed ones. With supervised probes, we can see how much the representations change with respect to the subspaces that encode specific linguistic information.

Specifically, we relate representational distortion to three common linguistic tasks of increasing complexity: part-of-speech (POS) classification; grandparent tag (GP) classification (Tenney et al., 2019); and parse tree distance reconstruction (Hewitt & Manning, 2019). The probe trained on each of these tasks is a generalized linear model, linearly mapping a datapoint $x$ (i.e., BERT representations from different layers) to a conditional distribution of the labels, $p(y \,|\, \pi(x))$ (see A.5 for more details). Thus a ready measure of the effect of each type of swap, for a single sentence, is $\log p(y \,|\, \pi(x_i)) - \log p(y \,|\, \pi(\tilde{x}_i))$, where $\tilde{x}_i$ is the same datum as $x_i$ in the perturbed representation. Averaging this quantity over all datapoints gives a measure of content-specific distortion within a representation, which we will call "inference impairment".

Figure 4: Distortion and inference impairment for increasing linguistic complexity. In each plot, a point is the average (distortion, 'impairment') for a given layer and a given class of word swap distance. Points are connected by lines according to their swap type (i.e., tree distance). The circles are colored according to layer (see right for a legend). Averages are taken over 600 test sentences, with one of each swap type per sentence, and both distortion and log-likelihood are computed for every word in the sentence.
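Schematically, the inference impairment can be computed as follows (a sketch with names of our own choosing; probe_logprob stands in for the trained generalized linear probe):

import numpy as np

def inference_impairment(probe_logprob, feats_orig, feats_pert, labels):
    """Average drop in probe log-likelihood caused by a perturbation.

    probe_logprob(x, y) -> log p(y | pi(x)) for the trained probe
    feats_orig, feats_pert : per-word layer features for the original
                             and perturbed sentence (same ordering)
    labels : gold labels (e.g., POS tags) for each word
    """
    drops = [probe_logprob(x, y) - probe_logprob(xt, y)
             for x, xt, y in zip(feats_orig, feats_pert, labels)]
    return float(np.mean(drops))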
Based on the three linguistic tasks, the distortion we measure from the adjacent word swaps is more strongly related to more complex information. The inverted-L shape of Fig. 4a suggests that increasing distortion is only weakly related to impairment of POS inference, which is perhaps unsurprising given that POS tags can be readily predicted from local context. A deeper syntactic probe, the GP classifier (Fig. 4b), does show a consistent positive relationship, but only for swaps which break a phrase boundary (i.e., distance > 2). Meanwhile, impairment of the distance probe (Fig. 4c), which reconstructs the full parse tree, has a consistently positive relationship with distortion, whose slope is proportional to the tree distance of the swap. Thus, when specifically perturbing the phrasal boundary, the representational distortion is related to relatively more complex linguistic information.

4.3 A POSSIBLE MECHANISTIC EXPLANATION THROUGH ATTENTION

In the Transformer architecture, contexts are built with the attention mechanism. Recall that attention is a mechanism for allowing input vectors to interact when forming the output, and the ultimate output for a given token is a convex combination of the features of all tokens (Eq. 1). While any interactions between inputs must be mediated by attention, it is not obvious how the contextual information of a particular layer is captured by attention in that layer. It has been shown qualitatively that, within a layer, BERT allocates attention preferentially to words in the same phrase (Kim et al., 2020). Our next suite of experiments asks whether this might explain the observed relationship between tree distance and distortion.

We find that in many Transformer heads, the attention – much like distortion – is proportional to the syntactic distance between two words. Fig. 5c summarizes this trend by showing the Spearman rank correlation between the parse tree distance of adjacent word pairs, and the attention paid between those words. Different attention heads within a layer range from correlated to anti-correlated, with slightly more positively correlated heads in deeper layers. However, there is great variability in this, suggesting that only certain heads learn to specialize to syntactic phenomena.

(Footnote 7: While the original paper predicted dependency tree distance, in this paper we instead predict the constituency tree distance.)
(Footnote 8: POS- and GP-tag prediction outputs a sequence of labels for each sentence, while the distance probe outputs the constituency tree distance between each pair of words. Then $\log p(y \,|\, \pi(x_i))$ is simply the log probability of an individual label.)

Figure 5: Attention provides a possible explanation for the trends we observe in distortion. (a) An example of the attention matrices for all heads in a single layer (layer 8), given the above sentence as input. Phrases in the sentence are drawn as blocks in the matrix. (b) The rank correlation of attention vs. tree distance for all the heads/layers. (c) The rank correlation coefficients of distortion (y-axis) and attention (x-axis) against tree distance, colored by layer. Marker size is proportional to 1 minus the p-value of the distortion/distance correlation.
(d) A comparison of p-values on the distortion vs. tree distance correlation without (x-axis) and with attention splines as covariates (y-axis).

We observe that at least some of the observed correlation between swap-induced distortion and parse distance can be accounted for by attention. Of course, all interactions between inputs are mediated by attention, but it is not certain that the contextual information in a particular layer comes from the attention at that layer. To test whether the correlation between tree distance and distortion persists when accounting for attention, we used a linear regression with any smooth function of attention as a covariate (see A.4). We observe larger p-values in the controlled regression, indicating that the correlations become less significant when accounting for attention (Fig. 5d). This suggests that the attention in each layer helps to build sensitivity to syntactic distance.

5 DISCUSSION

In this paper, we used the representational change in response to perturbed input to study the encoding of hierarchical phrasal structure in deep language models. We also identify a link between the perturbation-induced distortion and the magnitude of attention paid to words within and out of phrase boundaries as a potential mechanistic explanation. Across different models, we find that the word-level contexts used to represent a sentence grow in size and complexity along the model layers, similar to the increasing size of receptive fields found in sensory systems.

In neuroscience, it is well accepted that small receptive fields tuned to simple stimuli (i.e., edges) are combined to form larger receptive fields tuned to more complex stimuli (i.e., objects) (Riesenhuber & Poggio, 1999). In language, a phrase within a sentence can serve as a conceptual unit, much like an object in a visual scene, thus motivating our perturbation-based probe for object-like representational sensitivity of phrases. We showed that BERT and its variants are indeed sensitive to the phrasal unit, as demonstrated by greater invariance to perturbations preserving phrasal boundaries compared to control perturbations which break the phrasal boundaries (Fig. 2-5 for BERT; see SM for other models).

Our method and results suggest many interesting future directions. We hope that this work will motivate: (1) a formal theory of efficient hierarchical data representations in distributed features; (2) a search for the causal connection between attention structure, the representational geometry, and the model performance; (3) potential applications in network pruning studies; (4) an extension of the current work as a hypothesis generator in neuroscience to understand how neural populations implement tasks with an underlying compositional structure.

Figure 6: Additional experiments and clarifications (by reviewer).<|im_end|>
<|im_start|>assistant
### Review Title
Reveals correlations between hierarchical phrase structure and BERT's output, but lacks a theoretical explanation.
### Review Text
--- Overall --- This paper provides some insights into the relation of BERT's output w.r.t. the parse tree (in terms of constituent phrases) of the input sentence. As some previous work has pointed out, the BERT model contains parsing information (e.g., Hewitt & Manning, NAACL'19). This work can be regarded as a moderate reification and improvement of that thought (but it is still limited to existing scopes and methodologies).
The merits of this paper: (1) the paper reveals some interesting facts, such as that BERT is sensitive to phrasal hierarchy and that there are behavioural discrepancies between different layers; (2) the experiments are comprehensive, including both the distortion analysis and conventional probe approaches. In my point of view, the main issue of this paper is that, like many other works, there is no strong theoretical explanation for the phenomenon being investigated. In this sense, the novelty of this paper is not so strong. --- Major comments --- 1. In the experiments, it is not clear whether the randomness of BERT itself has been deducted. The randomness could be caused by the dropout operation, which may lead to discrepancies in the output even when using the same sentence. 2. In a future version, I recommend further providing two tests: (1) in the current settings, the distortions are shown at sentence level (e.g., by summing up all distortions within a sentence?). I would like to see a finer-grained test, i.e., whether the largest distortions are produced by the swapped parts; (2) the genre of the Penn Treebank dataset is limited to general texts or articles that may be seen in the training corpus of BERT. I would recommend testing on other domains (e.g., biomedical or academic) that BERT never saw before (or structure-less data that do not present a syntactic structure). 3. Do X~ and X in Section 2.3 use the same mean and deviation? 4. It seems that Fig. 2(c) takes into account both NP and VP; what if we constrain the phrases to be only NP? Will the distortion become large, since swapping subject and object will lead to totally different meanings? 5. There is a lack of explanation of the "all words", "swapped", and "unswapped" settings of Fig. 2(d). 6. Is there any intuition for the step back at layers 11 and 12 in Fig. 4(b) and 4(c)? --- Minor comments --- 1. Strictly speaking, the terminology "gram" in Fig. 2(a) should be called "chunk", as "grams" are usually related to a sliding window and thus overlap with each other. 2. "Fig. 3(a)" in the paragraph just above Section 4.2 should be "Fig. 2(c)".
### Review Rating
6: Marginally above acceptance threshold
### Review Confidence
3: The reviewer is fairly confident that the evaluation is correct<|im_end|>
vlcVTDaufN
ICLR.cc/2021/Conference
2021
Differentiable Combinatorial Losses through Generalized Gradients of Linear Programs
["Xi Gao", "Han Zhang", "Aliakbar Panahi", "Tom Arodz"]
Combinatorial problems with linear objective function play a central role in many computer science applications, and efficient algorithms for solving them are well known. However, the solutions to these problems are not differentiable with respect to the parameters specifying the problem instance – for example, the shortest distance between two nodes in a graph is not a differentiable function of graph edge weights. Recently, attempts to integrate combinatorial and, more broadly, convex optimization solvers into gradient-trained models resulted in several approaches for differentiating over the solution vector to the optimization problem. However, in many cases, the interest is in differentiating over only the objective value, not the solution vector, and using existing approaches introduces unnecessary overhead. Here, we show how to perform gradient descent directly over the objective value of the solution to combinatorial problems. We demonstrate the advantage of the approach in examples involving sequence-to-sequence modeling using a differentiable encoder-decoder architecture with softmax or Gumbel-softmax, and in weakly supervised learning involving a convolutional, residual feed-forward network for image classification.
["combinatorial optimization", "linear programs", "generalized gradient"]
ABSTRACT

Combinatorial problems with linear objective function play a central role in many computer science applications, and efficient algorithms for solving them are well known. However, the solutions to these problems are not differentiable with respect to the parameters specifying the problem instance – for example, the shortest distance between two nodes in a graph is not a differentiable function of graph edge weights. Recently, attempts to integrate combinatorial and, more broadly, convex optimization solvers into gradient-trained models resulted in several approaches for differentiating over the solution vector to the optimization problem. However, in many cases, the interest is in differentiating over only the objective value, not the solution vector, and using existing approaches introduces unnecessary overhead. Here, we show how to perform gradient descent directly over the objective value of the solution to combinatorial problems. We demonstrate the advantage of the approach in examples involving sequence-to-sequence modeling using a differentiable encoder-decoder architecture with softmax or Gumbel softmax, and in weakly supervised learning involving a convolutional, residual feed-forward network for image classification.

1 INTRODUCTION

Combinatorial optimization problems, such as shortest path in a weighted directed graph, minimum spanning tree in a weighted undirected graph, or optimal assignment of tasks to workers, play a central role in many computer science applications. We have highly refined, efficient algorithms for solving these fundamental problems (Cormen et al., 2009; Schrijver, 2003). However, while we can easily find, for example, the minimal spanning tree in a graph, the total weight of the tree as a function of graph edge weights is not differentiable. This problem hinders using solutions to combinatorial problems as criteria in training models that rely on differentiability of the objective function with respect to the model parameters.

Losses that are defined by the objective value of some feasible solution to a combinatorial problem, not the optimal one, have been recently proposed for image segmentation using deep models (Zheng et al., 2015; Lin et al., 2016). These focus on a problem where some pixels in the image have segmentation labels, and the goal is to train a convolutional network that predicts segmentation labels for all pixels. For pixels with labels, a classification loss can be used. For the remaining pixels, a criterion based on a combinatorial problem – for example, the maximum flow / minimal cut problem in a regular lattice graph connecting all pixels (Boykov et al., 2001) or derived, higher-level super-pixels (Lin et al., 2016) – is often used as a loss, in an iterative process of improving discrete segmentation labels (Zheng et al., 2015; Marin et al., 2019). In this approach, the instance of the combinatorial problem is either fixed, or depends only on the input to the network; for example, similarity of neighboring pixel colors defines edge weights. The output of the neural network gives rise to a feasible, but rarely optimal, solution to that fixed instance of a combinatorial problem, and its quality is used as a loss. For example, the pixel labeling proposed by the network is interpreted as a cut in a pre-defined graph connecting the $n$ pixels.
Training the network should result in improved cuts, but no attempt to use a solver to find an optimal cut is made.

Here, we consider a different setup, in which each new output of the neural network gives rise to a new instance of a combinatorial problem. A combinatorial algorithm is then used to find the optimal solution to the problem defined by the output, and the value of the objective function of the optimal solution is used as a loss. After each gradient update, the network will produce a new combinatorial problem instance, even for the same input sample. Iteratively, the network is expected to learn to produce combinatorial problem instances that have low optimal objective function value. For example, in sequence-to-sequence modeling, the network will output a new sentence that is supposed to closely match the desired sentence, leading to a new optimal sequence alignment problem to be solved. Initially, the optimal alignment will be poor, but as the network improves and the quality of the output sentences gets higher, the optimal alignment scores will be lower.

Recently, progress in integrating combinatorial problems into differentiable models has been made by modifying combinatorial algorithms to use only differentiable elements (Tschiatschek et al., 2018; Mensch & Blondel, 2018; Chang et al., 2019), for example smoothed max instead of max in dynamic programming. Another approach involves executing two runs of a non-differentiable, black-box combinatorial algorithm and uses the two solutions to define a differentiable interpolation (Vlastelica Pogančić et al., 2020; Rolínek et al., 2020). Finally, differentiable linear programming and quadratic programming layers, which can be used to model many combinatorial problems, have been proposed recently (Amos & Kolter, 2017; Agrawal et al., 2019; Wilder et al., 2019; Ferber et al., 2019).

The approaches above allow for differentiating through optimal solution vectors. In many cases, we are interested only in the optimal objective value, not the solution vector, and the approaches above introduce unnecessary overhead. We propose an approach for gradient-descent based training of a network $F(x;\theta)$ for supervised learning problems involving samples $(x, y)$, with the objective criterion involving a loss term of the form $L(\theta) = h(\mathrm{OptSolutionObjectiveValue}(\Phi(F(x;\theta), y)))$, where $h: \mathbb{R} \to \mathbb{R}$ is some differentiable function, and $\Phi$ is a combinatorial solver for a problem instance defined by the output of the $\theta$-parameterized network $F$ for feature vector $x$ and by the true label $y$. We show that a broad class of combinatorial problems can be integrated into models trained using variants of gradient descent. Specifically, we show that for an efficiently solvable combinatorial problem that can be efficiently expressed as an integer linear program, generalized gradients of the problem's objective value with respect to real-valued parameters defining the problem exist and can be efficiently computed from a single run of a black-box combinatorial algorithm. Using the above result, we show how generalized gradients of combinatorial problems can provide a sentence-level loss for text summarization using differentiable encoder-decoder models that involve softmax or Gumbel-softmax (Jang et al., 2016), and a multi-element loss for training classification models when only weakly supervised, bagged training data is available.
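To make the core idea concrete before the formal development, the following minimal numpy sketch (ours, not from the paper; the feasible set and names are invented) treats the optimal objective value $z^*(c) = \min_{u \in U} c^T u$ over a fixed finite set of feasible integer solutions as a function of the cost vector, and checks numerically that its gradient, wherever it exists, is exactly the minimizing solution vector $u^*(c)$:

```python
import numpy as np

# Feasible integer solutions of a toy combinatorial problem (assumed fixed).
U = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])

def solver(c):
    """Black-box 'combinatorial solver': enumerate the feasible solutions."""
    vals = U @ c
    i = np.argmin(vals)
    return vals[i], U[i]          # optimal objective value and solution vector

c = np.array([0.3, 1.2, 0.5])
z, u_star = solver(c)

# Finite-difference gradient of the objective value w.r.t. c ...
eps = 1e-6
fd_grad = np.array([(solver(c + eps * e)[0] - z) / eps for e in np.eye(3)])

# ... matches the optimal solution vector, as the theory below predicts.
print(fd_grad, u_star)            # both ~ [1. 0. 1.]
```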
2 DIFFERENTIABLE COMBINATORIAL LOSSES

2.1 BACKGROUND ON GENERALIZED GRADIENTS

A function $f: X \to \mathbb{R}$ defined over a convex, bounded open set $X \subseteq \mathbb{R}^p$ is Lipschitz-continuous on an open set $B \subseteq X$ if there is a finite $K \in \mathbb{R}$ such that $\forall x, y \in B\;\; |f(x) - f(y)| \le K \|x - y\|$. A function is locally Lipschitz-continuous if for every point $x_0$ in its domain there is a neighborhood $B_0$, an open ball centered at $x_0$, on which the function is Lipschitz-continuous. For such functions, a generalized gradient can be defined.

Definition 1. (Clarke, 1975) Let $f: X \to \mathbb{R}$ be Lipschitz-continuous in the neighborhood of $x \in X$. Then, the Clarke subdifferential $\partial f(x)$ of $f$ at $x$ is defined as
$\partial f(x) = \mathrm{conv}\{\lim_{x_k \to x} \nabla f(x_k)\}$,
where the limit is over all convergent sequences involving those $x_k$ for which the gradient exists, and conv denotes the convex hull, that is, the smallest polyhedron that contains all vectors from a given set. Each element of the set $\partial f(x)$ is called a generalized gradient of $f$ at $x$.

The Rademacher theorem (see e.g. (Evans, 1992)) states that for any locally Lipschitz-continuous function the gradient exists almost everywhere; convergent sequences can be found. In optimization algorithms, generalized gradients can be used in the same way as subgradients (Redding & Downs, 1992), that is, nondifferentiability may affect convergence in certain cases.
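As a standard worked example (ours, not from the paper): the absolute value function is locally Lipschitz-continuous but not differentiable at the origin, and Definition 1 yields

```latex
f(x) = |x|, \qquad
\partial f(x) =
\begin{cases}
\{-1\}, & x < 0, \\
\operatorname{conv}\{-1, +1\} = [-1, 1], & x = 0, \\
\{+1\}, & x > 0,
\end{cases}
```

so at the kink every element of $[-1, 1]$ is a generalized gradient of $f$. The optimal value of a linear program, being piecewise linear in its parameters, exhibits exactly this kind of nondifferentiability at degenerate instances.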
Then, the combinatorialproblem is conceptually viewed as an instance of an ILP. A possibly exponentially large linearprogram (LP) equivalent to the ILP is then used, without actually being spelled out or solved, toderive generalized gradients based on the solution vector returned by the combinatorial algorithm.First, we introduce several notions of efficiency of transforming a combinatorial problem into alinear integer program that will be convenient in defining the generalized gradients of combinatorialproblems.Definition 2. LetP(w)be a combinatorial problem that is parameterized by a continuous vectorw2W Rn, whereWis simply connected and nis the problem size, and let k2Zbe a constantthat may depend on the problem type but not on its size. Then, a combinatorial problem is•primal-dual@-efficient if it can be phrased as an integer linear program involving nvariables,withknconstraints in an LP formulation equivalent to the ILP , and the parameters (A;b;c )of the LP formulation depend on wthrough (sub)differentiable functions, c=c(w);A=A(w);b=b(w).•primal@-efficient if it can be phrased as an integer linear program involving nvariables,the parameters wof the problem influence the cost vector cthrough a (sub)differentiablefunctionc=c(w), and do not influence the constraints A;b.•dual@-efficient if it can be phrased as an integer linear program in which the number ofconstraints in the equivalent LP formulation is kn, the parameters wof the problem influencebthrough a (sub)differentiable function b=b(w), and do no influence the constraint matrixAnor the cost vector c.The class of @-efficient problems includes polynomially solvable combinatorial problems withobjective function that is linear in terms of problem parameters. Typically, the functions c=c(w),b=b(w)andA=A(w)are either identity mapping or are constant; for example, in the LP formaximum network flow, the cost vector cis composed directly of edge capacities, and Aanbareconstant for a given flow network topology, and do not depend on capacities.For any polynomially solvable combinatorial problem, we can construct a poly(n)-sized Booleancircuit for the algorithm solving it. For each poly(n)-sized circuit, there is a linear program withpoly(n)variables and constraints that gives the same solution (see (Dasgupta et al., 2008), Chap.3Under review as a conference paper at ICLR 20217). For example, for MST in a graph with Vvertices and Eedges, the Martin’s ILP formulation(Martin, 1991) has only poly(V+E)constraints, but it is an extended formulation that involvesVEadditional variables on top of the typical Evariables used in the standard ILP formulations forMST. Thus, we cannot use it to construct an ILP formulation that would make MST primal-dual@-efficient. Alternatively, there is an ILP for MST with one binary variable per edge, and the weightof the edge only influences the cost vector c, but to prohibit cycles in the solution there is a constraintfor each cycle in the graph, thus the number of constraints is not poly(n)for arbitrary graphs. Theseconstraints are specified fully by the topology of the graph, not by the edge weights, so wdoes notinfluenceAnorb, meeting the conditions for primal @-efficiency. 
Some polynomially solvable combinatorial problems are not ∂-efficient in any of the above senses. For example, fixed-rank combinatorial problems with interaction costs (Lendl et al., 2019) can be phrased succinctly as a bilinear program, but lead to prohibitively large linear programs both in terms of the number of variables and the number of constraints.

For ∂-efficient problems, we can efficiently obtain generalized gradients of the objective value.

Theorem 1. Consider a combinatorial problem $P(w)$ of size $n$, a parameter vector $w$ from the interior of the parameter domain $W$, and an algorithm $\Phi(w)$ for solving it in time poly($n$). Let $z^*$ be the optimal objective value returned by $\Phi$. Then,
• if $P$ is primal ∂-efficient, then the generalized gradients $\partial z^*(w)$ exist, and can be efficiently computed from $U$, the set of primal solutions of the ideal formulation of the integer program corresponding to $P$;
• if $P$ is dual ∂-efficient, then the generalized gradients $\partial z^*(w)$ exist, and can be efficiently computed from $V$, the set of all dual solutions to the ideal formulation of the integer program corresponding to $P$;
• if $P$ is primal-dual ∂-efficient, then the generalized gradients $\partial z^*(w)$ exist, and can be efficiently computed from $U$ and $V$, as defined above.

Proof. A series of results (Gal, 1975; Freund, 1985; De Wolf & Smeers, 2000) shows that if the optimal objective value $z^* = \mathrm{LP}(c, A, b)$ for a linear program is finite at $(c, A, b)$ and in some neighborhood of $(c, A, b)$, then generalized gradients of $z^*$ with respect to $c$, $b$, and $A$ exist and are
$\partial z^*(c) = U$, $\;\partial z^*(b) = V$, $\;\partial z^*(A) = \{-vu^T : (u, v) \in V \times U\}$.
We build on these results to obtain generalized gradients of the linear program corresponding to the combinatorial problem. For the first case in the theorem, Definition 2 states that in the linear program corresponding to $P$, only the cost vector $c$ depends on $w$, through a (sub)differentiable function $c = c(w)$. Since $w$ is in the interior of the parameter domain $W$, the objective value is finite over some neighborhood of $w$. Then,
$\partial z^*(w) = \partial z^*(c)\,\frac{\partial c}{\partial w} = \frac{\partial c}{\partial w}\, U$,
where the generalized gradient $\partial z^*(c)$ exists and is equal to $U$. For the second case, the ideal formulation LP exists. Then, from Definition 2 we have that
$\partial z^*(w) = \partial z^*(b)\,\frac{\partial b}{\partial w} = \frac{\partial b}{\partial w}\, V$.
The third case is a direct extension of the first two cases.
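A quick numerical sanity check of these sensitivity formulas can be done with SciPy's HiGHS-based linprog (a sketch, ours; we assume SciPy ≥ 1.7, where the result object exposes equality-constraint duals as res.eqlin.marginals, defined as the sensitivity of the objective to the right-hand side):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.5])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

def opt_value(c, b):
    return linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs").fun

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
u, v = res.x, res.eqlin.marginals   # primal solution and equality duals (HiGHS)

eps = 1e-6
fd_c = np.array([(opt_value(c + eps * e, b) - res.fun) / eps for e in np.eye(3)])
fd_b = (opt_value(c, b + eps) - res.fun) / eps

print(fd_c, u)   # both ~ [0, 0, 1]: gradient w.r.t. c is the primal solution
print(fd_b, v)   # both ~ 0.5: gradient w.r.t. b is the dual solution
```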
Theorem 1 indicates that black-box combinatorial algorithms can be used to expand the range of transformations that can be efficiently utilized in neural networks. One immediate area of application is using them to specify a loss function. Consider a network $F(x;\theta)$ parameterized by a vector of tunable parameters $\theta$. The network transforms a batch of input samples $x$ into a batch of outputs $o = F(x;\theta)$. Then, in the broadest primal-dual ∂-efficient case, $o$ is used, possibly with the true classes $y$, to formulate the parameters $(c, A, b) = g(o, y)$ of a linear program corresponding to the combinatorial problem, through some (sub)differentiable function $g$. For a given $\theta$ and given batch samples $(x, y)$, we can then define the loss as a function of the optimal objective value of the linear program corresponding to the combinatorial problem resulting from $g(F(x;\theta), y)$, that is, $L(\theta) = h(z^*(c, A, b))$. This approach, summarized in Algorithm 1, allows us to obtain the generalized gradient of the loss with respect to $\theta$ as long as the functions $g$ and $h$ are differentiable. For clarity, in Algorithm 1 we did not consider functions $h$ depending not just on $z^*$ but also on $x$ or $y$, but the extension is straightforward.

Algorithm 1 Minimization of a combinatorial loss
Input: batch $x \subseteq X$, $y \subseteq Y$, network $F(x;\theta)$, functions $g$, $h$, combinatorial algorithm $\Phi$
Output: loss and its generalized gradient, $L(\theta)$, $\partial L(\theta)$
1: procedure CombLossMin($x$, $y$, $\theta$, $F$, $g$, $h$, $\Phi$)
2:   forward pass $o = F(x;\theta)$
3:   forward pass $(c, A, b) = g(o, y)$
4:   run the combinatorial solver to find the optimal objective value $z^* = \Phi(c, A, b)$ and the optimal primal and/or dual solution vectors $u$, $v$
5:   forward pass $L(\theta) = h(z^*)$
6:   backward pass through $h$: $\partial L / \partial z^*$
7:   backward pass through $\Phi$: $\partial z^*(c) = u$, $\partial z^*(b) = v$, $\partial z^*(A) = -vu^T$
8:   backward pass through $g$ and $F$:
9:   $\partial L(\theta) = \frac{\partial L}{\partial z^*} \left( u\,\frac{\partial c}{\partial \theta} - vu^T\,\frac{\partial A}{\partial \theta} + v\,\frac{\partial b}{\partial \theta} \right)$
10:  return $L(\theta)$, $\partial L(\theta)$
11: end procedure
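A minimal PyTorch sketch of Algorithm 1, restricted to the primal ∂-efficient case (our illustration; the class name, the toy solver, and the feasible set are invented, and any code released with the paper may differ): the forward pass calls an arbitrary black-box solver on the cost vector produced by the network, and the backward pass returns the optimal solution vector as the generalized gradient of $z^*$ with respect to $c$:

```python
import torch

class CombinatorialObjective(torch.autograd.Function):
    """z* = min_u c^T u over a fixed feasible set, via a black-box solver."""

    @staticmethod
    def forward(ctx, c, solver):
        # solver maps a numpy cost vector to (z*, u*); it is a black box.
        z, u = solver(c.detach().cpu().numpy())
        ctx.save_for_backward(torch.as_tensor(u, dtype=c.dtype, device=c.device))
        return c.new_tensor(z)

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Generalized gradient of z* w.r.t. c is the optimal solution vector.
        return grad_output * u, None

# Usage: loss = h(z*) with c produced by a network; here h is the identity.
c = torch.nn.Parameter(torch.tensor([0.3, 1.2, 0.5]))
toy_solver = lambda cv: min(((cv @ u, u) for u in
                             ([1, 0, 1], [0, 1, 1], [1, 1, 0])),
                            key=lambda t: t[0])
loss = CombinatorialObjective.apply(c, toy_solver)
loss.backward()
print(c.grad)   # tensor([1., 0., 1.]) -- the optimal solution vector
```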
3 EXAMPLE USE CASES AND EXPERIMENTAL VALIDATION

3.1 DIFFERENTIATING OVER BIPARTITE MATCHING FOR WEAKLY-SUPERVISED LEARNING

To illustrate gradient descent over a combinatorial loss, we first focus on a simple image recognition problem. Consider a photo of a group of people with a caption listing each of the persons in the picture, but missing the "from left to right" part. Given a collection of such labeled photos, can a model learn to recognize individual faces? Similarly, consider a shopping cart and a printout from the register. Given a collection of unordered shopping carts together with matching receipts, can a model learn to recognize individual shopping items? These are examples of weakly-supervised learning where the goal is to learn to classify previously unseen feature vectors, but a training sample is a bag of feature vectors accompanied by a bag of correct labels, instead of a feature vector and a correct label. We are not told which class belongs to which sample, which prevents us from directly using the standard cross-entropy loss.

More formally, consider a $d$-class classification problem, and a model $F(x_j;\theta)$ that for sample $x_j$ returns a $d$-dimensional vector of class probabilities $p_j$, with $p_j^c$ denoting the predicted conditional probability of class $c$ given feature vector $x_j$. Let $y_j$ denote a $d$-dimensional, one-hot representation of the true class label of sample $x_j$, with $y_j^c = 1$ if sample $j$ is of class $c$, and zero otherwise. In weakly supervised learning involving bags of size $b$, we are given a tuple of $b$ feature vectors, $X = (x_j)_{j=1}^b$, and a tuple of permuted labels $Y = (y_{\pi(i)})_{i=1}^b$ as one-hot vectors, for some permutation $\pi$; we will refer to the $j$-th element of the tuple $Y$ as $Y_j$.

The permutation $\pi$ is unknown; thus, using a loss $\ell(p_j, Y_j) = \ell(p_j, y_{\pi(i)})$ to compare the predicted distribution over classes for sample $j$ with the one-hot representation of the $j$-th element in the randomly ordered set of true classes $Y_j$ makes no sense, since most likely $i \ne j$; $Y_j = y_{\pi(i)}$ is the class for some other sample $i$, not for sample $j$. While the permutation is unknown, with repeated presentation of bags of samples and bags of corresponding labels, we do have some information connecting the feature vectors to classes. Intuitively, we can try to match the model's outputs for feature vectors in the bag to the class labels using the information in the probability distribution $p_j$ over classes provided by the model for each feature vector $x_j$. That is, we can aim to find a permutation $\hat\pi$ optimal in the average-loss sense, $\min_{\hat\pi} \sum_{j=1}^b \ell(p_j, \hat\pi(Y)_j)$. If the class conditional probabilities $p_j$ resulting from the model perfectly match the one-hot vectors, the optimal $\hat\pi$ will be the inverse of the permutation $\pi$, that is, $\hat\pi(Y)_j = y_j$.

A $b$-element permutation can be represented by a $b \times b$ permutation matrix $M$. To find $M$, we define a $b \times b$ matrix $C$ with $C_{jk} = \ell(p_j, Y_k)$, where $\ell$ represents the cross-entropy loss $\ell(p, y) = -\langle \log p, y \rangle$, with the logarithm applied element-wise. The elements $C_{jk}$ correspond to edge weights in a bipartite graph with the feature vectors $x$ processed by the neural network on one side, and the labels $y$ on the other side. We use a combinatorial solver, for example the Hungarian method with computational complexity $O(b^3)$, to find the permutation matrix $M^* = \arg\min_M \langle C, M \rangle_F$ minimizing the Frobenius inner product of $C$ and $M$. The procedure is outlined in Algorithm 2.

Algorithm 2 Loss based on bipartite matching for weakly-supervised image classification
Input: $X = (x_j)_{j=1}^b$ – bag of $b$ input images; $Y = (Y_k)_{k=1}^b$ – a set of $b$ sample classes to match, in one-hot representation, in arbitrary order; $\theta$ – ResNet18 network weights.
Output: loss (optimal matching cost) and its generalized gradient, $L(\theta)$, $\partial L(\theta)$
1: procedure MatchBag($X$, $Y$, $\theta$)
2:   forward pass, class probabilities $p_j = \mathrm{softmax}(\mathrm{ResNet18}(x_j;\theta))$ for $j = 1, \ldots, b$
3:   forward pass, cross-entropy for all image-label pairs $C_{jk} = -\langle \log p_j, Y_k \rangle$ for $j, k = 1, \ldots, b$
4:   optimal matching cost and matching matrix: $z^*, M^* = \mathrm{OptMatching}(C)$, i.e., $M^* = \arg\min_M \langle C, M \rangle_F$, $z^* = \langle C, M^* \rangle_F$
5:   final loss: cost of optimal matching $L(\theta) = z^*$
6:   backward pass through bipartite matching: $\partial z^*(C) = M^*$
7:   backward pass through cross-entropy, softmax and ResNet18: $\partial L(\theta) = M^* \frac{\partial C}{\partial \theta}$
8:   return $L(\theta)$, $\partial L(\theta)$
9: end procedure
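A sketch of Algorithm 2's core in PyTorch (our illustration, with the Hungarian step provided by SciPy's linear_sum_assignment rather than any code released with the paper); the backward pass through the matching is just the optimal permutation matrix $M^*$:

```python
import torch
from scipy.optimize import linear_sum_assignment

class MatchingObjective(torch.autograd.Function):
    """Optimal bipartite matching cost z* = <C, M*>_F and its gradient M*."""

    @staticmethod
    def forward(ctx, C):
        rows, cols = linear_sum_assignment(C.detach().cpu().numpy())
        M = torch.zeros_like(C)
        M[rows, cols] = 1.0                      # optimal permutation matrix
        ctx.save_for_backward(M)
        return (C * M).sum()

    @staticmethod
    def backward(ctx, grad_output):
        (M,) = ctx.saved_tensors
        return grad_output * M                   # generalized gradient dz*/dC

def matching_loss(log_probs, Y):
    # log_probs: (b, d) log class probabilities; Y: (b, d) one-hot labels.
    C = -log_probs @ Y.T                         # C[j, k] = -<log p_j, Y_k>
    return MatchingObjective.apply(C)
```

Training then proceeds by calling matching_loss on a bag's network outputs and backpropagating as usual; the gradient flows through $C$ into the softmax and the backbone.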
We also trained the same model using a recentlyproposed approach to approximate gradients of the optimal solution vector, not the optimal objectivevalue, of a combinatorial problem (Vlastelica Pogan ˇci ́c et al., 2020); we used the same combinatorialsolver as in the experiments with our method.Test error for CIFAR100 of the training set reshuffled into bags after each epoch (Fig. 1, left) showsthat for bag sizes up to twelve elements, weak supervision through weighted bipartite graph matchingis almost as effective as supervised learning with true label available for each individual image, thatis, bag of size one. Training using the bipartite matching loss was implemented in three differentways: through interpolated combinatorial gradients proposed in (Vlastelica Pogan ˇci ́c et al., 2020),through differentiable LP approach (cvxpylayers), and through the proposed approach for obtaininggradients of the objective value. All three approaches lead to very similar error rates (Fig. 1, left),indicating these three ways of obtaining gradients provide similar training signal to the network.The two methods that use combinatorial solvers are much more efficient than LP solver-basedcvxpylayers (Fig. 1, right). The performance of the LP-based method decreases for very small bagsizes, where each epoch has large number of individual problems to solve, as well as for large bagsizes, where each problem to be solved involves more computation. Among the two methods usingthe same combinatorial solver, our proposed method is twice as fast as the interpolation method of(Vlastelica Pogan ˇci ́c et al., 2020), which requires solving a combinatorial problem not only in the6Under review as a conference paper at ICLR 2021Figure 1: Test set error (left) and total training time (right) for increasing bag sizes for classifierstrained using the proposed bipartite matching loss with gradients calculated using the proposedapproach and, for comparison, using cvxpylayers (Agrawal et al., 2019) and using an interpolationapproach for obtaining gradients of the solution vector of combinatorial problems (Vlastelica Pogan ˇci ́cet al., 2020). A supervised model with true label available for each individual sample, whichcorresponds to bag of size one, is used as a baseline lower bound on the error that the bag-trainedmodels should attempt to match. Mean, and the 95% confidence interval of the mean, are shown.forward pass, but also in the backwards pass in order to obtain gradients of the solution vector. Theseresults show that the generalized gradient over combinatorial optimization is effective in providingtraining signal to train a large neural network, and can do it much faster than the state-of-the-artalternative approaches.3.2 D IFFERENTIATING OVER GLOBAL SEQUENCE ALIGNMENT FOR SENTENCE -LEVEL LOSSINSEQUENCE -TO-SEQUENCE MODELSAnother use case where a combinatorial loss is advantageous occurs in to sequence-to-sequencenatural language models. We used a standard encoder-decoder architecture for the model (seeSupplementary Material for details). The encoder takes the source sequence on input and prepares acontext vector capturing the source sequence. The decoder is a recurrent network that outputs thepredicted sequence one token at a time, based on the context vector and the output of the previousstep. 
The output of the decoder at a step tis a vector of probabilities ptover the set of all possibleoutput tokens.Existing encoder-decoder models use cross-entropy loss to compare predicted probabilities ptto thetarget word at position t, encoded as one-hot vector yt. Instead of a sequence-level optimization,position-specific cross entropy loss results in an averaged token-level optimization. We hypothesizethis has detrimental effect on the training process of differentiable sequence-to-sequence modelsthat involve softmax or Gumbel-softmax (Jang et al., 2016) as the mechanism for feeding the outputof the previous step of the decoder as input for the next step. For example, a recurrent model thatlearned to output almost all of the target sentence correctly but is still making the mistake of missingone word early in the sentence will have very high loss at all the words following the missing word –correcting the mistake should involve keeping most of the model and focusing on the missing word,but with position-specific loss, all the outputs are considered wrong and in need of correction.Gaps or spurious words in the output sequence can be treated naturally if we consider globalsequence alignment (GSA) as the loss. Global sequence alignment (Needleman & Wunsch, 1970)is a combinatorial problem in which two sequences are aligned by choosing, at each position, toeither match a token from one sequence to a token from the other, or to introduce a gap in one or theother sequence; each choice has a cost (see Fig. 2). In sequence-to-sequence modeling, the cost ofmatching the decoder’s output from position ito the target sequence token as position kwill be givenbyhlogpi;yki. The cost of a gap, that is, of a horizontal or a vertical move in Fig. 2, is specifiedin a way that promotes closing of the gap; we use the cost of diagonal move from that position asthe cost of the gap, multiplied by a scalar >1to prioritize closing the gaps over improving thematchings. In our experiments, we used = 1:5. The GSA problem can stated as a linear program7Under review as a conference paper at ICLR 2021Figure 2: A directed acyclic graph (DAG) corresponding to the global sequence alignment betweenthe target sequence and the sequence predicted by the RNN model. Each node, except the end ofsequence indicator <=> , has out-degree of three: a diagonal edge corresponding to a match betweenthe predicted and the target sequence, a horizontal edge corresponding to a gap in the predictedsequence, and a vertical edge corresponding to a gap in the target sequence. Optimal sequencealignment is depicted in red, with the weights – the alignment costs – of the selected edges in blue.withpvariables and m+ 1constraints, with the costs of the moves forming the right-hand side of theconstraints. Thus, by Theorem 1, the generalized gradient of the minimum global sequence alignmentwith respect to matching and gap costs is efficiently available.In experiments involving global sequence alignment in sequence-to-sequence models, we usedan encoder-decoder sequence-to-sequence architecture with bidirectional forward-backward RNNencoder and an attention-based RNN decoder (Luong et al., 2015), as implemented in PyTorch-Texar(Hu et al., 2018). While this architecture is no longer the top performer in terms of ROUGE metric –currently, large pre-trained self-attention models are the state-of-the-art – it is much more efficient intraining, allowing for experimenting with different loss functions. During inference, we used beamsearch. 
In experiments involving global sequence alignment in sequence-to-sequence models, we used an encoder-decoder sequence-to-sequence architecture with a bidirectional forward-backward RNN encoder and an attention-based RNN decoder (Luong et al., 2015), as implemented in PyTorch-Texar (Hu et al., 2018). While this architecture is no longer the top performer in terms of ROUGE metrics – currently, large pre-trained self-attention models are the state of the art – it is much more efficient in training, allowing for experimenting with different loss functions. During inference, we used beam search. During training, to have a differentiable decoder, we use two alternative approaches. First, we feed the probabilities resulting from the softmax layer applied to the outputs of the RNN directly as the recursive inputs to the RNN. Second, inputs to the RNN are provided by the straight-through Gumbel-softmax distribution (Jang et al., 2016) based on the outputs of the RNN, which is an approximation of the categorical distribution from which one-hot, single-token outputs are sampled. In both cases, as a baseline for comparisons with the GSA-based loss, we use word-level maximum likelihood, that is, cross-entropy between the probability vector on the output of the softmax layer of the RNN and the desired target word at that position. In evaluating the combinatorial GSA loss, we used a text summarization task involving the GIGAWORD dataset (Graff & Cieri, 2003) as an example of a sequence-to-sequence problem. We used test set ROUGE 1, 2, and L scores (Lin, 2004) as the measure of quality of the summarizations.

The results in Table 1 show that the GSA-based loss leads to improved text summarization results in all three ROUGE metrics compared to position-specific cross-entropy maximum likelihood training, both for the softmax and the Gumbel-softmax approach for providing the recursive input to the RNN in a differentiable way. The increase in accuracy comes at the cost of doubling the training time when our method is used to provide gradients of the optimal alignment score. A similar increase in accuracy can be observed when the interpolation approach (Vlastelica Pogančić et al., 2020) for gradients of the optimal alignment path is used instead, but the interpolation method further increases the training time, by a factor of two compared to our method. The proposed combinatorial approach is much more accurate and efficient than the recently proposed cvxpylayers method. The running time for the cvxpylayers approach is orders of magnitude slower. The cvxpylayers solver managed to reduce the training loss for several initial epochs, after which solver errors start to occur and the learning process diverges. In order to confirm this behavior, we performed 3 additional runs of the cvxpylayers-based training for the softmax model. In all cases, the loss dropped from the initial value in the 90-95 range to above 50, after which it increased to 500 or more. For comparison, the proposed combinatorial loss approach and the standard cross-entropy approach reach loss in the 30-32 range by epoch 10.

Table 1: Results for the GIGAWORD text summarization task using ROUGE-1, ROUGE-2, and ROUGE-L metrics. For the position-specific cross-entropy loss (MLE), for the interpolated combinatorial gradient (GSA-I) (Vlastelica Pogančić et al., 2020) applied to global sequence alignment, and for our combinatorial method (GSA-L), results are given as mean(std.dev.) over five independent runs with different random seeds. For the method involving cvxpylayers (GSA-C) (Agrawal et al., 2019) applied to GSA, we only performed one run. We report test set values for the epoch that maximizes the total ROUGE score on a separate validation set.
Time is per one epoch.

Loss Type        ROUGE-Total   ROUGE-1      ROUGE-2      ROUGE-L      Epoch      Time
Softmax
  MLE            72.80(0.38)   32.45(0.15)  11.95(0.22)  28.39(0.20)  18.4(1.5)  8 min
  GSA-C          32.18         17.04        2.49         12.65        3          9 hr
  GSA-I          75.87(0.82)   33.94(0.31)  12.03(0.35)  29.90(0.32)  13.4(3.4)  32 min
  GSA-L          76.36(0.60)   34.05(0.21)  12.31(0.20)  29.99(0.24)  15.4(2.5)  17 min
Gumbel-softmax
  MLE            67.50(0.20)   31.25(0.18)  9.72(0.26)   26.52(0.08)  18.0(2.8)  9 min
  GSA-I          73.36(0.33)   33.44(0.16)  10.90(0.05)  29.01(0.14)  14.8(2.3)  32 min
  GSA-L          72.62(0.51)   33.25(0.15)  10.60(0.22)  28.77(0.17)  14.0(1.9)  17 min

4 RELATED WORK

Recently, (Tschiatschek et al., 2018) proposed an approximate solver for submodular function maximization that uses differentiable elements and allows for differentiating through the solver. Differentiable solvers are also considered in (Mensch & Blondel, 2018), where a dynamic programming solver is re-implemented with the maximum operation replaced by smoothed max. A similar approach is used in differentiable dynamic time warping (Chang et al., 2019). Several authors used a differentiable approximation to linear program solutions instead of introducing differentiable operations into combinatorial algorithms. WGAN-TS (Liu et al., 2018) solves an LP to obtain the exact empirical Wasserstein distance. Then, to circumvent the lack of differentiability of linear programs, WGAN-TS proceeds by training a neural network to approximate the LP solution in order to obtain gradients. In seq2seq-OT (Chen et al., 2019), an approximation is used to model optimal transport between word embeddings serving as a regularizer in training sequence-to-sequence models. These approximation approaches are limited to specific problems and preclude using off-the-shelf combinatorial solvers. Recently, an approach that relies on interpolation to obtain gradients of the optimal solution vector – not the optimal objective value as in our method – produced by combinatorial solvers has been proposed (Vlastelica Pogančić et al., 2020; Rolínek et al., 2020). Similar to our approach, it allows for using off-the-shelf, black-box implementations of combinatorial algorithms. However, unlike our approach, it requires two executions of the solver, one in the forward phase, and a second execution, for a slightly perturbed problem, in the backward phase. As can be seen in our experiments, this results in doubling the performance overhead compared to our approach.

An alternative approach is to use mathematical programming solvers in gradient-trained neural networks. OptNet (Amos & Kolter, 2017) provides differentiable quadratic programming layers, and an efficient GPU-based batch solver, qpth. Cvxpylayers (Agrawal et al., 2019) generalizes this approach to a broad class of convex optimization problems expressed as cone programs, which include QP and LP as special cases, using a conic solver based on ADMM, providing a general-purpose package based on the easy-to-use interface of cvxpy, with speed comparable to qpth for QP problems. Other authors (Wilder et al., 2019; Ferber et al., 2019) focus on LP problems, regularize them by adding a quadratic term, and use a QP solver as in OptNet to obtain the optimal solution vector and its gradient. Quadratic smoothing is also used in (Djolonga & Krause, 2017) in submodular set function minimization. While these methods can handle a broader class of problems than our method, the reliance on quadratic or linear programming solvers translates to increased solving time.
In the approach proposed here, linear programming is used only as a theoretical tool that allows for defining a mapping from the solution of a combinatorial problem to the gradient of its objective value. The solution is obtained by a single run of a combinatorial algorithm, which, as our experiments confirm, is faster than using mathematical programming and is not affected by numerical instability and convergence problems.
MOJj-92FseJ
Differentiating over the objective of a linear (integer) program is not an interesting problem
3: Clear rejection
The value of the optimal objective as a function of the cost vector $c$ can be written as $z^*(c) = c^T u^*(c)$, where the optimal solution $u^*$ also depends on $c$. The function $u^*(c)$ is piecewise constant -- there are finitely (resp. countably) many feasible solutions, i.e., candidates for $u^*$ -- and so the function $z^*(c)$ is a piecewise linear function of $c$, with gradient $u^*(c)$ wherever it exists (otherwise there is an analogous subgradient). Obviously, all it takes to compute $u^*(c)$ is solving -- anyhow -- the combinatorial problem. This is all trivial and well known, yet the authors do precisely that. Can it be saved by proposing gradients also w.r.t. the constraints? No. These results are (slightly) less trivial but -- as the authors admit -- have been known since 1975. Moreover, the gradient with respect to $c$ is the only one used in the experiments, as far as I understand. Is there independent value in Theorem 1? I do not see it. It seems to be a bulky wrapper around the classical result. It only introduces some sort of transition from a vector specifying a combinatorial problem to a collection of vectors/matrices specifying an integer program. Also, the central concept of the generalized gradient merely provides a formal framework to talk about non-unique gradients at boundary regions -- similarly to the subgradient and subdifferential -- and for the method itself it has no specific relevance. The claims of better performance compared to cvxpy are also absolutely non-surprising -- cvxpy currently uses a slightly suboptimal -- and very expensive -- solver for linear programs. That is all.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Differentiable Combinatorial Losses through Generalized Gradients of Linear Programs ### Paper Abstract Combinatorial problems with linear objective function play a central role in many computer science applications, and efficient algorithms for solving them are well known. However, the solutions to these problems are not differentiable with respect to the parameters specifying the problem instance – for example, shortest distance between two nodes in a graph is not a differentiable function of graph edge weights. Recently, attempts to integrate combinatorial and, more broadly, convex optimization solvers into gradient-trained models resulted in several approaches for differentiating over the solution vector to the optimization problem. However, in many cases, the interest is in differentiating over only the objective value, not the solution vector, and using existing approaches introduces unnecessary overhead. Here, we show how to perform gradient descent directly over the objective value of the solution to combinatorial problems. We demonstrate advantage of the approach in examples involving sequence-to-sequence modeling using differentiable encoder-decoder architecture with softmax or Gumbel-softmax, and in weakly supervised learning involving a convolutional, residual feed-forward network for image classification. ### Paper Keywords ["combinatorial optimization", "linear programs", "generalized gradient"] ### Paper Content ABSTRACTCombinatorial problems with linear objective function play a central role in manycomputer science applications, and efficient algorithms for solving them are wellknown. However, the solutions to these problems are not differentiable with re-spect to the parameters specifying the problem instance – for example, shortestdistance between two nodes in a graph is not a differentiable function of graph edgeweights. Recently, attempts to integrate combinatorial and, more broadly, convexoptimization solvers into gradient-trained models resulted in several approaches fordifferentiating over the solution vector to the optimization problem. However, inmany cases, the interest is in differentiating over only the objective value, not thesolution vector, and using existing approaches introduces unnecessary overhead.Here, we show how to perform gradient descent directly over the objective valueof the solution to combinatorial problems. We demonstrate advantage of the ap-proach in examples involving sequence-to-sequence modeling using differentiableencoder-decoder architecture with softmax or Gumbel-softmax, and in weaklysupervised learning involving a convolutional, residual feed-forward network forimage classification.1 I NTRODUCTIONCombinatorial optimization problems, such as shortest path in a weighted directed graph, minimumspanning tree in a weighted undirected graph, or optimal assignment of tasks to workers, play acentral role in many computer science applications. We have highly refined, efficient algorithms forsolving these fundamental problems (Cormen et al., 2009; Schrijver, 2003). However, while we caneasily find, for example, the minimal spanning tree in a graph, the total weight of the tree as functionof graph edge weights is not differentiable. 
This problem hinders using solutions to combinatorialproblems as criteria in training models that rely on differentiability of the objective function withrespect to the model parameters.Losses that are defined by objective value of some feasible solution to a combinatorial problem, notthe optimal one, have been recently proposed for image segmentation using deep models (Zheng et al.,2015; Lin et al., 2016). These focus on a problem where some pixels in the image have segmentationlabels, and the goal is to train a convolutional network that predicts segmentation labels for all pixels.For pixels with labels, a classification loss can be used. For the remaining pixels, a criterion basedon a combinatorial problem – for example the maximum flow / minimal cut problem in a regular,lattice graph connecting all pixels (Boykov et al., 2001) or derived, higher-level super-pixels (Linet al., 2016) – is often used as a loss, in an iterative process of improving discrete segmentation labels(Zheng et al., 2015; Marin et al., 2019). In this approach, the instance of the combinatorial problemis either fixed, or depends only on the input to the network; for example, similarity of neighboringpixel colors defines edge weights. The output of the neural network gives rise to a feasible, but rarelyoptimal, solution to that fixed instance a combinatorial problem, and its quality is used as a loss.For example, pixel labeling proposed by the network is interpreted as a cut in a pre-defined graphconnecting then pixels. Training the network should result in improved cuts, but no attempt to use asolver to find an optimal cut is made.Here, we are considering a different setup, in which each new output of the neural network givesrise to a new instance of a combinatorial problem. A combinatorial algorithm is then used to findthe optimal solution to the problem defined by the output, and the value of the objective function of1Under review as a conference paper at ICLR 2021the optimal solution is used as a loss. After each gradient update, the network will produce a newcombinatorial problem instance, even for the same input sample. Iteratively, the network is expectedto learn to produce combinatorial problem instances that have low optimal objective function value.For example, in sequence-to-sequence modeling, the network will output a new sentence that issupposed to closely match the desired sentence, leading to a new optimal sequence alignment problemto be solved. Initially, the optimal alignment will be poor, but as the network improves and the qualityof the output sentences get higher, the optimal alignment scores will be lower.Recently, progress in integrating combinatorial problems into differentiable models have been madeby modifying combinatorial algorithms to use only differentiable elements (Tschiatschek et al.,2018; Mensch & Blondel, 2018; Chang et al., 2019), for example smoothed max instead of maxin dynamic programming. Another approach involves executing two runs of a non-differentiable,black-box combinatorial algorithm and uses the two solutions to define a differentiable interpolation(Vlastelica Pogan ˇci ́c et al., 2020; Rol ́ınek et al., 2020). Finally, differentiable linear programmingand quadratic programming layers, which can be used to model many combinatorial problems, havebeen proposed recently (Amos & Kolter, 2017; Agrawal et al., 2019; Wilder et al., 2019; Ferber et al.,2019).The approaches above allow for differentiating through optimal solution vectors. 
In many cases,we are interested only in the optimal objective value, not the solution vector, and the approachesabove introduce unnecessary overhead. We propose an approach for gradient-descent based trainingof a network f(x;)for supervised learning problems involving samples (x;y)with the objectivecriterion involving a loss term of the form L() =h(OptSolutionObjectiveValue(( F(x;);y)),whereh:R!Ris some differentiable function, and is a combinatorial solver for a probleminstance defined by the output of the -parameterized network Ffor feature vector xand by the truelabely. We show that a broad class of combinatorial problems can be integrated into models trainedusing variants of gradient descent. Specifically, we show that for an efficiently solvable combinatorialproblem that can be efficiently expressed as an integer linear program, generalized gradients of theproblem’s objective value with respect to real-valued parameters defining the problem exist and canbe efficiently computed from a single run of a black-box combinatorial algorithm. Using the aboveresult, we show how generalized gradients of combinatorial problems can provide sentence-level lossfor text summarization using differentiable encoder-decoder models that involve softmax or Gumbelsoftmax (Jang et al., 2016), and a multi-element loss for training classification models when onlyweakly supervised, bagged training data is available.2 D IFFERENTIABLE COMBINATORIAL LOSSES2.1 B ACKGROUND ON GENERALIZED GRADIENTSA functionf:X!Rdefined over a convex, bounded open set X2Rpis Lipschitz continuous onan open setB2X if there is a finite K2Rsuch that8x;y2Bjf(x)f(y)jKjjxyjj. Afunction is locally Lipschitz-continuous if for every point x0in its domain, there is a neighborhoodB0, an open ball centered at x0, on which the function is Lipschitz-continuous. For such functions, ageneralized gradient can be defined.Definition 1. (Clarke, 1975) Letf:X!Rbe Lipschitz-continuous in the neighborhood of x2X.Then, the Clarke subdifferential @f(x)offatxis defined as@f(x) = convlimxk!xrf(xk);where the limit is over all convergent sequences involving those xkfor which gradient exists, andconv denotes convex hull, that is, the smallest polyhedron that contains all vectors from a given set.Each element of the set @f(x)is called a generalized gradient offatx.The Rademacher theorem (see e.g. (Evans, 1992)) states that for any locally Lipschitz-continuousfunction the gradient exists almost everywhere; convergent sequences can be found.In optimization algorithms, generalized gradients can be used in the same way as subgradients(Redding & Downs, 1992), that is, nondifferentiability may affect convergence in certain cases.2Under review as a conference paper at ICLR 20212.2 G RADIENT DESCENT OVER COMBINATORIAL OPTIMIZATIONMany combinatorial problems have linear objective function and can be intuitively expressed asinteger linear programs (ILP), that is, linear programs with additional constraint that the solutionvector involves only integers. Any ILP can be reduced to a linear program. Consider an ILPz=ILP(c;A0;b0) := min ucTus:t: A0u=b0; u0; u2Zp;with an optimal solution vector uand optimal objective value z. Then, there exists a correspondinglinear program LP(c;A;b )z=LP(c;A;b ) := min ucTus:t: Au =b; u0;called ideal formulation (Wolsey, 1989), for which uis also an optimal solution vector, with thesame objective value z. For a feasible, bounded p-dimensional integer program, we can view the pair(A0;b0)as a convex polyhedron A0, the set of all feasible solutions. 
Then, the pair (A;b)in the idealformulation LP is defined as the set of constraints specifying the feasible set A= convfA0\Zpg.Convex hull of a subset of a convex set A0cannot extend beyond A0, thus,Ais convex, contains allinteger solutions from A0, and no other integer solutions. The number of linear constraints in theideal formulation may be exponential in p, and/or inm, the number of the original constraints in A0.Thus, the existence of the ideal formulation LP for an ILP may not have practical utility for solvingthe ILP.For a combinatorial problem and its corresponding ILP, we use the ideal formulation of the ILPas a conceptual tool to define generalized gradient of the objective value of the optimal solutionto the combinatorial problem with respect to the parameters defining the combinatorial problem.Specifically, our approach first uses a single run of an efficient, black-box combinatorial algorithmto produce the optimal solution vector and the associated objective value. Then, the combinatorialproblem is conceptually viewed as an instance of an ILP. A possibly exponentially large linearprogram (LP) equivalent to the ILP is then used, without actually being spelled out or solved, toderive generalized gradients based on the solution vector returned by the combinatorial algorithm.First, we introduce several notions of efficiency of transforming a combinatorial problem into alinear integer program that will be convenient in defining the generalized gradients of combinatorialproblems.Definition 2. LetP(w)be a combinatorial problem that is parameterized by a continuous vectorw2W Rn, whereWis simply connected and nis the problem size, and let k2Zbe a constantthat may depend on the problem type but not on its size. Then, a combinatorial problem is•primal-dual@-efficient if it can be phrased as an integer linear program involving nvariables,withknconstraints in an LP formulation equivalent to the ILP , and the parameters (A;b;c )of the LP formulation depend on wthrough (sub)differentiable functions, c=c(w);A=A(w);b=b(w).•primal@-efficient if it can be phrased as an integer linear program involving nvariables,the parameters wof the problem influence the cost vector cthrough a (sub)differentiablefunctionc=c(w), and do not influence the constraints A;b.•dual@-efficient if it can be phrased as an integer linear program in which the number ofconstraints in the equivalent LP formulation is kn, the parameters wof the problem influencebthrough a (sub)differentiable function b=b(w), and do no influence the constraint matrixAnor the cost vector c.The class of @-efficient problems includes polynomially solvable combinatorial problems withobjective function that is linear in terms of problem parameters. Typically, the functions c=c(w),b=b(w)andA=A(w)are either identity mapping or are constant; for example, in the LP formaximum network flow, the cost vector cis composed directly of edge capacities, and Aanbareconstant for a given flow network topology, and do not depend on capacities.For any polynomially solvable combinatorial problem, we can construct a poly(n)-sized Booleancircuit for the algorithm solving it. For each poly(n)-sized circuit, there is a linear program withpoly(n)variables and constraints that gives the same solution (see (Dasgupta et al., 2008), Chap.3Under review as a conference paper at ICLR 20217). 
For example, for MST in a graph with Vvertices and Eedges, the Martin’s ILP formulation(Martin, 1991) has only poly(V+E)constraints, but it is an extended formulation that involvesVEadditional variables on top of the typical Evariables used in the standard ILP formulations forMST. Thus, we cannot use it to construct an ILP formulation that would make MST primal-dual@-efficient. Alternatively, there is an ILP for MST with one binary variable per edge, and the weightof the edge only influences the cost vector c, but to prohibit cycles in the solution there is a constraintfor each cycle in the graph, thus the number of constraints is not poly(n)for arbitrary graphs. Theseconstraints are specified fully by the topology of the graph, not by the edge weights, so wdoes notinfluenceAnorb, meeting the conditions for primal @-efficiency. The MST example shows that thereare problems that are primal @-efficient and not primal-dual @-efficient.Some polynomially solvable combinatorial problems are not @-efficient in any of the above sense.For example, fixed-rank combinatorial problems with interaction costs (Lendl et al., 2019) can bephrased succinctly as a bilinear program, but lead to prohibitively large linear programs both in termsof the number of variables and the number of constraints.For@-efficient problems, we can efficiently obtain generalized gradients of the objective value.Theorem 1. Consider a combinatorial problem P(w)of sizen, a parameter vector wfrom theinterior of the parameter domain W, and an algorithm (w)for solving it in time poly(n). Letzbe the optimal objective value returned by . Then,•ifPis primal@-efficient, then the generalized gradients @z(w)exist, and can be efficientlycomputed from U, the set of primal solution of the ideal formulation of integer programcorresponding to P;•ifPis dual@-efficient, then the generalized gradients of @z(w)exist, and can be efficientlycomputed from V, the set of all dual solution to the ideal formulation of the integer programcorresponding to P;•ifPis primal-dual @-efficient, then the generalized gradients of Aoverwexist, and can beefficiently computed from UandV, as defined above.Proof. A series of results (Gal, 1975; Freund, 1985; De Wolf & Smeers, 2000) shows that if theoptimal objective value z=LP(c;A;b )for a linear program is finite at (c;A;b )and in someneighborhood of (c;A;b ), then generalized gradients of zwith respect to c,b, andAexist and are@z(c) =U; @z(b) =V; @z(A) =vuT: (u;v)2VU:We build on these results to obtain generalized gradients of the linear program corresponding to thecombinatorial problem. For the first case in the theorem, definition 2 states that in the linear programcorresponding to P, only the cost vector cdepends on w, through a (sub)differentiable functionc=c(w). Sincewis in the interior of the parameter domain W, the objective value is finite oversome neighborhood of w. Then,@z(w) =@z(c)@c@w=@c@wU;where the generalized gradient z(c)exists and is equal to U.For the second case, the ideal formulation LP exists. Then, from definition 2 we have that@z(w) =@z(b)@b@w=@b@wV:The third case is a direct extension of the first two cases.Theorem 1 indicates that black-box combinatorial algorithms can be used to expand the rangeof transformations that can be efficiently utilized in neural networks. One immediate area ofapplication is using them to specify a loss function. Consider a network F(x;)parameterizedby a vector of tunable parameters . The network transforms a batch of input samples xintoa batch of outputs =F(x;). 
Then, in the broadest primal-dual @-efficient case, is used,possibly with the true classes y, to formulate parameters (c;A;b ) =g(;y)of a linear programcorresponding to the combinatorial problem, through some (sub)differentiable function g. For4Under review as a conference paper at ICLR 2021Algorithm 1 Minimization of a combinatorial lossInput: batchxX ,yY, networkF(x;), functionsg;h, combinatorial algorithm Output: Loss and its generalized gradient, L();@L()1:procedure COMB LOSSMIN(x;y;;F;g;h; )2: forward pass =F(x;)3: forward pass (c;A;b ) =g(;y)4: run combinatorial solver to find optimal objective value z= (c;A;b )and optimal primaland/or dual solution vectors u,v5: forward pass L() =h(z)6: backward pass through h:@L=@z7: backward pass through :@z(c) =u,@z(b) =v,@z(A) =vuT8: backward pass through gandF9:@L() =@L@zu@c@vuT@A@+v@b@10: returnL(),@L()11:end procedurea givenand given batch samples (x;y), we can then define loss as a function of the optimalobjective value of the linear program corresponding to the combinatorial problem resulting fromg(F(x;);y),L() =h(z(c;A;b )). This approach, summarized in Algorithm 1, allows us toobtain the generalized gradient of the loss with respect to as long as functions gandharedifferentiable. For clarity, in Algorithm 1, we did not consider functions hdepending not just on zbut also onxory, but the extension is straightforward.3 E XAMPLE USECASES AND EXPERIMENTAL VALIDATION3.1 D IFFERENTIATING OVER BIPARTITE MATCHING FOR WEAKLY -SUPERVISED LEARNINGTo illustrate gradient descent over a combinatorial loss, we first focus on a simple image recognitionproblem. Consider a photo of a group of people with a caption listing each of the persons in thepicture, but missing the ”from left to right” part. Given a collection of such labeled photos, can amodel learn to recognize individual faces? Similarly, consider a shopping cart and a printout from theregister. Given a collection of unordered shopping carts together with matching receipts, can a modellearn to recognize individual shopping items? These are example of a weakly-supervised learningwhere the goal is to learn to classify previously unseen feature vectors, but a training sample is a bagof feature vectors accompanied by a bag of correct labels, instead of a feature-vector and a correctlabel. We are not told which class belongs to which sample, which prevents us from directly usingthe standard cross-entropy loss.More formally, consider a d-class classification problem, and a model F(xj;)that for sample xjreturns ad-dimensional vector of class probabilities, pj, withpcjdenoting the predicted conditionalprobability of class cgiven feature vector xj. Letyjdenote ad-dimensional, one-hot representation ofthe true class label of sample xj, withycj= 1if samplejis of classc, and zero otherwise. In weaklysupervised learning involving bags of size b, we are given a tuple of bfeature vectors, X= (xj)bj=1,and a tuple of permuted labels Y=y(i)bi=1as one-hot-vectors, for some permutation ; we willrefer to the j-th element of the tuple YasYj. 
The permutation is unknown, thus using a loss`(pj;Yj) =`(pj;y(i))to compare predicted distribution over classes for sample jwith one-hotrepresentation of j-th element in the randomly ordered set of true classes Yjmakes no sense, sincemost likelyi6=j;Yj=y(i)is the class for some other sample i, not for sample j.While the permutation is unknown, with repeated presentation of bags of samples and bags of corre-sponding labels, we do have some information connecting the feature vector to classes. Intuitively, wecan try to match model’s outputs for feature vectors in the bag to the class labels using the informationin the probability distribution pjover classes provided by the model for each feature vector xj. Thatis, we can aim to find permutation ^optimal in the average loss sense min^Pbj=1`(pj;^(Y)j). Ifthe class conditional probabilities pjresulting from the model perfectly match the one-hot vectors,the optimal ^will be the inverse of the permutation , that is, ^(Y)j=yj.5Under review as a conference paper at ICLR 2021Algorithm 2 Loss based on bipartite matching for weakly-supervised image classificationInput:X= (xj)bj=1– bag ofbinput images; Y= (Yk)bk=1– a set ofbsample classes to match, inone-hot representation, in arbitrary order; – ResNet18 network weights.Output: Loss (optimal matching cost) and its generalized gradient, L();@L()1:procedure MATCH BAG(X;Y; )2: forward pass, class probabilities pj= softmax(ResNet18( xj;))forj= 1;:::;b3: forward pass, cross-entropy for all image-label pairs Cjk=hlogpj;Ykiforj;k= 1;:::;b4: optimal matching cost and matching matrix: z;M= OptMatching( C),i.e.,M= arg min MhC;MiF,z=hC;MiF5: final loss: cost of optimal matching L() =z6: backward pass through bipartite matching @z(C) =M7: backward pass through cross-entropy, softmax and ResNet18: @L() =M@C@8: returnL(),@L()9:end procedureAb-element permutation can be represented by a bbpermutation matrix M. To findM, we defineabbmatrixCwithCjk=`(pj;Yk), where`represents cross-entropy loss `(p;y) =hlogp;yi,with the logarithm applied element-wise. The elements Cjkcorrespond to edge weight in a bipartitegraph with the feature vectors xprocessed by the neural network on one side, and labels yon theother side. We use a combinatorial solver, for example the Hungarian method with computationalcomplexityOb3, to find the the permutation matrix M= arg min MhC;MiFminimizing theFrobenius inner product of CandM. The procedure is outlined in Algorithm 2.To test the approach, we used the CIFAR100 benchmark image dataset. As a baseline, we trained 5independent fully supervised models with ResNet18 architecture (Zagoruyko & Komodakis, 2016)(see Supplementary Material for details), that is, models where each image is a separate sample withits true class available for loss calculation. To evaluate the ability of our method to provide gradientsof a combinatorial loss defined by weighted matching, during training we explored image bags ofsamples consisting of b=4, 8, 12, 16, 24, or 32 images, and including correct but shuffled image labels.We trained 5 independent models for each bag size with the loss and its gradient provided usingAlgorithm 2. To avoid situations where the combinatorial loss is superficially aided by bags withmostly one class, we ignored any bag that has less than 75% of different classes, that is, for bag ofsize 8, we only consider bags that consist of at least 6 different classes. 
During testing, as in the baseline model experiments, each image had its matching label available for test error calculations. For comparison, we trained a model with the same setup of image bags using cvxpylayers (Agrawal et al., 2019), a recently proposed method for differentiable layers defined by conic programs. In contrast to our approach, which uses a combinatorial algorithm and relies on the LP formulation of weighted bipartite matching only conceptually, for the definition of gradients, cvxpylayers solves the linear program in order to obtain gradients. We also trained the same model using a recently proposed approach that approximates gradients of the optimal solution vector, not the optimal objective value, of a combinatorial problem (Vlastelica Pogančić et al., 2020); we used the same combinatorial solver as in the experiments with our method.

Test error for CIFAR100 with the training set reshuffled into bags after each epoch (Fig. 1, left) shows that for bag sizes up to twelve elements, weak supervision through weighted bipartite graph matching is almost as effective as supervised learning with the true label available for each individual image, that is, bags of size one. Training using the bipartite matching loss was implemented in three different ways: through the interpolated combinatorial gradients proposed in (Vlastelica Pogančić et al., 2020), through the differentiable LP approach (cvxpylayers), and through the proposed approach for obtaining gradients of the objective value. All three approaches lead to very similar error rates (Fig. 1, left), indicating that these three ways of obtaining gradients provide similar training signal to the network. The two methods that use combinatorial solvers are much more efficient than the LP solver-based cvxpylayers (Fig. 1, right). The performance of the LP-based method decreases for very small bag sizes, where each epoch has a large number of individual problems to solve, as well as for large bag sizes, where each problem to be solved involves more computation. Among the two methods using the same combinatorial solver, our proposed method is twice as fast as the interpolation method of (Vlastelica Pogančić et al., 2020), which requires solving a combinatorial problem not only in the forward pass, but also in the backward pass in order to obtain gradients of the solution vector. These results show that the generalized gradient over combinatorial optimization is effective in providing training signal to train a large neural network, and can do it much faster than the state-of-the-art alternative approaches.

Figure 1: Test set error (left) and total training time (right) for increasing bag sizes for classifiers trained using the proposed bipartite matching loss with gradients calculated using the proposed approach and, for comparison, using cvxpylayers (Agrawal et al., 2019) and using an interpolation approach for obtaining gradients of the solution vector of combinatorial problems (Vlastelica Pogančić et al., 2020). A supervised model with the true label available for each individual sample, which corresponds to bags of size one, is used as a baseline lower bound on the error that the bag-trained models should attempt to match. Mean, and the 95% confidence interval of the mean, are shown.

3.2 DIFFERENTIATING OVER GLOBAL SEQUENCE ALIGNMENT FOR SENTENCE-LEVEL LOSS IN SEQUENCE-TO-SEQUENCE MODELS

Another use case where a combinatorial loss is advantageous occurs in sequence-to-sequence natural language models.
We used a standard encoder-decoder architecture for the model (see Supplementary Material for details). The encoder takes the source sequence on input and prepares a context vector capturing the source sequence. The decoder is a recurrent network that outputs the predicted sequence one token at a time, based on the context vector and the output of the previous step. The output of the decoder at step $t$ is a vector of probabilities $p_t$ over the set of all possible output tokens.

Existing encoder-decoder models use cross-entropy loss to compare the predicted probabilities $p_t$ to the target word at position $t$, encoded as a one-hot vector $y_t$. Instead of a sequence-level optimization, position-specific cross-entropy loss results in an averaged token-level optimization. We hypothesize this has a detrimental effect on the training process of differentiable sequence-to-sequence models that involve softmax or Gumbel-softmax (Jang et al., 2016) as the mechanism for feeding the output of the previous step of the decoder as input to the next step. For example, a recurrent model that has learned to output almost all of the target sentence correctly but is still making the mistake of missing one word early in the sentence will have very high loss at all the words following the missing word; correcting the mistake should involve keeping most of the model intact and focusing on the missing word, but with a position-specific loss, all the outputs are considered wrong and in need of correction.

Gaps or spurious words in the output sequence can be treated naturally if we consider global sequence alignment (GSA) as the loss. Global sequence alignment (Needleman & Wunsch, 1970) is a combinatorial problem in which two sequences are aligned by choosing, at each position, either to match a token from one sequence to a token from the other, or to introduce a gap in one or the other sequence; each choice has a cost (see Fig. 2). In sequence-to-sequence modeling, the cost of matching the decoder's output at position $i$ to the target sequence token at position $k$ is given by $-\langle \log p_i, y_k \rangle$. The cost of a gap, that is, of a horizontal or a vertical move in Fig. 2, is specified in a way that promotes closing of the gap; we use the cost of the diagonal move from that position as the cost of the gap, multiplied by a scalar $\lambda > 1$ to prioritize closing the gaps over improving the matchings. In our experiments, we used $\lambda = 1.5$.

Figure 2: A directed acyclic graph (DAG) corresponding to the global sequence alignment between the target sequence and the sequence predicted by the RNN model. Each node, except the end-of-sequence indicator, has out-degree three: a diagonal edge corresponding to a match between the predicted and the target sequence, a horizontal edge corresponding to a gap in the predicted sequence, and a vertical edge corresponding to a gap in the target sequence. The optimal sequence alignment is depicted in red, with the weights (the alignment costs) of the selected edges in blue.

The GSA problem can be stated as a linear program with $p$ variables and $m + 1$ constraints, with the costs of the moves forming the right-hand side of the constraints.
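Before turning to gradients, here is a minimal dynamic-programming sketch of the optimal alignment cost (our own code and our own approximation of the gap cost described above); backpropagating through the min operations yields a gradient supported on an optimal alignment path:

```python
import torch

def gsa_loss(log_p, y_onehot, lam=1.5):
    """Optimal global-sequence-alignment cost between predicted log-probs (n, V)
    and one-hot targets (m, V); gap cost ~ lam * cost of the local diagonal move."""
    match = -(log_p @ y_onehot.t())                  # (n, m) matching costs
    n, m = match.shape
    D = [[None] * (m + 1) for _ in range(n + 1)]
    D[0][0] = match.new_zeros(())
    for i in range(n + 1):
        for k in range(m + 1):
            if i == 0 and k == 0:
                continue
            gap = lam * match[min(i, n - 1), min(k, m - 1)]
            cands = []
            if i > 0 and k > 0:
                cands.append(D[i - 1][k - 1] + match[i - 1, k - 1])  # diagonal: match
            if i > 0:
                cands.append(D[i - 1][k] + gap)                      # gap in the target
            if k > 0:
                cands.append(D[i][k - 1] + gap)                      # gap in the prediction
            D[i][k] = torch.stack(cands).min()
    return D[n][m]
```

For this shortest-path-on-a-DAG problem, the dynamic program and the LP formulation coincide on the optimal value; the LP view is what makes the generalized gradient statement below precise.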
Thus, by Theorem 1, the generalized gradient of the minimum global sequence alignment cost with respect to the matching and gap costs is efficiently available.

In experiments involving global sequence alignment in sequence-to-sequence models, we used an encoder-decoder sequence-to-sequence architecture with a bidirectional forward-backward RNN encoder and an attention-based RNN decoder (Luong et al., 2015), as implemented in PyTorch-Texar (Hu et al., 2018). While this architecture is no longer the top performer in terms of ROUGE metrics (currently, large pre-trained self-attention models are the state of the art), it is much more efficient in training, allowing for experimenting with different loss functions. During inference, we used beam search. During training, to have a differentiable decoder, we used two alternative approaches. First, we feed the probabilities resulting from the softmax layer applied to the outputs of the RNN directly as the recursive inputs to the RNN. Second, inputs to the RNN are provided by the straight-through Gumbel-softmax distribution (Jang et al., 2016) based on the outputs of the RNN, which is an approximation of the categorical distribution from which one-hot, single-token outputs are sampled. In both cases, as a baseline for comparisons with the GSA-based loss, we use word-level maximum likelihood, that is, cross-entropy between the probability vector on the output of the softmax layer of the RNN and the desired target word at that position. To evaluate the combinatorial GSA loss, we used a text summarization task involving the GIGAWORD dataset (Graff & Cieri, 2003) as an example of a sequence-to-sequence problem. We used test set ROUGE 1, 2, and L scores (Lin, 2004) as the measure of quality of the summarizations.

The results in Table 1 show that the GSA-based loss leads to improved text summarization results in all three ROUGE metrics compared to position-specific cross-entropy maximum likelihood training, both for the softmax and the Gumbel-softmax approach for providing the recursive input to the RNN in a differentiable way. The increase in accuracy comes at the cost of doubling the training time when our method is used to provide gradients of the optimal alignment score. A similar increase in accuracy can be observed when the interpolation approach (Vlastelica Pogančić et al., 2020) for gradients of the optimal alignment path is used instead, but the interpolation method further increases the training time, by a factor of two compared to our method. The proposed combinatorial approach is much more accurate and efficient than the recently proposed cvxpylayers method, whose running time is orders of magnitude slower. The cvxpylayers solver managed to reduce the training loss for several initial epochs, after which solver errors start to occur and the learning process diverges. In order to confirm this behavior, we performed 3 additional runs of the cvxpylayers-based training for the softmax model. In all cases, the loss dropped from the initial value in the 90-95 range to above 50, after which it increased to 500 or more. For comparison, the proposed combinatorial loss approach and the standard cross-entropy approach reach loss in the 30-32 range by epoch 10.
Table 1: Results for the GIGAWORD text summarization task using ROUGE-1, ROUGE-2, and ROUGE-L metrics. For the position-specific cross-entropy loss (MLE), for the interpolated combinatorial gradient (GSA-I) (Vlastelica Pogančić et al., 2020) applied to global sequence alignment, and for our combinatorial method (GSA-L), results are given as mean(std.dev.) over five independent runs with different random seeds. For the method involving cvxpylayers (GSA-C) (Agrawal et al., 2019) applied to GSA, we only performed one run. We report test set values for the epoch that minimizes the total ROUGE score on a separate validation set. Time is per one epoch.

                Loss    ROUGE-Total  ROUGE-1      ROUGE-2      ROUGE-L      Epoch      Time
Softmax         MLE     72.80(0.38)  32.45(0.15)  11.95(0.22)  28.39(0.20)  18.4(1.5)  8 min
                GSA-C   32.18        17.04        2.49         12.65        3          9 hr
                GSA-I   75.87(0.82)  33.94(0.31)  12.03(0.35)  29.90(0.32)  13.4(3.4)  32 min
                GSA-L   76.36(0.60)  34.05(0.21)  12.31(0.20)  29.99(0.24)  15.4(2.5)  17 min
Gumbel-softmax  MLE     67.50(0.20)  31.25(0.18)  9.72(0.26)   26.52(0.08)  18.0(2.8)  9 min
                GSA-I   73.36(0.33)  33.44(0.16)  10.90(0.05)  29.01(0.14)  14.8(2.3)  32 min
                GSA-L   72.62(0.51)  33.25(0.15)  10.60(0.22)  28.77(0.17)  14.0(1.9)  17 min

4 RELATED WORK

Recently, (Tschiatschek et al., 2018) proposed an approximate solver for submodular function maximization that uses differentiable elements and allows for differentiating through the solver. Differentiable solvers are also considered in (Mensch & Blondel, 2018), where a dynamic programming solver is re-implemented with the maximum operation replaced by a smoothed max. A similar approach is used in differentiable dynamic time warping (Chang et al., 2019). Several authors used a differentiable approximation to linear program solutions instead of introducing differentiable operations into combinatorial algorithms. WGAN-TS (Liu et al., 2018) solves an LP to obtain the exact empirical Wasserstein distance. Then, to circumvent the lack of differentiability of linear programs, WGAN-TS proceeds by training a neural network to approximate the LP solution in order to obtain gradients. In seq2seq-OT (Chen et al., 2019), an approximation is used to model optimal transport between word embeddings serving as a regularizer in training sequence-to-sequence models. These approximation approaches are limited to specific problems and preclude using off-the-shelf combinatorial solvers. Recently, an approach that relies on interpolation to obtain gradients of the optimal solution vector (not the optimal objective value, as in our method) produced by combinatorial solvers has been proposed (Vlastelica Pogančić et al., 2020; Rolínek et al., 2020). Similar to our approach, it allows for using off-the-shelf, black-box implementations of combinatorial algorithms. However, unlike our approach, it requires two executions of the solver: one in the forward phase, and a second execution for a slightly perturbed problem in the backward phase. As can be seen in our experiments, this results in doubling the performance overhead compared to our approach.

An alternative approach is to use mathematical programming solvers in gradient-trained neural networks. OptNet (Amos & Kolter, 2017) provides differentiable quadratic programming layers, and an efficient GPU-based batch solver, qpth.
Cvxpylayers (Agrawal et al., 2019) generalizes this approach to a broad class of convex optimization problems expressed as cone programs, which include QP and LP as special cases, using a conic solver based on ADMM, and provides a general-purpose package based on the easy-to-use interface of cvxpy, with speed comparable to qpth for QP problems. Other authors (Wilder et al., 2019; Ferber et al., 2019) focus on LP problems, regularize them by adding a quadratic term, and use a QP solver as in OptNet to obtain the optimal solution vector and its gradient. Quadratic smoothing is also used in (Djolonga & Krause, 2017) in submodular set function minimization. While these methods can handle a broader class of problems than our method, the reliance on quadratic or linear programming solvers translates to increased solving time. In the approach proposed here, linear programming is used only as a theoretical tool that allows for defining a mapping from the solution to a combinatorial problem to the gradient of its objective value. The solution is obtained by a single run of a combinatorial algorithm, which, as our experiments confirm, is faster than using mathematical programming and is not affected by numerical instability and convergence problems.
<|im_end|>
<|im_start|>assistant
### Review Title
Differentiating over the objective of a linear (integer) program is not an interesting problem
### Review Text
The value of the optimal objective as a function of the cost vector $c$ can be written as $z^*(c) = c^T u^*(c)$ where the optimal solution $u^*$ also depends on $c$. The function $u^*(c)$ is piecewise constant -- there are finitely (resp. countably) many feasible solutions, candidates for $u^*$ -- and so the function $z^*(c)$ is a piecewise linear function of $c$, with gradient $u^*(c)$ wherever it exists (otherwise there is an analogous subgradient). Obviously, all it takes to compute $u^*(c)$ is solving -- anyhow -- the combinatorial problem. This is all trivial and well-known, yet the authors do precisely that. Can it be saved by proposing gradients also w.r.t. the constraints? No. These results are (slightly) less trivial but -- as the authors admit -- have been known since 1975. Moreover, the gradient with respect to $c$ is the only one used in the experiments, as far as I understand. Is there independent value in Theorem 1? I do not see it. It seems to be a bulky wrapper around the classical result. It only introduces some sort of transition from a vector specifying a combinatorial problem to a collection of vectors/matrices specifying an integer program. Also, the central concept of the generalized gradient merely provides a formal framework to talk about non-unique gradients at boundary regions -- similarly to the subgradient and subdifferential -- and for the method itself it has no specific relevance. The claims of better performance compared to cvxpy are also absolutely non-surprising -- cvxpy currently uses a slightly suboptimal -- and very expensive -- solver for linear programs. That is all.
### Review Rating
3: Clear rejection
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_end|>
<|im_end|>
wjfD_pio2K
ICLR.cc/2023/BlogPosts
2023
Decay No More
["Fabian Schaipp"]
Weight decay is among the most important tuning parameters to reach high accuracy for large-scale machine learning models. In this blog post, we revisit AdamW, the weight decay version of Adam, summarizing empirical findings as well as theoretical motivations from an optimization perspective.
["weight decay", "adam", "optimization"]
o2182sxlML
Nicely written and relevant blog
7: Good paper, accept
This blog discusses the different ways weight decay can be implemented in the Adam optimizer. A special focus is given to the AdamW version. The blog also discusses the current understanding of why weight decay improves generalisation. The blog is nicely written and provides an easy introduction to this area of the literature. It covers the most relevant papers related to this topic, providing a nice overview which cannot be easily found in standard research papers. The topic covered is of relevance to the ICLR community because weight decay, as pointed out by the authors, is a widely used technique in many SOTA models.
3: The reviewer is fairly confident that the evaluation is correct
ryUprTOv7q0
ICLR.cc/2021/Conference
2021
Quantum Deformed Neural Networks
["Roberto Bondesan", "Max Welling"]
We develop a new quantum neural network layer designed to run efficiently on a quantum computer but that can be simulated on a classical computer when restricted in the way it entangles input states. We first ask how a classical neural network architecture, both fully connected or convolutional, can be executed on a quantum computer using quantum phase estimation. We then deform the classical layer into a quantum design which entangles activations and weights into quantum superpositions. While the full model would need the exponential speedups delivered by a quantum computer, a restricted class of designs represent interesting new classical network layers that still use quantum features. We show that these quantum deformed neural networks can be trained and executed on normal data such as images, and even classically deliver modest improvements over standard architectures.
["Quantum machine learning", "Binary neural networks", "Bayesian deep learning"]
ABSTRACT

We develop a new quantum neural network layer designed to run efficiently on a quantum computer but that can be simulated on a classical computer when restricted in the way it entangles input states. We first ask how a classical neural network architecture, both fully connected or convolutional, can be executed on a quantum computer using quantum phase estimation. We then deform the classical layer into a quantum design which entangles activations and weights into quantum superpositions. While the full model would need the exponential speedups delivered by a quantum computer, a restricted class of designs represent interesting new classical network layers that still use quantum features. We show that these quantum deformed neural networks can be trained and executed on normal data such as images, and even classically deliver modest improvements over standard architectures.

1 INTRODUCTION

Quantum mechanics (QM) is the most accurate description for physical phenomena at very small scales, such as the behavior of molecules, atoms and subatomic particles. QM has a huge impact on our every day lives through technologies such as lasers, transistors (and thus microchips), superconductors and MRI.

A recent view of QM has formulated it as a (Bayesian) statistical methodology that only describes our subjective view of the (quantum) world, and how we update that view in light of evidence (i.e. measurements) ('t Hooft, 2016; Fuchs & Schack, 2013). This is in perfect analogy to the classical Bayesian view, a statistical paradigm extensively used in artificial intelligence where we maintain probabilities to represent our beliefs for events in the world.

The philosophy of this paper will be to turn this argument on its head. If we can view QM as just another consistent statistical theory that happens to describe nature at small scales, then we can also use this theory to describe classical signals by endowing them with a Hilbert space structure. In some sense, the 'only' difference with Bayesian statistics is that the positive probabilities are replaced with complex 'amplitudes'. This however has the dramatic effect that, unlike in classical statistics, interference between events now becomes a possibility. In this paper we show that this point of view uncovers new architectures and potential speedups for running neural networks on quantum computers.

We shall restrict our attention here to binary neural networks. We will introduce a new class of quantum neural networks and interpret them as generalizations of probabilistic binary neural networks, discussing potential speedups by running the models on a quantum computer. Then we will devise classically efficient algorithms to train the networks for a restricted set of quantum circuits. We present results of classical simulations of the quantum neural networks on real world data sizes and related gains in accuracy due to the quantum deformations. Contrary to almost all other works on quantum deep learning, our quantum neural networks can be simulated for practical classical problems, such as images or sound. The quantum nature of our models is there to increase the flexibility of the model-class and add new operators to the toolbox of the deep learning researcher, some of which may only reach their full potential when quantum computing becomes ubiquitous.

1.1 RELATED WORK

In Farhi & Neven (2018) variational quantum circuits that can be learnt via stochastic gradient descent were introduced.
Their performance could be studied only on small input tasks such as classifying 4x4 images, due to the exponential memory requirement to simulate those circuits. Other works on variational quantum circuits for neural networks are Verdon et al. (2018); Beer et al. (2019). Their focus is similarly on the implementation on near term quantum devices, and these models cannot be efficiently run on a classical computer. Exceptions are models which use tensor network simulations (Cong et al., 2019; Huggins et al., 2019), where the model can be scaled to 8x8 image data with 2 classes, at the price of constraining the geometry of the quantum circuit (Huggins et al., 2019). The quantum deformed neural networks introduced in this paper are instead a class of variational quantum circuits that can be scaled to the size of data that are used in traditional neural networks, as we demonstrate in section 4.2.

Another line of work directly uses tensor networks as full precision machine learning models that can be scaled to the size of real data (Miles Stoudenmire & Schwab, 2016; Liu et al., 2017; Levine et al., 2017; Levine et al., 2019). However, the constraints on the network geometry required to allow for efficient contractions limit the expressivity and performance of the models. See however Cheng et al. (2020) for recent promising developments. Further, the tensor networks studied in these works are not unitary maps and do not directly relate to implementations on quantum computers.

A large body of work in quantum machine learning focuses on using quantum computing to provide speedups to classical machine learning tasks (Biamonte et al., 2017; Ciliberto et al., 2018; Wiebe et al., 2014), culminating in the discovery of quantum inspired speedups in classical algorithms (Tang, 2019). In particular, (Allcock et al., 2018; Cao et al., 2017; Schuld et al., 2015; Kerenidis et al., 2019) discuss quantum simulations of classical neural networks with the goal of improving the efficiency of classical models on a quantum computer. Our models differ from these works in two ways: i) we use quantum wave-functions to model weight uncertainty, in a way that is reminiscent of Bayesian models; ii) we design our network layers in a way that may only reach its full potential on a quantum computer due to exponential speedups, but at the same time can, for a restricted class of layer designs, be simulated on a classical computer and provide inspiration for new neural architectures. Finally, quantum methods for accelerating Bayesian inference have been discussed in Zhao et al. (2019b;a), but only for Gaussian processes, while in this work we shall discuss relations to Bayesian neural networks.

2 GENERALIZED PROBABILISTIC BINARY NEURAL NETWORKS

Binary neural networks are neural networks where both weights and activations are binary. Let $B = \{0, 1\}$. A fully connected binary neural network layer maps the $N_\ell$ activations $h^{(\ell)}$ at level $\ell$ to the $N_{\ell+1}$ activations $h^{(\ell+1)}$ at level $\ell+1$ using weights $W^{(\ell)} \in B^{N_\ell \times N_{\ell+1}}$:

$h^{(\ell+1)}_j = f(W^{(\ell)}, h^{(\ell)})_j = \tau\Big(\frac{1}{N_\ell + 1} \sum_{i=1}^{N_\ell} W^{(\ell)}_{j,i} h^{(\ell)}_i\Big), \qquad \tau(x) = \begin{cases} 0 & x < \frac{1}{2} \\ 1 & x \ge \frac{1}{2} \end{cases}$   (1)

We divide by $N_\ell + 1$ since the sum can take the $N_\ell + 1$ values $\{0, \ldots, N_\ell\}$. We do not explicitly consider biases, which can be introduced by fixing some activations to 1.
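As a quick illustration (a minimal sketch of our own), equation 1 for a single layer is:

```python
import numpy as np

def binary_layer(W, h):
    """Eq. (1): W is (M, N) binary, h is (N,) binary; threshold the scaled preactivation."""
    N = h.shape[0]
    return ((W @ h) / (N + 1) >= 0.5).astype(np.uint8)
```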
In a classification model $h^{(0)} = x$ is the input and the last activation function is typically replaced by a softmax which produces output probabilities $p(y|x, W)$, where $W$ denotes the collection of weights of the network. Given $M$ input/output pairs $X = (x_1, \ldots, x_M)$, $Y = (y_1, \ldots, y_M)$, a frequentist approach would determine the binary weights so that the likelihood $p(Y|X, W) = \prod_{i=1}^M p(y_i|x_i, W)$ is maximized. Here we consider discrete or quantized weights and take the approach of variational optimization Staines & Barber (2012), which introduces a weight distribution $q_\theta(W)$ to devise a surrogate differentiable objective. For an objective $O(W)$, one has the bound $\max_{W \in B^N} O(W) \ge E_{q_\theta(W)}[O(W)]$, and the parameters $\theta$ of $q_\theta(W)$ are adjusted to maximize the lower bound. In our case we consider the objective:

$\max_{W \in B^N} \log p(Y|X, W) \;\ge\; L(\theta) := E_{q_\theta(W)}[\log p(Y|X, W)] = \sum_{i=1}^M E_{q_\theta(W)}[\log p(y_i|x_i, W)]$   (2)

While the optimal solution to equation 2 is a Dirac measure, one can add a regularization term $R(\theta)$ to keep $q_\theta$ soft. In appendix A we review the connection with Bayesian deep learning, where $q_\theta(W)$ is the approximate posterior, $R(\theta)$ is the KL divergence between $q_\theta(W)$ and the prior over weights, and the objective is derived by maximizing the evidence lower bound.

In both the variational Bayes and variational optimization frameworks for binary networks, we have a variational distribution $q_\theta(W)$ and probabilistic layers where activations are random variables. We consider an approximate posterior factorized over the layers: $q_\theta(W) = \prod_{\ell=1}^L q^{(\ell)}(W^{(\ell)})$. If $h^{(\ell)} \sim p^{(\ell)}$, equation 1 leads to the following recursive definition of distributions:

$p^{(\ell+1)}(h^{(\ell+1)}) = \sum_{h \in B^{N_\ell}} \sum_{W \in B^{N_\ell \times N_{\ell+1}}} \delta\big(h^{(\ell+1)} - f(W^{(\ell)}, h^{(\ell)})\big)\, p^{(\ell)}(h^{(\ell)})\, q^{(\ell)}(W^{(\ell)})$   (3)

We use the shorthand $p^{(\ell)}(h^{(\ell)})$ for $p^{(\ell)}(h^{(\ell)}|x)$ and the $x$ dependence is understood. The average appearing in equation 2 can be written as an average over the network output distribution:

$E_{q_\theta(W)}[\log p(y_i|x_i, W)] = E_{p^{(L)}(h^{(L)})}[g_i(y_i, h^{(L)})]$   (4)

where the function $g_i$ is typically MSE for regression and cross-entropy for classification.

In previous works (Shayer et al., 2017; Peters & Welling, 2018), the approximate posterior was taken to be factorized: $q(W^{(\ell)}) = \prod_{ij} q_{i,j}(W^{(\ell)}_{i,j})$, which results in a factorized activation distribution as well: $p^{(\ell)}(h^{(\ell)}) = \prod_i p^{(\ell)}_i(h^{(\ell)}_i)$. (Shayer et al., 2017; Peters & Welling, 2018) used the local reparameterization trick Kingma et al. (2015) to sample activations at each layer.

The quantum neural network we introduce below will naturally give a way to sample efficiently from complex distributions, and in view of that we here generalize the setting: we act with a stochastic matrix $S_\lambda(h', W'|h, W)$ which depends on parameters $\lambda$ and correlates the weights and the input activations to a layer as follows:

$\pi_{\theta,\lambda}(h', W') = \sum_{h \in B^N} \sum_{W \in B^{N \times M}} S_\lambda(h', W'|h, W)\, p(h)\, q_\theta(W)$   (5)

To avoid redundancy, we still take $q_\theta(W)$ to be factorized and let $S_\lambda$ create correlation among the weights as well. The choice of $S_\lambda$ will be related to the choice of a unitary matrix $D$ in the quantum circuit of the quantum neural network. A layer is now made of the two operations, $S_\lambda$ and the layer map $f$, resulting in the following output distribution:

$p^{(\ell+1)}(h^{(\ell+1)}) = \sum_{h \in B^{N_\ell}} \sum_{W \in B^{N_\ell \times N_{\ell+1}}} \delta\big(h^{(\ell+1)} - f(W^{(\ell)}, h^{(\ell)})\big)\, \pi^{(\ell)}_{\theta,\lambda}(h^{(\ell)}, W^{(\ell)})$   (6)

which allows one to compute the network output recursively. Both the parameters $\theta$ and $\lambda$ will be learned to solve the following optimization problem:

$\min_{\theta,\lambda}\; R(\theta) + R'(\lambda) - L(\theta, \lambda)$   (7)

where $R(\theta)$, $R'(\lambda)$ are regularization terms for the parameters $\theta$, $\lambda$.
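For intuition, the deformed layer distribution of equations 5-6 can be evaluated by brute-force enumeration for tiny layers; the following is a sketch of our own (configurations encoded as integers, feasible only for very small N and M):

```python
import numpy as np

def bits(code, n):
    return np.array([(code >> (n - 1 - i)) & 1 for i in range(n)])

def layer_dist(p_h, q_W, S, N, M):
    """p_h: (2**N,) input distribution; q_W: (2**(N*M),) weight distribution;
    S: stochastic matrix over joint (h, W) configurations (eq. 5); returns eq. (6)."""
    joint = S @ np.kron(p_h, q_W)       # pi(h, W); index = h_code * 2**(N*M) + W_code
    out = np.zeros(2 ** M)
    for idx, prob in enumerate(joint):
        h_code, W_code = divmod(idx, 2 ** (N * M))
        h = bits(h_code, N)
        W = bits(W_code, N * M).reshape(M, N)
        h_new = ((W @ h) / (N + 1) >= 0.5).astype(int)   # layer map f of eq. (1)
        out[int("".join(map(str, h_new)), 2)] += prob
    return out
```

Setting S to the identity recovers the standard factorized probabilistic binary layer.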
We call this model a generalized probabilistic binary neural network, with the deformation parameters $\lambda$ chosen such that $\lambda = 0$ gives back the standard probabilistic binary neural network.

To study this model on a classical computer we need to choose $S_\lambda$ in a way that leads to an efficient sampling algorithm for $\pi_{\theta,\lambda}$. In general, one could use Markov Chain Monte Carlo, but there exist situations for which the mixing time of the chain grows exponentially in the size of the problem (Levin & Peres, 2017). In the next section we will show how quantum mechanics can enlarge the set of probabilistic binary neural networks that can be efficiently executed, and in the subsequent sections we will show experimental results for a restricted class of correlated distributions inspired by quantum circuits that can be simulated classically.

3 QUANTUM IMPLEMENTATION

Quantum computers can sample from certain correlated distributions more efficiently than classical computers (Aaronson & Chen, 2016; Arute et al., 2019). In this section, we devise a quantum circuit that implements the generalized probabilistic binary neural networks introduced above, encoding $\pi_{\theta,\lambda}$ in a quantum circuit. This leads to an exponential speedup for running this model on a quantum computer, opening up the study of more complex probabilistic neural networks.

A quantum implementation of a binary perceptron was introduced in Schuld et al. (2015) as an application of the quantum phase estimation algorithm (Nielsen & Chuang, 2000). However, no quantum advantage of the quantum simulation was shown. Here we will extend that result in several ways: i) we will modify the algorithm to represent the generalized probabilistic layer introduced above, showing the quantum advantage present in our setting; ii) we will consider the case of multilayer perceptrons as well as convolutional networks.

3.1 INTRODUCTION TO QUANTUM NOTATION AND QUANTUM PHASE ESTIMATION

As a preliminary step, we introduce notations for quantum mechanics. We refer the reader to Appendix B for a more thorough review of quantum mechanics. A qubit is the vector space of normalized vectors $|\psi\rangle \in C^2$. $N$ qubits form the set of unit vectors in $(C^2)^{\otimes N} = C^{2^N}$ spanned by all $N$-bit strings, $|b_1, \ldots, b_N\rangle \equiv |b_1\rangle \otimes \cdots \otimes |b_N\rangle$, $b_i \in B$. Quantum circuits are unitary matrices on this space. The probability of a measurement with outcome $i$ is given by the matrix element of the projector $|i\rangle\langle i|$ in a state $|\psi\rangle$, namely $p_i = \langle\psi|i\rangle\langle i|\psi\rangle = |\langle i|\psi\rangle|^2$, a formula known as Born's rule.

Next, we describe quantum phase estimation (QPE), a quantum algorithm to estimate the eigenphases of a unitary $U$. Denote the eigenvalues and eigenvectors of $U$ by $\exp(2\pi i\, 2^{-t}\varphi)$ and $|v\rangle$, and assume that the $\varphi$'s can be represented with a finite number $t$ of bits: $\varphi = 2^{t-1}\varphi_1 + \cdots + 2^0\varphi_t$. (This is the case of relevance for a binary network.) Then introduce $t$ ancilla qubits in state $|0\rangle^{\otimes t}$. Given an input state $|\psi\rangle$, QPE is the following unitary operation:

$|0\rangle^{\otimes t}|\psi\rangle \;\xrightarrow{\mathrm{QPE}}\; \sum_v \langle v|\psi\rangle\, |\varphi\rangle|v\rangle$   (8)

Appendix B.1 reviews the details of the quantum circuit implementing this map, whose complexity is linear in $t$. Now using the notation $\tau$ for the threshold non-linearity introduced in equation 1, and recalling the expansion $2^{-t}\varphi = 2^{-1}\varphi_1 + \cdots + 2^{-t}\varphi_t$, we note that if the first bit $\varphi_1 = 0$ then $2^{-t}\varphi < \frac{1}{2}$ and $\tau(2^{-t}\varphi) = 0$, while if $\varphi_1 = 1$, then $2^{-t}\varphi \ge \frac{1}{2}$ and $\tau(2^{-t}\varphi) = 1$.
In other words, $\delta_{\varphi_1, b} = \delta_{\tau(2^{-t}\varphi), b}$ and the probability $p(b)$ that after the QPE the first ancilla bit is $b$ is given by:

$\Big(\sum_{v'} \overline{\langle v'|\psi\rangle}\, \langle\varphi'|\langle v'|\Big)\big(|b\rangle\langle b| \otimes 1\big)\Big(\sum_v \langle v|\psi\rangle\, |\varphi\rangle|v\rangle\Big) = \sum_v |\langle v|\psi\rangle|^2\, \delta_{\tau(2^{-t}\varphi), b}$   (9)

where $|b\rangle\langle b| \otimes 1$ is an operator that projects the first bit to the state $|b\rangle$ and leaves the other bits untouched.

3.2 DEFINITION AND ADVANTAGES OF QUANTUM DEFORMED NEURAL NETWORKS

Armed with this background, we can now apply quantum phase estimation to compute the output of the probabilistic layer of equation 6. Let $N$ be the number of input neurons and $M$ that of output neurons. We introduce qubits to represent input and weight bits:

$|h, W\rangle \in V_h \otimes V_W, \quad V_h = \bigotimes_{i=1}^N (C^2)_i, \quad V_W = \bigotimes_{i=1}^N \bigotimes_{j=1}^M (C^2)_{ij}$   (10)

Then we introduce a Hamiltonian $H_j$ acting non-trivially only on the $N$ input activations and the $N$ weights at the $j$-th row:

$H_j = \sum_{i=1}^N B^{W_{ji}} B^{h_i}$   (11)

and $B^{h_i}$ ($B^{W_{ji}}$) is the matrix $B = |1\rangle\langle 1|$ acting on the $i$-th activation ($ji$-th weight) qubit. Note that $H_j$ singles out terms from the state $|h, W\rangle$ where both $h_i = 1$ and $W_{ji} = 1$ and then adds them up, i.e. the eigenvalues of $H_j$ are the preactivations of equation 1:

$H_j |h, W\rangle = \varphi(h, W_{j,:})\, |h, W\rangle, \quad \varphi(h, W_{j,:}) = \sum_{i=1}^N W_{ji} h_i$   (12)

Figure 1: (a) Quantum circuit implementing a quantum deformed layer. The thin vertical line indicates that the gate acts as identity on the wires crossed by the line. (b) Quantum deformed multilayer perceptron with 2 hidden quantum neurons and 1 output quantum neuron. $|x\rangle$ is an encoding of the input signal, $y$ is the prediction. The superscript $\ell$ in $U^\ell_j$ and $W^\ell_{j,:}$ refers to layer $\ell$. We split the blocks of $t_\ell$ ancilla qubits into a readout qubit that encodes the layer output amplitude and the rest. (c) Modification of a layer for classical simulations.

Now define the unitary operators:

$U_j = D\, e^{\frac{2\pi i}{N+1} H_j}\, D^{-1}$   (13)

where $D$ is another generic unitary, and as we shall see shortly, its eigenvectors will be related to the entries of the classical stochastic matrix $S$ in section 2. Since $U_j U_{j'} = D\, e^{\frac{2\pi i}{N+1}(H_j + H_{j'})} D^{-1} = U_{j'} U_j$, we can diagonalize all the $U_j$'s simultaneously, and since they are conjugate to $e^{\frac{2\pi i}{N+1} H_j}$ they will have the same eigenvalues. Introducing the eigenbasis $|h, W\rangle_D = D|h, W\rangle$, we have:

$U_j |h, W\rangle_D = e^{\frac{2\pi i}{N+1} \varphi(h, W_{j,:})}\, |h, W\rangle_D$   (14)

Note that $\varphi \in \{0, \ldots, N\}$, so we can represent it with exactly $t$ bits, $N = 2^t - 1$. Then we add $M$ ancilla resources, each of $t$ qubits, and sequentially perform $M$ quantum phase estimations, one for each $U_j$, as depicted in figure 1 (a). We choose the following input state:

$|\psi\rangle = |\psi_h\rangle \bigotimes_{j=1}^M |\psi\rangle_{W_{j,:}}, \quad |\psi\rangle_{W_{j,:}} = \bigotimes_{i=1}^N \Big[\sqrt{q_{ji}(W_{ji} = 0)}\,|0\rangle + \sqrt{q_{ji}(W_{ji} = 1)}\,|1\rangle\Big]$   (15)

where we have chosen the weight input state according to the factorized variational distribution $q_{ij}$ introduced in section 2. In fact, this state corresponds to the following probability distribution via Born's rule:

$p(h, W) = |\langle h, W|\psi\rangle|^2 = p(h) \prod_{j=1}^M \prod_{i=1}^N q_{ji}(W_{ji}), \quad p(h) = |\langle h|\psi_h\rangle|^2$   (16)

Now we show that a non-trivial choice of $D$ leads to an effective correlated distribution. The $j$-th QPE in figure 1 (a) corresponds to equation 8, where we identify $|v\rangle \equiv |h, W\rangle_D$, $|\varphi\rangle \equiv |\varphi(h, W_{j,:})\rangle$, and we make use of the $j$-th block of $t$ ancillas. After $M$ steps we compute the outcome probability of a measurement of the first qubit in each of the $M$ registers of the ancillas.
We can extend equation 9 to the situation of measuring multiple qubits, and recalling that the first bit of an integer is the most significant bit, determining whether $2^{-t}\varphi(h, W_{j,:}) = (N+1)^{-1}\varphi(h, W_{j,:})$ is greater or smaller than $1/2$, the probability of outcome $h' = (h'_1, \ldots, h'_M)$ is:

$p(h') = \sum_{h \in B^N} \sum_{W \in B^{N \times M}} \delta_{h', f(W, h)}\, |{}_D\langle h, W|\psi\rangle|^2$   (17)

where $f$ is the layer function introduced in equation 1. We refer to appendix C for a detailed derivation. Equation 17 is the generalized probabilistic binary layer introduced in equation 6, where $D$ corresponds to a non-trivial $S$ and to a correlated distribution when $D$ entangles the qubits:

$\pi(h, W) = |\langle\psi|D|h, W\rangle|^2$   (18)

The variational parameters $\lambda$ of $S_\lambda$ are now parameters of the quantum circuit $D$. Sampling from $\pi$ can be done by doing repeated measurements of the first $M$ ancilla qubits of this quantum circuit. On quantum hardware $e^{\frac{2\pi i}{N+1} H_j}$ can be efficiently implemented since it is a product of diagonal two-qubit quantum gates. We shall consider unitaries $D$ which have efficient quantum circuit approximations. Then computing the quantum deformed layer output on a quantum computer is going to take time $O(t M u(N))$, where $u(N)$ is the time it takes to compute the action of $U_j$ on an input state. There exist $D$ such that sampling from equation 18 is exponentially harder classically than quantum mechanically, a statement forming the basis for quantum supremacy experiments on noisy, intermediate scale quantum computers (Aaronson & Chen, 2016; Arute et al., 2019). Examples are random circuits with two-dimensional entanglement patterns, which from a machine learning point of view can be natural when considering image data. Other examples are $D$ implementing time evolution operators of physical systems, whose simulation is exponentially hard classically, resulting in hardness of sampling from the time evolved wave function. Quantum supremacy experiments give foundations to which architectures can benefit from quantum speedups, but we remark that the proposed quantum architecture, which relies on quantum phase estimation, is designed for error-corrected quantum computers.

Even better, on quantum hardware we can avoid sampling intermediate activations altogether. At the first layer, the input can be prepared by encoding the input bits in the state $|x\rangle$. For the next layers, we simply use the output state as the input to the next layer. One obtains thus the quantum network of figure 1 (b), and the algorithm for a layer is summarized in Procedure 1. Note that all the qubits associated to the intermediate activations are entangled. Therefore the input state $|\psi_h\rangle$ would have to be replaced by a state in $V_h$ plus all the other qubits, where the gates at the next layer would act only on $V_h$ in the manner described in this section. (An equivalent and more economical mathematical description is to use the reduced density matrix $\rho_h$ as input state.) We envision two other possible procedures for what happens after the first layer: i) we sample from equation 17 and initialize $|\psi_h\rangle$ to the bit string sampled, in analogy to the classical quantization of activations; ii) we sample many times to reconstruct the classical distribution and encode it in $|\psi_h\rangle$.

Finally, we remark that at present it is not clear whether the computational speedup exhibited by our architecture translates to a learning advantage. This is an outstanding question whose full answer will require an empirical evaluation with a quantum computer.
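For very small layers, equation 17 can be checked by brute force on a classical computer; a minimal sketch of our own (with $D$ set to the identity, so the output reduces to the classical probabilistic layer, and with Procedure 2's amplitude convention):

```python
import numpy as np

N = 3                               # number of inputs; phases fit in t bits since N = 2**t - 1
p = np.array([0.9, 0.2, 0.7])       # p_i(h_i = 1)
q = np.array([0.8, 0.5, 0.1])       # q_ji(W_ji = 1) for a single output neuron j

# Enumerate the basis |h_1..h_N, W_j1..W_jN>; eigenvalue of H_j is phi = sum_i W_ji h_i (eq. 12).
codes = np.arange(2 ** (2 * N))
bit_table = (codes[:, None] >> np.arange(2 * N - 1, -1, -1)) & 1
h_bits, w_bits = bit_table[:, :N], bit_table[:, N:]
phi = (h_bits * w_bits).sum(axis=1)

# Product state of eqs. (15)-(16): |<h, W|psi>|^2 = p(h) * prod_i q_ji(W_ji).
prob = np.prod(np.where(h_bits == 1, p, 1 - p), axis=1) \
     * np.prod(np.where(w_bits == 1, q, 1 - q), axis=1)

# Eq. (17) with D = 1: QPE reads the most significant phase bit, i.e. whether phi/(N+1) >= 1/2.
p_out1 = prob[phi / (N + 1) >= 0.5].sum()
print(p_out1)   # probability that the output neuron fires
```

Replacing the identity with an entangling $D$ amounts to rotating the basis in which the phases are read out, which is what produces the correlated distribution of equation 18.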
Next, we will try to get as close as possible to answering this question by studying a quantum model that we can simulate classically.

3.3 MODIFICATIONS FOR CLASSICAL SIMULATIONS

In this paper we will provide classical simulations of the quantum neural networks introduced above for a restricted class of designs. We do this for two reasons: first, to convince the reader that the quantum layers hold promise (even though we cannot simulate the proposed architecture in its full glory due to the lack of access to a quantum computer), and second, to show that these ideas can be interesting as new designs, even "classically" (by which we mean architectures that can be executed on a classical computer).

To parallelize the computations for different output neurons, we make the following modifications to the setup just explained, depicted in figure 1 (c). We clone the input activation register $M$ times, an operation that quantum mechanically is only approximate (Nielsen & Chuang, 2000) but exact classically. Then we associate the $j$-th copy to the $j$-th row of the weight matrix, thus forming pairs for each $j = 1, \ldots, M$:

$|h, W_{j,:}\rangle \in V_h \otimes V_{W,j}, \quad V_{W,j} = \bigotimes_{i=1}^N (C^2)_{ji}$   (19)

Fixing $j$, we introduce the unitary $e^{\frac{2\pi i}{N+1} H_j}$, diagonal in the basis $|h, W_{j,:}\rangle$ as in equation 11, and define the new unitary:

$\tilde U_j = D_j\, e^{\frac{2\pi i}{N+1} H_j}\, D_j^{-1}$   (20)

where w.r.t. equation 13 we now let $D_j$ depend on $j$. We denote the eigenvectors of $\tilde U_j$ by $|h, W_{j,:}\rangle_{D_j} = D_j |h, W_{j,:}\rangle$, and the eigenvalue is $\varphi(h, W_{j,:})$ introduced in equation 12. Supposing that we know $p(h) = \prod_i p_i(h_i)$, we apply the quantum phase estimation to $\tilde U_j$ with input:

$|\psi_j\rangle = |\psi_h\rangle \otimes |\psi\rangle_{W_{j,:}}, \quad |\psi_h\rangle = \bigotimes_{i=1}^N \Big[\sqrt{p_i(h_i = 0)}\,|0\rangle + \sqrt{p_i(h_i = 1)}\,|1\rangle\Big]$   (21)

and $|\psi\rangle_{W_{j,:}}$ is defined in equation 15. Going through similar calculations as those done above shows that measurements of the first qubit will be governed by the probability distribution of equation 6, factorized over output channels since the procedure does not couple them: $\pi(h, W) = \prod_{j=1}^M |\langle\psi_j|D_j|h, W_{j,:}\rangle|^2$. So far, we have focused on fully connected layers. We can extend the derivation of this section to the convolutional case, by applying the quantum phase estimation on image patches of size equal to the kernel size, as explained in appendix D.

4 CLASSICAL SIMULATIONS FOR LOW ENTANGLEMENT

4.1 THEORY

It has been remarked in (Shayer et al., 2017; Peters & Welling, 2018) that when the weight and activation distributions at a given layer are factorized, $p(h) = \prod_i p_i(h_i)$ and $q(W) = \prod_{ij} q_{ij}(W_{ij})$, the output distribution in equation 3 can be efficiently approximated using the central limit theorem (CLT). The argument goes as follows: for each $j$ the preactivations $\varphi(h, W_{j,:}) = \sum_{i=1}^N W_{j,i} h_i$ are sums of independent binary random variables $W_{j,i} h_i$ with mean and variance:

$\mu_{ji} = E_{w \sim q_{ji}}(w)\, E_{h \sim p_i}(h), \quad \sigma^2_{ji} = E_{w \sim q_{ji}}(w^2)\, E_{h \sim p_i}(h^2) - \mu_{ji}^2 = \mu_{ji}(1 - \mu_{ji})$   (22)

We used $b^2 = b$ for a variable $b \in \{0, 1\}$. The CLT implies that for large $N$ we can approximate $\varphi(h, W_{j,:})$ with a normal distribution with mean $\mu_j = \sum_i \mu_{ji}$ and variance $\sigma_j^2 = \sum_i \sigma^2_{ji}$. The distribution of the activation after the non-linearity of equation 1 can thus be computed as:

$p\Big(\tau\big(\tfrac{1}{N+1}\varphi(h, W_{j,:})\big) = 1\Big) = p\big(2\varphi(h, W_{j,:}) - N > 0\big) = \Phi\Big(\frac{2\mu_j - N}{2\sigma_j}\Big)$   (23)

$\Phi$ being the CDF of the standard normal distribution. Below we fix $j$ and omit it for notational clarity.
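Equations 22-23 translate directly into a sampling-free layer; a minimal sketch of our own:

```python
import numpy as np
from scipy.stats import norm

def clt_binary_layer(p, q):
    """p: (N,) activation probabilities p_i(h_i=1); q: (M, N) weight probabilities q_ji(W_ji=1).
    Returns (M,) output probabilities per eq. (23)."""
    mu_ji = q * p                          # eq. (22): E[W_ji h_i] for independent binaries
    var_ji = mu_ji * (1.0 - mu_ji)
    mu, var = mu_ji.sum(axis=1), var_ji.sum(axis=1)
    N = p.shape[0]
    return norm.cdf((2.0 * mu - N) / (2.0 * np.sqrt(var)))
```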
As reviewed in appendix B, commuting observables in quantum mechanics behave like classical random variables. The observable of interest for us, $D H D^{-1}$ of equation 20, is a sum of commuting terms $K_i \equiv D B^{W_i} B^{h_i} D^{-1}$, and if their joint probability distribution is such that these random variables are weakly correlated, i.e.

$\langle\psi|K_i K_{i'}|\psi\rangle - \langle\psi|K_i|\psi\rangle \langle\psi|K_{i'}|\psi\rangle \to 0, \quad \text{if } |i - i'| \to \infty$   (24)

then the CLT for weakly correlated random variables applies, stating that measurements of $D H D^{-1}$ in state $|\psi\rangle$ are governed by a Gaussian distribution $N(\mu, \sigma^2)$ with

$\mu = \langle\psi|D H D^{-1}|\psi\rangle, \quad \sigma^2 = \langle\psi|D H^2 D^{-1}|\psi\rangle - \mu^2$   (25)

Finally, we can plug these values into equation 23 to get the layer output probability.

We have cast the problem of simulating the quantum neural network to the problem of computing the expectation values in equation 25. In physical terms, these are related to correlation functions of $H$ and $H^2$ after evolving a state $|\psi\rangle$ with the operator $D$. These can be efficiently computed classically for one-dimensional and lowly entangled quantum circuits $D$ (Vidal, 2003). In view of that, here we consider a 1d arrangement of activation and weight qubits, labeled by $i = 0, \ldots, 2N - 1$, where the even qubits are associated with activations and the odd are associated with weights. We then choose:

$D = \prod_{i=0}^{N-1} Q_{2i, 2i+1} \prod_{i=0}^{N-1} P_{2i+1, 2i+2}$   (26)

where $Q_{2i, 2i+1}$ acts non-trivially on qubits $2i, 2i+1$, i.e. on the $i$-th activation and $i$-th weight qubits, while $P_{2i+1, 2i+2}$ acts on the $i$-th weight and $(i+1)$-th activation qubits. We depict this quantum circuit in figure 2 (a). As explained in detail in appendix E, the computation of $\mu$ involves the matrix element of $K_i$ in the product state $|\psi\rangle$, while $\sigma^2$ involves that of $K_i K_{i+1}$. Due to the structure of $D$, these operators act locally on 4 and 6 sites respectively, as depicted in figure 2 (b)-(c).

Figure 2: (a) The entangling circuit $D$ for $N = 3$. (b) $K_i$ entering the computation of $\mu$. (c) $K_i K_{i+1}$ entering the computation of $\sigma^2$. Indices on $P, Q, B$ are omitted for clarity. Time flows downwards.

This implies that the computation of equation 25, and so of the full layer, can be done in $O(N)$ and easily parallelized. Appendix E contains more details on the complexity, while Procedure 2 describes the algorithm for the classical simulation discussed here.

Procedure 1 Quantum deformed layer. $\mathrm{QPE}_j(U, I)$ is quantum phase estimation for a unitary $U$ acting on the set $I$ of activation qubits and the $j$-th weights/ancilla qubits. $H_j$ is in equation 11.
Input: $\{q_{ij}\}_{i=0,\ldots,N-1}^{j=0,\ldots,M-1}$, $|\psi\rangle$, $I$, $D$, $t$
Output: $|\psi\rangle$
for $j = 0$ to $M - 1$ do
  $|\psi\rangle_{W_{j,:}} \leftarrow \bigotimes_{i=1}^N \big[\sqrt{q_{ji}}\,|0\rangle + \sqrt{1 - q_{ji}}\,|1\rangle\big]$
  $|\psi\rangle \leftarrow |0\rangle^{\otimes t} \otimes |\psi\rangle \otimes |\psi\rangle_{W_{j,:}}$
  $U \leftarrow D\, e^{\frac{2\pi i}{N+1} H_j}\, D^{-1}$   {This requires approximating the unitary with quantum gates}
  $|\psi\rangle \leftarrow \mathrm{QPE}_j(U, I)\,|\psi\rangle$
end for

Procedure 2 Classical simulation of a quantum deformed layer with $N$ ($M$) inputs (outputs).
Input: $\{q_{ij}\}_{i=0,\ldots,N-1}^{j=0,\ldots,M-1}$; $\{p_i\}_{i=0,\ldots,N-1}$; $P = \{P^j_{2i-1, 2i}\}_{i=1,\ldots,N-1}^{j=0,\ldots,M-1}$; $Q = \{Q^j_{2i, 2i+1}\}_{i=0,\ldots,N-1}^{j=0,\ldots,M-1}$
Output: $\{p'_j\}_{j=1,\ldots,M}$
for $j = 0$ to $M - 1$ do
  for $i = 0$ to $N - 1$ do
    $\psi_{2i} \leftarrow (\sqrt{p_i},\, \sqrt{1 - p_i})$
    $\psi_{2i+1} \leftarrow (\sqrt{q_{ij}},\, \sqrt{1 - q_{ij}})$
  end for
  for $i = 0$ to $N - 1$ do
    $\mu_i \leftarrow \mathrm{computeMu}(i, \psi, P, Q)$   {This implements equation 45 of appendix E}
    $\gamma_{i,i+1} \leftarrow \mathrm{computeGamma}(i, \psi, P, Q)$   {This implements equation 49 of appendix E}
  end for
  $\mu \leftarrow \sum_{i=0}^{N-1} \mu_i$
  $\sigma^2 \leftarrow 2\sum_{i=0}^{N-2}(\gamma_{i,i+1} - \mu_i \mu_{i+1}) + \sum_{i=0}^{N-1}(\mu_i - \mu_i^2)$
  $p'_j \leftarrow \Phi\big(\frac{2\mu - N}{2\sqrt{\sigma^2}}\big)$
end for
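As a sanity check (a brute-force sketch of our own, tractable only for very small $N$; `rand_unitary` and `op_on` are our own helpers), the expectation values of equation 25 can be computed exactly by building $D$ from random local unitaries as in equation 26:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_unitary(n):
    # Random unitary via QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q @ np.diag(np.diagonal(r) / np.abs(np.diagonal(r)))

def op_on(op, pos, n_qubits):
    # Embed a 2-qubit gate acting on qubits (pos, pos+1) into the full register.
    return np.kron(np.kron(np.eye(2 ** pos), op), np.eye(2 ** (n_qubits - pos - 2)))

N, n = 2, 4                                   # N inputs -> 2N qubits ordered h0 w0 h1 w1
B = np.diag([0.0, 1.0])                       # B = |1><1|
H = op_on(np.kron(B, B), 0, n) + op_on(np.kron(B, B), 2, n)   # eq. (11)

# Eq. (26) for N = 2: Q gates on the (h_i, w_i) pairs, one P gate on (w_0, h_1);
# the matrix product applies the P layer first, then the Q layer.
D = op_on(rand_unitary(4), 0, n) @ op_on(rand_unitary(4), 2, n) @ op_on(rand_unitary(4), 1, n)

# Product input state with amplitudes (sqrt(p), sqrt(1-p)), as in Procedure 2.
p, q = np.array([0.9, 0.3]), np.array([0.6, 0.2])
psi = np.array([1.0 + 0j])
for i in range(N):
    psi = np.kron(psi, [np.sqrt(p[i]), np.sqrt(1 - p[i])])
    psi = np.kron(psi, [np.sqrt(q[i]), np.sqrt(1 - q[i])])

A = D @ H @ D.conj().T                        # D H D^{-1}, with D unitary
mu = np.real(psi.conj() @ A @ psi)            # eq. (25)
sigma2 = np.real(psi.conj() @ A @ A @ psi) - mu ** 2
print(mu, sigma2)
```

The locality structure of figure 2 is what reduces this exponential-size computation to the $O(N)$ local contractions used by Procedure 2.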
4.2 EXPERIMENTS

Table 1: Test accuracies for MNIST and Fashion MNIST. With the notation cKsS-C to indicate a conv2d layer with C filters of size [K, K] and stride S, and dN for a dense layer with N output neurons, the architectures (Arch.) are A: d10; B: c3s2-8, c3s2-16, d10; C: c3s2-32, c3s2-64, d10. The deformations are: [/]: $P^j_{i,i+1} = Q^j_{i,i+1} = 1$ (baseline (Peters & Welling, 2018)); [PQ]: $P^j_{i,i+1}, Q^j_{i,i+1}$ generic; [Q]: $P^j_{i,i+1} = 1$, $Q^j_{i,i+1}$ generic.

Arch.  Deformation   MNIST  Fashion MNIST
A      [/]           91.1   84.2
       [PQ]          94.3   86.8
       [Q]           91.6   85.1
B      [/, /, /]     96.6   87.5
       [PQ, /, /]    97.6   88.1
       [Q, /, /]     96.8   87.8
C      [/, /, /]     98.1   89.3
       [PQ, /, /]    98.3   89.6
       [Q, /, /]     98.3   89.5

We present experiments for the model of the previous section. At each layer, $q_{ij}$ and $D_j$ are learnable. They are optimized to minimize the loss of equation 7, where following (Peters & Welling, 2018; Shayer et al., 2017) we take $R = \sum_{\ell,i,j} q^{(\ell)}_{ij}(1 - q^{(\ell)}_{ij})$, and $R'$ is the L2 regularization loss of the parameters of $D_j$. $L$ coincides with equation 2. We implemented and trained several architectures with different deformations. Table 1 contains results for two standard image datasets, MNIST and Fashion MNIST. Details of the experiments are in appendix F. The classical baseline is based on (Peters & Welling, 2018), but we use fewer layers to make the simulation of the deformation cheaper, and use no batch norm and no max pooling.

The general deformation ([PQ]) performs best in all cases. In the simplest case of a single dense layer (A), the gain is +3.2% for MNIST and +2.6% for Fashion MNIST in test accuracy. For convnets, we could only simulate a single deformed layer due to computational issues, and the gain is around or less than 1%. We expect that deforming all layers will give a greater boost, as the improvements diminish with a decreasing ratio of deformation parameters over classical parameters ($q_{ij}$). The increase in accuracy comes at the expense of more parameters. In appendix F we present additional results showing that quantum models can still deliver modest accuracy improvements w.r.t. convolutional networks with the same number of parameters.

5 CONCLUSIONS

In this work we made the following main contributions: 1) we introduced quantum deformed neural networks and identified potential speedups by running these models on a quantum computer; 2) we devised classically efficient algorithms to train the networks for low entanglement designs of the quantum circuits; 3) for the first time in the literature, we simulated the quantum neural networks on real world data sizes, obtaining good accuracy, and showed modest gains due to the quantum deformations. Running these models on a quantum computer will allow one to explore efficiently more general deformations, in particular those that cannot be approximated by the central limit theorem when the Hamiltonians will be sums of non-commuting operators. Another interesting future direction is to incorporate batch normalization and pooling layers in quantum neural networks.

An outstanding question in quantum machine learning is to find quantum advantages for classical machine learning tasks. The class of known problems for which a quantum learner can have a provably exponential advantage over a classical learner is small at the moment Liu et al. (2020), and some problems that are classically hard to compute can be predicted easily with classical machine learning Huang et al. (2020). The approach presented here is the next step in a series of papers that tries to benchmark quantum neural networks empirically, e.g. Farhi & Neven (2018); Huggins et al. (2019); Grant et al. (2019; 2018); Bausch (2020).
We are the first to show that, towards the limit of entangling circuits, the quantum inspired architecture does improve relative to the classical one for real world data sizes.
AvY1h9NHpD
too many crucial details are deferred to the supplementary material
6: Marginally above acceptance threshold
The fundamental idea in this paper is to endow classical signals with a complex Hilbert space structure so as to harness the probabilistic nature of quantum mechanics for pattern recognition based on quantum computing principles. Following this idea, the authors consider (probabilistic) binary neural networks and develop a corresponding kind of quantum neural network building on the concept of quantum phase estimation. They (convincingly) argue that this idea would make good use of the exponential speedup offered by quantum computers but can still be implemented on classical, digital devices. Experiments with image data corroborate this claim and demonstrate that simulations of the proposed quantum deformation showed improved accuracies when compared to classical probabilistic binary neural networks. The ideas brought forth in this paper appear to be novel and (to those with a solid background in quantum computing) technically sound. Overall, the paper presents an interesting approach towards quantum computational intelligence which, though likely not yet realizable on real quantum hardware (due to technical limitations such as measurement noise or decoherence times), can also be simulated digitally (in an arguably elegant manner). However, there are also concerns regarding the quality of this manuscript. In addition to several broken LaTeX links (e.g. a reference to Table 4.2 which appears to be meant to refer to Table 1) which unnecessarily hamper readability, the overall presentation is lacking. Most critically, the paper can hardly be considered self-contained. Numerous important details are deferred to the supplementary material, where the authors cover algorithmic details as well as details as to their experimental procedures. In other words, the supplementary material is not just supplementary but crucial for understanding / assessing the content of this work. While this would not justify an outright rejection, it still diminishes the overall quality of this paper.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Quantum Deformed Neural Networks ### Paper Abstract We develop a new quantum neural network layer designed to run efficiently on a quantum computer but that can be simulated on a classical computer when restricted in the way it entangles input states. We first ask how a classical neural network architecture, both fully connected or convolutional, can be executed on a quantum computer using quantum phase estimation. We then deform the classical layer into a quantum design which entangles activations and weights into quantum superpositions. While the full model would need the exponential speedups delivered by a quantum computer, a restricted class of designs represent interesting new classical network layers that still use quantum features. We show that these quantum deformed neural networks can be trained and executed on normal data such as images, and even classically deliver modest improvements over standard architectures. ### Paper Keywords ["Quantum machine learning", "Binary neural networks", "Bayesian deep learning"] ### Paper Content ABSTRACTWe develop a new quantum neural network layer designed to run efficiently ona quantum computer but that can be simulated on a classical computer when re-stricted in the way it entangles input states. We first ask how a classical neuralnetwork architecture, both fully connected or convolutional, can be executed on aquantum computer using quantum phase estimation. We then deform the classicallayer into a quantum design which entangles activations and weights into quantumsuperpositions. While the full model would need the exponential speedups deliv-ered by a quantum computer, a restricted class of designs represent interestingnew classical network layers that still use quantum features. We show that thesequantum deformed neural networks can be trained and executed on normal datasuch as images, and even classically deliver modest improvements over standardarchitectures.1 I NTRODUCTIONQuantum mechanics (QM) is the most accurate description for physical phenomena at very smallscales, such as the behavior of molecules, atoms and subatomic particles. QM has a huge impact onour every day lives through technologies such as lasers, transistors (and thus microchips), supercon-ductors and MRI.A recent view of QM has formulated it as a (Bayesian) statistical methodology that only describesour subjective view of the (quantum) world, and how we update that view in light of evidence(i.e. measurements) (;t Hooft, 2016; Fuchs & Schack, 2013). This is in perfect analogy to theclassical Bayesian view, a statistical paradigm extensively used in artificial intelligence where wemaintain probabilities to represent our beliefs for events in the world.The philosophy of this paper will be to turn this argument on its head. If we can view QM as justanother consistent statistical theory that happens to describe nature at small scales, then we canalso use this theory to describe classical signals by endowing them with a Hilbert space structure.In some sense, the ’only’ difference with Bayesian statistics is that the positive probabilities arereplaced with complex ’amplitudes’. This however has the dramatic effect that, unlike in classicalstatistics, interference between events now becomes a possibility. 
In this paper we show that thispoint of view uncovers new architectures and potential speedups for running neural networks onquantum computers.We shall restrict our attention here to binary neural networks. We will introduce a new class of quan-tum neural networks and interpret them as generalizations of probabilistic binary neural networks,discussing potential speedups by running the models on a quantum computer. Then we will deviseclassically efficient algorithms to train the networks for a restricted set of quantum circuits. Wepresent results of classical simulations of the quantum neural networks on real world data sizes andrelated gains in accuracy due to the quantum deformations. Contrary to almost all other works onquantum deep learning, our quantum neural networks can be simulated for practical classical prob-lems, such as images or sound. The quantum nature of our models is there to increase the flexibilityof the model-class and add new operators to the toolbox of the deep learning researcher, some ofwhich may only reach their full potential when quantum computing becomes ubiquitous.1Under review as a conference paper at ICLR 20211.1 R ELATED WORKIn Farhi & Neven (2018) variational quantum circuits that can be learnt via stochastic gradientdescent were introduced. Their performance could be studied only on small input tasks such asclassifying 44images, due to the exponential memory requirement to simulate those circuits.Other works on variational quantum circuits for neural networks are Verdon et al. (2018); Beer et al.(2019). Their focus is similarly on the implementation on near term quantum devices and thesemodels cannot be efficiently run on a classical computer. Exceptions are models which use tensornetwork simulations (Cong et al., 2019; Huggins et al., 2019) where the model can be scaled to 88image data with 2classes, at the price of constraining the geometry of the quantum circuit (Hugginset al., 2019). The quantum deformed neural networks introduced in this paper are instead a class ofvariational quantum circuits that can be scaled to the size of data that are used in traditional neuralnetworks as we demonstrate in section 4.2.Another line of work directly uses tensor networks as full precision machine learning models thatcan be scaled to the size of real data (Miles Stoudenmire & Schwab, 2016; Liu et al., 2017; Levineet al., 2017; Levine et al., 2019). However the constraints on the network geometry to allow forefficient contractions limit the expressivity and performance of the models. See however Chenget al. (2020) for recent promising developments. Further, the tensor networks studied in these worksare not unitary maps and do not directly relate to implementations on quantum computers.A large body of work in quantum machine learning focuses on using quantum computing to providespeedups to classical machine learning tasks (Biamonte et al., 2017; Ciliberto et al., 2018; Wiebeet al., 2014), culminating in the discovery of quantum inspired speedups in classical algorithms(Tang, 2019). In particular, (Allcock et al., 2018; Cao et al., 2017; Schuld et al., 2015; Kerenidiset al., 2019) discuss quantum simulations of classical neural networks with the goal of improving theefficiency of classical models on a quantum computer. 
Our models differ from these works in twoways: i) we use quantum wave-functions to model weight uncertainty, in a way that is reminiscentof Bayesian models; ii) we design our network layers in a way that may only reach its full potentialon a quantum computer due to exponential speedups, but at the same time can, for a restrictedclass of layer designs, be simulated on a classical computer and provide inspiration for new neuralarchitectures. Finally, quantum methods for accelerating Bayesian inference have been discussed inZhao et al. (2019b;a) but only for Gaussian processes while in this work we shall discuss relationsto Bayesian neural networks.2 G ENERALIZED PROBABILISTIC BINARY NEURAL NETWORKSBinary neural networks are neural networks where both weights and activations are binary. LetB=f0;1g. A fully connected binary neural network layer maps the N`activations h(`)at level`to theN`+1activations h(`+1)at level`+ 1using weights W(`)2BN`N`+1:h(`+1)j =f(W(`);h(`)) = 1N`+ 1N`Xi=1W(`)j;ih(`)i!; (x) =0x<121x12: (1)We divide by N`+ 1since the sum can take the N`+ 1valuesf0;:::;N`g. We do not explicitlyconsider biases which can be introduced by fixing some activations to 1. In a classification modelh(0)=xis the input and the last activation function is typically replaced by a softmax which pro-duces output probabilities p(yjx;W), where Wdenotes the collection of weights of the network.GivenMinput/output pairs X= (x1;:::;xM);Y= (y1;:::;yM), a frequentist approach woulddetermine the binary weights so that the likelihood p(YjX;W) =QMi=1p(yijxi;W)is maxi-mized. Here we consider discrete or quantized weights and take the approach of variational opti-mization Staines & Barber (2012), which introduces a weight distribution q(W)to devise a sur-rogate differential objective. For an objective O(W), one has the bound maxW2BNO(W)Eq(W)[O(W)], and the parameters of q(W)are adjusted to maximize the lower bound. In ourcase we consider the objective:maxW2BNlogp(YjX;W)L:=Eq(W)[logp(YjX;W)] =MXi=1Eq(W)[logp(yijxi;W)]:(2)2Under review as a conference paper at ICLR 2021While the optimal solution to equation 2 is a Dirac measure, one can add a regularization term R()to keepqsoft. In appendix A we review the connection with Bayesian deep learning, where q(W)is the approximate posterior, R()is the KL divergence between q(W)and the prior over weights,and the objective is derived by maximizing the evidence lower bound.In both variational Bayes and variational optimization frameworks for binary networks, we havea variational distribution q(W)and probabilistic layers where activations are random variables.We consider an approximate posterior factorized over the layers: q(W) =QL`=1q(`)(W(`)). Ifh(`)p(`), equation 1 leads to the following recursive definition of distributions:p(`+1)(h(`+1)) =Xh2BN`XW2BN`N`+1(h(`+1)f(W(`);h(`)))p(`)(h(`))q(`)(W(`)):(3)We use the shorthand p(`)(h(`))forp(`)(h(`)jx)and the xdependence is understood. The averageappearing in equation 2 can be written as an average over the network output distribution:Eq(W)[logp(yijxi;W)] =Ep(L)(h(L))[gi(yi;h(L))]; (4)where the function giis typically MSE for regression and cross-entropy for classification.In previous works (Shayer et al., 2017; Peters & Welling, 2018), the approximate posterior was takento be factorized: q(W(`)) =Qijqi;j(W(`)i;j), which results in a factorized activation distribution aswell:p(`)(h(`)) =Qip(`)i(h(`)i). (Shayer et al., 2017; Peters & Welling, 2018) used the localreparameterization trick Kingma et al. 
(2015) to sample activations at each layer.

The quantum neural network we introduce below will naturally give a way to sample efficiently from complex distributions, and in view of that we here generalize the setting: we act with a stochastic matrix S_\theta(h', W' \mid h, W) which depends on parameters \theta and correlates the weights and the input activations to a layer as follows:

\pi_{\theta,\lambda}(h', W') = \sum_{h \in B^N} \sum_{W \in B^{N M}} S_\theta(h', W' \mid h, W) \, p(h) \, q_\lambda(W). \quad (5)

To avoid redundancy, we still take q_\lambda(W) to be factorized and let S_\theta create correlation among the weights as well. The choice of S_\theta will be related to the choice of a unitary matrix D in the quantum circuit of the quantum neural network. A layer is now made of the two operations, S_\theta and the layer map f, resulting in the following output distribution:

p^{(\ell+1)}(h^{(\ell+1)}) = \sum_{h \in B^{N_\ell}} \sum_{W \in B^{N_\ell N_{\ell+1}}} \delta\big(h^{(\ell+1)} - f(W^{(\ell)}, h^{(\ell)})\big) \, \pi^{(\ell)}_{\theta,\lambda}(h^{(\ell)}, W^{(\ell)}), \quad (6)

which allows one to compute the network output recursively. Both the parameters \theta and \lambda will be learned to solve the following optimization problem:

\min_{\theta,\lambda} \; R(\lambda) + R'(\theta) - \mathcal{L}, \quad (7)

where R(\lambda) and R'(\theta) are regularization terms for the parameters \lambda and \theta. We call this model a generalized probabilistic binary neural network, with the deformation parameters \theta chosen such that \theta = 0 gives back the standard probabilistic binary neural network.

To study this model on a classical computer we need to choose S_\theta so that it leads to an efficient sampling algorithm for \pi_{\theta,\lambda}. In general, one could use Markov chain Monte Carlo, but there exist situations for which the mixing time of the chain grows exponentially in the size of the problem (Levin & Peres, 2017). In the next section we show how quantum mechanics can enlarge the set of probabilistic binary neural networks that can be efficiently executed, and in the subsequent sections we present experimental results for a restricted class of correlated distributions, inspired by quantum circuits, that can be simulated classically.

3 QUANTUM IMPLEMENTATION

Quantum computers can sample from certain correlated distributions more efficiently than classical computers (Aaronson & Chen, 2016; Arute et al., 2019). In this section, we devise a quantum circuit that implements the generalized probabilistic binary neural networks introduced above, encoding \theta and \lambda in a quantum circuit. This leads to an exponential speedup for running this model on a quantum computer, opening up the study of more complex probabilistic neural networks.

A quantum implementation of a binary perceptron was introduced in Schuld et al. (2015) as an application of the quantum phase estimation algorithm (Nielsen & Chuang, 2000). However, no quantum advantage of the quantum simulation was shown. Here we extend that result in several ways: i) we modify the algorithm to represent the generalized probabilistic layer introduced above, showing the quantum advantage present in our setting; ii) we consider the case of multilayer perceptrons as well as convolutional networks.

3.1 INTRODUCTION TO QUANTUM NOTATION AND QUANTUM PHASE ESTIMATION

As a preliminary step, we introduce notation for quantum mechanics; we refer the reader to Appendix B for a more thorough review. A qubit is the vector space of normalized vectors |\psi\rangle \in \mathbb{C}^2. N qubits form the set of unit vectors in (\mathbb{C}^2)^{\otimes N} = \mathbb{C}^{2^N}, spanned by all N-bit strings |b_1, \dots, b_N\rangle \equiv |b_1\rangle \otimes \cdots \otimes |b_N\rangle, with b_i \in B. Quantum circuits are unitary matrices on this space.
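For concreteness, the undeformed layer (equations 1 and 3, i.e. S = 1 so that \pi = p \cdot q) can already be sampled classically in a few lines. The following is a minimal sketch for a single output neuron; the layer size and probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                              # number of input neurons (illustrative)
p_h = rng.uniform(size=N)          # factorized activation probabilities p_i(h_i = 1)
q_w = rng.uniform(size=N)          # factorized weight probabilities q_i(W_i = 1)

def sample_output():
    """One sample of the binary layer of equation 1 under factorized p and q."""
    h = (rng.uniform(size=N) < p_h).astype(int)   # h ~ p(h)
    w = (rng.uniform(size=N) < q_w).astype(int)   # W ~ q(W)
    return int(w @ h / (N + 1) >= 0.5)            # threshold nonlinearity sigma

# Monte Carlo estimate of p(h' = 1), i.e. the output distribution of equation 3.
print(np.mean([sample_output() for _ in range(20000)]))
```

The deformed case would replace the two independent draws of h and W by a single sample from the correlated \pi_{\theta,\lambda}.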
The probability of a measurement with outcome iis given by matrix element ofthe projectorjiihijin a statej i, namelypi=h jiihij i=jhij ij2, a formula known asBorn’s rule.Next, we describe the quantum phase estimation (QPE), a quantum algorithm to estimate the eigen-phases of a unitary U. Denote the eigenvalues and eigenvectors of Ubyexp2i2t'andjvi, andassume that the '’s can be represented with a finite number tof bits:'= 2t1'1++ 20't.(This is the case of relevance for a binary network.) Then introduce tancilla qubits in state j0it.Given an input state j i, QPE is the following unitary operation:j0itj iQPE7!Xhvj ij'ijvi: (8)Appendix B.1 reviews the details of the quantum circuit implementing this map, whose complexityis linear int. Now using the notation for the threshold non-linearity introduced in equation 1, andrecalling the expansion 2t'= 21'1++ 2t't, we note that if the first bit '1= 0 then2t' <12and(2t') = 0 , while if'1= 1, then 2t'12and(2t') = 1 . In other words,'1;b=(2t');band the probability p(b)that after the QPE the first ancilla bit is bis given by:Xhvj ih'jhvjhjbihbj1iXhvj ij'ijvi=Xjhvj ij2(2t');b;(9)wherehjbihbj1iis an operator that projects the first bit to the state jbiand leaves the other bitsuntouched.3.2 D EFINITION AND ADVANTAGES OF QUANTUM DEFORMED NEURAL NETWORKSArmed with this background, we can now apply quantum phase estimation to compute the output ofthe probabilistic layer of equation 6. Let Nbe the number of input neurons and Mthat of outputneurons. We introduce qubits to represent inputs and weights bits:jh;Wi2VhVW;Vh=NOi=1(C2)i;VW=NOi=1MOj=1(C2)ij: (10)Then we introduce a Hamiltonian Hjacting non trivially only on the Ninput activations and the Nweights at the j-th row:Hj=NXi=1BWjiBhi; (11)andBhi(BWji) is the matrix B=j1ih1jacting on the i-th activation ( ji-th weight) qubit. Note thatHjsingles out terms from the state jh;Wiwhere bothhj= 1 andWij= 1 and then adds them4Under review as a conference paper at ICLR 2021j0it...j0itj hij W1;:i...j WM;:iQPE(U1)QPE(UM)......(a)j0ij0it2-1j0ij0it1-1j0ij0it1-1jxij W11;:ij W12;:ij W21;:iQPE(U11)QPE(U12)QPE(U21)yLayer 1 Layer 2(b)j0itj hij W1;:i...j0itj hij WM;:iQPE(~U1) QPE(~UM)...(c)Figure 1: (a) Quantum circuit implementing a quantum deformed layer. The thin vertical line indi-cates that the gate acts as identity on the wires crossed by the line. (b) Quantum deformed multilayerperceptron with 2hidden quantum neurons and 1output quantum neuron. jxiis an encoding of theinput signal, yis the prediction. The superscript `inU`jandW`j;:refers to layer `. We split theblocks oft`ancilla qubits into a readout qubit that encodes the layer output amplitude and the rest.(c) Modification of a layer for classical simulations.up, i.e. the eigenvalues of Hjare the preactivations of equation 1:Hjjh;Wi='(h;Wj;:)jh;Wi; ' (h;Wj;:) =NXi=1Wjihi: (12)Now define the unitary operators:Uj=De2iN+1HjD1; (13)where Dis another generic unitary, and as we shall see shortly, its eigenvectors will be related to theentries of the classical stochastic matrix Sin section 2. Since UjUj0=De2iN+1(Hj+Hj0)D1=Uj0Uj, we can diagonalize all the Uj’s simultaneously and since they are conjugate to e2iN+1Hjtheywill have the same eigenvalues. Introducing the eigenbasis jh;WiD=Djh;Wi, we have:Ujjh;WiD= e2iN+1'(h;Wj;:)jh;WiD: (14)Note that'2f0;:::;Ngso we can represent it with exactly tbits,N= 2t1. Then we add Mancilla resources, each of tqubits, and sequentially perform Mquantum phase estimations, one foreachUj, as depicted in figure 1 (a). 
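Two facts that this construction rests on are easy to verify numerically: the eigenvalue of H_j on |h, W\rangle is the classical preactivation of equation 12, and, since N = 2^t - 1, its most significant phase bit is exactly the thresholded activation, as argued below equation 8. A small sketch, with sizes as assumptions:

```python
import numpy as np

t = 4
N = 2 ** t - 1               # every preactivation phi in {0, ..., N} fits in t bits
rng = np.random.default_rng(1)
h = rng.integers(0, 2, N)    # binary activations
w = rng.integers(0, 2, N)    # one row W_{j,:} of binary weights

phi = int(w @ h)             # eigenvalue of H_j on |h, W>  (equation 12)
bits = [(phi >> (t - 1 - k)) & 1 for k in range(t)]  # phi = sum_k 2^(t-1-k) * bits[k]

# The most significant phase bit equals the thresholded activation of equation 1:
assert bits[0] == int(phi / (N + 1) >= 0.5)
print(phi, bits)
```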
We choose the following input statej i=j ihMOj=1j iWj;:;j iWj;:=NOi=1qqji(Wji= 0)j0i+qqji(Wji= 1)j1i;(15)where we have chosen the weight input state according to the factorized variational distribution qijintroduced in section 2. In fact, this state corresponds to the following probability distribution viaBorn’s rule:p(h;W) =jhh;Wj ij2=p(h)MYj=1NYi=1qji(Wji); p(h) =jhhj ihj2: (16)The statej hiis discussed below. Now we show that a non-trivial choice of Dleads to an ef-fective correlated distribution. The j-th QPE in figure 1 (a) corresponds to equation 8 where weidentifyjvijh;WiD,j'ij'(h;Wj;:)iand we make use of the j-th block of tancillas.AfterMsteps we compute the outcome probability of a measurement of the first qubit in each oftheMregisters of the ancillas. We can extend equation 9 to the situation of measuring multiplequbits, and recalling that the first bit of an integer is the most significant bit, determining whether2t'(h;Wj;:) = (N+ 1)1'(h;Wj;:)is greater or smaller than 1=2, the probability of outcomeh0= (h01;:::;h0M)isp(h0) =Xh2BNXW2BNMh0;f(W;h)jh jh;WiDj2; (17)5Under review as a conference paper at ICLR 2021wherefis the layer function introduced in equation 1. We refer to appendix C for a detailedderivation. Equation 17 is the generalized probabilistic binary layer introduced in equation 6 whereDcorresponds to a non-trivial Sand a correlated distribution when Dentangles the qubits:(h;W) =jh jDjh;Wij2: (18)The variational parameters ofSare now parameters of the quantum circuit D. Sampling fromcan be done by doing repeated measurements of the first Mancilla qubits of this quantum cir-cuit. On quantum hardware e2iN+1Hjcan be efficiently implemented since it is a product of diagonaltwo-qubits quantum gates. We shall consider unitaries Dwhich have efficient quantum circuit ap-proximations. Then computing the quantum deformed layer output on a quantum computer is goingto take time O(tMu(N))whereu(N)is the time it takes to compute the action of Ujon an inputstate. There exists Dsuch that sampling from equation 18 is exponentially harder classically thanquantum mechanically, a statement forming the basis for quantum supremacy experiments on noisy,intermediate scale quantum computers (Aaronson & Chen, 2016; Arute et al., 2019). Examplesare random circuits with two-dimensional entanglement patterns, which from a machine learningpoint of view can be natural when considering image data. Other examples are Dimplementingtime evolution operators of physical systems, whose simulation is exponentially hard classically,resulting in hardness of sampling from the time evolved wave function. Quantum supremacy exper-iments give foundations to which architectures can benefit from quantum speedups, but we remarkthat the proposed quantum architecture, which relies on quantum phase estimation, is designed forerror-corrected quantum computers.Even better, on quantum hardware we can avoid sampling intermediate activations altogether. Atthe first layer, the input can be prepared by encoding the input bits in the state jxi. For the nextlayers, we simply use the output state as the input to the next layer. One obtains thus the quantumnetwork of figure 1 (b) and the algorithm for a layer is summarized in procedure 1. Note that allthe qubits associated to the intermediate activations are entangled. Therefore the input state j hiwould have to be replaced by a state in Vhplus all the other qubits, where the gates at the next layerwould act only onVhin the manner described in this section. 
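As a sanity check, equation 17 can be evaluated by brute-force enumeration on a tiny layer. For D = 1 the distribution \pi of equation 18 factorizes as in equation 16, so the computation below also reproduces the classical probabilistic layer of equation 6 with trivial S; the sizes and probabilities are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N, M = 3, 2
p_h = rng.uniform(size=N)        # p_i(h_i = 1), squared amplitudes of |psi_h>
q_w = rng.uniform(size=(M, N))   # q_{ji}(W_{ji} = 1), as in equation 15

out = {}
for h in itertools.product([0, 1], repeat=N):
    hv = np.array(h)
    ph = np.prod(np.where(hv == 1, p_h, 1 - p_h))          # p(h)
    for W in itertools.product([0, 1], repeat=N * M):
        Wm = np.reshape(W, (M, N))
        qW = np.prod(np.where(Wm == 1, q_w, 1 - q_w))      # q(W)
        h_out = tuple(int(Wm[j] @ hv / (N + 1) >= 0.5) for j in range(M))  # f(W, h)
        out[h_out] = out.get(h_out, 0.0) + ph * qW         # equation 17 with D = identity

print(out, sum(out.values()))    # distribution over the 2^M outcomes; sums to 1
```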
(An equivalent and more economical mathematical description is to use the reduced density matrix \rho_h as the input state.) We envision two other possible procedures for what happens after the first layer: i) we sample from equation 17 and initialize |\psi_h\rangle to the sampled bit string, in analogy to the classical quantization of activations; ii) we sample many times to reconstruct the classical distribution and encode it in |\psi_h\rangle. In our classical simulations below we will be able to calculate the probabilities exactly and can avoid sampling.

Finally, we remark that at present it is not clear whether the computational speedup exhibited by our architecture translates to a learning advantage. This is an outstanding question whose full answer will require an empirical evaluation with a quantum computer. Next, we try to get as close as possible to answering this question by studying a quantum model that we can simulate classically.

3.3 MODIFICATIONS FOR CLASSICAL SIMULATIONS

In this paper we will provide classical simulations of the quantum neural networks introduced above for a restricted class of designs. We do this for two reasons: first, to convince the reader that the quantum layers hold promise (even though we cannot simulate the proposed architecture in its full glory due to the lack of access to a quantum computer); and second, to show that these ideas can be interesting as new designs, even "classically" (by which we mean architectures that can be executed on a classical computer).

To parallelize the computations for different output neurons, we make the modifications to the setup just explained which are depicted in figure 1 (c). We clone the input activation register M times, an operation that quantum mechanically is only approximate (Nielsen & Chuang, 2000) but exact classically. Then we associate the j-th copy to the j-th row of the weight matrix, thus forming pairs for each j = 1, \dots, M:

|h, W_{j,:}\rangle \in V_h \otimes V_{W,j}, \qquad V_{W,j} = \bigotimes_{i=1}^{N} (\mathbb{C}^2)_{ji}. \quad (19)

Fixing j, we introduce the unitary e^{\frac{2\pi i}{N+1} H_j}, diagonal in the basis |h, W_{j,:}\rangle as in equation 11, and define the new unitary:

\tilde{U}_j = D_j \, e^{\frac{2\pi i}{N+1} H_j} \, D_j^{-1}, \quad (20)

where, with respect to equation 13, we now let D_j depend on j. We denote the eigenvectors of \tilde{U}_j by |h, W_{j,:}\rangle_{D_j} = D_j |h, W_{j,:}\rangle, and the eigenvalue is \varphi(h, W_{j,:}) introduced in equation 12. Supposing that we know p(h) = \prod_i p_i(h_i), we apply the quantum phase estimation to \tilde{U}_j with input:

|\psi_j\rangle = |\psi_h\rangle \otimes |\psi\rangle_{W_{j,:}}, \qquad |\psi_h\rangle = \bigotimes_{i=1}^{N} \Big[ \sqrt{p_i(h_i = 0)}\, |0\rangle + \sqrt{p_i(h_i = 1)}\, |1\rangle \Big], \quad (21)

where |\psi\rangle_{W_{j,:}} is defined in equation 15. Going through similar calculations as those done above shows that measurements of the first qubit will be governed by the probability distribution of equation 6, factorized over output channels since the procedure does not couple them: \pi(h, W) = \prod_{j=1}^{M} |\langle \psi_j | D_j | h, W_{j,:}\rangle|^2. So far, we have focused on fully connected layers. We can extend the derivation of this section to the convolutional case by applying the quantum phase estimation to image patches of size equal to the kernel size, as explained in appendix D.

4 CLASSICAL SIMULATIONS FOR LOW ENTANGLEMENT

4.1 THEORY

It has been remarked in (Shayer et al., 2017; Peters & Welling, 2018) that when the weight and activation distributions at a given layer are factorized, p(h) = \prod_i p_i(h_i) and q(W) = \prod_{ij} q_{ij}(W_{ij}), the output distribution in equation 3 can be efficiently approximated using the central limit theorem (CLT).
The argument goes as follows: for each jthe preactivations '(h;Wj;:) =PNi=1Wj;ihiaresums of independent binary random variables Wj;ihiwith mean and variance:ji=Ewqji(w)Ehpi(h); 2ji=Ewqji(w2)Ehpi(h2)2ji=ji(1ji); (22)We usedb2=bfor a variable b2f0;1g. The CLT implies that for large Nwe can approximate'(h;Wj;:)with a normal distribution with mean j=Pijiand variance 2j=Pi2ji. Thedistribution of the activation after the non-linearity of equation 1 can thus be computed as:p((1N+1'(h;Wj;:)) = 1) =p(2'(h;Wj;:)N > 0) = 2jN2j; (23)being the CDF of the standard normal distribution. Below we fix jand omit it for notation clarity.As reviewed in appendix B, commuting observables in quantum mechanics behave like classicalrandom variables. The observable of interest for us, DHD1of equation 20, is a sum of commut-ing terms KiDBWiBhiD1and if their joint probability distribution is such that these randomvariables are weakly correlated, i.e.h jKiKi0j ih jKij ih jKi0j i!0;ifjii0j!1; (24)then the CLT for weakly correlated random variables applies, stating that measurements ofDHD1in statej iare governed by a Gaussian distribution N(;2)with=h jDHD1j i; 2=h jDH2D1j i2: (25)Finally, we can plug these values into equation 23 to get the layer output probability.We have cast the problem of simulating the quantum neural network to the problem of computing theexpectation values in equation 25. In physical terms, these are related to correlation functions of HandH2after evolving a state j iwith the operator D. These can be efficiently computed classicallyfor one dimensional and lowly entangled quantum circuits D(Vidal, 2003). In view of that here weconsider a 1d arrangement of activation and weight qubits, labeled by i= 0;:::; 2N1, where theeven qubits are associated with activations and the odd are associated with weights. We then choose:D=N1Yi=0Q2i;2i+1N1Yi=0P2i+1;2i+2; (26)7Under review as a conference paper at ICLR 2021where Q2i;2i+1acts non-trivially on qubits 2i;2i+ 1, i.e. onto the i-th activation and i-th weightqubits, while P2i;2i+1on thei-th weight and i+ 1-th activation qubits. We depict this quantumcircuit in figure 2 (a). As explained in detail in appendix E, the computation of involves the matrixelement of Kiin the product state j iwhile2involves that of KiKi+1. Due to the structureofD, these operators act locally on 4and6sites respectively as depicted in figure 2 (b)-(c). ThisD=h0W0h1W1h2W2P PQ Q Q012345(a)Ki=P PQBBQ1P1P12i-12i2i+12i+12(b)KiKi+1=P P PQ QBB BBQ1Q1P1P1P12i-12i2i+12i+22i+32i+4(c)Figure 2: (a) The entangling circuit DforN= 3. (b)Kientering the computation of . (c)KiKi+1entering the computation of 2. Indices on P;Q;Bare omitted for clarity. Time flowsdownwards.implies that the computation of equation 25, and so of the full layer, can be done in O(N)and easilyparallelized. Appendix E contains more details on the complexity, while Procedure 2 describes thealgorithm for the classical simulation discussed here.Procedure 1 Quantum deformed layer. QPE j(U;I) is quantum phase estimation for a unitary Uacting on the setIof activation qubits and the j-th weights/ancilla qubits. 
Hjis in equation 11.Input:fqijgj=0;:::;M1i=0;:::;N1,j i,I,D,tOutput:j iforj= 0toM1doj iWj;: NNi=1pqjij0i+p1qjij1ij i j 0itj ij iWj;:U De2iN+1HjD1fThis requires to approximate the unitary with quantum gates gj i QPEj(U;I)j iend forProcedure 2 Classical simulation of a quantum deformed layer with N(M) inputs (outputs).Input:fqijgj=0;:::;M1i=0;:::;N1;fpigi=0;:::;N1;P=fPj2i1;2igj=0;:::;M1i=1;:::;N1;Q=fQj2i;2i+1gj=0;:::;M1i=0;:::;N1Output:fp0igi=1;:::;Mforj= 0toM1dofori= 0toN1do 2i ppi;p1pi 2i+1 pqij;p1qijend forfori= 0toN1doi computeMu( i; ;P;Q)fThis implements equation 45 of appendix E gi;i+1 computeGamma( i; ;P;Q)fThis implements equation 49 of appendix E gend for PN1i=0i2 2PN2i=0(i;i+1ii+1) +PN1i=0(i2i)p0j 2N2p2end for8Under review as a conference paper at ICLR 20214.2 E XPERIMENTSTable 1: Test accuracies for MNIST and FashionMNIST. With the notation cKsSCto indicatea conv2d layer with Cfilters of size [K;K ]andstrideS, anddNfor a dense layer with Noutputneurons, the architectures (Arch.) are A: d10; B:c3s2-8, c3s2-16, d10; C: c3s2-32, c3s2-64, d10.The deformations are: [/]: Pji;i+1=Qji;i+1=1(baseline (Peters & Welling, 2018)); [PQ]:Pji;i+1;Qji;i+1generic; [Q]: Pji;i+1=1;Qji;i+1generic.Arch. Deformation MNIST Fashion MNISTA [/] 91.1 84.2[PQ] 94.3 86.8[Q] 91.6 85.1B [/, /, /] 96.6 87.5[PQ, /, /] 97.6 88.1[Q, /, /] 96.8 87.8C [/, /, /] 98.1 89.3[PQ, /, /] 98.3 89.6[Q, /, /] 98.3 89.5We present experiments for the model of theprevious section. At each layer, qijandDjare learnable. They are optimized to minimizethe loss of equation 7 where following (Peters& Welling, 2018; Shayer et al., 2017) we takeR=P`;i;jq(`)ij(1q(`)ij), andR0is theL2regularization loss of the parameters of Dj.Lcoincides with equation 2. We implementedand trained several architectures with differentdeformations. Table 1 contains results for twostandard image datasets, MNIST and FashionMNIST. Details of the experiments are in ap-pendix F. The classical baseline is based on(Peters & Welling, 2018), but we use fewer lay-ers to make the simulation of the deformationcheaper and use no batch norm, and no maxpooling.The general deformation ([PQ]) performs bestin all cases. In the simplest case of a singledense layer (A), the gain is +3:2%for MNISTand+2:6%for Fashion MNIST on test accu-racy. For convnets, we could only simulate asingle deformed layer due to computational is-sues and the gain is around or less than 1%.We expect that deforming all layers will givea greater boost as the improvements diminish with decreasing the ratio of deformation parametersover classical parameters ( qij). The increase in accuracy comes at the expense of more parameters.In appendix F we present additional results showing that quantum models can still deliver modestaccuracy improvement w.r.t. convolutional networks with the same number of parameters.5 C ONCLUSIONSIn this work we made the following main contributions: 1) we introduced quantum deformed neuralnetworks and identified potential speedups by running these models on a quantum computer; 2) wedevised classically efficient algorithms to train the networks for low entanglement designs of thequantum circuits; 3) for the first time in the literature, we simulated the quantum neural networkson real world data sizes obtaining good accuracy, and showed modest gains due to the quantumdeformations. 
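(For reference, the classical simulation underlying these experiments reduces, in the undeformed case P = Q = 1, to the closed-form CLT evaluation of equations 22-23, as in Peters & Welling (2018). A minimal sketch follows; the layer sizes are assumptions, and the general deformed case would replace \mu and \sigma^2 by the matrix elements of equation 25, via computeMu and computeGamma of Procedure 2.)

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
N, M = 256, 64
p = rng.uniform(size=N)            # activation probabilities p_i(h_i = 1)
q = rng.uniform(size=(M, N))       # weight probabilities q_{ji}(W_{ji} = 1)

mu_ji = q * p                      # equation 22: E[W_ji h_i]
var_ji = mu_ji * (1.0 - mu_ji)     # Bernoulli variance of each term
mu = mu_ji.sum(axis=1)             # mean of the preactivation phi_j
sigma = np.sqrt(var_ji.sum(axis=1))

# Equation 23: probability that the thresholded output h'_j equals 1.
p_out = norm.cdf((2.0 * mu - N) / (2.0 * sigma))
print(p_out[:5])
```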
Running these models on a quantum computer will allow one to explore efficientlymore general deformations, in particular those that cannot be approximated by the central limittheorem when the Hamiltonians will be sums of non-commuting operators. Another interestingfuture direction is to incorporate batch normalization and pooling layers in quantum neural networks.An outstanding question in quantum machine learning is to find quantum advantages for classicalmachine learning tasks. The class of known problems for which a quantum learner can have aprovably exponential advantage over a classical learner is small at the moment Liu et al. (2020), andsome problems that are classically hard to compute can be predicted easily with classical machinelearning Huang et al. (2020). The approach presented here is the next step in a series of papers thattries to benchmark quantum neural networks empirically, e.g. Farhi & Neven (2018); Huggins et al.(2019); Grant et al. (2019; 2018); Bausch (2020). We are the first to show that towards the limit ofentangling circuits the quantum inspired architecture does improve relative to the classical one forreal world data sizes.9Under review as a conference paper at ICLR 2021<|im_end|> <|im_start|>assistant ### Review Title too many crucial details are deferred to the supplementary material ### Review Text The fundamental idea in this paper is to endow classical signals with a complex Hilberspace structure so as to harness the probabilistic nature of quantum mechaniscs for pattern recognition based on quantum computing principles. Following this idea, the authors consider (probabiistic) binary neural networks and develop a corresponding kind of quantum neural networks building on the concept of quantum phase estimation. They (convincingly) argue that this idea would make good use of the exponential speedup offered by quantum computers but can still be implemented on classical, digital devices. Experiments with image data corroborate this claim and demonstrate that simulations of the proposed quantum deformation showed improved accuracies when compared to classical probabilistic binary neural networks. The Ideas brought forth in this paper appear to be novel and (to those with a solid background in quantum computing) technically sound. Overall, the paper presents an interesting approach towards quantum computational intelligence which, though likely not yet realizable on real quantum hardware (due to technical limitations such as measurement noise or decoherence times) can also be simulated digitally (in an arguably elegant manner). However, there also are concerns regarding the quality of this manuscript. In additions to several broken LaTeX links (e.g. a reference to Table 4.2 which appears to mean to refer to Table 1) which unnecessarily hamper readability, the overall presentation is lackingcan. Most critically, the paper can hardly be considered self-contained. Numerous important details are deferred to the supplementary material where the authors cover algorithmic details as well as details as to their experimental procedures. In other words, the supplementary material is not just supplementary but crucial for understanding / assessing the content of this work. While this would not justfy an outright rejection, it still diminishes the overall quality of this paper. ### Review Rating 6: Marginally above acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
XsQLS6Ls5-
NeurIPS.cc/2022/Workshop/Offline_RL
2022
Model-based Trajectory Stitching for Improved Offline Reinforcement Learning
["Charles Alexander Hepburn", "Giovanni Montana"]
In many real-world applications, collecting large and high-quality datasets may be too costly or impractical. Offline reinforcement learning (RL) aims to infer an optimal decision-making policy from a fixed set of data. Getting the most information from historical data is then vital for good performance once the policy is deployed. We propose a model-based data augmentation strategy, Trajectory Stitching (TS), to improve the quality of sub-optimal historical trajectories. TS introduces unseen actions joining previously disconnected states: using a probabilistic notion of state reachability, it effectively ‘stitches’ together parts of the historical demonstrations to generate new, higher quality ones. A stitching event consists of a transition between a pair of observed states through a synthetic and highly probable action. New actions are introduced only when they are expected to be beneficial, according to an estimated state-value function. We show that using this data augmentation strategy jointly with behavioural cloning (BC) leads to improvements over the behaviour-cloned policy from the original dataset. Improving over the BC policy could then be used as a launchpad for online RL through planning and demonstration-guided RL.
["policy", "trajectory", "improved offline reinforcement", "data augmentation strategy", "many", "applications", "large", "datasets", "costly"]
Model-based Trajectory Stitching for ImprovedOffline Reinforcement LearningCharles A. Hepburn1Giovanni Montana1,21University of Warwick2Alan Turing Institute{Charlie.Hepburn,g.montana}@warwick.ac.ukAbstractIn many real-world applications, collecting large and high-quality datasets maybe too costly or impractical. Offline reinforcement learning (RL) aims to inferan optimal decision-making policy from a fixed set of data. Getting the mostinformation from historical data is then vital for good performance once the policyis deployed. We propose a model-based data augmentation strategy, TrajectoryStitching (TS), to improve the quality of sub-optimal historical trajectories. TSintroduces unseen actions joining previously disconnected states: using a prob-abilistic notion of state reachability, it effectively ‘stitches’ together parts of thehistorical demonstrations to generate new, higher quality ones. A stitching eventconsists of a transition between a pair of observed states through a synthetic andhighly probable action. New actions are introduced only when they are expected tobe beneficial, according to an estimated state-value function. We show that usingthis data augementation strategy jointly with behavioural cloning (BC) leads toimprovements over the behaviour-cloned policy from the original dataset. Improv-ing over the BC policy could then be used as a launchpad for online RL throughplanning and demonstration-guided RL.1 IntroductionBehavioural cloning (BC) [ 51,52] is one of the simplest imitation learning methods to obtain adecision-making policy from expert demonstrations. BC treats the imitation learning problem as asupervised learning one. Given expert trajectories - the expert’s paths through the state space - a policynetwork is trained to reproduce the expert behaviour: for a given observation, the action taken by thepolicy must closely approximate the one taken by the expert. Although a simple method, BC hasshown to be very effective across many application domains [ 51,55,32,50], and has been particularlysuccessful in cases where the dataset is large and has wide coverage [ 13]. An appealing aspect ofBC is that it is applied in an offline setting, using only the historical data. Unlike reinforcementlearning (RL) methods, BC does not require further interactions with the environment. Offline policylearning can be advantageous in many circumstances, especially when collecting new data throughinteractions is expensive, time-consuming or dangerous; or in cases where deploying a partiallytrained, sub-optimal policy in the real-world may be unethical, e.g. in autonomous driving andmedical applications.BC extracts the behaviour policy which created the dataset. Consequently, when applied to sub-optimal data (i.e. when some or all trajectories have been generated by non-expert demonstrators),the resulting behavioural policy is also expected to be sub-optimal. This is due to the fact that BChas no mechanism to infer the importance of each state-action pair. Other drawbacks of BC are itstendency to overfit when given a small number of demonstrations and the state distributional shiftbetween training and test distributions [ 54,13]. In the area of imitation learning, significant effortshave been made to overcome such limitations, however the available methodologies generally rely3rd Offline Reinforcement Learning Workshop at Neural Information Processing Systems, 2022.StateActionGenerated actionStitching eventState ValueFigure 1: Simplified illustration of Trajectory Stitching. 
Each original trajectory (a sequence of statesand actions) in the dataset Dis indicated as Tiwithi= 1,2,3. A first stitching event is seen intrajectory T1whereby a transition to a state originally visited in T2takes place. A second stitchingevent involves a jump to a state originally visited in T3. At each event, jumping to a new stateincreases the current trajectory’s future expected returns. The resulting trajectory (in bold) consists ofa sequence of states, all originally visited in D, but connected by imagined actions; it replaces T1inthe new dataset.on interacting with the environment [ 54,18,28,47]. So, a question arises: can we help BC infer asuperior policy only from available sub-optimal data without the need to collect additional expertdemonstrations?Our investigation is related to the emerging body of work on offline RL, which is motivated bythe aim of inferring expert policies with only a fixed set of sub-optimal data [ 46,48]. A majorobstacle towards this aim is posed by the notion of action distributional shift [23,43,48]. This isintroduced when the policy being optimised deviates from the behaviour policy, and is caused bythe action-value function overestimating out-of-distribution (OOD) actions. A number of existingmethods address the issue by constraining the actions that can be taken. In some cases, this isachieved by constraining the policy to actions close to those in the dataset [ 23,43,60,31,67,21],or by manipulating the action-value function to penalise OOD actions [ 45,1,39,62]. In situationswhere the data is sub-optimal, offline RL has been shown to recover a superior policy to BC [ 23,44].Improving BC will in turn improve many offline RL policies that rely on an explicit behaviour policyof the dataset [2, 64, 21].In contrast to existing offline learning approaches, we turn the problem on its head: rather thantrying to regularise or constrain the policy somehow, we investigate whether the data itself can beenriched using only the available demonstrations and an improved policy derived through a standardBC algorithm, without any additional modifications. To explore this new avenue, we propose amodel-based data augmentation method called Trajectory Stitching (TS). Our ultimate aim is todevelop a procedure that identifies sub-optimal trajectories and replaces them with better ones. Newtrajectories are obtained by stitching existing ones together, without the need to generate unseenstates. The proposed strategy consists of replaying each existing trajectory in the dataset: for eachstate-action pair leading to a particular next state along a trajectory, we ask whether a different actioncould have been taken instead, which would have landed at a different seen state from a differenttrajectory. An actual jump to the new state only occurs when generating such an action is plausibleand it is expected to improve the quality of the original trajectory - in which case we have a stitchingevent .An illustrative representation of this procedure can be seen in Figure 1, where we assume to haveat our disposal only three historical trajectories. In this example, a trajectory has been improvedthrough two stitching events. In practice, to determine the stitching points, TS uses a probabilisticview of state-reachability that depends on learned dynamics models of the environment. Thesemodels are evaluated only on in-distribution states enabling accurate prediction. 
In order to assess the expected future improvement introduced by a potential stitching event, we utilise a state-value function and reward model. Thus, TS can be thought of as a data-driven, automated procedure yielding highly plausible and higher-quality demonstrations to facilitate supervised learning; at the same time, sub-optimal demonstrations are removed altogether whilst keeping the diverse set of seen states.

Demonstrations can be used to guide RL and to speed up the learning of online RL. In these cases, BC can be used to initialise or regularise the training policy [53, 49]. Running TS on the datasets beforehand could improve sample efficiency further, as the initialised policies will be better, as well as regularising the policy towards an improved one. In future work we aim to leverage TS as a launchpad for online RL. Specifically, an improved BC policy would be useful in improving the sample efficiency of planning [2, 64] as well as the deployment efficiency of offline-to-online RL [25, 66].

Our experimental results show that TS produces higher-quality data, with BC-derived policies always superior to those inferred from the original data. Remarkably, we demonstrate that TS-augmented data allow BC to compete with SOTA offline RL algorithms on highly complex continuous control openAI gym tasks implemented in MuJoCo, using the D4RL offline benchmarking suite [20]. In terms of a larger system, BC-derived policies are used as a prior in many methods, so a principled approach to improving the BC policy could improve these methods also.

2 Problem setup

We consider the offline RL problem setting, which consists of finding an optimal decision-making policy from a fixed dataset. The policy is a mapping from states to actions, π : S → A, where S and A are the state and action spaces, respectively. The dataset is made up of transitions D = {(s_i, a_i, r_i, s'_i)}: the current state, s_i; the action performed in that state, a_i; the state in which the action takes the agent, s'_i; and the reward for transitioning, r_i. The actions have been taken by an unknown behaviour policy, π_β, acting in a Markov decision process (MDP). The MDP is defined as M = (S, A, P, R, γ), where P : S × A × S → [0, 1] is the transition probability function which defines the dynamics of the environment, R : S × A × S → R is the reward function, and γ ∈ (0, 1] is a scalar discount factor [59].

In offline RL, the agent must learn a policy, π*(a|s), that maximises the returns, defined as the expected sum of discounted rewards, E_π[Σ_{t=0}^∞ r_t γ^t], without ever having access to π_β. Here we are interested in performing imitation learning through BC, which mimics π_β by performing supervised learning on the state-action pairs in D [51, 52]. More specifically, assuming a deterministic policy, BC minimises

π_BC(s) = arg min_π E_{s,a∼D}[(π(s) − a)^2]. (1)

The resulting policy also minimises the KL-divergence between the trajectory distributions of the learned policy and π_β [34]. Our objective for TS is to improve the dataset, by replacing existing trajectories with high-return ones, so that BC can extract a higher-performing behaviour policy than the original. Many offline RL algorithms bias the learned policy towards the behaviour-cloned one [2, 21, 64] to ensure the policy does not deviate too far from the behaviour policy. Being able to extract a high-achieving policy would be useful in many of these offline RL methods.

3 Trajectory Stitching

Overview. The proposed data augmentation method, Trajectory Stitching, augments D by stitching together high value regions of different trajectories.
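(Before detailing how stitching events are found, we note that the BC objective of equation 1 is a plain regression, so the final policy-extraction step is straightforward. A minimal PyTorch sketch follows; the architecture and optimiser settings are our assumptions, not the paper's choices.)

```python
import torch
import torch.nn as nn

def make_bc_policy(state_dim, action_dim, hidden=256):
    """Deterministic policy network for behavioural cloning."""
    return nn.Sequential(
        nn.Linear(state_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, action_dim), nn.Tanh(),
    )

def bc_step(policy, optimiser, states, actions):
    """One gradient step on the mean-squared BC objective of equation 1."""
    loss = ((policy(states) - actions) ** 2).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Example usage: policy = make_bc_policy(17, 6)
#                optimiser = torch.optim.Adam(policy.parameters(), lr=3e-4)
```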
Stitching events are discovered by searching forcandidate next states which lead to higher returns. These higher quality states are determined by astate-value function, V(s), which is trained using the historical data. This function is unaffected bydistributional shift due to only being evaluated on in-distribution states.Suppose that the transition (s, a, s′)came from some trajectory TiinD, for which the joint densityfunction is p(s, a, s′)∝p(s′|s)p(a|s, s′); here, p(s′|s)represents the environment’s forward dynam-ics and p(a|s, s′)is its inverse dynamics. Our aim is to replace s′andawith a candidate next state,ˆs′and connecting action ˆa, which leads to higher returns. To generate a new transition, first we lookfor a candidate next state, ˆs′̸=s′, amongst all the states in D, that has been visited by any othertrajectory. A suitable criterion to evaluate next state candidates is given by the forward dynamics;conditional on s, we require that the new next state must be at least as likely to have been observedass′, i.e. we impose p(ˆs′|s)≥p(s′|s). To be beneficial, the candidate next state must not only be3likely to be reached from sunder the environment dynamics, but must also lead to higher returnscompared to the current next state. Thus, we also require that, under the pre-trained state-valuefunction, V(ˆs′)> V(s′). Where both these conditions are satisfied, a plausible action connectingsand the newly found ˆs′is obtained by finding an action that maximises the inverse dynamics, i.e.arg maxˆap(ˆa|s,ˆs′). When the process is completed, we have a stitching event .For each trajectory TiinD, we sequentially consider all its transitions (s, a, s′)until a stitching eventtakes place, which leads to a different trajectory, Tj. This process is then repeated for Tj, starting atthe current state, until no more stitching events are possible. For example, let us have two trajectoriesT1andT2, with lengths NandMrespectively. TS stitches time point ninT1to time point minT2which would lead to a new trajectory to replace T1,(s(1)1, a(1)1, s(1)2, . . . , s(1)n−1, a(1)n−1, s(1)n,ˆa, s(2)m, a(2)m, s(2)m+1, . . . , a(2)M−1, s(2)M).Here s(i)j, a(i)jrepresents a state-action pair for Tiat time point j. Upon completing this process, wehave created new and plausible trajectories, under the empirical state distribution, with overall higherexpected cumulative returns.In practice, we do not assume that the forward dynamics, inverse dynamics, reward function andstate-value function are known; hence they need to be estimated from the available data. In theremainder of this section we describe the models used to infer these quantities. Algorithm 1 (seeAppendix) details the full TS procedure.Next state search via a learned dynamics model. The search for a candidate next state requiresa learned forward dynamics model, i.e. p(s′|s). Model-based RL approaches typically use suchdynamics’ models conditioned on the action as well as the state to make predictions [ 30,63,36,2].Here, we use the model differently, only to guide the search process and identify of a suitable nextstate to transition to. Specifically, conditional on s, the dynamics model is used to assess the relativelikelihood of observing any other s′in the dataset compared to the observed one.The environment dynamics are assumed to be Gaussian, and we use a neural network to predictthe mean vector and covariance matrix, i.e. ˆpξ(st+1|st) =N(μξ(st),Σξ(st)); here, ξindicate theparameters of the neural network. 
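A sketch of one such dynamics model is given below; the diagonal covariance and the network sizes are simplifying assumptions on our part. Its negative log-density, averaged over the data, is the per-model training loss stated next.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """One ensemble member, p_xi(s_{t+1} | s_t) = N(mu_xi(s_t), Sigma_xi(s_t)),
    with a diagonal covariance assumed for simplicity."""
    def __init__(self, state_dim, hidden=200):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, state_dim)
        self.log_var = nn.Linear(hidden, state_dim)

    def log_prob(self, s, s_next):
        """Gaussian log-density of s_next given s, up to an additive constant;
        its negation is the maximum-likelihood loss."""
        z = self.body(s)
        mu, log_var = self.mean(z), self.log_var(z)
        return -0.5 * (((s_next - mu) ** 2) * torch.exp(-log_var) + log_var).sum(-1)
```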
Modelling the environment dynamics as a Gaussian distribution iscommon for continuous state-space applications [ 30,63,36,62]. Furthermore, we take an ensembleEofNdynamics models, {ˆpiξ(st+1|st) =N(μiξ,Σiξ)}Ni=1. Each model is trained via maximumlikelihood estimation so it minimises the following lossLˆp(ξ) =Es,s′∼D[(μξ(s)−s′)TΣ−1ξ(s)(μξ(s)−s′) + log |Σξ(s)|],where|·|refers to the determinant of a matrix. Each model’s parameter vector is initialised differently;using such an ensemble strategy has been shown to take into account the epistemic uncertainty, i.e.the uncertainty in the model parameters [7, 12, 2, 62].Once the models have been fitted, to decide whether ˆs′can replace s′along any trajectory, we take aconservative approach by requiring thatmini∈Eˆpiξ(ˆs′|s)>meani∈Eˆpiξ(s′|s).where the minimum and mean are taken over the ensemble Eof dynamics models.Value function estimation and reward prediction model. Value functions are widely used inreinforcement learning to determine the quality of an agent’s current position [ 59]. In our context, weuse a state-value function to assess whether a candidate next state offers a potential improvementover the original next state. To accurately estimate the future returns given the current state, wecalculate a state-value function dependent on the behaviour policy of the dataset. The functionVθ(s)is approximated by a MLP neural network parameterised by θ. The parameters are learned byminimising the squared Bellman error [59],LV(θ) =Es,r,s′∼D[(r+γVθ(s′)−Vθ(s))2]. (2)Vθis only used to observe the value of in-distribution states, thus avoiding the OOD issue whenevaluating value functions which occurs in offline RL. The value function will only be queried once acandidate new state has been found such that p(ˆs′|s)≥p(s′|s).4Value functions require rewards for training, therefore a reward must be estimated for unseen tuples(s,ˆa,ˆs′). To this end, we train a conditional Wasserstein-GAN [ 24,3] consisting of a generator, Gφand a discriminator Dψ, with parameters of the neural networks φandψrespectively. A WassersteinGAN is used due to the training stability over GANs [ 3], as well as their predictive performanceover MLPs and V AEs. The discriminator takes in the state, action, reward, next state and determineswhether this transition is from the dataset. The generator loss function is:LG(φ) =E z∼p(z)s,a,s′∼D ̃r∼Gφ(z,s,a,s′)[Dψ(s, a, s′, ̃r)].Here z∼p(z)is a noise vector sampled independently from N(0,1), the standard normal. Thediscriminator loss function is:LD(ψ) =Es,a,r,s′∼D[Dψ(s, a, s′, r)]−E z∼p(z)s,a,s′∼D ̃r∼Gφ(z,s,a,s′)[Dψ(s, a, s′, ̃r)].Once trained, a reward will be predicted for the stitching event when a new action has been generatedbetween two previously disconnected states.Action generation via an inverse dynamics model. Sampling a suitable action that leads fromsto the newly found state ˆs′requires an inverse dynamics model. Specifically, we require that asynthetic action must maximise the estimated conditional density, p(a|s,ˆs′). To this end, we train aconditional variational autoencoder (CV AE) [ 38,57], consisting of an encoder qω1and a decoder pω2where ω1andω2are the respective parameters of the neural networks.The encoder converts the input data into a lower-dimensional latent representation zwhereas thedecoder generates data from the latent space. 
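(Before stating the CVAE objective, the pieces above can be combined into the acceptance test for a candidate next state. This is a sketch, assuming an `ensemble` of models exposing the `log_prob` interface of the earlier snippet and a fitted value network `value`; the additive constant dropped from `log_prob` is common to both sides and so cancels in the comparison.)

```python
import math
import torch

def accept_stitch(ensemble, value, s, s_next, s_cand):
    """Stitching test for a transition (s, a, s') and a candidate state s_cand:
    conservative reachability, min_i p_i(s_cand | s) > mean_i p_i(s' | s),
    together with the value-improvement condition V(s_cand) > V(s')."""
    with torch.no_grad():
        log_cand = torch.stack([m.log_prob(s, s_cand) for m in ensemble])
        log_orig = torch.stack([m.log_prob(s, s_next) for m in ensemble])
        # Mean over ensemble probabilities, computed stably in log space.
        log_mean_orig = torch.logsumexp(log_orig, 0) - math.log(len(ensemble))
        reachable = log_cand.min() > log_mean_orig
        improves = value(s_cand) > value(s_next)
    return bool(reachable) and bool(improves.all())
```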
The CV AE objective is to maximise logp(a|s,ˆs′)bymaximising its lower boundmaxω1,ω2logp(a|s,ˆs′, z)≥maxω1,ω2Ez∼qω1[logpω2(a|s,ˆs′, z)]−DKL[qω1(z|a, s,ˆs′)||P(z|s,ˆs′)],where z∼ N (0,1)is the prior for the latent variable z, and DKLrepresents the KL-divergence[42, 41]. This process ensures that the most plausible action is generated conditional on sandˆs′.Iterated TS and BC. TS is run for multiple iterations, updating the value function before eachone based on the new data and improved behaviour policy. All other models remain fixed as we donot have any updated information about the underlying MDP. From the new dataset, we extract thebehaviour policy using BC, minimising Equation (1). We train BC for 100k gradient steps, reportingthe best policy from checkpoints of every 10k steps from 40k onwards. This ensures that BC hastrained enough and does not overfit.4 Experimental resultsIn this section, we provide empirical evidence that TS can produce higher-quality datasets, comparedto the original data, by showing BC infers improved policies without collecting any more data fromthe environment. We call a BC policy run on a TS dataset TS+BC. We compare our method withselected offline RL methods using D4RL datasets. This is to give an insight into how much TS canimprove BC by reaching the SOTA performance level of offline RL.Performance assessment on D4RL data. To investigate the benefits of TS+BC as an offline policylearning strategy, we compare its performance with selected state-of-the-art offline RL methods:TD3+BC [ 21], IQL [ 40], MBOP [ 2] and Diffuser [ 29]. These baselines represent model-freeand model-based methods and achieve top results. We make the comparisons on the D4RL [ 20]benchmarking datasets of the openAI gym MuJoCo tasks; see Table 1. Three complex continuousenvironments are tested: Hopper, Halfcheetah and Walker2d, with different levels of difficulty. The“medium" datasets were gathered by the original authors using a single policy produced from theearly-stopping of an agent trained by soft actor-critic (SAC) [ 26,27]. The “medium-replay" datasetsare the replay buffers from the training of the “medium" policies. The “expert" datasets were obtainedfrom a policy trained to an expert level, and the “medium-expert" datasets are the combination ofboth the “medium" and “expert" datasets. In all the cases we have considered, TS+BC outperforms5Dataset TD3+BC IQL MBOP Diffuser BC TS+BC (ours)hopper-medium 59.3 66.3 48.8 58.5 55.3 64.3±4.2(+16 .3%)halfcheetah-medium 48.3 47.4 44.6 44.2 42.9 43.2±0.3(+0.7%)walker2d-medium 83.7 78.3 41.0 79.7 75.6 78.8±1.2(+4.2%)Average-medium 63.8 64.0 44.8 60.8 57.9 62.1hopper-medexp 98.0 91.5 55.1 107.2 62.3 94.8±11.7(+52 .2%)halfcheetah-medexp 90.7 86.7 105.9 79.8 60.7 86.9±2.5(+43 .2%)walker2d-medexp 110.1 109.6 70.2 108.4 108.2 108.8±0.5(+0.6%)Average-medexp 99.6 95.9 77.1 98.5 77.1 96.8hopper-medreplay 60.9 94.7 12.4 96.8 29.6 50.2±17.2(+69 .6%)halfcheetah-medreplay 44.6 44.2 42.3 42.2 38.5 39.8±0.6(+3.4%)walker2d-medreplay 81.8 73.9 9.7 61.2 34.7 61.5±5.6(+77 .2%)Average-medreplay 62.4 70.9 21.5 66.7 34.3 50.5hopper-expert 107.8 - - - 111.0 111.8±0.5(+0.7%)halfcheetah-expert 96.7 - - - 92.9 93.2±0.6(+0.3%)walker2d-expert 110.2 - - - 109.0 108.9±0.2(−0.1%)Average-expert 104.9 - - - 104.3 104.6Table 1: Average normalised scores achieved on three locomotion tasks (Hopper, Halfcheetah andWalker2d) using the D4RL v2 data sets. The results for competing methods have been gathered fromthe original publications. 
Bold scores represent values within 5%of the highest average score of thelevels of difficulty. TS+BC: In brackets we report the percentage improvement achieved by BC afterTS relative to the BC baseline.the BC baseline, showing that TS creates a higher quality dataset as claimed. Also, while only everusing BC to obtain the final policy, TS+BC is very competitive with current state-of-the-art offlineRL methods, especially for the medium, medium-expert and expert datasets. For medium-replaydatasets, although TS+BC still attains much higher performing policies than the original BC, weobserve lower competitiveness against other offline DRL methods. Due to the way these datasets havebeen developed, they would appear to be more naturally suited to dynamical programming-basedalgorithms.Implementation details. Calculating p(s′|s)for all s′∈ D may be computationally inefficient.To speed this up in the MuJoCo environments, we initially select a smaller set of candidate nextstates by thresholding the Euclidean distance. Although on its own a geometric distance would not besufficient to identify stitching events, we found that in our environments it can help reduce the set ofcandidate next states thus alleviating the computational workload.To pre-select a smaller set of candidate next states, we use two criteria. Firstly, from a transition(s, a, r, s′)∈ D, a neighbourhood of states around sis taken and the following state in the trajectoryis collected. Secondly, all the states in a neighbourhood around s′are collected. This process ensuresall candidate next states are geometrically-similar to s′or are preceded by geometrically-similarstates. The neighbourhood of a state is an ε−ballaround the state. When εis large enough, we canretain all feasible candidate next states for evaluation with the forward dynamic model. Figure 4 (seeAppendix) illustrates this procedure.5 ConclusionsIn this paper, we have proposed a data augmentation strategy, Trajectory Stitching, which can beapplied to historical datasets containing demonstrations of sequential decisions taken to solve acomplex task. Without further interactions with the environment, TS can improve the quality of thedemonstrations, which in turn has the effect of boosting the performance of BC-extracted policiessignificantly. This method could be used to extract an improved explicit behavioural cloning policyregulariser for offline RL. This would be specifically important for an offline planning algorithm.TS+BC can be leveraged further for online RL, where the learned policy can be initialised using BC6or the sample efficiency of the algorithm is improved by regularising towards the behavioural policyof demonstrations.BC is used in many offline RL algorithms, such as a prior policy in offline planning [ 2,64] andas a policy regulariser [ 21]. Although in this paper we have not explored the potential benefits ofcombining TS with offline reinforcement learning algorithms, our results on the D4RL benchmarkingdatasets show that TS improves over the initial BC policy, and can in fact reach the level of state-of-the-art offline RL methods. This suggests that many methods could be improved by employingTS either to find a better behavioural cloned policy or by enhancing an initial dataset. 
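(Returning briefly to the implementation details above, the two-criteria ε-ball pre-selection might be sketched as follows. The array layout is our assumption: `states` and `next_states` stack the s and s' of D, and `succ_idx[i]` gives the index of the state that follows state i along its trajectory.)

```python
import numpy as np

def preselect_candidates(i, states, next_states, succ_idx, eps):
    """Pre-select candidate next states for transition i by the two epsilon-ball
    criteria: successors of states near s_i, and states near s'_i themselves."""
    near_s = np.linalg.norm(states - states[i], axis=1) < eps
    crit1 = succ_idx[near_s]                                   # successors of neighbours of s
    crit2 = np.nonzero(np.linalg.norm(states - next_states[i], axis=1) < eps)[0]
    return np.union1d(crit1, crit2)   # candidate indices, to be screened afterwards
                                      # with the forward dynamics model
```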
We expectTS to be used as a “first- step" to fully leverage the given dataset, enriching the dataset by addinghighly-likely (under the environment dynamics) transitions.Upon acceptance of this paper, we learned about a related methodology called BATS (Best ActionTrajectory Stitching) [ 10]. BATS augments the dataset by adding transitions from planning using alearned model of the environment. Our model-based TS approach differs from BATS in a number offundamental ways. First, BATS takes a geometric approach to defining state similarity; state-actionsare rolled-out using the dynamics model until a state is found that is within δof a state in the dataset.Using geometric distances is often inappropriate; e.g. two states may be close in Euclidean distance,yet reaching one from another may be impossible (e.g. in navigation task environments where wallsor other obstacles preclude reaching a nearby state). As such, our stitching events are based on thedynamics of the environment and are only assessed between two in-distribution states. Second, BATSallows stitches between states that are k-steps apart; this means the reward function needs to bepenalised to favour state-action pairs in the original dataset, as model error can compound resultingin unlikely rollouts. In contrast, we only allow one-step stitching between in-distribution states anduse the value function to extend the horizon rather than a learned model, this means all our modelscan be trusted to give accurate predictions without the need for penalties. Finally, BATS adds allstitched actions to the original dataset, then create a new dataset by running value iteration, which iseventually used to learn a policy through BC. This raises many questions about the number of newtrajectories need to be collected in this way to extract an optimal policy using BC, as well as otherpolicy learning approaches more suitable to this set up. Our method, is much more suited to policylearning by BC, as after performing TS we are left with a dataset with only high-quality trajectories,where the low-value parts are removed after the stitching event.We believe that model-based TS opens up a number of directions for future work. For example, itcan be extended to multi-agent offline policy learning, for which initial attempts have been madeto control the distributional shift with numerous agents [ 61]. TS could even be used without thevalue function to increase the heterogeneity of the data without collecting new data. This could beused in conjunction with other offline imitation learning methods [ 9,19]. This line of investigationwould specifically be useful in situations where collecting new data is expensive or dangerous, butlearning from a larger, more heterogeneous data set with additional coverage is expected to improveperformance.6 AcknowledgementsCH acknowledges support from the Engineering and Physical Sciences Research Council through theMathematics of Systems Centre for Doctoral Training at the University of Warwick (EP/S022244/1).GM acknowledges support from a UKRI AI Turing Acceleration Fellowship (EP/V024868/1).References[1]Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offlinereinforcement learning with diversified q-ensemble. Advances in Neural Information ProcessingSystems , 34, 2021.[2]Arthur Argenson and Gabriel Dulac-Arnold. Model-based offline planning. arXiv preprintarXiv:2008.05556 , 2020.[3]Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarialnetworks. 
In International conference on machine learning , pages 214–223. PMLR, 2017.7[4]Giorgio Bacci, Giovanni Bacci, Kim G Larsen, and Radu Mardare. Computing behavioraldistances, compositionally. In International Symposium on Mathematical Foundations ofComputer Science , pages 74–85. Springer, 2013.[5]Giorgio Bacci, Giovanni Bacci, Kim G Larsen, and Radu Mardare. On-the-fly exact compu-tation of bisimilarity distances. In International Conference on Tools and Algorithms for theConstruction and Analysis of Systems , pages 1–15. Springer, 2013.[6]Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang,and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540 , 2016.[7]Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. Advances in neuralinformation processing systems , 31, 2018.[8]Pablo Samuel Castro. Scalable methods for computing state similarity in deterministic markovdecision processes. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 34,pages 10069–10076, 2020.[9]Jonathan Chang, Masatoshi Uehara, Dhruv Sreenivas, Rahul Kidambi, and Wen Sun. Mitigatingcovariate shift in imitation learning via offline data with partial coverage. Advances in NeuralInformation Processing Systems , 34:965–979, 2021.[10] Ian Char, Viraj Mehta, Adam Villaflor, John M Dolan, and Jeff Schneider. Bats: Best actiontrajectory stitching. arXiv preprint arXiv:2204.12026 , 2022.[11] Di Chen, Franck van Breugel, and James Worrell. On the complexity of computing probabilisticbisimilarity. In International Conference on Foundations of Software Science and ComputationalStructures , pages 437–451. Springer, 2012.[12] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcementlearning in a handful of trials using probabilistic dynamics models. Advances in neuralinformation processing systems , 31, 2018.[13] Felipe Codevilla, Eder Santana, Antonio M López, and Adrien Gaidon. Exploring the limitationsof behavior cloning for autonomous driving. In Proceedings of the IEEE/CVF InternationalConference on Computer Vision , pages 9329–9338, 2019.[14] Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, andMatthieu Geist. Offline reinforcement learning with pseudometric learning. In InternationalConference on Machine Learning , pages 2307–2318. PMLR, 2021.[15] Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I Jordan, Joseph E Gonzalez, and SergeyLevine. Model-based value estimation for efficient model-free reinforcement learning. arXivpreprint arXiv:1803.00101 , 2018.[16] Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite markov decisionprocesses. In UAI, volume 4, pages 162–169, 2004.[17] Norman Ferns, Pablo Samuel Castro, Doina Precup, and Prakash Panangaden. Methods forcomputing state similarity in markov decision processes. arXiv preprint arXiv:1206.6836 , 2012.[18] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimalcontrol via policy optimization. In International conference on machine learning , pages 49–58.PMLR, 2016.[19] Pete Florence, Corey Lynch, Andy Zeng, Oscar A Ramirez, Ayzaan Wahid, Laura Downs,Adrian Wong, Johnny Lee, Igor Mordatch, and Jonathan Tompson. Implicit behavioral cloning.InConference on Robot Learning , pages 158–168. PMLR, 2022.[20] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. 
D4rl: Datasets fordeep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219 , 2020.[21] Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning.Advances in Neural Information Processing Systems , 34, 2021.8[22] Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation errorin actor-critic methods. In International conference on machine learning , pages 1587–1596.PMLR, 2018.[23] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learningwithout exploration. In International Conference on Machine Learning , pages 2052–2062.PMLR, 2019.[24] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, SherjilOzair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neuralinformation processing systems , 27, 2014.[25] Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relaypolicy learning: Solving long-horizon tasks via imitation and reinforcement learning. arXivpreprint arXiv:1910.11956 , 2019.[26] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Internationalconference on machine learning , pages 1861–1870. PMLR, 2018.[27] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan,Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithmsand applications. arXiv preprint arXiv:1812.05905 , 2018.[28] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. Advances in neuralinformation processing systems , 29, 2016.[29] Michael Janner, Yilun Du, Joshua B Tenenbaum, and Sergey Levine. Planning with diffusionfor flexible behavior synthesis. arXiv preprint arXiv:2205.09991 , 2022.[30] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model:Model-based policy optimization. Advances in Neural Information Processing Systems , 32,2019.[31] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza,Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcementlearning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456 , 2019.[32] M Waleed Kadous, Claude Sammut, and R Sheh. Behavioural cloning for robots in unstructuredenvironments. In Advances in Neural Information Processing Systems Workshop , 2005.[33] Gabriel Kalweit and Joschka Boedecker. Uncertainty-driven imagination for continuous deepreinforcement learning. In Conference on Robot Learning , pages 195–206. PMLR, 2017.[34] Liyiming Ke, Sanjiban Choudhury, Matt Barnes, Wen Sun, Gilwoo Lee, and Siddhartha Srini-vasa. Imitation learning as f-divergence minimization. In International Workshop on theAlgorithmic Foundations of Robotics , pages 313–329. Springer, 2020.[35] Mete Kemertas and Tristan Aumentado-Armstrong. Towards robust bisimulation metric learning.Advances in Neural Information Processing Systems , 34, 2021.[36] Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel:Model-based offline reinforcement learning. Advances in neural information processing systems ,33:21810–21823, 2020.[37] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.[38] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. 
arXiv preprintarXiv:1312.6114 , 2013.[39] Ilya Kostrikov, Rob Fergus, Jonathan Tompson, and Ofir Nachum. Offline reinforcementlearning with fisher divergence critic regularization. In International Conference on MachineLearning , pages 5774–5783. PMLR, 2021.9[40] Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicitq-learning. arXiv preprint arXiv:2110.06169 , 2021.[41] Solomon Kullback. Information theory and statistics . Courier Corporation, 1997.[42] Solomon Kullback and Richard A Leibler. On information and sufficiency. The annals ofmathematical statistics , 22(1):79–86, 1951.[43] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. Advances in Neural Information ProcessingSystems , 32, 2019.[44] Aviral Kumar, Joey Hong, Anikait Singh, and Sergey Levine. When should we prefer offlinereinforcement learning over behavioral cloning? arXiv preprint arXiv:2204.05618 , 2022.[45] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning foroffline reinforcement learning. Advances in Neural Information Processing Systems , 33:1179–1191, 2020.[46] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. InReinforcement learning , pages 45–73. Springer, 2012.[47] Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudik, Yisong Yue, and Hal Daumé III.Hierarchical imitation and reinforcement learning. In International conference on machinelearning , pages 2917–2926. PMLR, 2018.[48] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning:Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643 , 2020.[49] Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Over-coming exploration in reinforcement learning with demonstrations. In 2018 IEEE internationalconference on robotics and automation (ICRA) , pages 6292–6299. IEEE, 2018.[50] Tim Pearce and Jun Zhu. Counter-strike deathmatch with large-scale behavioural cloning. arXivpreprint arXiv:2104.04258 , 2021.[51] Dean A Pomerleau. Alvinn: An autonomous land vehicle in a neural network. Advances inneural information processing systems , 1, 1988.[52] Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation.Neural computation , 3(1):88–97, 1991.[53] Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, EmanuelTodorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcementlearning and demonstrations. arXiv preprint arXiv:1709.10087 , 2017.[54] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning andstructured prediction to no-regret online learning. In Proceedings of the fourteenth interna-tional conference on artificial intelligence and statistics , pages 627–635. JMLR Workshop andConference Proceedings, 2011.[55] Claude Sammut, Scott Hurst, Dana Kedzier, and Donald Michie. Learning to fly. In MachineLearning Proceedings 1992 , pages 385–393. Elsevier, 1992.[56] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trustregion policy optimization. In International conference on machine learning , pages 1889–1897.PMLR, 2015.[57] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation usingdeep conditional generative models. Advances in neural information processing systems , 28,2015.[58] Richard S Sutton. 
Dyna, an integrated architecture for learning, planning, and reacting. ACM Sigart Bulletin, 2(4):160–163, 1991.
[59] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.
[60] Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
[61] Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, and Qianchuan Zhao. Believe what you see: Implicit constraint approach for offline multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34:10299–10312, 2021.
[62] Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. Combo: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems, 34, 2021.
[63] Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. Mopo: Model-based offline policy optimization. Advances in Neural Information Processing Systems, 33:14129–14142, 2020.
[64] Xianyuan Zhan, Xiangyu Zhu, and Haoran Xu. Model-based offline planning with trajectory pruning. arXiv preprint arXiv:2105.07351, 2021.
[65] Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. arXiv preprint arXiv:2006.10742, 2020.
[66] Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, and Joni Pajarinen. Adaptive behavior cloning regularization for stable offline-to-online reinforcement learning. 2021.
[67] Wenxuan Zhou, Sujay Bajracharya, and David Held. Plas: Latent action space for offline reinforcement learning. arXiv preprint arXiv:2011.07213, 2020.

7 Appendix

7.1 Related work

Imitation learning. Imitation learning methods aim to emulate a policy from expert demonstrations. DAgger [54] is an online learning approach that iteratively updates a deterministic policy; it addresses the state distributional shift problem of BC through an on-policy method for data collection. Similarly to TS, the original dataset is augmented, but this involves online interactions. GAIL [28] iteratively updates a generative adversarial network [24] to determine whether a state-action pair can be deemed expert; a policy is then inferred using a trust region policy optimisation step [56]. TS also uses generative modelling, but only to create data points that are likely under the data distribution and that connect high-value regions. Whereas imitation learning relies on expert demonstrations, TS creates higher-quality datasets from existing, possibly sub-optimal data, to improve offline policy learning.

Offline reinforcement learning. Several model-free offline RL methods deal with distributional shift in two ways: 1) by regularising the policy to stay close to actions given in the dataset [23, 43, 60, 31, 67, 21] or 2) by pessimistically evaluating the Q-value to penalise OOD actions [1, 39, 45]. For instance, BCQ [23] uses a VAE to generate likely actions in order to constrain the policy. The TD3+BC algorithm [21] offers a simplified policy-constraint approach; it adds a behavioural cloning regularisation term to the policy update, biasing actions towards those in the dataset. Alternatively, CQL [45] adjusts the value of the state-action pairs to “push down” on OOD actions and “push up” on in-distribution actions. IQL [40] avoids querying OOD actions altogether by using a state-value function in a SARSA-style update of the Q-value. All the above methods try to deal directly with OOD actions, either by avoiding them or by handling them safely in either the policy improvement or evaluation step. In contrast, TS generates unseen actions between in-distribution states; by doing so, we avoid distributional shift by evaluating a state-value function only on seen states.
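To make the policy-constraint idea above concrete, the snippet below sketches a TD3+BC-style actor update [21], in which a Q-maximisation term is traded off against a behavioural cloning penalty. This is a minimal PyTorch sketch rather than the reference implementation; the `actor`/`critic` interfaces and the weighting constant `alpha` are illustrative assumptions.

```python
import torch

def td3_bc_actor_loss(actor, critic, batch, alpha=2.5):
    """TD3+BC-style policy loss: maximise Q while staying close to dataset actions.

    `actor` maps states to actions, `critic` maps (state, action) to Q-values;
    `batch` holds tensors of states and the actions taken in the dataset.
    """
    s, a = batch["states"], batch["actions"]
    pi = actor(s)                          # actions proposed by the current policy
    q = critic(s, pi)                      # critic's evaluation of those actions
    lam = alpha / q.abs().mean().detach()  # normalise the Q term, as in TD3+BC
    # the BC term pulls the policy towards in-distribution actions
    return -lam * q.mean() + torch.nn.functional.mse_loss(pi, a)
```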
Model-based algorithms rely on an approximation of the environment’s dynamics [58, 30]. In the online setting, they tend to improve sample efficiency [33, 30, 15, 7, 12]. In an offline learning context, the learned dynamics have been exploited in various ways. For instance, Model-based Offline policy Optimization (MOPO) [63] augments the dataset by performing rollouts using a learned, uncertainty-penalised, MDP. Unlike MOPO, TS does not introduce imagined states, but only actions between reachable unconnected states. Diffuser [29] uses a diffusion probabilistic model to predict a whole trajectory rather than a single state-action pair; it can generate unseen trajectories that have high likelihood under the data and maximise the cumulative rewards of a trajectory, ensuring long-horizon accuracy. In contrast, our generative models are not used for planning, hence we do not require sampling a full trajectory; instead, our models are designed to be evaluated only locally, ensuring one-step accuracy between s and ŝ′.

State similarity metrics. A central aspect of the proposed data augmentation method consists of defining the stitching event, which uses a notion of state similarity to determine whether two states are “close” together. Using geometric distances only would often be inappropriate; e.g. two states may be close in Euclidean distance, yet reaching one from the other may be impossible (e.g. in navigation task environments where walls or other obstacles preclude reaching a nearby state). Bisimulation metrics [16] capture state similarity based on the dynamics of the environment. These have been used in RL mainly for system state aggregation [17, 35, 65]; they are expensive to compute [11] and usually require full-state enumeration [4, 5, 14]. A scalable approach for state similarity has recently been introduced via a pseudometric [8], which made calculating state similarity possible for offline RL. PLOFF [14] is an offline RL algorithm that uses a state-action pseudometric to bias the policy evaluation and improvement steps, keeping the policy close to the dataset. Whereas PLOFF uses a pseudometric to stay close to the data, we can bypass this notion altogether by requiring reachability in one step.

7.2 Trajectory Stitching

The full procedure for the Trajectory Stitching method is outlined in Algorithm 1; a code sketch follows the listing.

Algorithm 1 Trajectory Stitching
Initialise: An action generator p_ω1, a reward generator G_φ, an ensemble of dynamics models {p̂ξ^i(s′|s)}, i = 1, ..., N, an acceptance threshold p̃, and a dataset D_0 made up of T trajectories (T_1, ..., T_T)
1: for k = 0, ..., K do
2:   Train the state-value function V on D_k by minimising Equation (2).
3:   for t = 1, ..., T do
4:     Select s, s′ = s_0, s′_0 ∈ T_t
5:     Initialise a new trajectory T̂_t
6:     while not done do
7:       Create a set of candidate states from the neighbourhood, {ŝ′_j} ∼ Neighbourhood
8:       Evaluate the dynamics models on the candidate states and take the minimum, min_i p̂ξ^i(ŝ′|s)
9:       if min_i p̂ξ^i(ŝ′_j|s) > mean_i p̂ξ^i(s′|s), V(ŝ′_j) = max_i V(ŝ′_i), and V(ŝ′_j) > V(s′) then
10:        Generate a new action and reward, ã ∼ p_ω1(z, s, ŝ′_j), r̃ ∼ G_φ(z, s, ã, ŝ′_j)
11:        Add (s, ã, r̃, ŝ′_j) to the new trajectory T̂_t
12:        Set s = ŝ′_j
13:      else
14:        Add the original transition (s, a, r, s′) to the new trajectory T̂_t
15:        Set s = s′
16:      end if
17:    end while
18:    if Σ_{T̂_t} r_i > (1 + p̃) · Σ_{T_t} r_i then
19:      keep T̂_t
20:    else
21:      set T̂_t = T_t
22:    end if
23:  end for
24:  Collect the trajectories into a dataset, D_{k+1} = (T̂_1, ..., T̂_T)
25: end for
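The following is a minimal, annotated Python sketch of the per-trajectory loop of Algorithm 1, assuming pre-trained callables for the value function, dynamics ensemble, action generator and reward generator. The function names and the `candidates` neighbourhood helper are illustrative assumptions, and the sketch simplifies one aspect: after a stitching event, the full method continues along the stitched-to trajectory, whereas here we keep iterating over the original transitions for brevity.

```python
import numpy as np

def stitch_trajectory(traj, V, dyn_ensemble, gen_action, gen_reward,
                      candidates, p_tilde=0.1):
    """One Trajectory Stitching pass over a single trajectory.

    traj: list of (s, a, r, s_next) tuples from the dataset.
    V: state-value function; dyn_ensemble: callables p_i(s_next, s) -> density.
    gen_action / gen_reward: conditional generative models (CVAE / WGAN).
    candidates(s): set of in-distribution candidate next states for s.
    """
    new_traj, s = [], traj[0][0]
    for (_, a, r, s_next) in traj:
        cands = candidates(s)
        # conservative reachability score: worst case over the ensemble
        scores = [min(p(c, s) for p in dyn_ensemble) for c in cands]
        baseline = np.mean([p(s_next, s) for p in dyn_ensemble])
        # keep only candidates at least as reachable as the observed next state
        ok = [c for c, sc in zip(cands, scores) if sc > baseline]
        if ok:
            best = max(ok, key=V)            # highest-value reachable candidate
            if V(best) > V(s_next):          # stitch only if it improves returns
                a_new = gen_action(s, best)
                r_new = gen_reward(s, a_new, best)
                new_traj.append((s, a_new, r_new, best))
                s = best
                continue
        new_traj.append((s, a, r, s_next))   # otherwise keep the original step
        s = s_next
    # accept the stitched trajectory only if its return improves by a margin
    if sum(t[2] for t in new_traj) > (1 + p_tilde) * sum(t[2] for t in traj):
        return new_traj
    return traj
```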
7.3 Further Experiments

Expected performance on sub-optimal data. BC minimises the KL-divergence of trajectory distributions between the learned policy and π_β [34]. As TS has the effect of improving π_β, this suggests that the KL-divergence between the trajectory distributions of the learned policy and the expert policy would be smaller post TS. To investigate this, we used two complex locomotion tasks, Hopper and Walker2d, in OpenAI’s gym [6]. Independently for each task, we first train an expert policy, π*, with TD3 [22], and use this policy to generate a baseline noisy dataset by sampling the expert policy in the environment and adding white noise to the actions, i.e. a = π*(s) + ε. A range of different, sub-optimal datasets are created by adding a certain amount of expert trajectories to the noisy dataset so that they make up x% of the total trajectories. Using this procedure, we create eight different datasets by controlling x, which takes values in the set {0, 0.1, 2.5, 5, 10, 20, 30, 40}. BC is run on each dataset for 5 random seeds. In all experiments we run TS for five iterations, as this is enough to increase the quality of the data without being overly computationally expensive (see Section 7.5 for results across different iterations). We run TS (for five iterations) on each dataset over three different random seeds and then train BC policies over the 5 random seeds, giving 15 TS+BC policies. Random seeds cause different TS trajectories, as they affect the latent variables sampled for the reward function and inverse dynamics model. Also, the initialisation of weights is randomised for the value function and BC policies, so the robustness of the methods is tested over multiple seeds.
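As a concrete illustration of the dataset-construction protocol just described, the sketch below generates an x%-expert mixture by rolling out a trained expert with white action noise. The noise scale and the expert interface are assumptions, and the rollout uses the older gym step/reset API.

```python
import gym
import numpy as np

def rollout(env, policy, noise_std=0.0):
    """Roll out `policy` for one episode, optionally with white action noise."""
    traj, s, done = [], env.reset(), False   # older gym API assumed
    while not done:
        a = policy(s) + noise_std * np.random.randn(env.action_space.shape[0])
        a = np.clip(a, env.action_space.low, env.action_space.high)
        s_next, r, done, _ = env.step(a)
        traj.append((s, a, r, s_next))
        s = s_next
    return traj

def make_mixed_dataset(env, expert, n_traj=100, expert_frac=0.1, noise_std=0.3):
    """Noisy-expert dataset with a given fraction of clean expert trajectories.

    `noise_std` is an illustrative value; the paper specifies only white noise.
    """
    n_expert = int(expert_frac * n_traj)
    noisy = [rollout(env, expert, noise_std) for _ in range(n_traj - n_expert)]
    clean = [rollout(env, expert, 0.0) for _ in range(n_expert)]
    return noisy + clean
```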
Figure 3 shows the scores as average returns from 10 trajectory evaluations of the learned policies. TS+BC consistently improves on BC across all levels of expertise, for both the Hopper and Walker2d environments. As the percentage of expert data increases, TS is able to leverage more high-value transitions, consistently improving over the BC baseline. Figure 2 (left) shows the average difference in KL-divergences of the BC and TS+BC policies against the expert policy. Precisely, the y-axis represents DKL(ρπ*(τ), ρπBC(τ)) − DKL(ρπ*(τ), ρπTS+BC(τ)), where ρπ(τ) is the trajectory distribution for policy π. So, a positive value represents the TS+BC policy being closer to the expert, and a negative value represents the BC policy being closer to the expert, with the absolute value representing the degree to which this is the case. We also scale the average KL-divergence between 0 and 1, where 0 is the smallest KL-divergence and 1 is the largest, per task; this makes the scale comparable between Hopper and Walker2d. The KL-divergences are calculated following [34], DKL(ρπ*(τ), ρπ(τ)) = E_{s∼ρπ*, a∼π*(s)}[log π*(a|s) − log π(a|s)]. The figure shows that BC can extract a behaviour policy closer to the expert after performing TS on the dataset, except in the 0% case for Walker2d; however, the difference there is not significant. TS seems to work particularly well with a minimum of 2.5% expert data for Hopper and 0.1% for Walker2d.

Furthermore, Figure 2 (middle and right) shows the mean square error (MSE) between actions from the expert policy and the learned policy for the Hopper (middle) and Walker2d (right) tasks. Actions are selected by collecting 10 trajectory evaluations of an expert policy. As we expect, the TS+BC policies produce actions closer to the expert’s on most levels of dataset expertise. The only surprising result is that, for 0% expert data on the Walker2d environment, the BC policy produces actions closer to the expert than the TS+BC policy. This is likely due to TS not having any expert data to leverage, so it cannot produce any expert trajectories. However, TS still produces a higher-quality dataset than before, as shown by the increased performance on the average returns. This offers empirical confirmation that TS does have the effect of improving the underlying behaviour policy of the dataset.

Figure 2: Estimated KL-divergence and MSE of the BC and TS+BC policies on the Hopper and Walker2d environments as the fraction of expert trajectories increases. (Left) Relative difference between the KL-divergence of the BC policy and the expert and the KL-divergence of the TS+BC policy and the expert. Larger values represent the TS+BC policy being closer to the expert than the BC policy. (Middle and Right) MSE between actions evaluated from the expert policy and the learned policy on states from the Hopper (middle) and Walker2d (right) environments; these y-axes are on a log scale. All policies were collected by training BC over 5 random seeds, with TS being evaluated over 3 different random seeds. All KL-divergences were scaled between 0 and 1, depending on the minimum and maximum values per task, before the difference was taken.

Figure 3: Comparative performance of BC and TS+BC as the fraction of expert trajectories increases up to 40%. For two environments, Hopper (left) and Walker2d (right), we report the average return of 10 trajectory evaluations of the best checkpoint during BC training. BC has been trained over 5 random seeds and TS has produced 3 datasets over different random seeds.
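The KL-divergence above can be estimated by Monte Carlo, averaging log-probability differences over states and actions drawn from expert rollouts. A minimal sketch follows, under the assumption that both policies expose a `dist(s)` method returning a torch action distribution.

```python
import torch

def kl_to_expert(expert, policy, expert_states, n_samples=1):
    """Monte Carlo estimate of D_KL(rho_expert || rho_policy), following [34].

    expert_states: tensor of states s ~ rho_pi* collected from expert rollouts;
    `expert` and `policy` are assumed to return torch distributions via dist(s).
    """
    total = 0.0
    for _ in range(n_samples):
        a = expert.dist(expert_states).sample()                    # a ~ pi*(s)
        logp_star = expert.dist(expert_states).log_prob(a).sum(-1)
        logp = policy.dist(expert_states).log_prob(a).sum(-1)
        total += (logp_star - logp).mean()
    return total / n_samples
```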
7.4 Further implementation details

In this section we report all the hyperparameters required for TS as used on the D4RL datasets. All hyperparameters have been kept the same for every dataset, notably the acceptance threshold of p̃ = 0.1. TS consists of four components: a forward dynamics model, an inverse dynamics model, a reward function and a value function. Table 2 provides an overview of the implementation details and hyperparameters for each TS component. As our default optimiser we have used Adam [37] with default hyperparameters, unless stated otherwise.

Forward dynamics model. Each forward dynamics model in the ensemble consists of a neural network with three hidden layers of size 200 with ReLU activation. The network takes a state s as input and outputs the mean μ and standard deviation σ of a Gaussian distribution N(μ, σ²). For all experiments, an ensemble size of 7 is used, with the best 5 models being chosen. A code sketch of one ensemble member is given below, after Figure 4.

Inverse dynamics model. To sample actions from the inverse dynamics model of the environment, we have implemented a CVAE with two hidden layers with ReLU activation. The size of the hidden layers depends on the size of the dataset [67]: when the dataset has fewer than 900,000 transitions (e.g. the medium-replay datasets) each layer has 256 nodes; when larger, 750 nodes. The encoder q_ω1 takes in a tuple consisting of state, action and next state; it encodes it into the mean μ_q and standard deviation σ_q of a Gaussian distribution N(μ_q, σ_q). The latent variable z is then sampled from this distribution and used as input to the decoder along with the state s and next state s′. The decoder outputs an action that is likely to connect s and s′. The CVAE is trained for 400,000 gradient steps with the hyperparameters given in Table 2.

Reward function. The reward function is used to predict the reward signal associated with a new transition (s, a, s′). For this model, we use a conditional WGAN with two hidden layers of size 512. The generator G_φ takes in a state s, action a, next state s′ and latent variable z; it outputs a reward r for that transition. The discriminator takes a full transition (s, a, r, s′) as input to determine whether this transition is likely to have come from the dataset or not.

Value function. Similarly to previous methods [23], our value function V_θ takes the minimum of two value functions, {V_θ1, V_θ2}. Each value function is a neural network with two hidden layers of size 256 and ReLU activation. The value function takes in a state s and estimates the sum of future rewards obtained from being in that state and following the policy (of the dataset) thereafter.

Table 2: Hyperparameters and values for the models used in TS
Model | Hyperparameter | Value
Forward dynamics model | Optimiser | Adam
 | Learning rate | 3e-4
 | Batch size | 256
 | Ensemble size | 7
Inverse dynamics model | Optimiser | Adam
 | Learning rate | 1e-4
 | Batch size | 100
 | Latent dim | 2 × action dim
Reward function | Optimiser | Adam, β = (0.5, 0.999)
 | Learning rate | 1e-4
 | Batch size | 256
 | Latent dim | 2
 | L2 regularisation | 1e-4
Value function | Optimiser | Adam
 | Learning rate | 3e-4
 | Batch size | 256

Figure 4: Visualisation of our two definitions of a neighbourhood. For a transition (s_t, a_t, s_{t+1}) ∈ D, the neighbourhoods are used to reduce the size of the set of candidate next states. (Left) All states within an ε-ball of the current state s_t are taken, and the next state in their respective trajectories (joined by an action, shown as an arrow) is added to the set of candidate next states. (Right) All states within an ε-ball of the next state s_{t+1} are added to the set of candidate next states. The full set of candidate next states is highlighted in yellow.
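For concreteness, here is a minimal PyTorch sketch of one member of the Gaussian dynamics ensemble described in the Forward dynamics paragraph above, trained with the maximum-likelihood loss from Section 3. A diagonal covariance is assumed for simplicity, and the state dimension shown (11, Hopper's observation size) is illustrative.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """One ensemble member: predicts p(s' | s) as a diagonal Gaussian.

    Three hidden layers of width 200 with ReLU, per Section 7.4 / Table 2;
    the diagonal-covariance parameterisation is a simplifying assumption.
    """
    def __init__(self, state_dim, hidden=200):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, state_dim)
        self.log_std = nn.Linear(hidden, state_dim)

    def forward(self, s):
        h = self.body(s)
        log_std = self.log_std(h).clamp(-10, 2)  # keep variances well-behaved
        return self.mu(h), log_std

    def nll(self, s, s_next):
        """Gaussian negative log-likelihood: (mu - s')^T Sigma^-1 (mu - s') + log|Sigma|."""
        mu, log_std = self(s)
        inv_var = torch.exp(-2 * log_std)
        return (((mu - s_next) ** 2) * inv_var + 2 * log_std).sum(-1).mean()

# an ensemble is N independently initialised copies trained on the same data
ensemble = [GaussianDynamics(state_dim=11) for _ in range(7)]
```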
KL-divergence experiment. As the KL-divergence requires a continuous policy, the BC policy network is a 2-layer MLP of size 256 with ReLU activation, but with the final layer outputting the parameters of a Gaussian, μ_s and σ_s. We carry out maximum likelihood estimation using a batch size of 256. For the Walker2d experiments, TS was slightly adapted to accept new trajectories only if they made fewer than ten changes. For each level of difficulty, TS is run 3 times, and the scores are the average of the mean returns over 10 evaluation trajectories of 5 random seeds of BC. To compute the KL-divergence, a continuous expert policy is also required, but TD3 gives a deterministic one. To overcome this, a continuous expert policy is created by assuming a state-dependent normal distribution centred around π*(s) with a standard deviation of 0.01.

D4RL experiments. For the D4RL experiments, we run TS 3 times for each dataset and average the mean returns over 10 evaluation trajectories of 5 random seeds of BC to obtain the results for TS+BC. For the BC results, we average the mean returns over 10 evaluation trajectories of 5 random seeds. The BC policy network is a 2-layer MLP of size 256 with ReLU activation; the final layer has a tanh activation multiplied by the action dimension. We use the Adam optimiser with a learning rate of 1e-3 and a batch size of 256.

7.5 Number of iterations of TS

TS can be repeated multiple times, each time using a newly estimated value function to take into account the newly generated transitions. In all our experiments, we choose 5 iterations. Figure 5 shows the scores on the D4RL environments across the different iterations, with the standard deviation across seeds shown as error bars. Iteration 0 indicates the BC score obtained on the original D4RL datasets. For all datasets, we observe that the average scores of BC increase initially over a few iterations, then remain stable with only minor random fluctuations. For Hopper and Walker2d medium-replay, there is a higher degree of standard deviation across the seeds, which gives a less stable average as the number of iterations increases.

Figure 5: Returns of BC-extracted policies as the number of iterations of TS is increased. Iteration 0 gives the BC scores on the original D4RL datasets. The error bars represent the standard deviation of the average returns of 10 trajectory evaluations over 5 random seeds of BC and 3 random seeds of TS.

7.6 Ablation study

TS uses a value function to estimate the future returns from any given state; TS+BC therefore has a natural advantage over plain BC, which uses only the states and actions. To check whether using a value function alone is sufficient to improve the performance of BC, we test a weighted version of the BC loss function, whereby the weights are given by the estimated value function, i.e.

π_BC(s) = arg min_π E_{s,a∼D}[V_θ(s)(π(s) − a)²].

This weighted-BC method gives larger weight to the high-value states and lower weight to the low-value states during training; a code sketch is given below. On the Hopper medium and medium-expert datasets, this weighted-BC method gives only a slight improvement over the original behaviour-cloned policy. For hopper-medium, weighted-BC achieves an average score of 59.21 (with standard deviation 3.4); this is an improvement over BC (55.3), but lower than TS+BC (64.3). Weighted-BC on hopper-medexp achieves an average score of 66.02 (with standard deviation 6.9); again, this is a slight improvement over BC (62.3), but significantly lower than TS+BC (94.8). These experiments indicate that using a value function to weight the relative importance of seen states when optimising the BC objective is not sufficient to achieve the performance gains introduced by TS.
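A minimal PyTorch sketch of the value-weighted BC objective above follows; the normalisation of the weights is an assumption, added to keep the loss scale comparable to plain BC.

```python
import torch

def weighted_bc_loss(policy, value_fn, states, actions):
    """Value-weighted behavioural cloning: high-value states count more.

    policy(s) -> action, value_fn(s) -> scalar V(s); both are torch modules.
    Weights are detached so gradients flow only to the policy.
    """
    with torch.no_grad():
        w = value_fn(states).squeeze(-1)
        w = w / w.abs().mean().clamp(min=1e-8)   # assumed normalisation
    mse = ((policy(states) - actions) ** 2).sum(-1)
    return (w * mse).mean()
```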
-Yl5KMi9GgX
Review
5: Marginally below acceptance threshold
This paper presents a data augmentation scheme for BC in the offline setting. The authors propose to produce trajectory stitching based on the likelihood of the next state, next action, and high values. The authors train a forward dynamics model, an inverse dynamics model, a value function with a reward generator, and a policy. The results show that trajectory stitching + BC can improve BC on D4RL tasks.

Pros:
1. The idea of conducting trajectory stitching for BC is neat and original, since BC usually suffers from suboptimal data where offline RL can excel. This idea can be an effective approach to bridging that gap.
2. The empirical result on D4RL clearly shows that the method can improve over BC.

Cons:
1. The method is pretty complex with many components, as listed in the summary, and there are many hyperparameters/models to tune. It is unclear if such a scheme is general enough in all kinds of domains. I suspect that making it work in various settings could be challenging.
2. It is unclear if the approach can outperform simple baselines such as percentage BC, where we simply train on trajectories with top K% return, and return-weighted BC, e.g. RWR and AWR.
3. The method is still much worse than offline RL on medium-replay datasets, suggesting that it's not doing a good job of stitching in diverse datasets.
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Model-based Trajectory Stitching for Improved Offline Reinforcement Learning ### Paper Abstract In many real-world applications, collecting large and high-quality datasets may be too costly or impractical. Offline reinforcement learning (RL) aims to infer an optimal decision-making policy from a fixed set of data. Getting the most information from historical data is then vital for good performance once the policy is deployed. We propose a model-based data augmentation strategy, Trajectory Stitching (TS), to improve the quality of sub-optimal historical trajectories. TS introduces unseen actions joining previously disconnected states: using a probabilistic notion of state reachability, it effectively ‘stitches’ together parts of the historical demonstrations to generate new, higher quality ones. A stitching event consists of a transition between a pair of observed states through a synthetic and highly probable action. New actions are introduced only when they are expected to be beneficial, according to an estimated state-value function. We show that using this data augmentation strategy jointly with behavioural cloning (BC) leads to improvements over the behaviour-cloned policy from the original dataset. Improving over the BC policy could then be used as a launchpad for online RL through planning and demonstration-guided RL. ### Paper Keywords ["policy", "trajectory", "improved offline reinforcement", "data augmentation strategy", "many", "applications", "large", "datasets", "costly"] ### Paper Content Model-based Trajectory Stitching for ImprovedOffline Reinforcement LearningCharles A. Hepburn1Giovanni Montana1,21University of Warwick2Alan Turing Institute{Charlie.Hepburn,g.montana}@warwick.ac.ukAbstractIn many real-world applications, collecting large and high-quality datasets maybe too costly or impractical. Offline reinforcement learning (RL) aims to inferan optimal decision-making policy from a fixed set of data. Getting the mostinformation from historical data is then vital for good performance once the policyis deployed. We propose a model-based data augmentation strategy, TrajectoryStitching (TS), to improve the quality of sub-optimal historical trajectories. TSintroduces unseen actions joining previously disconnected states: using a prob-abilistic notion of state reachability, it effectively ‘stitches’ together parts of thehistorical demonstrations to generate new, higher quality ones. A stitching eventconsists of a transition between a pair of observed states through a synthetic andhighly probable action. New actions are introduced only when they are expected tobe beneficial, according to an estimated state-value function. We show that usingthis data augementation strategy jointly with behavioural cloning (BC) leads toimprovements over the behaviour-cloned policy from the original dataset. Improv-ing over the BC policy could then be used as a launchpad for online RL throughplanning and demonstration-guided RL.1 IntroductionBehavioural cloning (BC) [ 51,52] is one of the simplest imitation learning methods to obtain adecision-making policy from expert demonstrations. BC treats the imitation learning problem as asupervised learning one. 
Given expert trajectories - the expert’s paths through the state space - a policynetwork is trained to reproduce the expert behaviour: for a given observation, the action taken by thepolicy must closely approximate the one taken by the expert. Although a simple method, BC hasshown to be very effective across many application domains [ 51,55,32,50], and has been particularlysuccessful in cases where the dataset is large and has wide coverage [ 13]. An appealing aspect ofBC is that it is applied in an offline setting, using only the historical data. Unlike reinforcementlearning (RL) methods, BC does not require further interactions with the environment. Offline policylearning can be advantageous in many circumstances, especially when collecting new data throughinteractions is expensive, time-consuming or dangerous; or in cases where deploying a partiallytrained, sub-optimal policy in the real-world may be unethical, e.g. in autonomous driving andmedical applications.BC extracts the behaviour policy which created the dataset. Consequently, when applied to sub-optimal data (i.e. when some or all trajectories have been generated by non-expert demonstrators),the resulting behavioural policy is also expected to be sub-optimal. This is due to the fact that BChas no mechanism to infer the importance of each state-action pair. Other drawbacks of BC are itstendency to overfit when given a small number of demonstrations and the state distributional shiftbetween training and test distributions [ 54,13]. In the area of imitation learning, significant effortshave been made to overcome such limitations, however the available methodologies generally rely3rd Offline Reinforcement Learning Workshop at Neural Information Processing Systems, 2022.StateActionGenerated actionStitching eventState ValueFigure 1: Simplified illustration of Trajectory Stitching. Each original trajectory (a sequence of statesand actions) in the dataset Dis indicated as Tiwithi= 1,2,3. A first stitching event is seen intrajectory T1whereby a transition to a state originally visited in T2takes place. A second stitchingevent involves a jump to a state originally visited in T3. At each event, jumping to a new stateincreases the current trajectory’s future expected returns. The resulting trajectory (in bold) consists ofa sequence of states, all originally visited in D, but connected by imagined actions; it replaces T1inthe new dataset.on interacting with the environment [ 54,18,28,47]. So, a question arises: can we help BC infer asuperior policy only from available sub-optimal data without the need to collect additional expertdemonstrations?Our investigation is related to the emerging body of work on offline RL, which is motivated bythe aim of inferring expert policies with only a fixed set of sub-optimal data [ 46,48]. A majorobstacle towards this aim is posed by the notion of action distributional shift [23,43,48]. This isintroduced when the policy being optimised deviates from the behaviour policy, and is caused bythe action-value function overestimating out-of-distribution (OOD) actions. A number of existingmethods address the issue by constraining the actions that can be taken. In some cases, this isachieved by constraining the policy to actions close to those in the dataset [ 23,43,60,31,67,21],or by manipulating the action-value function to penalise OOD actions [ 45,1,39,62]. 
In situationswhere the data is sub-optimal, offline RL has been shown to recover a superior policy to BC [ 23,44].Improving BC will in turn improve many offline RL policies that rely on an explicit behaviour policyof the dataset [2, 64, 21].In contrast to existing offline learning approaches, we turn the problem on its head: rather thantrying to regularise or constrain the policy somehow, we investigate whether the data itself can beenriched using only the available demonstrations and an improved policy derived through a standardBC algorithm, without any additional modifications. To explore this new avenue, we propose amodel-based data augmentation method called Trajectory Stitching (TS). Our ultimate aim is todevelop a procedure that identifies sub-optimal trajectories and replaces them with better ones. Newtrajectories are obtained by stitching existing ones together, without the need to generate unseenstates. The proposed strategy consists of replaying each existing trajectory in the dataset: for eachstate-action pair leading to a particular next state along a trajectory, we ask whether a different actioncould have been taken instead, which would have landed at a different seen state from a differenttrajectory. An actual jump to the new state only occurs when generating such an action is plausibleand it is expected to improve the quality of the original trajectory - in which case we have a stitchingevent .An illustrative representation of this procedure can be seen in Figure 1, where we assume to haveat our disposal only three historical trajectories. In this example, a trajectory has been improvedthrough two stitching events. In practice, to determine the stitching points, TS uses a probabilisticview of state-reachability that depends on learned dynamics models of the environment. Thesemodels are evaluated only on in-distribution states enabling accurate prediction. In order to assessthe expected future improvement introduced by a potential stitching event, we utilise a state-valuefunction and reward model. Thus, TS can be thought of as a data-driven, automated procedureyielding highly plausible and higher-quality demonstrations to facilitate supervised learning; at the2same time, sub-optimal demonstrations are removed altogether whilst keeping the diverse set of seenstates.Demonstrations can be used to guide RL, to improve on the speed-up of learning of online RL. Inthese cases, BC can be used to initialise or regularise the training policy [ 53,49]. Running TS onthe datasets beforehand could be used to improve on the sample efficiency further as the initialisedpolicies will be better; as well as regularising the policy towards an improved one. In future workwe aim to leverage TS as a launchpad for online RL. Specifically, an improved BC policy would beuseful in improving the sample efficiency for planning [ 2,64] as well as deployment efficiency inoffline-to-online RL [25, 66].Our experimental results show that TS produces higher-quality data, with BC-derived policies alwayssuperior than those inferred on the original data. Remarkably, we demonstrate that TS-augmenteddata allow BC to compete with SOTA offline RL algorithms on highly complex continuous controlopenAI gym tasks implemented in MuJoCo using the D4RL offline benchmarking suite [ 20]. 
Interms of a larger system, BC-derived policies are used as a prior to many methods, so a reasonedapproach to improving the BC policy could improve these methods also.2 Problem setupWe consider the offline RL problem setting, which consists of finding an optimal decision-makingpolicy from a fixed dataset. The policy is a mapping from states to actions, π:S → A , wherebySandAare the state and action spaces, respectively. The dataset is made up of transitions D={(si, ai, ri, s′i)}, of current state, si; action performed in that state, ai; the state in which theaction takes the agent, s′i; and the reward for transitioning, ri. The actions have been taken by anunknown behaviour policy, πβ, acting in a Markov decision process (MDP). The MDP is defined asM= (S,A,P,R, γ), where P:S × A × S → [0,1]is the transition probability function whichdefines the dynamics of the environment, R:S × A × S → Ris the reward function and γ∈(0,1]is a scalar discount factor [59].In offline RL, the agent must learn a policy, π∗(a|s), that maximises the returns defined as theexpected sum of discounted rewards, Eπ[P∞t=0rtγt], without ever having access to πβ. Here we areinterested in performing imitation learning through BC, which mimics πβby performing supervisedlearning on the state-action pairs in D[51,52]. More specifically, assuming a deterministic policy,BC minimisesπBC(s) = arg minπEs,a∼D[(π(s)−a)2]. (1)The resulting policy also minimises the KL-divergence between the trajectory distributions of thelearned policy and πβ[34]. Our objective for TS is to improve the dataset, by replacing existingtrajectories with high-return ones, so that BC can extract a higher-performing behaviour policy thanthe original. Many offline RL algorithms bias the learned policy towards the behaviour-cloned one[2,21,64] to ensure the policy does not deviate too far from the behaviour policy. Being able toextract a high-achieving policy would be useful in many of these offline RL methods.3 Trajectory StitchingOverview. The proposed data augmentation method, Trajectory Stitching, augments Dby stitchingtogether high value regions of different trajectories. Stitching events are discovered by searching forcandidate next states which lead to higher returns. These higher quality states are determined by astate-value function, V(s), which is trained using the historical data. This function is unaffected bydistributional shift due to only being evaluated on in-distribution states.Suppose that the transition (s, a, s′)came from some trajectory TiinD, for which the joint densityfunction is p(s, a, s′)∝p(s′|s)p(a|s, s′); here, p(s′|s)represents the environment’s forward dynam-ics and p(a|s, s′)is its inverse dynamics. Our aim is to replace s′andawith a candidate next state,ˆs′and connecting action ˆa, which leads to higher returns. To generate a new transition, first we lookfor a candidate next state, ˆs′̸=s′, amongst all the states in D, that has been visited by any othertrajectory. A suitable criterion to evaluate next state candidates is given by the forward dynamics;conditional on s, we require that the new next state must be at least as likely to have been observedass′, i.e. we impose p(ˆs′|s)≥p(s′|s). To be beneficial, the candidate next state must not only be3likely to be reached from sunder the environment dynamics, but must also lead to higher returnscompared to the current next state. Thus, we also require that, under the pre-trained state-valuefunction, V(ˆs′)> V(s′). 
Where both these conditions are satisfied, a plausible action connectingsand the newly found ˆs′is obtained by finding an action that maximises the inverse dynamics, i.e.arg maxˆap(ˆa|s,ˆs′). When the process is completed, we have a stitching event .For each trajectory TiinD, we sequentially consider all its transitions (s, a, s′)until a stitching eventtakes place, which leads to a different trajectory, Tj. This process is then repeated for Tj, starting atthe current state, until no more stitching events are possible. For example, let us have two trajectoriesT1andT2, with lengths NandMrespectively. TS stitches time point ninT1to time point minT2which would lead to a new trajectory to replace T1,(s(1)1, a(1)1, s(1)2, . . . , s(1)n−1, a(1)n−1, s(1)n,ˆa, s(2)m, a(2)m, s(2)m+1, . . . , a(2)M−1, s(2)M).Here s(i)j, a(i)jrepresents a state-action pair for Tiat time point j. Upon completing this process, wehave created new and plausible trajectories, under the empirical state distribution, with overall higherexpected cumulative returns.In practice, we do not assume that the forward dynamics, inverse dynamics, reward function andstate-value function are known; hence they need to be estimated from the available data. In theremainder of this section we describe the models used to infer these quantities. Algorithm 1 (seeAppendix) details the full TS procedure.Next state search via a learned dynamics model. The search for a candidate next state requiresa learned forward dynamics model, i.e. p(s′|s). Model-based RL approaches typically use suchdynamics’ models conditioned on the action as well as the state to make predictions [ 30,63,36,2].Here, we use the model differently, only to guide the search process and identify of a suitable nextstate to transition to. Specifically, conditional on s, the dynamics model is used to assess the relativelikelihood of observing any other s′in the dataset compared to the observed one.The environment dynamics are assumed to be Gaussian, and we use a neural network to predictthe mean vector and covariance matrix, i.e. ˆpξ(st+1|st) =N(μξ(st),Σξ(st)); here, ξindicate theparameters of the neural network. Modelling the environment dynamics as a Gaussian distribution iscommon for continuous state-space applications [ 30,63,36,62]. Furthermore, we take an ensembleEofNdynamics models, {ˆpiξ(st+1|st) =N(μiξ,Σiξ)}Ni=1. Each model is trained via maximumlikelihood estimation so it minimises the following lossLˆp(ξ) =Es,s′∼D[(μξ(s)−s′)TΣ−1ξ(s)(μξ(s)−s′) + log |Σξ(s)|],where|·|refers to the determinant of a matrix. Each model’s parameter vector is initialised differently;using such an ensemble strategy has been shown to take into account the epistemic uncertainty, i.e.the uncertainty in the model parameters [7, 12, 2, 62].Once the models have been fitted, to decide whether ˆs′can replace s′along any trajectory, we take aconservative approach by requiring thatmini∈Eˆpiξ(ˆs′|s)>meani∈Eˆpiξ(s′|s).where the minimum and mean are taken over the ensemble Eof dynamics models.Value function estimation and reward prediction model. Value functions are widely used inreinforcement learning to determine the quality of an agent’s current position [ 59]. In our context, weuse a state-value function to assess whether a candidate next state offers a potential improvementover the original next state. To accurately estimate the future returns given the current state, wecalculate a state-value function dependent on the behaviour policy of the dataset. 
The functionVθ(s)is approximated by a MLP neural network parameterised by θ. The parameters are learned byminimising the squared Bellman error [59],LV(θ) =Es,r,s′∼D[(r+γVθ(s′)−Vθ(s))2]. (2)Vθis only used to observe the value of in-distribution states, thus avoiding the OOD issue whenevaluating value functions which occurs in offline RL. The value function will only be queried once acandidate new state has been found such that p(ˆs′|s)≥p(s′|s).4Value functions require rewards for training, therefore a reward must be estimated for unseen tuples(s,ˆa,ˆs′). To this end, we train a conditional Wasserstein-GAN [ 24,3] consisting of a generator, Gφand a discriminator Dψ, with parameters of the neural networks φandψrespectively. A WassersteinGAN is used due to the training stability over GANs [ 3], as well as their predictive performanceover MLPs and V AEs. The discriminator takes in the state, action, reward, next state and determineswhether this transition is from the dataset. The generator loss function is:LG(φ) =E z∼p(z)s,a,s′∼D ̃r∼Gφ(z,s,a,s′)[Dψ(s, a, s′, ̃r)].Here z∼p(z)is a noise vector sampled independently from N(0,1), the standard normal. Thediscriminator loss function is:LD(ψ) =Es,a,r,s′∼D[Dψ(s, a, s′, r)]−E z∼p(z)s,a,s′∼D ̃r∼Gφ(z,s,a,s′)[Dψ(s, a, s′, ̃r)].Once trained, a reward will be predicted for the stitching event when a new action has been generatedbetween two previously disconnected states.Action generation via an inverse dynamics model. Sampling a suitable action that leads fromsto the newly found state ˆs′requires an inverse dynamics model. Specifically, we require that asynthetic action must maximise the estimated conditional density, p(a|s,ˆs′). To this end, we train aconditional variational autoencoder (CV AE) [ 38,57], consisting of an encoder qω1and a decoder pω2where ω1andω2are the respective parameters of the neural networks.The encoder converts the input data into a lower-dimensional latent representation zwhereas thedecoder generates data from the latent space. The CV AE objective is to maximise logp(a|s,ˆs′)bymaximising its lower boundmaxω1,ω2logp(a|s,ˆs′, z)≥maxω1,ω2Ez∼qω1[logpω2(a|s,ˆs′, z)]−DKL[qω1(z|a, s,ˆs′)||P(z|s,ˆs′)],where z∼ N (0,1)is the prior for the latent variable z, and DKLrepresents the KL-divergence[42, 41]. This process ensures that the most plausible action is generated conditional on sandˆs′.Iterated TS and BC. TS is run for multiple iterations, updating the value function before eachone based on the new data and improved behaviour policy. All other models remain fixed as we donot have any updated information about the underlying MDP. From the new dataset, we extract thebehaviour policy using BC, minimising Equation (1). We train BC for 100k gradient steps, reportingthe best policy from checkpoints of every 10k steps from 40k onwards. This ensures that BC hastrained enough and does not overfit.4 Experimental resultsIn this section, we provide empirical evidence that TS can produce higher-quality datasets, comparedto the original data, by showing BC infers improved policies without collecting any more data fromthe environment. We call a BC policy run on a TS dataset TS+BC. We compare our method withselected offline RL methods using D4RL datasets. This is to give an insight into how much TS canimprove BC by reaching the SOTA performance level of offline RL.Performance assessment on D4RL data. 
To investigate the benefits of TS+BC as an offline policylearning strategy, we compare its performance with selected state-of-the-art offline RL methods:TD3+BC [ 21], IQL [ 40], MBOP [ 2] and Diffuser [ 29]. These baselines represent model-freeand model-based methods and achieve top results. We make the comparisons on the D4RL [ 20]benchmarking datasets of the openAI gym MuJoCo tasks; see Table 1. Three complex continuousenvironments are tested: Hopper, Halfcheetah and Walker2d, with different levels of difficulty. The“medium" datasets were gathered by the original authors using a single policy produced from theearly-stopping of an agent trained by soft actor-critic (SAC) [ 26,27]. The “medium-replay" datasetsare the replay buffers from the training of the “medium" policies. The “expert" datasets were obtainedfrom a policy trained to an expert level, and the “medium-expert" datasets are the combination ofboth the “medium" and “expert" datasets. In all the cases we have considered, TS+BC outperforms5Dataset TD3+BC IQL MBOP Diffuser BC TS+BC (ours)hopper-medium 59.3 66.3 48.8 58.5 55.3 64.3±4.2(+16 .3%)halfcheetah-medium 48.3 47.4 44.6 44.2 42.9 43.2±0.3(+0.7%)walker2d-medium 83.7 78.3 41.0 79.7 75.6 78.8±1.2(+4.2%)Average-medium 63.8 64.0 44.8 60.8 57.9 62.1hopper-medexp 98.0 91.5 55.1 107.2 62.3 94.8±11.7(+52 .2%)halfcheetah-medexp 90.7 86.7 105.9 79.8 60.7 86.9±2.5(+43 .2%)walker2d-medexp 110.1 109.6 70.2 108.4 108.2 108.8±0.5(+0.6%)Average-medexp 99.6 95.9 77.1 98.5 77.1 96.8hopper-medreplay 60.9 94.7 12.4 96.8 29.6 50.2±17.2(+69 .6%)halfcheetah-medreplay 44.6 44.2 42.3 42.2 38.5 39.8±0.6(+3.4%)walker2d-medreplay 81.8 73.9 9.7 61.2 34.7 61.5±5.6(+77 .2%)Average-medreplay 62.4 70.9 21.5 66.7 34.3 50.5hopper-expert 107.8 - - - 111.0 111.8±0.5(+0.7%)halfcheetah-expert 96.7 - - - 92.9 93.2±0.6(+0.3%)walker2d-expert 110.2 - - - 109.0 108.9±0.2(−0.1%)Average-expert 104.9 - - - 104.3 104.6Table 1: Average normalised scores achieved on three locomotion tasks (Hopper, Halfcheetah andWalker2d) using the D4RL v2 data sets. The results for competing methods have been gathered fromthe original publications. Bold scores represent values within 5%of the highest average score of thelevels of difficulty. TS+BC: In brackets we report the percentage improvement achieved by BC afterTS relative to the BC baseline.the BC baseline, showing that TS creates a higher quality dataset as claimed. Also, while only everusing BC to obtain the final policy, TS+BC is very competitive with current state-of-the-art offlineRL methods, especially for the medium, medium-expert and expert datasets. For medium-replaydatasets, although TS+BC still attains much higher performing policies than the original BC, weobserve lower competitiveness against other offline DRL methods. Due to the way these datasets havebeen developed, they would appear to be more naturally suited to dynamical programming-basedalgorithms.Implementation details. Calculating p(s′|s)for all s′∈ D may be computationally inefficient.To speed this up in the MuJoCo environments, we initially select a smaller set of candidate nextstates by thresholding the Euclidean distance. Although on its own a geometric distance would not besufficient to identify stitching events, we found that in our environments it can help reduce the set ofcandidate next states thus alleviating the computational workload.To pre-select a smaller set of candidate next states, we use two criteria. 
Firstly, from a transition(s, a, r, s′)∈ D, a neighbourhood of states around sis taken and the following state in the trajectoryis collected. Secondly, all the states in a neighbourhood around s′are collected. This process ensuresall candidate next states are geometrically-similar to s′or are preceded by geometrically-similarstates. The neighbourhood of a state is an ε−ballaround the state. When εis large enough, we canretain all feasible candidate next states for evaluation with the forward dynamic model. Figure 4 (seeAppendix) illustrates this procedure.5 ConclusionsIn this paper, we have proposed a data augmentation strategy, Trajectory Stitching, which can beapplied to historical datasets containing demonstrations of sequential decisions taken to solve acomplex task. Without further interactions with the environment, TS can improve the quality of thedemonstrations, which in turn has the effect of boosting the performance of BC-extracted policiessignificantly. This method could be used to extract an improved explicit behavioural cloning policyregulariser for offline RL. This would be specifically important for an offline planning algorithm.TS+BC can be leveraged further for online RL, where the learned policy can be initialised using BC6or the sample efficiency of the algorithm is improved by regularising towards the behavioural policyof demonstrations.BC is used in many offline RL algorithms, such as a prior policy in offline planning [ 2,64] andas a policy regulariser [ 21]. Although in this paper we have not explored the potential benefits ofcombining TS with offline reinforcement learning algorithms, our results on the D4RL benchmarkingdatasets show that TS improves over the initial BC policy, and can in fact reach the level of state-of-the-art offline RL methods. This suggests that many methods could be improved by employingTS either to find a better behavioural cloned policy or by enhancing an initial dataset. We expectTS to be used as a “first- step" to fully leverage the given dataset, enriching the dataset by addinghighly-likely (under the environment dynamics) transitions.Upon acceptance of this paper, we learned about a related methodology called BATS (Best ActionTrajectory Stitching) [ 10]. BATS augments the dataset by adding transitions from planning using alearned model of the environment. Our model-based TS approach differs from BATS in a number offundamental ways. First, BATS takes a geometric approach to defining state similarity; state-actionsare rolled-out using the dynamics model until a state is found that is within δof a state in the dataset.Using geometric distances is often inappropriate; e.g. two states may be close in Euclidean distance,yet reaching one from another may be impossible (e.g. in navigation task environments where wallsor other obstacles preclude reaching a nearby state). As such, our stitching events are based on thedynamics of the environment and are only assessed between two in-distribution states. Second, BATSallows stitches between states that are k-steps apart; this means the reward function needs to bepenalised to favour state-action pairs in the original dataset, as model error can compound resultingin unlikely rollouts. In contrast, we only allow one-step stitching between in-distribution states anduse the value function to extend the horizon rather than a learned model, this means all our modelscan be trusted to give accurate predictions without the need for penalties. 
Finally, BATS adds all stitched actions to the original dataset and then creates a new dataset by running value iteration, which is eventually used to learn a policy through BC. This raises many questions about the number of new trajectories that need to be collected in this way to extract an optimal policy using BC, as well as about other policy learning approaches more suitable to this setup. Our method is much better suited to policy learning by BC: after performing TS we are left with a dataset containing only high-quality trajectories, where the low-value parts are removed after the stitching event.

We believe that model-based TS opens up a number of directions for future work. For example, it can be extended to multi-agent offline policy learning, for which initial attempts have been made to control the distributional shift with numerous agents [61]. TS could even be used without the value function to increase the heterogeneity of the data without collecting new data. This could be used in conjunction with other offline imitation learning methods [9, 19]. This line of investigation would be specifically useful in situations where collecting new data is expensive or dangerous, but learning from a larger, more heterogeneous dataset with additional coverage is expected to improve performance.

6 Acknowledgements
CH acknowledges support from the Engineering and Physical Sciences Research Council through the Mathematics of Systems Centre for Doctoral Training at the University of Warwick (EP/S022244/1). GM acknowledges support from a UKRI AI Turing Acceleration Fellowship (EP/V024868/1).

References
[1] Gaon An, Seungyong Moon, Jang-Hyun Kim, and Hyun Oh Song. Uncertainty-based offline reinforcement learning with diversified q-ensemble. Advances in Neural Information Processing Systems, 34, 2021.
[2] Arthur Argenson and Gabriel Dulac-Arnold. Model-based offline planning. arXiv preprint arXiv:2008.05556, 2020.
[3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International conference on machine learning, pages 214–223. PMLR, 2017.
[4] Giorgio Bacci, Giovanni Bacci, Kim G Larsen, and Radu Mardare. Computing behavioral distances, compositionally. In International Symposium on Mathematical Foundations of Computer Science, pages 74–85. Springer, 2013.
[5] Giorgio Bacci, Giovanni Bacci, Kim G Larsen, and Radu Mardare. On-the-fly exact computation of bisimilarity distances. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, pages 1–15. Springer, 2013.
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
[7] Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. Advances in neural information processing systems, 31, 2018.
[8] Pablo Samuel Castro. Scalable methods for computing state similarity in deterministic markov decision processes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 10069–10076, 2020.
[9] Jonathan Chang, Masatoshi Uehara, Dhruv Sreenivas, Rahul Kidambi, and Wen Sun. Mitigating covariate shift in imitation learning via offline data with partial coverage. Advances in Neural Information Processing Systems, 34:965–979, 2021.
[10] Ian Char, Viraj Mehta, Adam Villaflor, John M Dolan, and Jeff Schneider. Bats: Best action trajectory stitching.
[11] Di Chen, Franck van Breugel, and James Worrell. On the complexity of computing probabilistic bisimilarity. In International Conference on Foundations of Software Science and Computational Structures, pages 437–451. Springer, 2012.
[12] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. Advances in Neural Information Processing Systems, 31, 2018.
[13] Felipe Codevilla, Eder Santana, Antonio M López, and Adrien Gaidon. Exploring the limitations of behavior cloning for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9329–9338, 2019.
[14] Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, and Matthieu Geist. Offline reinforcement learning with pseudometric learning. In International Conference on Machine Learning, pages 2307–2318. PMLR, 2021.
[15] Vladimir Feinberg, Alvin Wan, Ion Stoica, Michael I Jordan, Joseph E Gonzalez, and Sergey Levine. Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101, 2018.
[16] Norm Ferns, Prakash Panangaden, and Doina Precup. Metrics for finite Markov decision processes. In UAI, volume 4, pages 162–169, 2004.
[17] Norman Ferns, Pablo Samuel Castro, Doina Precup, and Prakash Panangaden. Methods for computing state similarity in Markov decision processes. arXiv preprint arXiv:1206.6836, 2012.
[18] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International Conference on Machine Learning, pages 49–58. PMLR, 2016.
[19] Pete Florence, Corey Lynch, Andy Zeng, Oscar A Ramirez, Ayzaan Wahid, Laura Downs, Adrian Wong, Johnny Lee, Igor Mordatch, and Jonathan Tompson. Implicit behavioral cloning. In Conference on Robot Learning, pages 158–168. PMLR, 2022.
[20] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. arXiv preprint arXiv:2004.07219, 2020.
[21] Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. Advances in Neural Information Processing Systems, 34, 2021.
[22] Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pages 1587–1596. PMLR, 2018.
[23] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pages 2052–2062. PMLR, 2019.
[24] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
[25] Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long-horizon tasks via imitation and reinforcement learning. arXiv preprint arXiv:1910.11956, 2019.
[26] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1861–1870. PMLR, 2018.
[27] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.
[28] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. Advances in Neural Information Processing Systems, 29, 2016.
[29] Michael Janner, Yilun Du, Joshua B Tenenbaum, and Sergey Levine. Planning with diffusion for flexible behavior synthesis. arXiv preprint arXiv:2205.09991, 2022.
[30] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. Advances in Neural Information Processing Systems, 32, 2019.
[31] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.
[32] M Waleed Kadous, Claude Sammut, and R Sheh. Behavioural cloning for robots in unstructured environments. In Advances in Neural Information Processing Systems Workshop, 2005.
[33] Gabriel Kalweit and Joschka Boedecker. Uncertainty-driven imagination for continuous deep reinforcement learning. In Conference on Robot Learning, pages 195–206. PMLR, 2017.
[34] Liyiming Ke, Sanjiban Choudhury, Matt Barnes, Wen Sun, Gilwoo Lee, and Siddhartha Srinivasa. Imitation learning as f-divergence minimization. In International Workshop on the Algorithmic Foundations of Robotics, pages 313–329. Springer, 2020.
[35] Mete Kemertas and Tristan Aumentado-Armstrong. Towards robust bisimulation metric learning. Advances in Neural Information Processing Systems, 34, 2021.
[36] Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. MOReL: Model-based offline reinforcement learning. Advances in Neural Information Processing Systems, 33:21810–21823, 2020.
[37] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[38] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[39] Ilya Kostrikov, Rob Fergus, Jonathan Tompson, and Ofir Nachum. Offline reinforcement learning with Fisher divergence critic regularization. In International Conference on Machine Learning, pages 5774–5783. PMLR, 2021.
[40] Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit Q-learning. arXiv preprint arXiv:2110.06169, 2021.
[41] Solomon Kullback. Information Theory and Statistics. Courier Corporation, 1997.
[42] Solomon Kullback and Richard A Leibler. On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79–86, 1951.
[43] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. Advances in Neural Information Processing Systems, 32, 2019.
[44] Aviral Kumar, Joey Hong, Anikait Singh, and Sergey Levine. When should we prefer offline reinforcement learning over behavioral cloning? arXiv preprint arXiv:2204.05618, 2022.
[45] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020.
[46] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement Learning, pages 45–73. Springer, 2012.
[47] Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudik, Yisong Yue, and Hal Daumé III. Hierarchical imitation and reinforcement learning. In International Conference on Machine Learning, pages 2917–2926. PMLR, 2018.
[48] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
[49] Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Overcoming exploration in reinforcement learning with demonstrations. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 6292–6299. IEEE, 2018.
[50] Tim Pearce and Jun Zhu. Counter-Strike deathmatch with large-scale behavioural cloning. arXiv preprint arXiv:2104.04258, 2021.
[51] Dean A Pomerleau. ALVINN: An autonomous land vehicle in a neural network. Advances in Neural Information Processing Systems, 1, 1988.
[52] Dean A Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88–97, 1991.
[53] Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
[54] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635. JMLR Workshop and Conference Proceedings, 2011.
[55] Claude Sammut, Scott Hurst, Dana Kedzier, and Donald Michie. Learning to fly. In Machine Learning Proceedings 1992, pages 385–393. Elsevier, 1992.
[56] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889–1897. PMLR, 2015.
[57] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems, 28, 2015.
[58] Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991.
[59] Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[60] Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
[61] Yiqin Yang, Xiaoteng Ma, Chenghao Li, Zewu Zheng, Qiyuan Zhang, Gao Huang, Jun Yang, and Qianchuan Zhao. Believe what you see: Implicit constraint approach for offline multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34:10299–10312, 2021.
[62] Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. COMBO: Conservative offline model-based policy optimization. Advances in Neural Information Processing Systems, 34, 2021.
[63] Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. MOPO: Model-based offline policy optimization. Advances in Neural Information Processing Systems, 33:14129–14142, 2020.
[64] Xianyuan Zhan, Xiangyu Zhu, and Haoran Xu. Model-based offline planning with trajectory pruning. arXiv preprint arXiv:2105.07351, 2021.
[65] Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. arXiv preprint arXiv:2006.10742, 2020.
[66] Yi Zhao, Rinu Boney, Alexander Ilin, Juho Kannala, and Joni Pajarinen. Adaptive behavior cloning regularization for stable offline-to-online reinforcement learning. 2021.
[67] Wenxuan Zhou, Sujay Bajracharya, and David Held. PLAS: Latent action space for offline reinforcement learning. arXiv preprint arXiv:2011.07213, 2020.

7 Appendix

7.1 Related work

Imitation learning. Imitation learning methods aim to emulate a policy from expert demonstrations. DAgger [54] is an online learning approach that iteratively updates a deterministic policy; it addresses the state distributional shift problem of BC through an on-policy method for data collection; similarly to TS, the original dataset is augmented, but this involves online interactions. GAIL [28] iteratively updates a generative adversarial network [24] to determine whether a state-action pair can be deemed expert; a policy is then inferred using a trust region policy optimisation step [56]. TS also uses generative modelling, but this is to create data points likely to have come from the data that connect high-value regions. Whereas imitation learning relies on expert demonstrations, TS creates higher-quality datasets from existing, possibly sub-optimal data, to improve offline policy learning.

Offline reinforcement learning. Several model-free offline RL methods deal with distributional shift in two ways: 1) by regularising the policy to stay close to actions given in the dataset [23, 43, 60, 31, 67, 21], or 2) by pessimistically evaluating the Q-value to penalise OOD actions [1, 39, 45]. For instance, BCQ [23] uses a VAE to generate likely actions in order to constrain the policy. The TD3+BC algorithm [21] offers a simplified policy constraint approach; it adds a behavioural cloning regularisation term to the policy update, biasing actions towards those in the dataset. Alternatively, CQL [45] adjusts the value of the state-action pairs to "push down" on OOD actions and "push up" on in-distribution actions. IQL [40] avoids querying OOD actions altogether by manipulating the Q-value to have a state-value function in the SARSA-style update. All the above methods try to directly deal with OOD actions, either by avoiding them or by safely handling them in either the policy improvement or evaluation step. In contrast, TS generates unseen actions between in-distribution states; by doing so, we avoid distributional shift by evaluating a state-value function only on seen states.

Model-based algorithms rely on an approximation of the environment's dynamics [58, 30]. In the online setting, they tend to improve sample efficiency [33, 30, 15, 7, 12]. In an offline learning context, the learned dynamics have been exploited in various ways. For instance, Model-based Offline Policy Optimization (MOPO) [63] augments the dataset by performing rollouts using a learned, uncertainty-penalised MDP. Unlike MOPO, TS does not introduce imagined states, but only actions between reachable unconnected states. Diffuser [29] uses a diffusion probabilistic model to predict a whole trajectory rather than a single state-action pair; it can generate unseen trajectories that have high likelihood under the data and maximise the cumulative rewards of a trajectory, ensuring long-horizon accuracy. In contrast, our generative models are not used for planning, hence we do not require sampling a full trajectory; instead, our models are designed to only be evaluated locally, ensuring one-step accuracy between s and ŝ′.

State similarity metrics.
A central aspect of the proposed data augmentation method consists of defining the stitching event, which uses a notion of state similarity to determine whether two states are "close" together. Using geometric distances only would often be inappropriate; e.g., two states may be close in Euclidean distance, yet reaching one from the other may be impossible (e.g., in navigation task environments where walls or other obstacles preclude reaching a nearby state). Bisimulation metrics [16] capture state similarity based on the dynamics of the environment. These have been used in RL mainly for system state aggregation [17, 35, 65]; they are expensive to compute [11] and usually require full-state enumeration [4, 5, 14]. A scalable approach for state similarity has recently been introduced by using a pseudometric [8], which made calculating state similarity possible for offline RL. PLOFF [14] is an offline RL algorithm that uses a state-action pseudometric to bias the policy evaluation and improvement steps, keeping the policy close to the dataset. Whereas PLOFF uses a pseudometric to stay close to the data, we can bypass this notion altogether by requiring reachability in one step.

7.2 Trajectory Stitching

The full procedure for the Trajectory Stitching method is outlined in Algorithm 1.

Algorithm 1: Trajectory Stitching
Initialise: an action generator p_ω1, a reward generator G_φ, an ensemble of dynamics models {p̂^i_ξ(s′|s)}_{i=1}^N, an acceptance threshold p̃, and a dataset D_0 made up of T trajectories (T_1, ..., T_T)
1:  for k = 0, ..., K do
2:      Train state-value function V on D_k by minimising Equation (2).
3:      for t = 1, ..., T do
4:          Select s, s′ = s_0, s′_0 ∈ T_t
5:          Initialise new trajectory T̂_t
6:          while not done do
7:              Create the set of candidate states from the neighbourhood, {ŝ′_j}_{j=1}^N ∼ Neighbourhood
8:              Evaluate the dynamics models on the candidate states and take the minimum, min_i p̂^i_ξ(ŝ′|s)
9:              if min_i p̂^i_ξ(ŝ′_j|s) > mean_i p̂^i_ξ(s′|s), V(ŝ′_j) = max_i V(ŝ′_i) and V(ŝ′_j) > V(s′) then
10:                 Generate a new action and reward, ã ∼ p_ω1(z, s, ŝ′_j), r̃ ∼ G_φ(z, s, ã, ŝ′_j)
11:                 Add (s, ã, r̃, ŝ′_j) to the new trajectory T̂_t
12:                 Set s = ŝ′_j
13:             else
14:                 Add the original transition (s, a, r, s′) to the new trajectory T̂_t
15:                 Set s = s′
16:             end if
17:         end while
18:         if Σ_{T̂_t} r_i > (1 + p̃) · Σ_{T_t} r_i then
19:             T̂_t = T̂_t
20:         else
21:             T̂_t = T_t
22:         end if
23:     end for
24:     Collect trajectories into dataset D_{k+1} = (T̂_1, ..., T̂_T)
25: end for
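To make the acceptance test of Algorithm 1 (lines 7-16) concrete, the sketch below shows one stitching step in Python. It assumes the dynamics ensemble, value function, action generator and reward generator have already been trained, and accesses them through hypothetical interfaces (pdf, decode, latent_dim, plain callables); none of these names come from the paper's code.

import numpy as np

def stitch_step(s, a, r, s_next, candidates,
                dynamics_ensemble, V, action_gen, reward_gen):
    """Return the next current state and the transition to append."""
    # Baseline: mean ensemble density of the next state observed in the data.
    base = np.mean([m.pdf(s, s_next) for m in dynamics_ensemble])
    # Keep only candidates that the most pessimistic ensemble member still
    # considers more likely than the observed transition (first test, line 9).
    feasible = [c for c in candidates
                if min(m.pdf(s, c) for m in dynamics_ensemble) > base]
    if feasible:
        best = max(feasible, key=V)       # highest-value reachable candidate
        if V(best) > V(s_next):           # only stitch if it beats s'
            z = np.random.randn(action_gen.latent_dim)
            a_new = action_gen.decode(z, s, best)      # CVAE inverse dynamics
            r_new = reward_gen(np.random.randn(reward_gen.latent_dim),
                               s, a_new, best)         # conditional WGAN
            return best, (s, a_new, r_new, best)
    return s_next, (s, a, r, s_next)      # keep the original transition

Repeating this step along a trajectory and accepting the result only when the summed rewards improve by the threshold p̃ reproduces the outer loop of Algorithm 1.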
7.3 Further Experiments

Expected performance on sub-optimal data. BC minimises the KL-divergence of trajectory distributions between the learned policy and π_β [34]. As TS has the effect of improving π_β, this suggests that the KL-divergence between the trajectory distributions of the learned policy and the expert policy would be smaller after TS. To investigate this, we used two complex locomotion tasks, Hopper and Walker2d, in OpenAI's Gym [6]. Independently for each task, we first train an expert policy π* with TD3 [22], and use this policy to generate a baseline noisy dataset by sampling the expert policy in the environment and adding white noise to the actions, i.e., a = π*(s) + ε. A range of different sub-optimal datasets is created by adding a certain amount of expert trajectories to the noisy dataset so that they make up x% of the total trajectories. Using this procedure, we create eight different datasets by controlling x, which took values in the set {0, 0.1, 2.5, 5, 10, 20, 30, 40}. BC is run on each dataset for 5 random seeds. In all experiments we run TS for five iterations, as this is enough to increase the quality of the data without being overly computationally expensive (see the Appendix for results across different iterations). We run TS on each dataset over three different random seeds and then create BC policies over the 5 random seeds, giving 15 TS+BC policies. Random seeds cause different TS trajectories, as they affect the latent variables sampled for the reward function and inverse dynamics model. Also, the initialisation of weights is randomised for the value function and BC policies, so the robustness of the methods is tested over multiple seeds.

Figure 3 shows the scores as average returns from 10 trajectory evaluations of the learned policies. TS+BC consistently improves on BC across all levels of expertise, for both the Hopper and Walker2d environments. As the percentage of expert data increases, TS is able to leverage more high-value transitions, consistently improving over the BC baseline.

Figure 3: Comparative performance of BC and TS+BC as the fraction of expert trajectories increases up to 40%. For two environments, Hopper (left) and Walker2d (right), we report the average return of 10 trajectory evaluations of the best checkpoint during BC training. BC has been trained over 5 random seeds and TS has produced 3 datasets over different random seeds.

Figure 2 (left) shows the average difference in KL-divergences of the BC and TS+BC policies against the expert policy.

Figure 2: Estimated KL-divergence and MSE of the BC and TS+BC policies on the Hopper and Walker2d environments as the fraction of expert trajectories increases. (Left) Relative difference between the KL-divergence of the BC policy and the expert and the KL-divergence of the TS+BC policy and the expert. Larger values represent the TS+BC policy being closer to the expert than the BC policy. (Middle, Right) MSE between actions evaluated from the expert policy and the learned policy on states from the Hopper (middle) and Walker2d (right) environments; the y-axes are on a log scale. All policies were collected by training BC over 5 random seeds, with TS being evaluated over 3 different random seeds. All KL-divergences were scaled between 0 and 1, depending on the minimum and maximum values per task, before the difference was taken.

Precisely, the y-axis represents D_KL(ρ_π*(τ), ρ_πBC(τ)) − D_KL(ρ_π*(τ), ρ_πTS+BC(τ)), where ρ_π(τ) is the trajectory distribution for policy π. So, a positive value represents the TS+BC policy being closer to the expert, and a negative value represents the BC policy being closer to the expert, with the absolute value representing the degree to which this is the case. We also scale the average KL-divergence between 0 and 1, where 0 is the smallest KL-divergence and 1 is the largest KL-divergence per task. This makes the scale comparable between Hopper and Walker2d. The KL-divergences are calculated following [34]: D_KL(ρ_π*(τ), ρ_π(τ)) = E_{s∼ρ_π*, a∼π*(s)}[log π*(a|s) − log π(a|s)].
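For concreteness, this expectation can be approximated with a Monte Carlo average over state-action pairs collected from expert rollouts. A minimal sketch, assuming both policies expose a log_prob(state, action) method (the names are ours, not the authors'):

import numpy as np

def kl_to_expert(states, actions, expert_policy, policy):
    # Estimates D_KL(rho_pi*(tau) || rho_pi(tau)) as
    # E_{s ~ rho_pi*, a ~ pi*(s)}[log pi*(a|s) - log pi(a|s)]
    # over (s, a) pairs collected from expert rollouts.
    diffs = [expert_policy.log_prob(s, a) - policy.log_prob(s, a)
             for s, a in zip(states, actions)]
    return float(np.mean(diffs))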
The figure shows that BC can extract a behaviour policy closer to the expert after performing TS on the dataset, except in the 0% case for Walker2d; however, the difference there is not significant. TS seems to work particularly well with a minimum of 2.5% expert data for Hopper and 0.1% for Walker2d.

Furthermore, Figure 2 (middle and right) shows the mean square error (MSE) between actions from the expert policy and the learned policy for the Hopper (middle) and Walker2d (right) tasks. Actions are selected by collecting 10 trajectory evaluations of an expert policy. As we expect, the TS+BC policies produce actions closer to the expert's on most levels of dataset expertise. The only surprising result is that for 0% expert data on the Walker2d environment the BC policy produces actions closer to the expert than the TS+BC policy. This is likely due to TS not having any expert data to leverage, so it cannot produce any expert trajectories. However, TS still produces a higher-quality dataset than before, as shown by the increased performance on the average returns. This offers empirical confirmation that TS does have the effect of improving the underlying behaviour policy of the dataset.

7.4 Further implementation details

In this section we report all the hyperparameters required for TS as used on the D4RL datasets. All hyperparameters have been kept the same for every dataset, notably the acceptance threshold of p̃ = 0.1. TS consists of four components: a forward dynamics model, an inverse dynamics model, a reward function and a value function. Table 2 provides an overview of the implementation details and hyperparameters for each TS component. As our default optimiser we have used Adam [37] with default hyperparameters, unless stated otherwise.

Forward dynamics model. Each forward dynamics model in the ensemble consists of a neural network with three hidden layers of size 200 with ReLU activation. The network takes a state s as input and outputs a mean μ and standard deviation σ of a Gaussian distribution N(μ, σ²). For all experiments, an ensemble size of 7 is used, with the best 5 being chosen.

Inverse dynamics model. To sample actions from the inverse dynamics model of the environment, we have implemented a CVAE with two hidden layers with ReLU activation. The size of the hidden layers depends on the size of the dataset [67]: when the dataset has fewer than 900,000 transitions (e.g., the medium-replay datasets) each layer has 256 nodes; when larger, 750 nodes. The encoder q_ω1 takes in a tuple consisting of state, action and next state; it encodes it into a mean μ_q and standard deviation σ_q of a Gaussian distribution N(μ_q, σ_q). The latent variable z is then sampled from this distribution and used as input for the decoder along with the state s and next state s′. The decoder outputs an action that is likely to connect s and s′. The CVAE is trained for 400,000 gradient steps with the hyperparameters given in Table 2.

Reward function. The reward function is used to predict the reward signal associated with a new transition (s, a, s′). For this model, we use a conditional WGAN with two hidden layers of size 512. The generator G_φ takes in a state s, action a, next state s′ and latent variable z; it outputs a reward r for that transition. The critic takes a full transition (s, a, r, s′) as input to determine whether this transition is likely to have come from the dataset or not.
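As an illustration of the generative components just described, below is a compact PyTorch sketch of a CVAE inverse dynamics model. Layer sizes and the latent dimension follow the text, but the exact wiring and all names are our own assumptions, not the authors' code.

import torch
import torch.nn as nn

class InverseDynamicsCVAE(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.latent = 2 * action_dim                  # latent dim from Table 2
        self.encoder = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, self.latent)
        self.log_std = nn.Linear(hidden, self.latent)
        self.decoder = nn.Sequential(
            nn.Linear(2 * state_dim + self.latent, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh())

    def forward(self, s, a, s_next):
        h = self.encoder(torch.cat([s, a, s_next], dim=-1))
        mu, std = self.mu(h), self.log_std(h).clamp(-4, 2).exp()
        z = mu + std * torch.randn_like(std)          # reparameterisation trick
        return self.decode(s, s_next, z), mu, std

    def decode(self, s, s_next, z=None):
        if z is None:                                 # sampling at stitch time
            z = torch.randn(s.shape[0], self.latent, device=s.device)
        return self.decoder(torch.cat([s, s_next, z], dim=-1))

Training would minimise the action reconstruction error plus the KL term of the variational bound, as in a standard conditional VAE [57].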
Value function. Similarly to previous methods [23], our value function V_θ takes the minimum of two value functions, {V_θ1, V_θ2}. Each value function is a neural network with two hidden layers of size 256 and ReLU activation. The value function takes in a state s and estimates the sum of future rewards from being in that state and following the policy (of the dataset) thereafter.

Table 2: Hyperparameters and values for models used in TS

  Forward dynamics model   Optimiser           Adam
                           Learning rate       3e-4
                           Batch size          256
                           Ensemble size       7
  Inverse dynamics model   Optimiser           Adam
                           Learning rate       1e-4
                           Batch size          100
                           Latent dim          2 * action dim
  Reward function          Optimiser           Adam, β = (0.5, 0.999)
                           Learning rate       1e-4
                           Batch size          256
                           Latent dim          2
                           L2 regularisation   1e-4
  Value function           Optimiser           Adam
                           Learning rate       3e-4
                           Batch size          256

Figure 4: Visualisation of our two definitions of a neighbourhood. For a transition (s_t, a_t, s_{t+1}) ∈ D, the neighbourhoods are used to reduce the size of the set of candidate next states. (Left) All states within an ε-ball of the current state s_t are taken, and the next state in their respective trajectories (joined by an action shown as an arrow) is added to the set of candidate next states. (Right) All states within an ε-ball of the next state s_{t+1} are added to the set of candidate next states. The full set of candidate next states is highlighted in yellow.

KL-divergence experiment. As the KL-divergence requires a continuous policy, the BC policy network is a 2-layer MLP of size 256 with ReLU activation, but with the final layer outputting the parameters of a Gaussian, μ_s and σ_s. We carry out maximum likelihood estimation using a batch size of 256. For the Walker2d experiments, TS was slightly adapted to only accept new trajectories if they made less than ten changes. For each level of difficulty, TS is run 3 times and the scores are the average of the mean returns over 10 evaluation trajectories of 5 random seeds of BC. To compute the KL-divergence, a continuous expert policy is also required, but TD3 gives a deterministic one. To overcome this, a continuous expert policy is created by assuming a state-dependent normal distribution centred around π*(s) with a standard deviation of 0.01.

D4RL experiments. For the D4RL experiments, we run TS 3 times for each dataset and average the mean returns over 10 evaluation trajectories of 5 random seeds of BC to attain the results for TS+BC. For the BC results, we average the mean returns over 10 evaluation trajectories of 5 random seeds. The BC policy network is a 2-layer MLP of size 256 with ReLU activation; the final layer has tanh activation multiplied by the action dimension. We use the Adam optimiser with a learning rate of 1e-3 and a batch size of 256.

7.5 Number of iterations of TS

TS can be repeated multiple times, each time using a newly estimated value function to take into account the newly generated transitions. In all our experiments, we choose 5 iterations. Figure 5 shows the scores of the D4RL environments over the different iterations, with the standard deviation across seeds shown as the error bar. Iteration 0 indicates the BC score as obtained on the original D4RL datasets. For all datasets, we observe that the average scores of BC increase initially over a few iterations, then remain stable with only some minor random fluctuations. For Hopper and Walker2d medium-replay, there is a higher degree of standard deviation across the seeds, which gives a less stable average as the number of iterations increases.

Figure 5: Returns of BC-extracted policies as the number of iterations of TS is increased. Iteration 0 shows the BC scores on the original D4RL datasets. The error bars represent the standard deviation of the average returns of 10 trajectory evaluations over 5 random seeds of BC and 3 random seeds of TS.

7.6 Ablation study

TS uses a value function to estimate the future returns from any given state; TS+BC therefore has a natural advantage over plain BC, which uses only the states and actions. To test whether using a value function is by itself sufficient to improve the performance of BC, we have tested a weighted version of the BC loss function whereby the weights are given by the estimated value function, i.e.,

  π_BC = argmin_π E_{s,a∼D}[V_θ(s)(π(s) − a)²].

This weighted-BC method gives larger weight to the high-value states and lower weight to the low-value states during training.
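A minimal sketch of one gradient step for this value-weighted baseline, assuming a deterministic policy network and a frozen value net (variable names are ours):

import torch

def weighted_bc_step(policy, V, optimizer, s, a):
    with torch.no_grad():
        w = V(s).squeeze(-1)                 # per-state weights V_theta(s)
    loss = (w * (policy(s) - a).pow(2).sum(dim=-1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()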
On the Hopper medium and medium-expert datasets, training this weighted-BC method only gives a slight improvement over the original BC-cloned policy. For Hopper-medium, weighted-BC achieves an average score of 59.21 (with standard deviation 3.4); this is an improvement over BC (55.3), but lower than TS+BC (64.3). Weighted-BC on hopper-medexp achieves an average score of 66.02 (with standard deviation 6.9); again, this is a slight improvement over BC (62.3), but significantly lower than TS+BC (94.8). The experiments indicate that using a value function to weight the relative importance of seen states when optimising the BC objective function is not sufficient to achieve the performance gains introduced by TS.

<|im_end|> <|im_start|>assistant
### Review Title
Review
### Review Text
This paper presents a data augmentation scheme for BC in the offline setting. The authors propose to produce trajectory stitching based on the likelihood of the next state, the next action, and high values. The authors train a forward dynamics model, an inverse dynamics model, a value function with a reward generator, and a policy. The results show that trajectory stitching + BC can improve BC on D4RL tasks.
Pros:
1. The idea of conducting trajectory stitching for BC is neat and original, since BC usually suffers from suboptimal data where offline RL can excel. This idea can be an effective approach to bridging that gap.
2. The empirical result on D4RL clearly shows that the method can improve over BC.
Cons:
1. The method is pretty complex with many components as listed in the summary, and there are many hyperparameters/models to tune. It is unclear if such a scheme is general enough in all kinds of domains. I suspect that making it work in various settings could be challenging.
2. It is unclear if the approach can outperform simple baselines such as percentage BC, where we simply train on trajectories with the top K% return, and return-weighted BC, e.g., RWR and AWR.
3. The method is still much worse than offline RL on the medium-replay datasets, suggesting that it's not doing a good job of stitching in diverse datasets.
### Review Rating
5: Marginally below acceptance threshold
### Review Confidence
5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature<|im_end|> <|im_end|>
OEgDatKuz2O
ICLR.cc/2021/Conference
2021
EMTL: A Generative Domain Adaptation Approach
["Jianfeng Zhang", "Illyyne Saffar", "Aladin Virmaux", "Bal\u00e1zs K\u00e9gl"]
We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive to discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaption steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase.
["unsupervised domain adaptation", "EM", "generative model", "density estimation", "deep learning", "transfer learning"]
ABSTRACT

We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive to discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaptation steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase.

1 INTRODUCTION

In the classical supervised learning paradigm, we assume that the training and test data come from the same distribution. In practice, this assumption often does not hold. When the pipeline includes massive data labeling, models are routinely retrained after each data collection campaign. However, data labeling costs often make retraining impractical. Without labeled data, it is still possible to train the model by using a training set which is relevant but not identically distributed to the test set. Due to the distribution shift between the training and test sets, the performance usually cannot be guaranteed.

Domain adaptation (DA) is a machine learning subdomain that aims at learning a model from biased training data. It explores the relationship between the source (labeled training data) and target (test data) domains to find the mapping function and fix the bias, so that the model learned on the source data can be applied in the target domain. Usually some target data is needed during the training phase to calibrate the model. In unsupervised domain adaptation (UDA) only unlabeled target data is needed during the training phase. UDA is an appealing learning paradigm since obtaining unlabeled data is usually easy in a lot of applications. UDA allows the model to be deployed in various target domains with different shifts using a single labeled source data set.

Due to these appealing operational features, UDA has become a prominent research field with various approaches. Kouw & Loog (2019) and Zhuang et al. (2020) surveyed the latest progress on UDA and found that most of the approaches are based on discriminative models, either by reweighting the source instances to approximate the target distribution or by learning a feature mapping function to reduce the statistical distance between the source and target domains. After calibrating, a discriminative model is trained on the adjusted source data and used in the target domain. In this workflow, the adaptation algorithm usually has to access the source and target data simultaneously. However, accessing the source data during the adaptation phase is not possible when the source data is sensitive (for example because of security or privacy issues). In particular, in our application workflow an industrial company is selling devices to various service companies which cannot share their customer data with each other.
The industrial company may contract with one of the service companies to access their data during an R&D phase, but this data will not be available when the industrial company sells the device (and the predictive model) to other service companies.

In this paper we propose EMTL, a generative UDA algorithm for binary classification that does not have to access the source data during the adaptation phase. We use density estimation to estimate the joint source probability function p^s(x, y) and the marginal target probability function p^t(x) and use them for domain adaptation. To solve the data security issue, EMTL decouples source density estimation from the adaptation steps. In this way, after the source preprocessing we can put away or delete the source data. Our approach is motivated by the theory on domain adaptation (Ben-David et al., 2010) which claims that the error of a hypothesis h on the target domain can be bounded by three items: the error on the source domain, the distance between source and target distributions, and the expected difference in labeling functions. This theorem motivated us to define a mediator density function p^m(x, y) i) whose conditional probability of y given x is equal to the conditional probability of the source, and ii) whose marginal density on x is equal to the marginal density of the target. We can then construct a Bayes optimal classifier on the target domain under the assumption of covariate shift (the distribution of y given x is the same in the source and target domains).

Our approach became practical with the recent advances in (autoregressive) neural density estimation (Uria et al., 2013). We learn p^m(x, y) from p^s(x, y) and p^t(x) to bridge the gap between the source and target domains. We regard the label on the target data as a latent variable and show that if p^s(x|y=i) is learned perfectly for i ∈ {0, 1}, then a one-step Expectation–Maximization iteration (and this is why our algorithm is named EMTL) will produce a density function p^m(x, y) with the following properties on the target data: i) minimizing the Kullback–Leibler divergence between p^m(y_i|x_i) and p^s(y_i|x_i); ii) maximizing the log-likelihood Σ_i log p^m(x_i). Then, by adding an additional marginal constraint on p^m(x_i) to make it explicitly close to p^t(x_i) on the target data, we obtain the final objective function for EMTL. Although this analysis assumes a simple covariate shift, we will experimentally show that EMTL can go beyond this assumption and work well under other distribution shifts.

We conduct experiments on synthetic and real data to demonstrate the effectiveness of EMTL. First, we construct a simple two-dimensional data set to visualize the performance of EMTL. Second, we use UCI benchmark data sets and the Amazon reviews data set to show that EMTL is competitive with state-of-the-art UDA algorithms, without accessing the source data at the adaptation phase. To our best knowledge, EMTL is the first work using density estimation for unsupervised domain adaptation. Unlike other existing generative approaches (Kingma et al., 2014; Karbalayghareh et al., 2018; Sankaranarayanan et al., 2018), EMTL can decouple the source density estimation process from the adaptation phase and thus it can be used in situations where the source data is not available at the adaptation phase due to security or privacy reasons.

2 RELATED WORK

Zhuang et al. (2020), Kouw & Loog (2019) and Pan & Yang (2009) categorize DA approaches into instance-based and feature-based techniques.
Instance-based approaches reweight labeled source samples according to the ratio between the source and the target densities. Importance weighting methods reweight source samples to reduce the divergence between the source and target densities (Huang et al., 2007; Gretton et al., 2007; Sugiyama et al., 2007). In contrast, class importance weighting methods reweight source samples to make the source and target label distributions the same (Azizzadenesheli et al., 2019; Lipton et al., 2018; Zhang et al., 2013). Feature-based approaches learn a new representation for the source and the target by minimizing the divergence between the source and target distributions. Subspace mapping methods assume that there is a common subspace between the source and target (Fernando et al., 2013; Gong et al., 2012). Courty et al. (2017) proposed to use optimal transport to constrain the learning process of the transformation function. Other methods aim at learning a representation which is domain-invariant among domains (Gong et al., 2016; Pan et al., 2010).

Besides these shallow models, deep learning has also been widely applied in domain adaptation (Tzeng et al., 2017; Ganin et al., 2016; Long et al., 2015). DANN (Ganin et al., 2016) learns a representation using a neural network which is discriminative for the source task while being unable to distinguish the source and target domains from each other. Kingma et al. (2014) and Belhaj et al. (2018) proposed a variational-inference-based semi-supervised learning approach by regarding the missing label as a latent variable and then performing posterior inference.

3 NOTATION AND PROBLEM DEFINITION

We consider the unsupervised domain adaptation problem in a binary classification setting (the setup is trivial to extend to multi-class classification). Let p(x, y) be a joint density function defined on X × Y, where x ∈ R^p is the feature vector and y ∈ {0, 1} is the label. We denote the conditional probability p(y = 1|x) by q(x). A hypothesis or model is a function h : X → [0, 1]. We define the error of h as the expected disagreement between h(x) and q(x), i.e.,

  ε(h) = E_{x∼p} |h(x) − q(x)|.   (1)

We use superscripts s and t to distinguish the source and target domains; that is, p^s(x, y) and p^t(x, y) are the joint density functions in the source and target domains respectively. In general, we assume that p^s(x, y) ≠ p^t(x, y).

Let D^s = {(x_i^s, y_i^s)}_{i=1}^{n_s} and U^t = {x_i^t}_{i=1}^{n_t} be i.i.d. data sets generated from the source distribution p^s(x, y) and the marginal target distribution p^t(x), respectively, where n_s and n_t are the source and target sample sizes. The objective of unsupervised domain adaptation is to learn a model ĥ by using labeled D^s and unlabeled U^t, which achieves the lowest error in the target domain.

4 GENERATIVE APPROACH

Ben-David et al. (2010) proved that the error of a hypothesis h in the target domain, ε^t(h), can be bounded by the sum of the error in the source domain ε^s(h), the distribution distance between the two domains, and the expected L1 distance between the two conditional probabilities.

Theorem 1 (Ben-David et al. (2010), Theorem 1) For a hypothesis h,

  ε^t(h) ≤ ε^s(h) + d_1(p^s(x), p^t(x)) + min{E_{x∼p^s} |q^s(x) − q^t(x)|, E_{x∼p^t} |q^s(x) − q^t(x)|},   (2)

where d_1(p^s(x), p^t(x)) = 2 sup_{B∈ℬ} |Pr^s(B) − Pr^t(B)| is twice the total variation distance of the two domain distributions, and q^s(x) and q^t(x) are the source and target probabilities of y = 1 given x, respectively.

In the covariate shift setting, we assume that the conditional probability p(y|x) is invariant between the source and the target domains.
Thus in the right-hand side of Eq. (2), the third component will be zero, which means that the target error is bounded by the source error plus the distance between the two domains. Many current unsupervised domain adaptation solutions work on how to reduce the distance between the two domain densities. Importance-sampling-based approaches manage to resample the source data to mimic the target data distribution, and feature-mapping-based approaches do that by learning a transformation function φ(x) for the source data. However, both approaches need to access the source and target data simultaneously.

In this paper, we propose a domain adaptation approach based on generative models. First, we learn all multivariate densities using RNADE (Uria et al., 2013), an autoregressive version of Bishop (1994)'s mixture density nets. We found RNADE excellent in learning medium-dimensional densities, and in a certain sense it is RNADE that made our approach feasible. Second, we introduce a mediator joint density function p^m(x, y) that bridges the gap between p^s(x, y) and p^t(x, y). Since the source distribution information is stored in the learned generative model after training, we do not need to access the source data in the adaptation phase.

4.1 DENSITY FUNCTION

Due to recent developments in neural density estimation, we can estimate moderate-dimensional densities efficiently. In this paper, we use the real-valued autoregressive density estimator (RNADE) of Uria et al. (2013). RNADE is an autoregressive version of the mixture density nets of Bishop (1994) which fights the curse of dimensionality by estimating conditional densities, and provides an explicit likelihood by using mixtures of Gaussians.

To estimate p(x), let x = [x_1, x_2, ..., x_p] be a p-dimensional random vector. RNADE decomposes the joint density function using the chain rule and models each p(x_i | x_{<i}) with a mixture of Gaussians whose parameters depend on the observed x_{<i}. Formally,

  p(x) = ∏_{i=1}^{p} p(x_i | x_{<i}) = ∏_{i=1}^{p} Σ_{j=1}^{d} α_j(x_{<i}) N(x_i; μ_j(x_{<i}), σ_j²(x_{<i})),   (3)

where x_{<i} = [x_1, ..., x_{i−1}] and d is the number of Gaussian components. The weights α_j, means μ_j, and variances σ_j are modeled by a single neural net whose architecture makes sure that the parameters for dimension i depend only on x_{<i}. The neural net is trained to maximize the likelihood of the training data. We denote the RNADE model by the function f(x; ω), where ω represents all the parameters (neural net weights) in RNADE, and use it to approximate p(x). The conditional density p(x|y) can be estimated in the same way by just selecting x|y as the training data. In the following sections, we denote the maximum likelihood parameters of p^s(x|y=0), p^s(x|y=1), and p^t(x) by ω_{s0}, ω_{s1}, and ω_t, respectively. We further denote the proportion of class 0 in the source domain by π_{s0} = #{y^s = 0}/n_s. The full parameter vector [ω_{s0}, ω_{s1}, π_{s0}] of p^s(x, y) and p^s(x) is denoted by θ_s.
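To make Eq. (3) concrete, here is a compact PyTorch sketch of an autoregressive Gaussian mixture density estimator in the spirit of RNADE. The per-dimension conditioners and their sizes are our own simplification (the actual RNADE shares parameters across dimensions differently), so treat this as an illustration of the factorization, not the original architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ARMixtureDensity(nn.Module):
    def __init__(self, p, d=10, hidden=30):
        super().__init__()
        self.p = p
        # One small conditioner per dimension; x_1 sees a dummy zero context.
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(max(i, 1), hidden), nn.ReLU(),
                          nn.Linear(hidden, 3 * d))
            for i in range(p))

    def log_prob(self, x):                   # returns log p(x), shape (batch,)
        total = 0.0
        for i in range(self.p):
            ctx = x[:, :i] if i > 0 else torch.zeros(
                x.shape[0], 1, device=x.device)
            alpha, mu, log_sig = self.nets[i](ctx).chunk(3, dim=-1)
            comp = torch.distributions.Normal(mu, log_sig.exp())
            # log sum_j alpha_j N(x_i; mu_j, sigma_j^2), as in Eq. (3)
            total = total + torch.logsumexp(
                F.log_softmax(alpha, dim=-1) + comp.log_prob(x[:, i:i + 1]),
                dim=-1)
        return total

Maximising log_prob over D^s split by class would give ω_{s0} and ω_{s1}, and fitting the same model on U^t would give ω_t.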
4.2 THE MEDIATOR DISTRIBUTION

By Eq. (2), the target error can be bounded by the source error plus the distance between the two marginal distributions plus the expected difference in p(y = 1|x) between the two domains. This motivated us to construct a mediator distribution p^m(x, y) (Figure 1) which has two properties:

- it has the same conditional distribution as the source: p^m(y|x) = p^s(y|x), and
- it has the same marginal distribution as the target: p^m(x) = p^t(x).

Figure 1: p^s(x, y) → p^m(x, y) → p^t(x, y), with p^s(y|x) = p^m(y|x) on the first arrow and p^m(x) = p^t(x) on the second: the mediator has the same conditional probability as the source and the same marginal probability as the target. According to Theorem 1, we will have ε^t(h) ≤ ε^m(h) for any hypothesis h since the last two terms are zero.

In the covariate shift setting, we can then solve the unsupervised domain adaptation problem perfectly: i) the first property forces p(y|x) to be the same in the source and mediator distributions, and in the covariate shift setting we have p^s(y|x) = p^t(y|x), so this property makes p^m(y|x) = p^t(y|x); ii) the second property makes the marginal distributions of the mediator and the target the same, which leads to d_1(p^m(x), p^t(x)) = 0. Under these two conditions, for any model h, we will have ε^t(h) ≤ ε^m(h) since the last two terms of Eq. (2) will be zero. Furthermore, given the mediator distribution p^m(x, y), it is easy to learn the best model (Bayes classifier)

  ĥ(x) = p^m(x|y=1) p^m(y=1) / p^m(x),   (4)

which achieves the tightest bound for the target error. In summary, by introducing the mediator distribution p^m(x, y), we can bound the target error by the mediator error. In the following sections, we will introduce how to learn p^m(x, y) from p^s(x, y) and p^t(x) using the expectation-maximization (EM) algorithm combined with a marginal constraint term.

5 EMTL

If we regard the missing label y as a latent variable that generates the observed x in the target domain, we can use the EM algorithm to infer y. We consider that the target density p(x; θ) is a mixture with two components p(x|y=i; θ) where i ∈ {0, 1}. When θ converges to its limit in EM, we can recover the joint density function p(x, y; θ). We denote this joint density function by p^m(x, y). However, this p^m(x, y) may be far away from the ground truth p^t(x, y). The mismatch comes from two facts: i) EM can easily converge to a bad local minimum because of a bad initialization, and ii) EM tends to find the inner structure (e.g., clusters) of the data, but this structure may be irrelevant to the true label. The local minimum problem is due to parameter initialization, and the structure-label mismatching problem comes from not having a priori information about the label. When we have a fully known source distribution p^s(x, y), these two issues can be solved by selecting a proper initialization plus a constraint on the marginal distribution.

The first observation is that in a lot of cases we can directly use the source model in the target domain and it is better than a random guess. We use this intuition to make the source model p^s(x, y) the initial guess of p^m(x, y). Following Section 4.1, we use RNADE to model p^m(x|y) and denote the parameters of p^m(x, y) by θ_m = [ω_{m0}, ω_{m1}, π_{m0}]. Initializing p^m(x, y) by using p^s(x, y) means we set θ_m^(0), the initial state of θ_m in the EM algorithm, to θ_s. The next EM iterations can be seen as a way to fine-tune θ_m using the target data. In the next sections we will formally analyze this intuitive algorithm.

5.1 ANALYSIS OF θ_m^(1)

First we link the EM algorithm with initial θ_m^(0) = θ_s to Theorem 1. In each iteration, EM alternates between two steps: the E step defines a Q function as Q(θ|θ^(t)) = E_{y|x; θ^(t)} log p(x, y; θ), and the M step does the maximization θ^(t+1) = argmax_θ Q(θ|θ^(t)). After the first EM iteration, we have

  θ_m^(1) = argmax_θ Q(θ|θ_m^(0)) = argmax_θ (1/n_t) Σ_{i=1}^{n_t} E_{y_i|x_i^t; θ_s} log p(x_i^t, y_i; θ).   (5)

Suppose θ_s is learned perfectly from the source data, which means that we can replace p(x, y; θ_m^(0)) by p^s(x, y). Thus the expectation operation in Eq. (5) can be written as

  E_{y_i|x_i^t; θ_s}[ξ] = Σ_{j∈{0,1}} p(y_i = j | x_i^t; θ_s) ξ = Σ_{j∈{0,1}} p^s(y_i = j | x_i^t) ξ   (6)

for any random variable ξ. This expectation links the source distribution with the target.
We rewrite the full expectation expression of Eq. (5) as

  E_{y_i|x_i^t; θ_s} log p(x_i^t, y_i; θ) = Σ_{j∈{0,1}} p^s(y_i = j | x_i^t) log p(x_i^t, y_i = j; θ)
  = −D_KL(p^s(y_i|x_i^t) ‖ p(y_i|x_i^t; θ)) + log p(x_i^t; θ) − H_{p^s}(y_i|x_i^t),   (7)

where H_{p^s}(y_i|x_i^t) is the conditional entropy under the probability p^s. This equation shows that the expected log-likelihood can be decomposed into the sum of three items. The first item is the negative KL-divergence between the two conditional distributions p^s(y_i|x_i^t) and p(y_i|x_i^t; θ); the second item is the target log-likelihood log p(x_i^t; θ); the last item is the negative entropy of the source conditional distribution, which is irrelevant to the parameter θ and so can be ignored during the optimization.

Therefore, by setting θ_m^(0) to θ_s and maximizing the Q function in the first EM iteration, we will get a p^m(x, y) which minimizes the KL-divergence between p^m(y|x) and p^s(y|x) and maximizes log p^m(x). Minimizing the KL-divergence reduces the third term of Eq. (2), and maximizing the log-likelihood forces p^m(x) to move towards p^t(x) implicitly, which reduces the second item of Eq. (2). This suggests that the Bayes classifier p^m(y|x) can be a proper classifier for the target domain.

5.2 MARGINAL CONSTRAINT

In the previous section, we implicitly reduced the distance between p^m(x) and p^t(x) by maximizing the log-likelihood of p(x; θ) on the target data. To further control the target error bound of Eq. (2), we explicitly add a marginal constraint for p^m(x, y) by minimizing the distance between the two marginal distributions. Rather than calculating d_1(p^m(x), p^t(x)) directly, we use the KL-divergence to measure the distance between the two distributions since we can explicitly calculate p^m(x_i^t) and p^t(x_i^t) by using our density estimators. Furthermore, according to Pinsker's inequality (Tsybakov, 2008), we have

  d_1(p^m(x), p^t(x)) ≤ √(2 D_KL(p^m(x) ‖ p^t(x))),   (8)

thus minimizing the KL-divergence also controls d_1(p^m(x), p^t(x)). Since we only have samples x_i^t from the target domain, we use an empirical version of the KL-divergence. The marginal constraint is defined as

  M(θ) = [2 Σ_{i=1}^{n_t} ṗ^t(x_i^t) log( ṗ^t(x_i^t) / ṗ^m(x_i^t) )]^{1/2} = [2 Σ_{i=1}^{n_t} ḟ(x_i^t; ω_t) log( ḟ(x_i^t; ω_t) / ṗ(x_i^t; θ) )]^{1/2},   (9)

where ṗ = p / Σ p and ḟ = f / Σ f are normalized discrete distributions on the target samples.

5.3 OBJECTIVE FUNCTION OF EMTL

By putting the Q and M functions together, we get the objective function

  θ* = argmin_θ −Q(θ|θ_m^(0)) + λ M(θ)   (10)

of our generative domain adaptation approach, where θ_m^(0) = θ_s and λ is a non-negative hyperparameter that controls the trade-off between the two terms.

In real-life scenarios, both p(x) and p(y|x) can be different in the source and target domains, so the covariate shift assumption may be violated. To go beyond this assumption, we need to relax the constraint p^s(y|x) = p^t(y|x) which is used in justifying Q(θ|θ^(0)). As we will show in Section 6, by setting a large λ and doing more iterations, EMTL will reduce the weight on the Q function and allow us to escape from the covariate shift constraint. We summarize the process of EMTL in Algorithm 1.

Algorithm 1: EMTL Algorithm
Result: EMTL classifier p^m(y=1|x)
Initialize θ_s = [ω_{s0}, ω_{s1}, π_{s0}] and ω_t using D^s and U^t, respectively;
Initialize θ_m^(0) by θ_s and set t = 1;
while t ≤ n_itr do
    θ_m^(t) = argmin_θ −Q(θ|θ_m^(t−1)) + λ M(θ);
    t = t + 1;
end
p^m(x, y) = p(x, y; θ_m^(t));
p^m(y=1|x) = p^m(x|y=1) p^m(y=1) / p^m(x) = (1 − π_{m0}^(t)) f(x; ω_{m1}^(t)) / [(1 − π_{m0}^(t)) f(x; ω_{m1}^(t)) + π_{m0}^(t) f(x; ω_{m0}^(t))];
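For illustration only, the per-iteration loss of Eq. (10) could be written as follows, assuming class-conditional density models with a log_prob method (as sketched in Section 4.1), precomputed source responsibilities p^s(y=j|x_i^t), and the log-densities log f(x_i^t; ω_t) of the target marginal model. All variable names are ours, the class-0 weight is treated as a fixed float for simplicity (in EMTL it is part of θ and updated too), and the sign convention follows our reading of Eq. (10).

import math
import torch

def emtl_loss(model0, model1, pi0, resp_src, x_t, log_f_t, lam=1.0):
    # Log joint densities log p(x, y=j; theta) of the two-component mixture.
    log_j0 = math.log(pi0) + model0.log_prob(x_t)
    log_j1 = math.log(1.0 - pi0) + model1.log_prob(x_t)
    # Q term of Eqs. (5)/(7): E_{y|x; theta_s}[log p(x, y; theta)].
    Q = (resp_src[:, 0] * log_j0 + resp_src[:, 1] * log_j1).mean()
    # Marginal constraint of Eq. (9): sqrt(2 KL) between the normalized
    # discrete target and mediator marginals on the target samples
    # (softmax of log-densities yields the normalized distributions).
    log_mix = torch.logsumexp(torch.stack([log_j0, log_j1]), dim=0)
    p_t = torch.softmax(log_f_t, dim=0)      # normalized p^t on samples
    p_m = torch.softmax(log_mix, dim=0)      # normalized p^m on samples
    M = torch.sqrt((2 * (p_t * (p_t / p_m).log()).sum()).clamp(min=0))
    return -Q + lam * M                      # Eq. (10)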
6 EXPERIMENTS

In this section, we present experiments on both synthetic (Section 6.1) and real-life data (Section 6.2) to validate the effectiveness of EMTL.

6.1 EXPERIMENTS ON SYNTHETIC DATA SET

We study the performance of EMTL under conditional shift, where p^s(x|y) ≠ p^t(x|y), using a variant of the inter-twinning moons example (Ganin et al., 2016). In the source domain we generate an upper moon (class 0) and a lower moon (class 1) with 1000 points in each class. In the target domain, we first generate 2000 samples as in the source and then rotate the data by 40° to make the target distribution of x|y different from the source. Figure 2 (left) shows the source and target distributions. In these experiments, we set the number of Gaussian components to 10 and the hidden layer dimension to 30 in the RNADE model.

We set λ to 1 and 200 to illustrate how a large λ helps the model escape from the covariate shift constraint. Figure 2 (upper right) shows the prediction results on the target data using λ = 1. When n_itr = 0, the EMTL classifier is the source Bayes classifier. In the upper moon, the model misclassifies the middle and the tail parts as class 1. This is because, according to the source distribution, these areas are closer to class 1. The same misclassification occurs in the lower moon. As n_itr increases, the misclassification reduces slightly, because the objective function focuses more on optimizing the Q function, thus keeping p(y|x) stable in each iteration. As a contrast, in Figure 2 (bottom right), when setting λ to 200, the first iteration reduces the misclassification significantly and finally the error converges to zero. By setting a large λ, the conclusion of this example is two-fold: i) the p^s(y|x) = p^t(y|x) constraint will be relieved, thus resulting in a better adaptation result, and ii) a one-step iteration will increase the performance significantly, thus suggesting that we do not need too many iterations. According to ii), in our following experiments n_itr is fixed as 1. We show more experimental results using different λs in Appendix A.1 and Figure 3.

Figure 2: Inter-twining moons example. (Left) Samples from the source and target distributions, where there is a 40° rotation in the target; (Right) EMTL results on the target test data under different iterations and λs. A small λ results in a local optimum. A larger λ allows the objective function to escape from the p^s(y|x) = p^t(y|x) constraint, which is wrong in this case.

6.2 EXPERIMENTS ON REAL-LIFE DATA SETS

In this section, we validate EMTL on real-life data sets by comparing its performance with two standard supervised learning and three domain adaptation algorithms. The validation is conducted on three UCI data sets and the Amazon reviews data set. First, we create two benchmarks: the source RF/SVM is the model trained only using source data (as a baseline) and the target RF/SVM is the model trained only using labeled target data (as an upper bound). A random forest (RF) classifier is used on the UCI data sets and a support vector machine (SVM) is used on the Amazon reviews data set. The three DA algorithms are kernel mean matching (KMM, Huang et al. (2007)), subspace alignment (SA, Fernando et al. (2013)) and domain adversarial neural network (DANN, Ganin et al. (2016)). For the UCI data sets, both KMM and SA are based on RF, and for the Amazon reviews data set SVM is used. In KMM, we use an RBF kernel with the kernel width set as the median distance among the data. In DANN, λ is fixed as 0.1. In EMTL, we set the number of components to 5 and the hidden layer size to 10 for the RNADE model, and λ to 1. For each transfer task, five-fold cross validation (CV) is conducted.
In each CV fold, we randomly select 90% of the source samples and 90% of the target samples to train the model. We average the output of the five models and calculate the 95% confidence interval of the mean. For the UCI tasks, the ROC AUC score is the metric used, since we are dealing with imbalanced classification tasks. For the Amazon reviews tasks, accuracy is the metric used. Tables 1 and 2 summarize the experimental results. Numbers marked in bold indicate the top performing DA algorithms (more than one bold means they are not significantly different).

UCI data sets. Three UCI data sets (Abalone, Adult, and Bank Marketing) are used in our experiments (Dua & Graff, 2017; Moro et al., 2014). We preprocess the data first: i) only select numerical features; ii) add uniform noise to smooth the data from integer to real for the Adult and Bank data sets. Since the original goal in these data sets is not transfer learning, we use a variant of the biased sampling approach proposed by Gretton et al. (2009) and Bifet & Gavaldà (2009) to create different domains for each data set. More precisely, for each data set we train an RF classifier to find the most important feature, then sort the data along this feature and split the data in the middle. We regard the first 50% (denoted by A) and second 50% (denoted by B) as the two domains. When doing domain adaptation, we use 75% of the target domain samples to train the model and use the other 25% of the target domain samples as test data. Finally, we use normal quantile transformation to normalize the source and target data sets respectively. Table 3 in Appendix A.2 summarizes the features of the data sets we created for the experiments. Table 1 shows the results on the test data for the UCI data sets. We find that the performance of EMTL is not significantly different from DANN in all tasks (remember that our goal was not to beat the state of the art but to match it, without accessing the source data at the adaptation phase). On the two Adult tasks and Bank B→A, although the average score of EMTL is less than that of Target RF, the differences are small.

Table 1: Experimental results on UCI data sets. AUC (%) is used as a metric.

  Task          Source RF   Target RF   KMM        SA         DANN       EMTL
  Abalone A→B   67.1±1.1    72.7±0.5    66.5±2.2   67.8±0.6   67.5±0.4   65.7±2.8
  Abalone B→A   67.5±1.2    81.2±0.4    59.4±4.6   68.5±2.1   69.5±0.7   70.8±0.7
  Adult A→B     84.4±0.2    84.8±0.2    83.4±0.4   82.8±0.2   84.7±0.1   84.8±0.3
  Adult B→A     82.1±0.1    83.1±0.1    81.3±0.4   81.0±0.2   82.8±0.3   82.7±0.4
  Bank A→B      70.1±0.3    81.5±0.1    69.3±1.1   70.4±0.9   70.8±0.5   70.5±1.7
  Bank B→A      76.7±0.7    83.0±0.6    74.8±0.5   76.6±0.4   78.4±0.2   79.3±0.8

Table 2: Experimental results on the Amazon reviews data set. Accuracy (%) is used as a metric.

  Task   Source SVM   Target SVM   KMM        SA         DANN       EMTL
  B→D    80.0±0.0     79.9±0.1     79.7±0.2   79.9±0.1   79.9±0.0   79.5±0.1
  B→E    70.3±0.1     72.4±0.2     72.9±0.2   73.0±0.2   69.7±0.3   71.5±0.2
  B→K    75.7±0.1     76.2±0.1     76.3±0.0   76.1±0.1   75.7±0.1   76.0±0.1
  D→B    75.5±0.0     75.5±0.1     75.3±0.1   75.3±0.1   75.4±0.1   75.7±0.0
  D→E    71.8±0.1     74.2±0.1     74.6±0.1   74.4±0.0   71.5±0.1   72.3±0.2
  D→K    75.7±0.1     77.0±0.0     76.8±0.1   77.4±0.1   75.6±0.3   76.1±0.2
  E→B    70.3±0.1     71.0±0.1     71.8±0.1   71.4±0.1   70.5±0.0   69.5±0.3
  E→D    72.2±0.0     73.1±0.1     73.1±0.3   73.1±0.1   72.1±0.1   72.7±0.2
  E→K    85.8±0.1     86.2±0.0     83.6±0.8   86.0±0.1   85.8±0.2   85.3±0.1
  K→B    71.5±0.0     71.6±0.1     71.4±0.2   71.5±0.0   71.3±0.1   71.6±0.1
  K→D    70.6±0.0     71.7±0.2     72.6±0.3   72.4±0.1   70.6±0.1   71.6±0.2
  K→E    83.9±0.0     84.3±0.0     84.2±0.1   84.3±0.1   84.0±0.1   83.9±0.2

Amazon reviews. This data set (Ganin et al., 2016) includes four products: books (B), DVD (D), electronics (E) and kitchen (K) reviews from the Amazon website.
Amazon reviews. This data set (Ganin et al., 2016) includes four products: books (B), DVD (D), electronics (E) and kitchen (K) reviews from the Amazon website. Each product (or domain) has 2000 labeled reviews and about 4000 unlabeled reviews. Each review is encoded by a 5000-dimensional feature vector and a binary label (if it is labeled): 0 if its ranking is lower than three stars, and 1 otherwise. We create twelve transfer learning tasks using these four domains. As RNADE is not designed for ultra-high-dimensional cases, we overcome this constraint by reducing the number of features from 5000 to 5 using a feed-forward neural network (FNN). More precisely, for each task we train a 2-hidden-layer FNN on the source data. Then, we cut the last layer and use the trained network to encode both source and target to 5 dimensions.

Table 2: Experimental results on the Amazon reviews data set. Accuracy (%) is used as a metric.

| Task | Source SVM | Target SVM | KMM      | SA       | DANN     | EMTL     |
|------|------------|------------|----------|----------|----------|----------|
| B→D  | 80.0±0.0   | 79.9±0.1   | 79.7±0.2 | 79.9±0.1 | 79.9±0.0 | 79.5±0.1 |
| B→E  | 70.3±0.1   | 72.4±0.2   | 72.9±0.2 | 73.0±0.2 | 69.7±0.3 | 71.5±0.2 |
| B→K  | 75.7±0.1   | 76.2±0.1   | 76.3±0.0 | 76.1±0.1 | 75.7±0.1 | 76.0±0.1 |
| D→B  | 75.5±0.0   | 75.5±0.1   | 75.3±0.1 | 75.3±0.1 | 75.4±0.1 | 75.7±0.0 |
| D→E  | 71.8±0.1   | 74.2±0.1   | 74.6±0.1 | 74.4±0.0 | 71.5±0.1 | 72.3±0.2 |
| D→K  | 75.7±0.1   | 77.0±0.0   | 76.8±0.1 | 77.4±0.1 | 75.6±0.3 | 76.1±0.2 |
| E→B  | 70.3±0.1   | 71.0±0.1   | 71.8±0.1 | 71.4±0.1 | 70.5±0.0 | 69.5±0.3 |
| E→D  | 72.2±0.0   | 73.1±0.1   | 73.1±0.3 | 73.1±0.1 | 72.1±0.1 | 72.7±0.2 |
| E→K  | 85.8±0.1   | 86.2±0.0   | 83.6±0.8 | 86.0±0.1 | 85.8±0.2 | 85.3±0.1 |
| K→B  | 71.5±0.0   | 71.6±0.1   | 71.4±0.2 | 71.5±0.0 | 71.3±0.1 | 71.6±0.1 |
| K→D  | 70.6±0.0   | 71.7±0.2   | 72.6±0.3 | 72.4±0.1 | 70.6±0.1 | 71.6±0.2 |
| K→E  | 83.9±0.0   | 84.3±0.0   | 84.2±0.1 | 84.3±0.1 | 84.0±0.1 | 83.9±0.2 |

Table 2 shows the results on the test data for the Amazon reviews data set. We notice that EMTL is slightly better than DANN on most of the tasks and still comparable with both KMM and SA.

7 CONCLUSIONS AND FUTURE WORK

In this paper, we have presented EMTL, a density-estimation-based unsupervised domain adaptation approach. Thanks to the excellent performance of autoregressive mixture density models (e.g., RNADE) on medium-dimensional problems, EMTL is competitive with state-of-the-art solutions. The advantage of EMTL is to decouple the source density estimation phase from the model adaptation phase: we do not need to access the source data when adapting the model to the target domain. This property allows our solution to be deployed in applications where the source data is not available after preprocessing. In our future work, we aim to extend EMTL to more general cases, including high-dimensional as well as more complex data (e.g., time series).
97XgRYd7qC
An interesting motivation, but more work would be needed.
5: Marginally below acceptance threshold
This paper proposes a novel method for Unsupervised Domain Adaptation (UDA) when the source domain's privacy should be preserved. The authors propose EMTL, a generative method that models multivariate densities using RNADE (Uria et al., 2013) and a mediator joint density function bridging the source and target domains. EMTL achieves performances comparable to those of DANN (Ganin et al., 2016) on a single dataset.

**Pros**
- Unique motivation for UDA and privacy-preserving.
- Well formulated method using RNADE and a mediator density function. In the adaptation phase, the source domain data can be deleted.

**Cons**
- There is a closely related paper for privacy-preserving UDA (Song et al., 2020) published before the deadline of ICLR 2021. The method by Song et al. utilizes a framework of federated learning and encryption. Thus the approaches are different from each other, but the motivation for privacy preservation is close. The authors should compare them quantitatively. Song et al. Privacy-Preserving Unsupervised Domain Adaptation in Federated Setting. IEEE Access, Vol. 8, pp. 143233-143240, 2020.
- Although the adaptation phase does not require the source domain data, a probabilistic function $p^m(y|x)$ should be available. The reviewer is concerned that model inversion attacks, such as (Fredrikson et al., 2015), may violate the source domain's privacy. Fredrikson et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In CCS, 2015.
- It is reasonable to compare EMTL to DANN since both methods have conceptually similar characteristics: matching the data distribution and learning the posterior probabilities of the label given a sample. However, as the authors referred to in the main text, several methods have similar characteristics and much better performance than DANN. It would be good to know if EMTL is complementary to those methods through further experiments.
- Experiments on a single real dataset make it difficult to be convinced of the generality of the UDA performance. Additional experiments on other datasets, such as visual ones, could strengthen the generality.

**Overall rating**
The reviewer is leaning toward rejection, although the motivation is clear. The rating can be upgraded if the authors can solve the cons above.

**Additional comment after rebuttal**
Happy to hear that the authors plan to upgrade their draft. Since the submitted paper is not updated, the reviewer keeps the first rating but also looks forward to reading a revised version at another conference or journal.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title EMTL: A Generative Domain Adaptation Approach ### Paper Abstract We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, a one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive with discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaptation steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase. ### Paper Keywords ["unsupervised domain adaptation", "EM", "generative model", "density estimation", "deep learning", "transfer learning"] ### Paper Content ABSTRACT

We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, a one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to bridge the gap between the source and target domains. The breakthrough is based on modern generative models (autoregressive mixture density nets) that are competitive with discriminative models on moderate-dimensional classification problems. By decoupling the source density estimation from the adaptation steps, we can design a domain adaptation approach where the source data is locked away after being processed only once, opening the door to transfer when data security or privacy concerns impede the use of traditional domain adaptation. We demonstrate that our approach can achieve state-of-the-art performance on synthetic and real data sets, without accessing the source data at the adaptation phase.

1 INTRODUCTION

In the classical supervised learning paradigm, we assume that the training and test data come from the same distribution. In practice, this assumption often does not hold. When the pipeline includes massive data labeling, models are routinely retrained after each data collection campaign. However, data labeling costs often make retraining impractical. Without labeled data, it is still possible to train the model by using a training set which is relevant but not identically distributed to the test set. Due to the distribution shift between the training and test sets, the performance usually cannot be guaranteed.

Domain adaptation (DA) is a machine learning subdomain that aims at learning a model from biased training data. It explores the relationship between the source (labeled training data) and target (test data) domains to find the mapping function and fix the bias, so that the model learned on the source data can be applied in the target domain. Usually some target data is needed during the training phase to calibrate the model.
In unsupervised domain adaptation (UDA), only unlabeled target data is needed during the training phase. UDA is an appealing learning paradigm since obtaining unlabeled data is usually easy in a lot of applications. UDA allows the model to be deployed in various target domains with different shifts using a single labeled source data set. Due to these appealing operational features, UDA has become a prominent research field with various approaches. Kouw & Loog (2019) and Zhuang et al. (2020) surveyed the latest progress on UDA and found that most of the approaches are based on discriminative models, either by reweighting the source instances to approximate the target distribution or by learning a feature mapping function to reduce the statistical distance between the source and target domains. After calibrating, a discriminative model is trained on the adjusted source data and used in the target domain. In this workflow, the adaptation algorithm usually has to access the source and target data simultaneously. However, accessing the source data during the adaptation phase is not possible when the source data is sensitive (for example because of security or privacy issues). In particular, in our application workflow an industrial company is selling devices to various service companies which cannot share their customer data with each other. The industrial company may contract with one of the service companies to access their data during an R&D phase, but this data will not be available when the industrial company sells the device (and the predictive model) to other service companies.

In this paper we propose EMTL, a generative UDA algorithm for binary classification that does not have to access the source data during the adaptation phase. We use density estimation to estimate the joint source probability function $p^s(x,y)$ and the marginal target probability function $p^t(x)$ and use them for domain adaptation. To solve the data security issue, EMTL decouples source density estimation from the adaptation steps. In this way, after the source preprocessing we can put away or delete the source data. Our approach is motivated by the theory on domain adaptation (Ben-David et al., 2010) which claims that the error of a hypothesis $h$ on the target domain can be bounded by three items: the error on the source domain, the distance between the source and target distributions, and the expected difference in labeling functions. This theorem motivated us to define a mediator density function $p^m(x,y)$ i) whose conditional probability of $y|x$ is equal to the conditional probability of the source and ii) whose marginal density on $x$ is equal to the marginal density of the target. We can then construct a Bayes optimal classifier on the target domain under the assumption of covariate shift (the distribution of $y|x$ is the same in the source and target domains).

Our approach became practical with the recent advances in (autoregressive) neural density estimation (Uria et al., 2013). We learn $p^m(x,y)$ from $p^s(x,y)$ and $p^t(x)$ to bridge the gap between the source and target domains. We regard the label on the target data as a latent variable and show that, if $p^s(x|y=i)$ can be learned perfectly for $i \in \{0,1\}$, then a one-step Expectation–Maximization iteration (this is why our algorithm is named EMTL) will produce a density function $p^m(x,y)$ with the following properties on the target data: i) minimizing the Kullback–Leibler divergence between $p^m(y_i|x_i)$ and $p^s(y_i|x_i)$; ii) maximizing the log-likelihood $\sum_i \log p^m(x_i)$.
Then, by adding an additional marginal constraint on $p^m(x_i)$ to make it explicitly close to $p^t(x_i)$ on the target data, we obtain the final objective function for EMTL. Although this analysis assumes a simple covariate shift, we will experimentally show that EMTL can go beyond this assumption and work well under other distribution shifts. We conduct experiments on synthetic and real data to demonstrate the effectiveness of EMTL. First, we construct a simple two-dimensional data set to visualize the performance of EMTL. Second, we use UCI benchmark data sets and the Amazon reviews data set to show that EMTL is competitive with state-of-the-art UDA algorithms, without accessing the source data at the adaptation phase. To the best of our knowledge, EMTL is the first work using density estimation for unsupervised domain adaptation. Unlike other existing generative approaches (Kingma et al., 2014; Karbalayghareh et al., 2018; Sankaranarayanan et al., 2018), EMTL can decouple the source density estimation process from the adaptation phase and thus it can be used in situations where the source data is not available at the adaptation phase due to security or privacy reasons.

2 RELATED WORK

Zhuang et al. (2020), Kouw & Loog (2019) and Pan & Yang (2009) categorize DA approaches into instance-based and feature-based techniques. Instance-based approaches reweight labeled source samples according to the ratio between the source and the target densities. Importance weighting methods reweight source samples to reduce the divergence between the source and target densities (Huang et al., 2007; Gretton et al., 2007; Sugiyama et al., 2007). In contrast, class importance weighting methods reweight source samples to make the source and target label distributions the same (Azizzadenesheli et al., 2019; Lipton et al., 2018; Zhang et al., 2013). Feature-based approaches learn a new representation for the source and the target by minimizing the divergence between the source and target distributions. Subspace mapping methods assume that there is a common subspace between the source and target (Fernando et al., 2013; Gong et al., 2012). Courty et al. (2017) proposed to use optimal transport to constrain the learning process of the transformation function. Other methods aim at learning a representation which is domain-invariant among domains (Gong et al., 2016; Pan et al., 2010). Besides these shallow models, deep learning has also been widely applied in domain adaptation (Tzeng et al., 2017; Ganin et al., 2016; Long et al., 2015). DANN (Ganin et al., 2016) learns a representation using a neural network which is discriminative for the source task while being unable to distinguish the source and target domains from each other. Kingma et al. (2014) and Belhaj et al. (2018) proposed a variational-inference-based semi-supervised learning approach by regarding the missing label as a latent variable and then performing posterior inference.

3 NOTATION AND PROBLEM DEFINITION

We consider the unsupervised domain adaptation problem in a binary classification setting (the setup is trivial to extend to multi-class classification). Let $p(x,y)$ be a joint density function defined on $\mathcal{X} \times \mathcal{Y}$, where $x \in \mathbb{R}^p$ is the feature vector and $y \in \{0,1\}$ is the label. We denote the conditional probability $p(y=1|x)$ by $q(x)$. A hypothesis or model is a function $h: \mathcal{X} \mapsto [0,1]$.
We define the error of $h$ as the expected disagreement between $h(x)$ and $q(x)$, i.e.,

$\epsilon(h) = \mathbb{E}_{x \sim p}\,|h(x) - q(x)|. \quad (1)$

We use superscripts $s$ and $t$ to distinguish the source and target domains; that is, $p^s(x,y)$ and $p^t(x,y)$ are the joint density functions in the source and target domains respectively. In general, we assume that $p^s(x,y) \neq p^t(x,y)$. Let $\mathcal{D}^s = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ and $\mathcal{U}^t = \{x_i^t\}_{i=1}^{n_t}$ be i.i.d. data sets generated from the source distribution $p^s(x,y)$ and the marginal target distribution $p^t(x)$, respectively, where $n_s$ and $n_t$ are the source and target sample sizes. The objective of unsupervised domain adaptation is to learn a model $\hat{h}$ using the labeled $\mathcal{D}^s$ and the unlabeled $\mathcal{U}^t$ which achieves the lowest error in the target domain.

4 GENERATIVE APPROACH

Ben-David et al. (2010) proved that the error of a hypothesis $h$ in the target domain, $\epsilon^t(h)$, can be bounded by the sum of the error in the source domain $\epsilon^s(h)$, the distribution distance between the two domains, and the expected $L_1$ distance between the two conditional probabilities.

Theorem 1 (Ben-David et al. (2010), Theorem 1) For a hypothesis $h$,

$\epsilon^t(h) \le \epsilon^s(h) + d_1(p^s(x), p^t(x)) + \min\{\mathbb{E}_{x \sim p^s}|q^s(x) - q^t(x)|,\ \mathbb{E}_{x \sim p^t}|q^s(x) - q^t(x)|\}, \quad (2)$

where $d_1(p^s(x), p^t(x)) = 2 \sup_{B \in \mathcal{B}} |\Pr_s(B) - \Pr_t(B)|$ is twice the total variation distance of the two domain distributions and $q^s(x)$ and $q^t(x)$ are the source and target probabilities of $y = 1|x$, respectively.

In the covariate shift setting, we assume that the conditional probability $p(y|x)$ is invariant between the source and the target domains. Thus, on the right hand side of Eq. (2), the third component will be zero, which means that the target error is bounded by the source error plus the distance between the two domains. Many current unsupervised domain adaptation solutions work on how to reduce the distance between the two domain densities. Importance-sampling-based approaches manage to resample the source data to mimic the target data distribution, and feature-mapping-based approaches do that by learning a transformation function $\phi(x)$ for the source data. However, both approaches need to access the source and target data simultaneously.

In this paper, we propose a domain adaptation approach based on generative models. First, we learn all multivariate densities using RNADE (Uria et al., 2013), an autoregressive version of Bishop (1994)'s mixture density nets. We found RNADE excellent at learning medium-dimensional densities, and in a certain sense it is RNADE that made our approach feasible. Second, we introduce a mediator joint density function $p^m(x,y)$ that bridges the gap between $p^s(x,y)$ and $p^t(x,y)$. Since the source distribution information is stored in the learned generative model after training, we do not need to access the source data in the adaptation phase.

4.1 DENSITY FUNCTION

Due to recent developments in neural density estimation, we can estimate moderate-dimensional densities efficiently. In this paper, we use the real-valued autoregressive density estimator (RNADE) of Uria et al. (2013). RNADE is an autoregressive version of the mixture density nets of Bishop (1994) which fights the curse of dimensionality by estimating conditional densities, and provides an explicit likelihood by using mixtures of Gaussians. To estimate $p(x)$, let $x = [x_1, x_2, \dots, x_p]$ be a $p$-dimensional random vector. RNADE decomposes the joint density function using the chain rule and models each $p(x_i|x_{<i})$ with a mixture of Gaussians whose parameters depend on the observed $x_{<i}$. Formally,

$p(x) = \prod_{i=1}^{p} p(x_i|x_{<i}) = \prod_{i=1}^{p} \sum_{j=1}^{d} \alpha_j(x_{<i})\,\mathcal{N}(x_i;\ \mu_j(x_{<i}),\ \sigma_j^2(x_{<i})), \quad (3)$

where $x_{<i} = [x_1, \dots, x_{i-1}]$ and $d$ is the number of Gaussian components.
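As an illustration of Equation 3, here is a minimal NumPy sketch of evaluating such an autoregressive mixture density. The callables `alpha`, `mu`, and `sigma` are hypothetical stand-ins for the RNADE network outputs (they must also handle the empty prefix at $i = 1$); this is not the authors' implementation.

```python
import numpy as np

def log_density(x, alpha, mu, sigma):
    """log p(x) = sum_i log sum_j alpha_j(x_<i) N(x_i; mu_j(x_<i), sigma_j(x_<i)^2).

    alpha, mu, sigma: callables mapping the prefix x_<i to d-vectors of
    mixture weights, means and standard deviations (stand-ins for the net).
    """
    logp = 0.0
    for i in range(len(x)):
        prefix = x[:i]
        a, m, s = alpha(prefix), mu(prefix), sigma(prefix)
        comp = np.log(a) - 0.5 * np.log(2 * np.pi * s**2) - (x[i] - m)**2 / (2 * s**2)
        logp += np.logaddexp.reduce(comp)  # log-sum-exp over the d components
    return logp
```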
The weights $\alpha_j(x_{<i})$, means $\mu_j(x_{<i})$, and variances $\sigma_j^2(x_{<i})$ in Equation 3 are modeled by a single neural net whose architecture makes sure that each parameter depends only on $x_{<i}$. The neural net is trained to maximize the likelihood of the training data. We denote the RNADE model by the function $f(x;\omega)$, where $\omega$ represents all the parameters (neural net weights) in RNADE, and use it to approximate $p(x)$. The conditional density $p(x|y)$ can be estimated in the same way by just selecting $x|y$ as the training data. In the following sections, we denote the maximum likelihood parameters of $p^s(x|y=0)$, $p^s(x|y=1)$, and $p^t(x)$ by $\omega^{s0}$, $\omega^{s1}$, and $\omega^t$, respectively. We further denote the proportion of class 0 in the source domain by $\pi^{s0} = \frac{\#\{y^s = 0\}}{n_s}$. The full parameter vector $[\omega^{s0}, \omega^{s1}, \pi^{s0}]$ of $p^s(x,y)$ and $p^s(x)$ is denoted by $\theta^s$.

4.2 THE MEDIATOR DISTRIBUTION

By Eq. (2), the target error can be bounded by the source error plus the distance between the two marginal distributions plus the expected difference in $p(y=1|x)$ between the two domains. This motivated us to construct a mediator distribution $p^m(x,y)$ (Figure 1) which has two properties: it has the same conditional distribution as the source, $p^m(y|x) = p^s(y|x)$, and it has the same marginal distribution as the target, $p^m(x) = p^t(x)$.

[Figure 1: $p^s(x,y) \rightarrow p^m(x,y) \rightarrow p^t(x,y)$, with $p^s(y|x) = p^m(y|x)$ and $p^m(x) = p^t(x)$. The mediator has the same conditional probability as the source and the same marginal probability as the target. According to Theorem 1, we will have $\epsilon^t(h) \le \epsilon^m(h)$ for any hypothesis $h$ since the last two terms are zero.]

In the covariate shift setting, we can then solve the unsupervised domain adaptation problem perfectly: i) the first property forces $p(y|x)$ to be the same in the source and mediator distributions, and in the covariate shift setting we have $p^s(y|x) = p^t(y|x)$, so this property makes $p^m(y|x) = p^t(y|x)$; ii) the second property makes the marginal distributions of the mediator and the target the same, which leads to $d_1(p^m(x), p^t(x)) = 0$. Under these two conditions, for any model $h$, we will have $\epsilon^t(h) \le \epsilon^m(h)$ since the last two terms of Eq. (2) will be zero. Furthermore, given the mediator distribution $p^m(x,y)$, it is easy to learn the best model (Bayes classifier)

$\hat{h}(x) = \frac{p^m(x|y=1)\,p^m(y=1)}{p^m(x)}, \quad (4)$

which achieves the tightest bound for the target error. In summary, by introducing the mediator distribution $p^m(x,y)$, we can bound the target error by the mediator error. In the following sections, we will introduce how to learn $p^m(x,y)$ from $p^s(x,y)$ and $p^t(x)$ using the expectation-maximization (EM) algorithm combined with a marginal constraint term.

5 EMTL

If we regard the missing label $y$ as a latent variable that generates the observed $x$ in the target domain, we can use the EM algorithm to infer $y$. We consider that the target density $p(x;\theta)$ is a mixture with two components $p(x|y=i;\theta)$ where $i \in \{0,1\}$. When $\theta$ converges to its limit in EM, we can recover the joint density function $p(x,y;\theta)$. We denote this joint density function by $p^m(x,y)$. However, this $p^m(x,y)$ may be far away from the ground truth $p^t(x,y)$. The mismatch comes from two facts: i) EM can easily converge to a bad local minimum because of a bad initialization, and ii) EM tends to find the inner structure (e.g., clusters) of the data, but this structure may be irrelevant to the true label. The local minimum problem is due to parameter initialization, and the structure-label mismatching problem comes from not having a priori information about the label.
When we have a fully known source distribution $p^s(x,y)$, these two issues can be solved by selecting a proper initialization plus a constraint on the marginal distribution. The first observation is that in a lot of cases we can directly use the source model in the target domain and it is better than a random guess. We use this intuition to make the source model $p^s(x,y)$ the initial guess of $p^m(x,y)$. Following Section 4.1, we use RNADE to model $p^m(x|y)$ and denote the parameters of $p^m(x,y)$ by $\theta^m = [\omega^{m0}, \omega^{m1}, \pi^{m0}]$. Initializing $p^m(x,y)$ by using $p^s(x,y)$ means we set $\theta_m^{(0)}$, the initial state of $\theta^m$ in the EM algorithm, to $\theta^s$. The next EM iterations can be seen as a way to fine-tune $\theta^m$ using the target data. In the next sections we formally analyze this intuitive algorithm.

5.1 ANALYSIS OF $\theta_m^{(1)}$

First we link the EM algorithm with initial $\theta_m^{(0)} = \theta^s$ to Theorem 1. In each iteration, EM alternates between two steps: the E step defines a $Q$ function as $Q(\theta|\theta^{(t)}) = \mathbb{E}_{y|x,\theta^{(t)}} \log p(\theta; x, y)$ and the M step does the maximization $\theta^{(t+1)} = \arg\max_{\theta} Q(\theta|\theta^{(t)})$. After the first EM iteration, we have

$\theta_m^{(1)} = \arg\max_{\theta} Q(\theta|\theta_m^{(0)}) = \arg\max_{\theta} \frac{1}{n_t}\sum_{i=1}^{n_t} \mathbb{E}_{y_i|x_i^t,\theta^s} \log p(x_i^t, y_i; \theta). \quad (5)$

Suppose $\theta^s$ is learned perfectly from the source data, which means that we can replace $p(x, y; \theta_m^{(0)})$ by $p^s(x,y)$. Thus the expectation operation in Eq. (5) can be written as

$\mathbb{E}_{y_i|x_i^t,\theta^s}[\Delta] = \sum_{j\in\{0,1\}} p(y_i = j|x_i^t, \theta^s)\,\Delta = \sum_{j\in\{0,1\}} p^s(y_i = j|x_i^t)\,\Delta \quad (6)$

for any random variable $\Delta$. This expectation links the source distribution with the target. We rewrite the full expectation expression of Eq. (5) as

$\mathbb{E}_{y_i|x_i^t,\theta^s} \log p(x_i^t, y_i;\theta) = \sum_{j\in\{0,1\}} p^s(y_i = j|x_i^t)\log p(x_i^t, y_i = j;\theta) = -D_{KL}(p^s(y_i|x_i^t)\,\|\,p(y_i|x_i^t;\theta)) + \log p(x_i^t;\theta) - H_{p^s}(y_i|x_i^t), \quad (7)$

where $H_{p^s}(y_i|x_i^t)$ is the conditional entropy under the probability $p^s$. This equation shows that the expected log-likelihood can be decomposed into the sum of three items: the first item is the negative KL-divergence between the two conditional distributions $p^s(y_i|x_i^t)$ and $p(y_i|x_i^t;\theta)$; the second item is the target log-likelihood $\log p(x_i^t;\theta)$; the last item is the negative entropy of the source conditional distribution, which is irrelevant to the parameter $\theta$ and so can be ignored during the optimization.

Therefore, by setting $\theta_m^{(0)}$ as $\theta^s$ and maximizing the $Q$ function in the first EM iteration, we will get a $p^m(x,y)$ which minimizes the KL-divergence of $p^m(y|x)$ from $p^s(y|x)$ and maximizes $\log p^m(x)$. Minimizing the KL-divergence reduces the third term of Eq. (2), and maximizing the log-likelihood forces $p^m(x)$ to move towards $p^t(x)$ implicitly, which reduces the second item of Eq. (2). This suggests that the Bayes classifier $p^m(y|x)$ can be a proper classifier for the target domain.

5.2 MARGINAL CONSTRAINT

In the previous section, we implicitly reduce the distance between $p^m(x)$ and $p^t(x)$ by maximizing the log-likelihood of $p(x;\theta)$ on the target data. To further control the target error bound of Eq. (2), we explicitly add a marginal constraint for $p^m(x,y)$ by minimizing the distance between the two marginal distributions. Rather than calculating $d_1(p^m(x), p^t(x))$ directly, we use the KL-divergence to measure the distance between the two distributions, since we can explicitly calculate $p^m(x_i^t)$ and $p^t(x_i^t)$ using our density estimators. Furthermore, according to Pinsker's inequality (Tsybakov, 2008), we have

$d_1(p^m(x), p^t(x)) \le \sqrt{2\, D_{KL}(p^m(x)\,\|\,p^t(x))}, \quad (8)$

thus minimizing the KL-divergence also controls $d_1(p^m(x), p^t(x))$.
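A minimal sketch of the quantities in Equations 4–6: the source-posterior responsibilities $p^s(y = j|x^t)$ that weight the $Q$ function, and the resulting mediator classifier. The density objects exposing a `pdf` method are our own illustrative interface assumption, not the authors' code.

```python
def responsibilities(x_t, p_x_given_y0, p_x_given_y1, pi0):
    """p(y=j | x; theta) for a two-component mixture (Eq. 6), j in {0, 1}."""
    w0 = pi0 * p_x_given_y0.pdf(x_t)          # pi^0 * p(x | y=0)
    w1 = (1.0 - pi0) * p_x_given_y1.pdf(x_t)  # (1-pi^0) * p(x | y=1)
    z = w0 + w1                               # = p(x; theta)
    return w0 / z, w1 / z

def classify(x_t, p_x_given_y0, p_x_given_y1, pi0, threshold=0.5):
    """Bayes classifier of Eq. 4: predict 1 iff p(y=1 | x) > threshold."""
    _, r1 = responsibilities(x_t, p_x_given_y0, p_x_given_y1, pi0)
    return (r1 > threshold).astype(int)
```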
Since we only have samples $x_i^t$ from the target domain, we use an empirical version of the KL-divergence. The marginal constraint is defined as

$M(\theta) = \left[2\sum_{i=1}^{n_t} \dot{p}^t(x_i^t)\,\log\frac{\dot{p}^t(x_i^t)}{\dot{p}^m(x_i^t)}\right]^{\frac{1}{2}} = \left[2\sum_{i=1}^{n_t} \dot{f}(x_i^t;\omega^t)\,\log\frac{\dot{f}(x_i^t;\omega^t)}{\dot{p}(x_i^t;\theta)}\right]^{\frac{1}{2}}, \quad (9)$

where $\dot{p} = p/\sum p$ and $\dot{f} = f/\sum f$ are normalized discrete distributions on the target samples.

5.3 OBJECTIVE FUNCTION OF EMTL

By putting the $Q$ and $M$ functions together, we get the objective function

$\theta = \arg\min_{\theta} -Q(\theta|\theta_m^{(0)}) + \lambda M(\theta) \quad (10)$

of our generative domain adaptation approach, where $\theta_m^{(0)} = \theta^s$ and $\lambda$ is a non-negative hyperparameter that controls the trade-off of the two terms. In real-life scenarios, both $p(x)$ and $p(y|x)$ can be different in the source and target domains, so the covariate shift assumption may be violated. To go beyond this assumption, we need to relax the constraint $p^s(y|x) = p^t(y|x)$ which is used in justifying $Q(\theta|\theta^{(0)})$. As we will show in Section 6, by setting a large $\lambda$ and doing more iterations, EMTL will reduce the weight on the $Q$ function and allow us to escape from covariate shift constraints. We summarize the process of EMTL in Algorithm 1.

Algorithm 1: EMTL Algorithm
Result: EMTL classifier $p^m(y=1|x)$
  Initialize $\theta^s = [\omega^{s0}, \omega^{s1}, \pi^{s0}]$ and $\omega^t$ using $\mathcal{D}^s$ and $\mathcal{U}^t$, respectively;
  Initialize $\theta_m^{(0)}$ by $\theta^s$ and set $t = 1$;
  while $t \le n_{itr}$ do
    $\theta_m^{(t)} = \arg\min_{\theta} -Q(\theta|\theta_m^{(t-1)}) + \lambda M(\theta)$;
    $t = t + 1$;
  end
  $p^m(x,y) = p(x,y;\theta_m^{(t)})$;
  $p^m(y=1|x) = \frac{p^m(x|y=1)\,p^m(y=1)}{p^m(x)} = \frac{(1-\pi_m^{(t)0})\,f(x;\omega_m^{(t)1})}{(1-\pi_m^{(t)0})\,f(x;\omega_m^{(t)1}) + \pi_m^{(t)0}\,f(x;\omega_m^{(t)0})}$

6 EXPERIMENTS

In this section, we present experiments on both synthetic (Section 6.1) and real-life data (Section 6.2) to validate the effectiveness of EMTL.

6.1 EXPERIMENTS ON SYNTHETIC DATA SET

We study the performance of EMTL under conditional shift, where $p^s(x|y) \neq p^t(x|y)$, using a variant of the inter-twinning moons example (Ganin et al., 2016). In the source domain we generate an upper moon (class 0) and a lower moon (class 1) with 1000 points in each class. In the target domain, we first generate 2000 samples as in the source and then rotate the data by 40° to make the target distribution of $x|y$ different from the source. Figure 2 (left) shows the source and target distributions. In this experiment, we set the number of Gaussian components to 10 and the hidden layer dimension to 30 in the RNADE model. We set $\lambda$ to 1 and 200 to illustrate how a large $\lambda$ helps the model to escape from the covariate shift constraint. Figure 2 (upper right) shows the prediction results on the target data using $\lambda = 1$. When $n_{itr} = 0$, the EMTL classifier is the source Bayes classifier. In the upper moon, the model misclassifies the middle and the tail parts as class 1. This is because, according to the source distribution, these areas are closer to class 1. The same misclassification occurs in the lower moon.

[Figure 2: Inter-twining moons example. (Left) Samples from the source and target distributions, where there is a 40° rotation in the target; (Right) EMTL results on the target test data under different iterations and $\lambda$s. A small $\lambda$ results in a local optimum. A larger $\lambda$ allows the objective function to escape from the $p^s(y|x) = p^t(y|x)$ constraint, which is wrong in this case.]

As $n_{itr}$ increases, the misclassification reduces slightly, because the objective function focuses more on optimizing the $Q$ function, thus keeping $p(y|x)$ stable in each iteration. In contrast, in Figure 2 (bottom right), when setting $\lambda$ to 200, the first iteration reduces the misclassification significantly and finally the error converges to zero.
By setting a large $\lambda$, the conclusion of this example is two-fold: i) the $p^s(y|x) = p^t(y|x)$ constraint is relieved, resulting in a better adaptation result, and ii) a one-step iteration increases the performance significantly, suggesting that we do not need many iterations. According to ii), in our following experiments $n_{itr}$ is fixed as 1. We show more experimental results using different $\lambda$s in Appendix A.1 and Figure 3.

6.2 EXPERIMENTS ON REAL-LIFE DATA SETS

In this section, we validate EMTL on real-life data sets by comparing its performance with two standard supervised learning and three domain adaptation algorithms. The validation is conducted on three UCI data sets and the Amazon reviews data set. First, we create two benchmarks: the source RF/SVM is the model trained only using source data (as a baseline) and the target RF/SVM is the model trained only using labeled target data (as an upper bound). A random forest (RF) classifier is used on the UCI data sets and a support vector machine (SVM) is used on the Amazon reviews data set. The three DA algorithms are kernel mean matching (KMM, Huang et al. (2007)), subspace alignment (SA, Fernando et al. (2013)) and domain adversarial neural network (DANN, Ganin et al. (2016)). For the UCI data sets, both KMM and SA are based on RF, and for the Amazon reviews data set SVM is used. In KMM, we use an RBF kernel with the kernel width set as the median distance among the data. In DANN, $\lambda$ is fixed as 0.1. In EMTL, we set the number of components to 5 and the hidden layer size to 10 for the RNADE model, and $\lambda$ to 1. For each transfer task, five-fold cross validation (CV) is conducted. In each CV fold, we randomly select 90% of the source samples and 90% of the target samples respectively to train the model. We average the output of the five models and calculate the 95% confidence interval of the mean. For the UCI tasks, the ROC AUC score is the metric used, since we are dealing with imbalanced classification tasks. For the Amazon reviews tasks, accuracy is the metric used. Tables 1 and 2 summarize the experimental results. Numbers marked in bold indicate the top performing DA algorithms (more than one bold means they are not significantly different).

UCI data sets. Three UCI data sets (Abalone, Adult, and Bank Marketing) are used in our experiments (Dua & Graff, 2017; Moro et al., 2014). We preprocess the data first: i) we only select numerical features; ii) we add uniform noise to smooth the data from integer to real for the Adult and Bank data sets. Since the original goal in these data sets is not transfer learning, we use a variant of the biased sampling approach proposed by Gretton et al. (2009) and Bifet & Gavaldà (2009) to create different domains for each data set. More precisely, for each data set we train an RF classifier to find the most important feature, then sort the data along this feature and split the data in the middle. We regard the first 50% (denoted by A) and the second 50% (denoted by B) as the two domains. When doing domain adaptation, we use 75% of the target domain samples to train the model and use the other 25% of the target domain samples as test data. Finally, we use a normal quantile transformation to normalize the source and target data sets respectively. Table 3 in Appendix A.2 summarizes the features of the data sets we created for the experiments.

Table 1: Experimental results on UCI data sets. AUC (%) is used as a metric.

| Task        | Source RF | Target RF | KMM      | SA       | DANN     | EMTL     |
|-------------|-----------|-----------|----------|----------|----------|----------|
| Abalone A→B | 67.1±1.1  | 72.7±0.5  | 66.5±2.2 | 67.8±0.6 | 67.5±0.4 | 65.7±2.8 |
| Abalone B→A | 67.5±1.2  | 81.2±0.4  | 59.4±4.6 | 68.5±2.1 | 69.5±0.7 | 70.8±0.7 |
| Adult A→B   | 84.4±0.2  | 84.8±0.2  | 83.4±0.4 | 82.8±0.2 | 84.7±0.1 | 84.8±0.3 |
| Adult B→A   | 82.1±0.1  | 83.1±0.1  | 81.3±0.4 | 81.0±0.2 | 82.8±0.3 | 82.7±0.4 |
| Bank A→B    | 70.1±0.3  | 81.5±0.1  | 69.3±1.1 | 70.4±0.9 | 70.8±0.5 | 70.5±1.7 |
| Bank B→A    | 76.7±0.7  | 83.0±0.6  | 74.8±0.5 | 76.6±0.4 | 78.4±0.2 | 79.3±0.8 |

Table 1 shows the results on the test data for the UCI data sets. We find that the performance of EMTL is not significantly different from DANN on all tasks (remember that our goal was not to beat the state of the art but to match it, without accessing the source data at the adaptation phase). On the two Adult tasks and Bank B→A, although the average score of EMTL is less than that of Target RF, the differences are small.
Amazon reviews. This data set (Ganin et al., 2016) includes four products: books (B), DVD (D), electronics (E) and kitchen (K) reviews from the Amazon website. Each product (or domain) has 2000 labeled reviews and about 4000 unlabeled reviews. Each review is encoded by a 5000-dimensional feature vector and a binary label (if it is labeled): 0 if its ranking is lower than three stars, and 1 otherwise. We create twelve transfer learning tasks using these four domains. As RNADE is not designed for ultra-high-dimensional cases, we overcome this constraint by reducing the number of features from 5000 to 5 using a feed-forward neural network (FNN). More precisely, for each task we train a 2-hidden-layer FNN on the source data. Then, we cut the last layer and use the trained network to encode both source and target to 5 dimensions.

Table 2: Experimental results on the Amazon reviews data set. Accuracy (%) is used as a metric.

| Task | Source SVM | Target SVM | KMM      | SA       | DANN     | EMTL     |
|------|------------|------------|----------|----------|----------|----------|
| B→D  | 80.0±0.0   | 79.9±0.1   | 79.7±0.2 | 79.9±0.1 | 79.9±0.0 | 79.5±0.1 |
| B→E  | 70.3±0.1   | 72.4±0.2   | 72.9±0.2 | 73.0±0.2 | 69.7±0.3 | 71.5±0.2 |
| B→K  | 75.7±0.1   | 76.2±0.1   | 76.3±0.0 | 76.1±0.1 | 75.7±0.1 | 76.0±0.1 |
| D→B  | 75.5±0.0   | 75.5±0.1   | 75.3±0.1 | 75.3±0.1 | 75.4±0.1 | 75.7±0.0 |
| D→E  | 71.8±0.1   | 74.2±0.1   | 74.6±0.1 | 74.4±0.0 | 71.5±0.1 | 72.3±0.2 |
| D→K  | 75.7±0.1   | 77.0±0.0   | 76.8±0.1 | 77.4±0.1 | 75.6±0.3 | 76.1±0.2 |
| E→B  | 70.3±0.1   | 71.0±0.1   | 71.8±0.1 | 71.4±0.1 | 70.5±0.0 | 69.5±0.3 |
| E→D  | 72.2±0.0   | 73.1±0.1   | 73.1±0.3 | 73.1±0.1 | 72.1±0.1 | 72.7±0.2 |
| E→K  | 85.8±0.1   | 86.2±0.0   | 83.6±0.8 | 86.0±0.1 | 85.8±0.2 | 85.3±0.1 |
| K→B  | 71.5±0.0   | 71.6±0.1   | 71.4±0.2 | 71.5±0.0 | 71.3±0.1 | 71.6±0.1 |
| K→D  | 70.6±0.0   | 71.7±0.2   | 72.6±0.3 | 72.4±0.1 | 70.6±0.1 | 71.6±0.2 |
| K→E  | 83.9±0.0   | 84.3±0.0   | 84.2±0.1 | 84.3±0.1 | 84.0±0.1 | 83.9±0.2 |

Table 2 shows the results on the test data for the Amazon reviews data set. We notice that EMTL is slightly better than DANN on most of the tasks and still comparable with both KMM and SA.

7 CONCLUSIONS AND FUTURE WORK

In this paper, we have presented EMTL, a density-estimation-based unsupervised domain adaptation approach. Thanks to the excellent performance of autoregressive mixture density models (e.g., RNADE) on medium-dimensional problems, EMTL is competitive with state-of-the-art solutions. The advantage of EMTL is to decouple the source density estimation phase from the model adaptation phase: we do not need to access the source data when adapting the model to the target domain. This property allows our solution to be deployed in applications where the source data is not available after preprocessing. In our future work, we aim to extend EMTL to more general cases, including high-dimensional as well as more complex data (e.g., time series).<|im_end|> <|im_start|>assistant ### Review Title An interesting motivation, but more work would be needed. ### Review Text This paper proposes a novel method for Unsupervised Domain Adaptation (UDA) when the source domain's privacy should be preserved.
The authors propose EMTL, a generative method that models multivariate densities using RNADE (Uria et al., 2013) and a mediator joint density function bridging the source and target domains. EMTL achieves performances comparable to those of DANN (Ganin et al., 2016) on a single dataset.

**Pros**
- Unique motivation for UDA and privacy-preserving.
- Well formulated method using RNADE and a mediator density function. In the adaptation phase, the source domain data can be deleted.

**Cons**
- There is a closely related paper for privacy-preserving UDA (Song et al., 2020) published before the deadline of ICLR 2021. The method by Song et al. utilizes a framework of federated learning and encryption. Thus the approaches are different from each other, but the motivation for privacy preservation is close. The authors should compare them quantitatively. Song et al. Privacy-Preserving Unsupervised Domain Adaptation in Federated Setting. IEEE Access, Vol. 8, pp. 143233-143240, 2020.
- Although the adaptation phase does not require the source domain data, a probabilistic function $p^m(y|x)$ should be available. The reviewer is concerned that model inversion attacks, such as (Fredrikson et al., 2015), may violate the source domain's privacy. Fredrikson et al. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. In CCS, 2015.
- It is reasonable to compare EMTL to DANN since both methods have conceptually similar characteristics: matching the data distribution and learning the posterior probabilities of the label given a sample. However, as the authors referred to in the main text, several methods have similar characteristics and much better performance than DANN. It would be good to know if EMTL is complementary to those methods through further experiments.
- Experiments on a single real dataset make it difficult to be convinced of the generality of the UDA performance. Additional experiments on other datasets, such as visual ones, could strengthen the generality.

**Overall rating**
The reviewer is leaning toward rejection, although the motivation is clear. The rating can be upgraded if the authors can solve the cons above.

**Additional comment after rebuttal**
Happy to hear that the authors plan to upgrade their draft. Since the submitted paper is not updated, the reviewer keeps the first rating but also looks forward to reading a revised version at another conference or journal. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 4: The reviewer is confident but not absolutely certain that the evaluation is correct<|im_end|> <|im_end|>
HygXkJHtvB
ICLR.cc/2020/Conference
2020
Using Objective Bayesian Methods to Determine the Optimal Degree of Curvature within the Loss Landscape
["Devon Jarvis", "Richard Klein", "Benjamin Rosman"]
The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator for the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system, and in general the trend of the model towards wide areas in the landscape reflects the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior, we instead propose a different determinant of the optimal width within the parameter landscape, determined solely by the curvature of the landscape. In doing so we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data.
["Objective Bayes", "Information Geometry", "Artificial Neural Networks"]
ABSTRACT

The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator for the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system, and in general the trend of the model towards wide areas in the landscape reflects the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior, we instead propose a different determinant of the optimal width within the parameter landscape, determined solely by the curvature of the landscape. In doing so we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data.

1 INTRODUCTION

When training a neural network we aim to find a parametrization which minimizes the variance of the data around the model's conditional mean value. A statistic which is reflective of this variance is known as a loss function and can be seen as creating a landscape mapping a model parametrization to a corresponding loss value. Thus, higher points in the landscape reflect higher loss values and a worse model parametrization. The saliency of other features of the loss landscape for the model performance is relatively less clear, and in some cases these are points of contention within the field. One such point is whether the width of a basin in the landscape surrounding a local minimum (we will also refer to this as the width of the minimum) is reflective of the ability of a model parametrization at the minimum to generalize to unseen data. It is a common notion that the wider the minimum in the landscape, as measured by the Hessian matrix of the loss function (Keskar et al., 2016; Dinh et al., 2017), the better the model parametrization will generalize. The intuition behind such a belief is simply that wider minima reflect that a model will experience less deviation in its loss metric as a result of minor deviations of its parameter values. As a result the model is more robust than if it were to be parametrized by a very specific parameter set found at a sharp minimum.

In this work we aim to demonstrate that the width of a minimum is a key feature of the loss landscape and provides significant information on the progress of the training of a model. We deviate, however, from the views of the field that the widest minima provide the best generalizability, by reflecting that there is instead an optimal width or curvature around the parametrization with the best generalizability, which is not necessarily the widest point in the landscape. To this end, Section 2 provides the necessary background information that we will utilize in developing our theories, which are presented in Section 3. Sections 4 and 5 then provide empirical evidence in support of the theoretical findings, with Section 4 describing the methods employed to test the theories. Section 5 then provides and discusses the results of these empirical tests. Finally we conclude in Section 6 with our closing remarks. The contributions of this work are threefold.
Firstly, we evaluate the concept of Energy-Entropy competition in neural networks (Zhang et al., 2018) in the context of the bias-variance trade-off (Geman et al., 1992) and reflect that a correlation exists between energy and entropy, as opposed to a competition or trade-off as was first presented. Secondly, we utilize the Fisher Information of the loss landscape in the area of a minimum to reflect that an optimal level of curvature exists within the landscape which does not necessarily occur at the point in the landscape with the least curvature. Further to this, we provide a novel view on the overfitting of models to their training data using the loss landscape. Finally, the Fisher Information is utilized in defining the objective Bayesian prior known as the Jeffreys prior, and we show that the test error of the model reaches its minimum value at the point in the landscape which corresponds to the Jeffreys prior. In addition, we show that at this point in the landscape the dimension of principal curvature of the model is at its maximum entropy. In doing so we also reflect the noise invariance of the dimension of principal curvature.

2 BACKGROUND

2.1 FISHER INFORMATION AND UNINFORMATIVE (JEFFREYS) PRIORS

The Fisher Information (which we denote by $\mathcal{I}(\theta)$) is a metric dependent on the model parametrization which measures the amount of information that a sufficient statistic based on the observable data, such as the variance of the data around the model predictions (Jaynes, 2003), carries about an unknown parameter $\theta$. In the case of a Gaussian model, the Fisher Information is equal to the Expected Hessian of the log-likelihood of the Gaussian. The necessary regulatory conditions for this equality to hold apply to the entire exponential family of distributions; however, in our case it is sufficient for this to hold for the Gaussian distribution (Ly et al., 2017). One of the key properties of the Fisher Information Matrix is that its determinant is invariant under reparametrizations of a trained model. Thus, when the parameters used in modelling a distribution are changed, the Fisher Information in each dimension will change; however, the determinant or volume of information remains unchanged between the model parametrizations.

The invariance property of the Fisher Information was the reason for its utilization by Jeffreys (1946), who sought to create a Bayesian prior with such an invariance property. The resultant prior is known as the Jeffreys prior and is shown in Equation 1, where $H(\theta)$ denotes both the Hessian and Expected Hessian matrices (we will treat the Expected Hessian and Hessian interchangeably for the remainder of this work, as is common in the literature (Zhang et al., 2018; Karakida et al., 2019)).

$P(\theta) \propto \sqrt{\det \mathcal{I}(\theta)} = \sqrt{\det H(\theta)} \quad (1)$

As has been shown in Jaynes (1968; 2003), the utility of the Jeffreys prior is not limited to the invariance property, as the Jeffreys prior is an example of an uninformative or objective prior, and as a result informs the posterior distribution as little as possible. The Jeffreys prior is thus used to reflect complete prior ignorance about the correct model parametrization, resulting in a posterior distribution with parameters completely determined by the observed data. A key perspective of this property is that the Jeffreys prior thus imposes a uniform distribution over the function space of the model, not the parameter space. This is due to the density of the prior scaling with the Hessian at a given parametrization, and as such it places higher density on parametrizations with a unique function approximation and low parameter variance.
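As a worked illustration of Equation 1, here is a minimal sketch of evaluating the (unnormalized) Jeffreys prior density from the Hessian of the loss at a given parametrization; the finite-difference Hessian routine is our own illustrative choice, not part of the paper.

```python
import numpy as np

def jeffreys_density(loss, theta, eps=1e-4):
    """Unnormalized Jeffreys prior P(theta) proportional to sqrt(det H(theta)) (Eq. 1)."""
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):          # finite-difference Hessian of the loss
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (loss(theta + e_i + e_j) - loss(theta + e_i - e_j)
                       - loss(theta - e_i + e_j) + loss(theta - e_i - e_j)) / (4 * eps**2)
    sign, logdet = np.linalg.slogdet(H)
    # the density is undefined where the Hessian is singular or indefinite
    return np.exp(0.5 * logdet) if sign > 0 else 0.0
```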
Such a prior results in an even distribution over the function approximations, and as a result the choice between function approximations is left to the model learning from the data. The relationship between the Hessian and the parameter variance of a model is discussed further in Section 2.2 below.

2.2 THE BIAS-VARIANCE DILEMMA, ENERGY-ENTROPY COMPETITION AND MINIMUM DESCRIPTION LENGTH

It is a well-known fact that a model learning to equate its conditional mean precisely to the values found in the training data is not always beneficial to the performance of the model on unseen data. In particular, when we observe a decrease in training error but an increase in validation or test error, we say that the model is overfitting the training data (Hawkins, 2004). In Geman et al. (1992) it is shown empirically that to decrease the variance of the data around a model's predictions (reduce the training error) it is necessary for the variance in the model parameters to increase. Further, Geman et al. (1992) reflect that a large parameter variance corresponds to the overfitting of the model to the training data. This trade-off between the bias of the model and the variance of its parameters is known as the Bias-Variance Dilemma (Sammut & Webb, 2011). In Geman et al. (1992) the only means presented to mitigate this dilemma is to obtain more training data.

We see, however, that neural networks are capable of learning complex tasks with limited data and even generalize in spite of the Bias-Variance Dilemma. In Zhang et al. (2018) it is argued that the success of neural networks is due to a bias of Gradient Descent towards wider minima in the loss landscape. To reflect this, Zhang et al. (2018) derive a Bayesian posterior distribution for the parameters of a model given the training data. To derive this distribution, Zhang et al. (2018) utilize a Gaussian likelihood with a conjugate Gaussian prior, which we generalize in Equation 2 by allowing any prior distribution $\exp(h(\theta))$ which results in a proper posterior distribution. The exponential term of this generalized prior, $h(\theta)$, is seen as some function of $\theta$, while $f(x_i;\theta)$ denotes the function approximation by the neural network, $\sigma_i^2$ is the variance of the output corresponding to data point $x_i$, and finally $y_i$ is the true output for a particular $x_i$ in the training data. The derivation of Equation 2 can be found in Appendix A.1.

$P(\theta|X) = \frac{1}{Z}\exp\left[-\left(\sum_{i=1}^{P} \frac{(y_i - f(x_i;\theta))^2}{2\sigma_i^2} - h(\theta) + \frac{1}{2}\log\det(H(\theta))\right)\right] \quad (2)$

From Equation 2 we see that to maximize the probability of a parametrization we must simultaneously maximize the model likelihood (by minimizing the first term in the exponential), the prior probability of the parametrization, and the model entropy. The model entropy is reflected by the final term in the exponential and is inversely proportional to the Hessian of the loss landscape at the parametrization. Using the posterior distribution, Zhang et al. (2018) note that maximizing the model likelihood is not the only factor which should be used in determining the model parametrization, and in some cases it may be beneficial to trade off some training error for an increase in model entropy, which the authors called Energy-Entropy competition.
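For concreteness, a minimal sketch of evaluating the three competing terms in the exponent of Equation 2 for a candidate parametrization; the `predict`, `h`, and `hessian` callables and the inputs are placeholders of our own choosing, and the normalizer $Z$ is ignored since it does not affect the comparison of parametrizations.

```python
import numpy as np

def neg_log_posterior(theta, X, y, predict, sigma2, h, hessian):
    """Bracketed exponent of Eq. 2 (up to the constant log Z); lower is better."""
    energy = np.sum((y - predict(X, theta)) ** 2 / (2 * sigma2))  # data-fit term
    prior = -h(theta)                         # prior term: large h(theta) helps
    _, logdet = np.linalg.slogdet(hessian(theta))
    entropy_term = 0.5 * logdet               # small at wide minima (high entropy)
    return energy + prior + entropy_term
```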
Zhang et al. (2018) state that the bias of Gradient Descent towards wider minima, with smaller Hessian values, results in the model naturally maximizing entropy, aiding its generalizability. We see, however, from the Bias-Variance Dilemma that by reducing the bias of the model on the training data, and increasing its likelihood, the model entropy will naturally also increase due to the higher variance in the parameter values.

With the perspective of both the Bias-Variance Dilemma and Energy-Entropy competition, we see that wide points in the loss landscape have been related to both overfitting and improved generalizability of a model parametrization. Thus, from one perspective we should aim for sharp minima within the landscape and from the other we should aim for wide minima. The issue of the width of a minimum is further confounded by Dinh et al. (2017), which states that the width of a minimum is not a consistent indicator of the ability of a model to generalize. The impact of the width of a minimum in the landscape is still an open question and one which we try to address in this work.

The Minimum Description Length (MDL) Principle is an information-theoretic principle which states that the optimal model for a set of data provides the best compression of the data (Rissanen, 1978). In other words, the optimal model is the simplest model which incurs the least training error. This principle is another example of the trade-off between model complexity and minimizing the model bias. Due to its assertion that the optimal model is the simplest unbiased one, the MDL Principle is the mathematical formulation of Occam's Razor, and is expressed by Equation 3, which reflects that to compress the data $D$ optimally we must find the parametrization with the net minimum entropy in the parameter space, $L(\theta)$, and in the data given the parametrization, $L(D|\theta)$.

$L(D) = \min_{\theta \in \Theta}\,(L(\theta) + L(D|\theta)) \quad (3)$

For the exponential family of likelihood distributions, the Jeffreys prior is used to enforce the MDL property on the posterior distribution and results in a minimax optimal posterior, which is to say that the maximal risk of the model parametrization is minimal out of all unbiased parametrizations (Lehmann & Casella, 2006). The minimax optimality property thus provides the lowest upper bound of the risk over all model parametrizations. Thus the MDL property is related to the Bias-Variance Dilemma, and MDL posterior distributions aim to avoid overfitting.

A final necessary principle which encompasses all of the topics above is the Likelihood Principle (Jaynes, 2003), which states that within the context of a specified model, the likelihood distribution $L(D|\theta)$ trained from data $D$ contains all the information about the model parameters that is contained in $D$.

2.3 PRINCIPAL CURVATURE

The Jeffreys prior, and by extension the Fisher Information, finds further utility in its use as a right Haar measure for the parameter space of a normal distribution (Berger, 2013). The Haar measure is used to assign an invariant volume to a subset of locally compact topological groups and thus forms an integral over the compact groups. In the case of the parameter space of the normal distribution, the topological groups will consist of parametrizations with similar function approximations and, thus, similar loss metric within the basin surrounding a local minimum in the loss landscape. Further, we note that the parameter space of a probabilistic model forms a statistical manifold and by extension a Riemannian manifold (Rao, 1945).
The metric tensor for statistical manifolds is the Fisher Information metric (Skovgaard, 1984), defined as the expected value of the individual elements of the Fisher Information matrix, which forms the tangent space of such manifolds. As stated in Section 2.1, in the case of Gaussian parameter spaces the Fisher Information Matrix can equally be derived as the Hessian matrix of the loss function relative to the model parameters. This is significant as the Hessian matrix is used in the area of a critical point on a Riemannian manifold for obtaining the shape operator (Spivak, 1970), and as a result the principal curvatures at the point (Porteous, 2001). In the case of a Gaussian parameter space the shape operator is the Gaussian curvature, defined as the determinant of the Hessian matrix, $\det(H(\theta))$ (Koenderink & Van Doorn, 1992). The principal curvatures are defined as the eigenvalues of the Hessian matrix and decompose the manifold into orthogonal dimensions of curvature, with the first eigenvector reflecting the dimension of most curvature.

It is important to note that while the parameter space of a statistical model forms a Riemannian manifold, when parametrized by an overly-determined model such as a neural network, the parameter space will not be Riemannian but rather semi-Riemannian, due to the fact that the Fisher Information metric will no longer be defined over the entire manifold. Such undefined points for the metric are a result of a singular metric tensor at the model parametrization and occur due to the covariance of parameters within the model. Covariant parameters necessarily occur with the addition of hidden layers to the model and result in dimensions on the manifold in which the parameters may be varied without altering the behaviour of the model. This results in dimensions of no curvature along the manifold. As seen in Section 5, this is not a destructive point for the training procedure; however, we must necessarily remain cognisant of such covariant dimensions along the statistical manifold.

3 MODEL ENTROPY, THE LOSS LANDSCAPE AND GENERALIZATION

The aim of training a neural network is to find the most probable parametrization for a model as determined by the posterior probability reflected by Equation 2. This is achieved by maximizing the combination of the likelihood, prior probability and entropy of the model parametrization. The likelihood we increase normally by decreasing the variance of the training data around the model predictions. The entropy term we have no direct control over, as the landscape is completely determined by the data, the sufficient statistic being used to determine the parameters (which is the loss metric) and the hypothesis (the model architecture being trained). So the only component of the posterior left to be determined is the prior. As with most work in Bayesian statistics this is the most difficult part and must be treated with great care. There are presently two common approaches to setting this prior distribution, the first of which is to not specify one, or more precisely to use an implicit uniform prior (Chaudhari & Soatto, 2018) and, thus, use maximum likelihood estimation to determine the parameter values. The second common approach is to utilize a conjugate Gaussian with a mean of 0 for the prior. In practice this method takes the form of L2 regularization, also known as weight decay (Krogh & Hertz, 1992), with $h(\theta) \propto -\|\theta\|^2$ in Equation 2.
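For concreteness, a short sketch of how such a zero-mean Gaussian prior appears in practice as an L2 penalty on the training objective; the `weight_decay` scale is an arbitrary illustrative hyperparameter, not a value from the paper.

```python
import numpy as np

def regularized_loss(theta, data_loss, weight_decay=1e-4):
    """Negative log-posterior under a zero-mean Gaussian prior on theta.

    A prior exp(h(theta)) with h(theta) = -weight_decay * ||theta||^2
    becomes the familiar weight-decay term added to the data loss.
    """
    return data_loss(theta) + weight_decay * np.sum(theta ** 2)
```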
Neither approach has proven to be consistently sufficient for deterring models from overfitting without introducing a form of bias, due to their unjustified assumptions about the correct model parametrization. This is relatively clear in the conjugate Gaussian prior approach, which assumes a mean of 0 for the parameter values. In a case where we have explicit prior knowledge that such a mean and distribution is in fact correct for the model parameters, this would be a correct approach; however, in almost every case we are completely ignorant of the values of the true parametrization and, thus, we would be biasing our models to some degree by using this prior. It is necessary when developing an unbiased model that this absence of prior knowledge be reflected in the training procedure.

The source of error from the uniform prior is slightly more nuanced, as initial intuition would suggest that giving equal probability to all values a parameter could take is a correct means of reflecting our prior ignorance about the parameter values. However, this method fails to accurately reflect the probabilities that unlikely parametrizations may be correct, and in a sense exhibits a kind of confirmation bias. As discussed in Section 2.2, the model which is optimal is the one which has the lowest variance of the data around its predictions, $L(D|\theta)$, while maintaining low variance in the model parameter space, $L(\theta)$. In addition, the bias-variance trade-off (Geman et al., 1992) states that to decrease data variance we must increase parameter variance, reflecting the trade-off between $L(\theta)$ and $L(D|\theta)$ in the MDL equation. Further, we related this to the Energy-Entropy competition concept (Zhang et al., 2018), where it was also stated that neural networks are biased towards wide minima. Hence, by placing equal prior probability on all areas of the landscape, we see that the use of the uniform prior will result in a posterior distribution over the model parameters which places excessive density on high-variance areas of the landscape while at the same time placing too little probability on very specific, low-variance parametrizations of the model. This results in the development of a sub-optimal model which favours wider minima within the loss landscape and as a result excessively reduces the variance of the data around its predicted values.

Thus we conjecture that a correct prior for a model would be the Jeffreys prior shown in Equation 1 and Equation 4:

$P(\theta) \propto \sqrt{\det \mathcal{I}(\theta)} = \sqrt{\det H(\theta)} \quad (4)$

As a result, we see in Equation 2 that such a prior would give the equation for $h(\theta)$ as

$h(\theta) = \frac{1}{2}\log\det(H(\theta)) \quad (5)$

We note that, with the use of the Jeffreys prior, the prior and entropy terms in the posterior formulation cancel out, leaving the likelihood term as the only factor determining the posterior probability, as can be seen in Equation 6.

$P(\theta|X) = \frac{1}{Z}\exp\left[-\left(\sum_{i=1}^{P} \frac{(y_i - f(x_i;\theta))^2}{2\sigma_i^2} - \frac{1}{2}\log\det(H(\theta)) + \frac{1}{2}\log\det(H(\theta))\right)\right] = \frac{1}{Z}\exp\left(-\sum_{i=1}^{P} \frac{(y_i - f(x_i;\theta))^2}{2\sigma_i^2}\right) \quad (6)$

It is thus possible to utilize the loss landscape in the area of a minimum to determine the degree of certainty we may have in our model parametrization being the true parametrization, and as a result determine the necessary Jeffreys prior probability. This is due to the fact that higher entropy means wider minima, which reflects higher parameter variance and the necessity to be less certain of the parametrization in that area. The opposite is true for an increase in certainty in our parametrization at a sharp minimum.
Furthermore, this means we are objectively setting our prior based on the model behaviour given the observed data and the sufficient statistic. Note that we do not say we determine our prior based on the hypothesis, as the determinant of the Fisher Information/Hessian is invariant under reparametrizations. This means that, in the area of a minimum, by transforming the hypothesis to be modelled by an alternate set of parameters $\theta'$, the dimensions and volume of the landscape adjust such that $\sqrt{\det(H(\theta))} = \sqrt{\det(H(\theta'))}$ (Fisher, 1922). A necessary distinction regarding the Jeffreys prior is that, while it places the full parameter determination on the data, it does not necessarily result in a posterior distribution which has extracted all information from the data. Information which provides an insufficient decrease in data variance to warrant the increase in variance of the model parameters will not be utilized, as the model naturally "distrusts" this information by assigning a relatively lower prior probability to the more entropic parametrization found in the wider basin. This is where we see the utility of the Jeffreys prior with regard to the MDL property of Equation 3, as it balances the model complexity $L(\theta)$ against fitting the data $L(D|\theta)$.

The primary power of the Jeffreys prior comes from its use of the Fisher Information. Naturally, as the model fits the data and captures information, the amount of information left in the data that remains uncaptured by the model decreases. This is observed as a decrease in the Fisher Information. We see, however, that the model entropy increases as the Hessian matrix, and by extension the Fisher Information, decreases, which is again in agreement with the bias-variance trade-off. The consequence of this observation is that to capture all the information from a sufficient statistic determined by the training data we must use increasingly complex models, capable of modelling the finer details found in the data. The utility of such fine details to model performance exhibits diminishing returns, to the point where perturbing a parameter that captures these details no longer results in any significant deviation in the model behaviour. Simply put, as more information is shared between parameters, the individual importance of a parameter decreases. This is in contrast to an under-parametrized model, whose parameters capture as much of the most important information from the sufficient statistic as possible and rely heavily on this information in determining its behaviour. In light of the Fisher Information, the Maximum Likelihood Principle further reflects that the use of the uniform prior biases our models towards maximum entropy within the loss landscape, extracting all information from the training data at the expense of higher model complexity.
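The invariance property can be checked numerically on a standard textbook example (our illustration, not taken from the paper): for a Gaussian likelihood with known mean and scale parameter $\sigma$, the Fisher Information is $\mathcal{I}(\sigma) = 2/\sigma^2$, and under the reparametrization $\lambda = \log\sigma$ the Jeffreys density $\sqrt{\mathcal{I}}$ transforms by exactly the Jacobian of the change of variables, so the prior measure is unchanged:

```python
import jax
import jax.numpy as jnp

# Fisher Information of a Gaussian with known mean and scale sigma: I(sigma) = 2 / sigma^2.
def jeffreys_density_sigma(sigma):
    return jnp.sqrt(2.0) / sigma

# Under lambda = log sigma, I(lambda) = I(sigma) * (d sigma / d lambda)^2 = 2, a constant.
def jeffreys_density_lambda(lam):
    return jnp.sqrt(2.0)

sigma = 1.7
lam = jnp.log(sigma)
jacobian = jax.grad(jnp.exp)(lam)  # d sigma / d lambda = sigma
# Measure invariance: p(sigma) * (d sigma / d lambda) == p(lambda).
print(jeffreys_density_sigma(sigma) * jacobian, jeffreys_density_lambda(lam))
```

Both expressions evaluate to $\sqrt{2}$, illustrating how the Jeffreys prior is independent of the chosen parametrization.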
It must be noted that the propensity of neural networks to extract all information from the training data is not an inherently negative quality of these models. Quite the opposite: it reflects the power and capability of models designed to learn the variances within data and utilize this information in determining their behaviour. As a result, however, the efficacy of these models is directly related to the reliability of the data on which they are trained, and requires all the information found in the training data to be present and reflective of the entire population of data being modelled. This is seldom the case, as training data is inevitably noisy, either due to noise from the sampling and capturing of the data, or due to confounding aspects of the task domain which on average do not affect the population data distribution but do provide a source of structured noise when their effect is observed on the training data. Minimizing the Fisher Information metric and fulfilling the Maximum Likelihood Principle in such cases would mean that the information found in the noise of the training data was used in determining the model parameter values, which is clearly undesirable and is known as the model overfitting the training data (Hawkins, 2004).

We thus see that the notion of the widest possible minima in the loss landscape providing the best generalization performance is only true in a noiseless environment. The view that wide areas in the landscape generalize better holds insofar as this width merely reflects that the model has captured more information from the data than a parametrization found in a steeper portion of the landscape. Naturally, this provides better test error performance if the captured information is reflective of the information within the population data distribution being modelled. We conclude that the width of the landscape in which a model finds itself is demonstrably important, and that there is a precise width in the landscape which provides the model with the best possible test error performance. This point is exactly where the model prior equals the Jeffreys prior as determined by the Fisher Information of the loss landscape. This is due to the aversion of the Jeffreys prior to any information which does not justify the increase in model entropy by a superior decrease in data variance or prediction error, while remaining objective in the sense that the prior is completely determined by the loss landscape and has as little effect on the parameter posterior distribution as possible. Thus, noise in the training data which only serves to perturb and hinder the learning of the model, without providing sufficient benefit to how the training data is fit, will not be learned by the model.

4 METHODS

From the above discussion it is clear that it is not enough to merely reduce the variance of the data around the model prediction, as it is possible for the model to reduce this variance to an excessive degree. We thus require a metric for the difference between the model and true distributions which is minimized only when the two distributions are identical. This is not the KL-divergence, as that metric merely reflects the density of a distribution B which lies outside of another distribution A; it is thus possible to minimize the KL-divergence while the distributions are not identical, by having distribution A surround or encapsulate distribution B. We therefore use the Jeffreys divergence as the difference metric between the two distributions, as the Jeffreys divergence is uniquely minimized when the two distributions are identical. The following sketch illustrates this property.
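A minimal numerical illustration (our own, using the standard closed forms for univariate Gaussians): the KL-divergence is asymmetric, while the Jeffreys divergence, its symmetrized sum, is zero exactly when the two distributions coincide:

```python
import jax.numpy as jnp

def kl_gauss(mu1, s1, mu2, s2):
    # Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) ) for univariate Gaussians.
    return jnp.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def jeffreys_gauss(mu1, s1, mu2, s2):
    # Jeffreys divergence: the sum of the two opposite KL-divergences.
    return kl_gauss(mu1, s1, mu2, s2) + kl_gauss(mu2, s2, mu1, s1)

print(jeffreys_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0  -- identical distributions
print(jeffreys_gauss(0.0, 1.0, 0.5, 2.0))  # > 0  -- symmetric in its arguments
```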
The Jeffreys divergence is shown in Equation 7 and is simply the sum of the KL-divergence from the true parameter distribution $T(\theta)$ to the model parameter distribution $P(\theta|X)$ and the opposite KL-divergence. We show the derivation of the Jeffreys divergence from this summation in Section A.2 of the Appendix.

$$D_J(T(\theta)\,\|\,P(\theta|X)) = \int_{\theta} (T(\theta) - P(\theta|X))\,(\ln T(\theta) - \ln P(\theta|X))\, d\theta \qquad (7)$$

Naturally, Equation 7 is intractable due to the necessity of integrating over all parametrizations. However, as discussed in Section 3, the use of maximum likelihood estimation results in an excess of density on high-variance parametrizations in the posterior parameter distribution of Equation 2. It is thus sufficient to evaluate the Jeffreys divergence at a single point in parameter space and observe the relative densities at that point, yielding a distance metric rather than a divergence. We therefore refer to the Jeffreys distance for the remainder of this work, with the formula shown in Equation 8:

$$D_J(T(\theta)\,\|\,P(\theta|X)) = (T(\theta) - P(\theta|X))\,(\ln T(\theta) - \ln P(\theta|X)) \qquad (8)$$

The requirement that a distance metric be positive semi-definite is upheld here, since when $(T(\theta) - P(\theta|X)) < 0$ then $(\ln T(\theta) - \ln P(\theta|X)) < 0$, and likewise when $(T(\theta) - P(\theta|X)) > 0$ then $(\ln T(\theta) - \ln P(\theta|X)) > 0$. This is a benefit of the symmetry of the Jeffreys divergence which the KL-divergence does not possess. Substituting the model posterior formula from Equation 2, together with the true model distribution in Equation 9, into the logarithmic terms of Equation 8, we obtain the formulation shown in Equation 10:

$$T(\theta) = \frac{1}{Z^*}\exp\left(-\sum_{i=1}^{P}\frac{(y_i - f(x_i;\theta^*))^2}{2\sigma_i^2}\right) \qquad (9)$$

$$\ln T(\theta) - \ln P(\theta|X) = -\sum_{i=1}^{P}\frac{(y_i - f(x_i;\theta^*))^2}{2\sigma_i^2} + \sum_{i=1}^{P}\frac{(y_i - f(x_i;\theta))^2}{2\sigma_i^2} + h(\theta) - \frac{1}{2}\log\det(H(\theta)) + \ln\frac{Z}{Z^*} \qquad (10)$$

Assuming now that $\theta = \theta^*$, and thus $f(x_i;\theta) = f(x_i;\theta^*)$, as would be the case at the end of an unbiased training procedure, the two variance terms in Equation 10 cancel out. Further, the only means of obtaining a value of 0 for the expression is to use the Jeffreys prior, causing $h(\theta)$ to cancel with the entropy term $\frac{1}{2}\log\det(H(\theta))$, as discussed above in Section 3 with Equation 5 and Equation 6. Finally, using the Jeffreys prior results in the posterior model distribution shown in the last line of Equation 6; if $f(x_i;\theta) = f(x_i;\theta^*)$ then clearly $Z = Z^*$ is the corresponding normalizing constant, and these terms also cancel in Equation 10. A similar argument applies to the probability component $(T(\theta) - P(\theta|X))$ of the Jeffreys distance in Equation 8: once the two likelihood terms are equated, the use of the Jeffreys prior ensures that the distributions do not differ, resulting in a Jeffreys distance of 0.
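A minimal sketch (names and the Gaussian noise model are our assumptions, consistent with Equation 9) of evaluating the pointwise quantities in Equations 8-10; the comparison of the two log-likelihoods on the training data is also the criterion used below to detect the intersection of the model and true likelihoods:

```python
import jax.numpy as jnp

def gauss_log_lik(preds, y, sigma):
    # Unnormalized Gaussian log-likelihood of the data around the predictions.
    return -jnp.sum((y - preds) ** 2) / (2.0 * sigma**2)

def jeffreys_distance_point(log_t, log_p):
    # Equation 8 evaluated at a single parametrization, from log-densities.
    return (jnp.exp(log_t) - jnp.exp(log_p)) * (log_t - log_p)

def likelihoods_intersected(model_preds, true_preds, y_train, sigma=0.2**0.5):
    # Default sigma is the std for the noise variance of 0.2 used in the
    # experiments. The model begins to overfit once it becomes a more likely
    # source of the noisy training data than the True network itself.
    return gauss_log_lik(model_preds, y_train, sigma) >= gauss_log_lik(true_preds, y_train, sigma)
```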
From Equation 10 we see that the use of the Jeffreys prior, while minimizing the error of the model, directs the model to a parametrization at which the model likelihood equals that of the true underlying distribution. This allowed us to identify the point in the landscape at which the model finds the most probable parametrization corresponding to the use of the Jeffreys prior: the parametrization which equates the model likelihood and the true distribution likelihood. In light of the bias-variance trade-off, this also means that if the model reduces its bias any further, it will have moved to a point of excess parameter variance and model entropy, as well as an excessively low Fisher Information metric. As expressed in Section 3, this is indicative of the use of noise by the model in determining its parameter values. As a result, the model will have overfit the training data. We thus reach the intuitive conclusion that once a model parametrization becomes a more likely distribution to have generated the training data than the underlying true distribution itself, the model has begun to overfit the training data.

The first empirical result presented in Section 5 is therefore a distribution reflecting the proximity of the training step at which the minimum test error is reached to the training step at which the model likelihood equals that of the true underlying distribution (we refer to this as the likelihoods intersecting), over a number of network training procedures. The aim of this experiment is to empirically confirm that the Jeffreys prior parametrization provides the minimum test error for a model. For this experiment it is necessary to possess a ground truth for the likelihood of the true data-generating distribution on the training data. Unfortunately, such ground-truth information is not available for real-world data sets. It was therefore necessary to use synthetic data generated by a ground-truth network, which we refer to as the "True" network. A "Training" network then learns to model this ground-truth network from noisy training data.

The procedure for this first experiment is as follows. We create a randomly generated True network with a depth of between 5 and 15 layers. The widths of the model layers are randomly sampled between 5 and 100 neurons. The layers are sorted in descending order of width, as is common for network architectures used in practice, so that the wider, earlier layers extract features for the later layers. We then prepend the 100-neuron input layer and append the 1-neuron output layer. All layers except the last are sigmoidal. This is the model used to generate the data. We then randomly initialize our Training network. The number of layers in this network is randomly chosen from the range [TrueNetworkSize + 5, 25] to ensure a network large enough to overfit the data. The widths of this network's layers are sampled from the range [TrueNetworkSmallestLayer, 100], again to ensure the model is over-parametrized. The True network's parameter values, as well as the Training network's initial values, are sampled uniformly from [-1.0, 1.0], with a random positional bias added to the True network parameters in the range [-0.5, 0.5]; this bias ensures the Training network starts with a significant degree of error. Finally, we use values sampled uniformly from [0.0, 1.0] as input to the models, with a training batch size of 50 data points and a test batch size of 500. This data is input to the True network to obtain the corresponding labels as output. Lastly, we add Gaussian noise with mean 0 and variance 0.2 to the training data only, while the test data remains clean. The Training network is then trained to model the True network using this data, and we observe the points where the likelihoods intersect and where the test error is minimized. This process was repeated for 935 separate training procedures. The distribution of the distances between the likelihoods intersecting and the minimum test error is shown in Section 5.
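A minimal sketch of this data-generation protocol (the `mlp_apply` helper and `true_params` are assumed placeholders for the True network, not shown; the constants follow the description above):

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k_train, k_test, k_noise = jax.random.split(key, 3)

x_train = jax.random.uniform(k_train, (50, 100))    # training batch of 50
x_test = jax.random.uniform(k_test, (500, 100))     # test batch of 500

# mlp_apply / true_params: assumed helpers applying the fixed True network.
y_train = mlp_apply(true_params, x_train)
y_test = mlp_apply(true_params, x_test)             # test labels stay noiseless

# Gaussian noise with mean 0 and variance 0.2 corrupts the training labels only.
y_train = y_train + jnp.sqrt(0.2) * jax.random.normal(k_noise, y_train.shape)
```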
Having observed the relationship between test error and the relative likelihoods of the Training and True networks, we then decompose the parameter landscape into its dimensions of principal curvature to reflect the impact of noise in the training data on the landscape. We first train on noiseless data until a certain training error is achieved (error < 0.4), at which point noise is added to the data, and we observe and interpret the resulting impact of the noise on the curvature of the landscape. Secondly, we aim to observe the relationship between the dimensions of principal curvature and the generalizability of a model parametrization by observing the entropy on a Riemannian sub-manifold of the original statistical manifold (Chen, 2019), defined over the primary dimensions of principal curvature, relative to the test error and likelihood values of a model parametrization.

Due to the necessity of calculating a full Hessian matrix for these experiments (computed using the Jax library (Bradbury et al., 2018)), a smaller network was used than in the first experimental procedure. The procedure is otherwise the same as described above, but the Training model was composed of 4 hidden layers with widths of 25, 20, 12 and 7 neurons respectively. As shown empirically in Karakida et al. (2019), the behaviour of the eigendecomposition of the Fisher Information is independent of the size of the network architecture, so this smaller network is sufficient for observing the effect of noise on the dimensions of principal curvature of the model. The Hessian matrix calculated has a dimensionality of 831 x 831 elements. The training data was again obtained from a True network; in this case, different permutations of 25-bit binary strings were input to the model, which returned the corresponding scalar using one hidden layer of 5 neurons and a linear output layer. A sketch of the sub-manifold analysis follows.
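A minimal sketch of this analysis (our reading of the procedure; `flat_loss` and `flat_params` are the flattened training loss and parameters assumed from the first sketch, and taking the sub-manifold entropy proportional to the negative log-eigenvalues is our interpretation of the entropy term in Equation 2):

```python
import jax
import jax.numpy as jnp

# Full Hessian over the flattened parameters (831 x 831 in the paper).
H = jax.hessian(flat_loss)(flat_params)
eigvals = jnp.linalg.eigvalsh(H)[::-1]      # principal curvatures, descending

# The principal sub-manifold is spanned by the high-curvature dimensions
# (eigenvalues > 1); its entropy is taken, up to constants, as
# -1/2 * sum(log eigenvalue) over those dimensions.
principal = eigvals[eigvals > 1.0]
sub_entropy = -0.5 * jnp.sum(jnp.log(principal))
```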
Figure 1: Distributions of the number of parameter update steps (left) and the difference in test error (right) between the Jeffreys prior parametrization and the minimum test error, from 935 individual training procedures (kernel density estimate shown in red on the left).

5 RESULTS

As stated in Section 4, the first empirical result aimed to determine whether the point in the landscape that is most probable under the posterior distribution resulting from the Jeffreys prior possesses the optimal test error performance. The results of this first experiment are presented in Figure 1, where the left image shows the distribution of the number of parameter update steps between where the Jeffreys prior parametrization occurs (where we observe the intersection of the likelihoods) and where the minimum test error occurs. We see that the highest density is placed around 0, with the vast majority of training procedures having the Jeffreys prior parametrization coincide exactly with the point of minimum test error. This supports the assertion that the Jeffreys prior yields the parametrization with the best generalization performance. We do, however, observe a small uniform spread of density to the right of 0. We found that this is due to the test error oscillating once the Jeffreys prior parametrization is reached, which is merely a result of our inability to fine-tune the learning rate for the individual training procedures of the randomly generated Training networks with significantly different architectures. To show that the test error of the Jeffreys prior parametrization in the trainings where oscillations occur differs negligibly from the minimum test error, we present the right image of Figure 1, which plots the density of the difference in test error between the Jeffreys prior parametrization and the minimum test error over the 935 separate training procedures. To make the error independent of the scale of the regression values being modelled, we divide the error by the mean of the regression y-values (generally around 2.0 for the respective training procedures). We observe that in all trainings the discrepancy in test error is less than 0.1, with almost all discrepancies below 0.05. These discrepancies are negligible, and these results thus empirically confirm our hypothesis that the Jeffreys prior parametrization corresponds to the minimum test error.

The results of injecting noise into the training data only once the model has been sufficiently trained on clean data can be seen in Figure 2, which presents the eigenvalues of 3 of the 5 principal curvatures of the loss landscape. Each value reflects the inverse of the variance of the model in the dimension of the corresponding eigenvector, so a lower value reflects a higher entropy in the given dimension. From these results we can see that the injection of noise results in a sudden increase in the entropy of the lower principal-curvature dimensions but has no effect on the first dimension of principal curvature. As the Fisher Information is reflected by the entropy, this means the low principal-curvature dimensions capture more information at the injection of the noise into the data, while the information captured by the first principal-curvature dimension remains smooth throughout the training procedure. This reflects a noise invariance of the principal curvature of the landscape, and as a result we see that the dimension of principal curvature of the landscape is exclusively responsible for capturing the true, or primary, data information.

In light of this noise invariance, we observed the entropy of a sub-manifold corresponding only to the dimensions of high curvature (eigenvalues > 1) during the training of a model. The results of this experiment are shown in Figure 3.

Figure 2: Impact of the addition of noise on 3 of the 5 principal curvatures of the loss landscape. Vertical lines signify the point of noise injection.

Figure 3: The Jeffreys prior parametrization is found in the area of parameter space minimizing test error and maximizing entropy on the principal sub-manifold.

In agreement with Figure 1, we see that in the area of the Jeffreys prior parametrization (the intersection of the likelihood values), the test error reaches its minimum value. A number of insights can be gained from the third image in Figure 3. Firstly, the Fisher Information matrix and the Fisher Information metric are non-singular and positive semi-definite on this sub-manifold. This reflects that the dimensions responsible for capturing true information in the data are convex, with positive Gaussian curvature, and that it is sufficient for the model to merely minimize this well-behaved region of parameter space.
We further observe that the Jeffreys prior parametrization maximizes the entropy of this sub-manifold, reflecting that it still captures all true information from the data. The green portion of the entropy metric marks the area where the entropy is within 0.003 of its maximum value. The fact that the entropy begins decreasing later in the training reflects the model forgetting true information while learning the noise once it starts overfitting. The fact that the entropy stagnates past the Jeffreys prior parametrization is due to the lower dimensions of principal curvature in the sub-manifold being slightly sensitive to noise, so that in this region the model is learning noise and forgetting true information at the same rate. We have thus demonstrated that maximizing entropy is beneficial in the absence of noise; however, when noise is present in the data, maximum entropy corresponds to overfitting.

6 CONCLUSION

We see that the notion of the width of the loss landscape being an indicator of a robust parametrization is correct; however, this is conditional upon the model being developed in a noiseless domain or, more significantly, along a dimension of parameter space which is independent of the noise of the domain. With the aid of the Fisher Information perspective on the geometry of the landscape, we see that the higher-entropy points in the landscape directly reflect the absence of further information upon which the parameter values may be determined. We thus see the propensity of maximum likelihood models towards such high-entropy points as a reflection of their propensity to utilize all information, including noise, in determining the parameter values. We therefore draw the final conclusion that the point of maximum entropy in the loss landscape does not possess the best generalization performance and corresponds to the overfitting of the model to the training data. Instead, the optimal point in the landscape occurs at maximum entropy in the dimension of principal curvature, which corresponds to the most probable parametrization found by a Bayesian posterior distribution resulting from the use of the Jeffreys prior.
SJxwSVtcKr
Official Blind Review #1
1: Reject
This paper targets a deep learning theory contribution based on information geometry. The contribution is tightly based on Zhang et al. (2018) and explains the generalization of deep learning from a Bayesian perspective. The main contribution the authors claim is that an optimal degree of curvature exists which gives the best generalization guarantees, in contrast to the commonly perceived "the wider the better". First of all, the writing (including language etc.) is of poor quality, to the extent that the submission is very difficult to read and could be rejected merely on this basis, with unusual expressions, missing punctuation, overly long sentences, and wrongly used words. The reviewer won't list examples here because they are everywhere. What is even worse is the conceptual errors and defective derivations. For example, in eq. (1), the authors equate the Fisher information matrix (which is an expected Hessian) to the Hessian matrix; this is subject to conditions which must be clearly given right before/after the equation. As their results are largely based on the correctness of eq. (2), let's examine the derivations in appendix A.1. In the first equation in A.1, what is the subscript "j"? "Utilizing Laplace Approximation of the integral": such approximations have conditions that must be clearly stated. It is not clear how one can get the last approximation on page 12 from the previous equations. In summary, their eq. (2) is a loose approximation which is subject to a set of conditions (that are not given), and the derivation is of poor quality. As a theoretical contribution, the authors did not manage to converge to some simple and clear statements (theorems or equivalent). Instead, the contribution is largely *explanatory*. It is hard to observe anything new, given the poor writing and organization. The first 4 pages are mainly introductions of previous works. The authors used information geometry and minimum description length to explain the generalization of deep learning. This is a small area, and it is hard to miss closely related works by simple searching; instead, the authors only cited Rissanen (1978). On the other hand, as the authors used the spectral properties of the Fisher information matrix, there are some recent works by Amari which could be cited.
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Using Objective Bayesian Methods to Determine the Optimal Degree of Curvature within the Loss Landscape ### Paper Abstract The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator for the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system and in general the trend of the model towards wide areas in the landscape reflect the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior we instead propose a different determinant of the optimal width within the parameter landscape determined solely by the curvature of the landscape. In doing so we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data. ### Paper Keywords ["Objective Bayes", "Information Geometry", "Artificial Neural Networks"] ### Paper Content ABSTRACTThe efficacy of the width of the basin of attraction surrounding a minimum inparameter space as an indicator for the generalizability of a model parametriza-tion is a point of contention surrounding the training of artificial neural networks,with the dominant view being that wider areas in the landscape reflect better gen-eralizability by the trained model. In this work, however, we aim to show thatthis is only true for a noiseless system and in general the trend of the model to-wards wide areas in the landscape reflect the propensity of the model to overfitthe training data. Utilizing the objective Bayesian (Jeffreys) prior we instead pro-pose a different determinant of the optimal width within the parameter landscapedetermined solely by the curvature of the landscape. In doing so we utilize the de-composition of the landscape into the dimensions of principal curvature and findthe first principal curvature dimension of the parameter space to be independentof noise within the training data.1 I NTRODUCTIONWhen training a neural network we aim to find a parametrization which minimizes the variance ofthe data around the model’s conditional mean value. A statistic which is reflective of this variance isknown as a loss function and can be seen as creating a landscape mapping a model parametrizationto a corresponding loss value. Thus, higher points in the landscape reflect higher loss values anda worse model parametrization. The saliency of other features of the loss landscape on the modelperformance are relatively less clear and in some cases are points of contention within the field. Onesuch point is whether the width of a basin in the landscape surrounding a local minimum (we willalso refer to this as the width of the minimum) is reflective of the ability of a model parametrizationat the minimum to generalize to unseen data. It is a common notion that the wider the minimumin the landscape, as measured by the Hessian matrix of the loss function (Keskar et al., 2016; Dinhet al., 2017), the better the model parametrization will generalize. 
The intuition behind such a beliefis simply that, wider minima reflect that a model will experience less deviation in its loss metric asa result of minor deviations of its parameter values. As a result the model is more robust than if itwere to be parametrized by a very specific parameter set found at a sharp minimum.In this work we aim to demonstrate that the width of a minimum is a key feature of the loss land-scape and provides significant information on the progress of the training of a model. We deviate,however, from the views of the field that the widest minima provide the best generalizability by re-flecting that there is instead an optimal width or curvature around the parametrization with the bestgeneralizability which is not necessarily the widest point in the landscape. To this end Section 2provides the necessary background information that we will utilize in developing our theories whichare presented in Section 3. Section 4 and Section 5 then provide empirical evidence in support ofthe theoretical findings with Section 4 describing the methods employed to test the theories. Section5 then provides and discusses the results of these empirical tests. Finally we conclude in Section 6with our closing remarks.The contributions of this work are threefold. Firstly we evaluate the concept of Energy-Entropycompetition of neural networks (Zhang et al., 2018) in the context of the bias-variance trade-off(Geman et al., 1992) and reflect that a correlation exists between energy and entropy as opposed to1Under review as a conference paper at ICLR 2020a competition or trade-off as was first presented. Secondly we utilize the Fisher Information of theloss landscape in the area of a minimum to reflect that an optimal level of curvature exists within thelandscape which does not necessarily occur at the point in the landscape with the least curvature.Further to this, we provide a novel view on the overfitting of models to their training data using theloss landscape. Finally, the Fisher Information is utilized in defining the objective Bayesian priorknown as the Jeffreys prior and we show that the test error of the model reaches its minimum valueat the point in the landscape which corresponds to the Jeffreys prior. In addition, we show that at thispoint in the landscape the dimension of principal curvature of the model is at its maximum entropy.In doing so we also reflect the noise invariance of the dimension of principal curvature.2 B ACKGROUND2.1 F ISHER INFORMATION AND UNINFORMATIVE (JEFFREYS ) PRIORSThe Fisher Information (which we denote by ()) is a metric dependent on the model parametriza-tion which measures the amount of information that a sufficient statistic based on the observabledata, such as the variance of the data around the model predictions (Jaynes, 2003), carries aboutan unknown parameter . In the case of a Gaussian model, the Fisher Information is equal to theExpected Hessian of the log-likelihood of the Gaussian. The necessary regulatory conditions forthis equality to be true apply to the entire exponential family of distributions, however, in our case itis sufficient for this to hold for the Gaussian distribution (Ly et al., 2017). One of the key propertiesof the Fisher Information Matrix is that its determinant is invariant under reparametrizations of atrained model. 
Thus, when the parameters used in modelling a distribution are changed, the FisherInformation in each dimension will change, however, the determinant or volume of informationremains unchanged between the model parametrizations.The invariance property of the Fisher Information was the reason for its utilization in Jeffreys (1946)who sought to create a Bayesian prior with such an invariance property. The resultant prior is knownas the Jeffreys prior and is shown in Equation 1, where H()denotes both the Hessian and ExpectedHessian matrices (we will treat the Expected Hessian and Hessian interchangeably for the remainderof this work, as is common in the literature (Zhang et al., 2018; Karakida et al., 2019)).P()/pdet() =pdetH() (1)As has been shown in Jaynes (1968; 2003) the utility of the Jeffreys prior is not limited to theinvariance property, as the Jeffreys prior is an example of an uninformative or objective prior, andas a result informs the posterior distribution as little as possible. The Jeffreys prior is thus used toreflect complete prior ignorance about the correct model parametrization, resulting in a posteriordistribution with parameters completely determined by the observed data. A key perspective of thisproperty is that the Jeffreys prior, thus, imposes a uniform distribution over the function space of themodel, not the parameter space. This is due to the density of the prior being inversely proportional tothe Hessian at a certain parametrization, and as such places higher density on parametrizations with aunique function approximation and low parameter variance. This would result in an even distributionover the function approximations and as a result the choice between function approximations isleft to the model learning from the data. The relationship between the Hessian and the parametervariance of a model is discussed further in Section 2.2 below.2.2 T HEBIAS-VARIANCE DILEMMA , ENERGY -ENTROPY COMPETITION AND MINIMUMDESCRIPTION LENGTHIt is a well-known fact that a model learning to equate its conditional mean precisely to the valuesfound in the training data is not always beneficial to the performance of the model on unseen data.In particular when we observe a decrease in training error but increase in validation or test errorwe say that the model is overfitting the training data (Hawkins, 2004). In Geman et al. (1992) it isshown empirically that to decrease the variance of the data around a model’s predictions (reduce thetraining error) it is necessary for the variance in the model parameters to increase. Further, Gemanet al. (1992) reflect that a large parameter variance corresponds to the overfitting of the model tothe training data. This trade-off between the bias of the model and the variance of its parameters is2Under review as a conference paper at ICLR 2020known as the Bias-Variance Dilemma (Sammut & Webb, 2011). In Geman et al. (1992) the onlymeans presented to mitigate this dilemma is to obtain more training data.We see, however, that neural networks are capable of learning complex tasks with limited data andeven generalize in spite of the Bias-Variance Dilemma. In Zhang et al. (2018) it is argued thatthe success of neural networks is due to a bias of Gradient Descent towards wider minima in theloss landscape. To reflect this, Zhang et al. (2018) derive a Bayesian posterior distribution for theparameters of a model given the training data. To derive this distribution, Zhang et al. 
(2018) utilize aGaussian likelihood with conjugate Gaussian prior, which we generalize in Equation 2 by allowingany prior distribution exp(h())which results in a proper posterior distribution. The exponentialterm of this generalized prior h()is seen as some function of whilef(xi;)denotes the functionapproximation by the neural network. 2iis the variance of the output corresponding to data pointxi, and finally yiis the true output for a particular xiin the training data. The derivation of Equation2 can be found in Appendix A.1.P(jX) =1Zexp" PXi=1(yif(xi;))222ih() +12logdet(H())!#(2)From Equation 2 we see that to maximize the probability of a parametrization we must simultane-ously maximize the model likelihood (by minimizing the first term in the exponential), the priorprobability of the parametrization and the model entropy. The model entropy is reflected by thefinal term in the exponential and is inversely proportional to the Hessian of the loss landscape atthe parametrization. Using the posterior distribution Zhang et al. (2018) note that maximizing themodel likelihood is not the only factor which should be used in determining the model parametriza-tion, and in some cases it may be beneficial to trade-off some training error for an increase in modelentropy, which the authors called Energy-Entropy competition. Zhang et al. (2018) state that the biasof Gradient Descent towards wider minima, with smaller Hessian values, results in the model nat-urally maximizing entropy, aiding in its generalizability. We see, however, from the Bias-VarianceDilemma that by reducing the bias of the model on the training data, and increasing its likelihood,that the model entropy will naturally also increase due to the higher variance in the parameter values.With the perspective of both the Bias-Variance Dilemma and Energy-Entropy competition we seethat wide points in the loss landscape have been related to both overfitting and improved generaliz-ability of a model parametrization. Thus, from one perspective we aim for sharp minima within thelandscape and from the other we should aim for wide minima. The issue of the width of a minimumis further confounded by Dinh et al. (2017) which states that the width of a minimum is not a con-sistent indicator of the ability of a model to generalize. The impact of the width of a minimum inthe landscape is still an open question and one which we try address in this work.The Minimum Description Length (MDL) Principle is an information theoretic principle whichstates that the optimal model for a set of data provides the best compression of the data (Rissa-nen, 1978). In other words the optimal model is the simplest model which incurs the least trainingerror. This principle is another example of the trade-off between model complexity and minimiz-ing the model bias. Due to its assertion that the optimal model is the simplest unbiased one theMDL Principle is the mathematical formulation of Occam’s Razor, and is expressed by Equation 3,which reflects that to compress the data Doptimally we must find the parametrization with the netminimum entropy in the parameter space L()and in the data given the parametrization L(Dj).L(D) = min2#(L() +L(Dj)) (3)For the exponential family of likelihood distributions the Jeffreys prior is used to enforce the MDLproperty on the posterior distribution and results in a minimax optimal posterior, which is to saythat the maximal risk of the model parametrization is minimal out of all unbiased parametrizations(Lehmann & Casella, 2006). 
The minimax optimality property, thus, provides the lowest upperbound of the risk for all model parametrizations. Thus the MDL property is related to the Bias-Variance Dilemma and MDL posterior distributions aim to avoid overfitting.A final necessary principle which encompasses all of the topics above is the Likelihood Principle(Jaynes, 2003), which states that within the context of a specified model, the likelihood distribu-3Under review as a conference paper at ICLR 2020tionL(Dj)trained from data Dcontains all the information about the model parameters that iscontained in D.2.3 P RINCIPAL CURVATUREThe Jeffreys prior, and by extension the Fisher Information, finds further utility in its use as a rightHaar measure for the parameter space of a normal distribution (Berger, 2013). The Haar measure isused to assign an invariant volume to a subset of locally compact topological groups and, thus, formsan integral over the compact groups. In the case of the parameter space of the normal distributionthe topological groups will be of parametrizations with similar function approximations and, thus,similar loss metric within the basin surrounding a local minimum in the loss landscape. Further, wenote that the parameter space of a probabilistic model forms a statistical manifold and by extensiona Riemannian manifold (Rao, 1945). The metric tensor for statistical manifolds is the Fisher Infor-mation metric (Skovgaard, 1984), defined as the expected value of the individual elements of theFisher Information matrix, which forms the tangent space of such manifolds. As stated in Section2.1, in the case of Gaussian parameter spaces the Fisher Information Matrix can be equally derivedas the Hessian matrix of the loss function relative to the model parameters. This is significant asthe Hessian matrix is used in the area of a critical point on a Riemannian manifold for obtaining theshape operator (Spivak, 1970), and as a result the principal curvatures at the point (Porteous, 2001).In the case of a Gaussian parameter space the shape operator is the Gaussian curvature defined asthe determinant of the Hessian matrix det(H())(Koenderink & Van Doorn, 1992). The princi-pal curvatures are defined as the eigenvalues of the Hessian matrix and decompose the manifoldinto orthogonal dimensions of curvature, with the first eigenvector reflecting the dimension of mostcurvature.It is important to note that while the parameter space of a statistical model forms a Riemannian man-ifold, when parametrized by an overly-determined model such as a neural network, the parameterspace will not be Riemannian but rather semi-Riemannian, due to the fact that the Fisher Informa-tion metric will no longer be defined over the entire manifold. Such undefined points for the metricare a result of a singular metric tensor at the model parametrization and occur due to the covarianceof parameters within the model. Covariant parameters necessarily occur with the addition of hiddenlayers to the model and result in dimensions on the manifold in which the parameters may be variedwithout altering the behaviour of the model. This results in dimensions of no curvature along themanifold. 
As seen in Section 5, this is not a destructive point for the training procedure, however,we must necessarily remain cognisant of such covariant dimensions along the statistical manifold.3 M ODEL ENTROPY , THELOSSLANDSCAPE AND GENERALIZATIONThe aim of training a neural network is to find the most probable parametrization for a model asdetermined by the posterior probability reflected by Equation 2. This is achieved by maximizingthe combination of the likelihood, prior probability and entropy of the model parametrization. Thelikelihood we increase normally by decreasing the variance of the training data around the modelpredictions. The entropy term we have no direct control over as the landscape is completely deter-mined by the data, the sufficient statistic being used to determine the parameters (which is the lossmetric) and the hypothesis (the model architecture being trained). So the only component of theposterior left to be determined is the prior. As with most work in Bayesian statistics this is the mostdifficult part and must be treated with great care. There are presently two common approaches tosetting this prior distribution, the first of which being to not specify one, or more precisely use animplicit uniform prior (Chaudhari & Soatto, 2018) and, thus, use maximum likelihood estimationto determine the parameter values. The second common approach is to utilize a conjugate Gaussianwith a mean of 0for the prior. In practice this method takes the form of L2 regularization, alsoknown as weight decay (Krogh & Hertz, 1992), with h()/jjjj2in Equation 2. Neither approachhas proven to be sufficient consistently for deterring models from overfitting, without introducinga form of bias, due to their unjustified assumptions about the correct model parametrization. Thisis relatively clear in the conjugate Gaussian prior approach which assumes a mean of 0for the pa-rameter values. In a case where we have explicit prior knowledge that such a mean and distributionis in-fact correct for the model parameters then this would be a correct approach, however, in al-most every case we are completely ignorant to the values of the true parametrization and, thus, we4Under review as a conference paper at ICLR 2020would be biasing our models to some degree by using this prior. It is necessary when developing anunbiased model that this absence of prior knowledge be reflected in the training procedure.The source of error from the uniform prior is slightly more nuanced, as initial intuition would suggestthat giving equal probability to all values a parameter could take is a correct means of reflectingour prior ignorance about the parameter values. However, this method fails to accurately reflectthe probabilities that unlikely parametrizations may be correct and in a sense exhibits a kind ofconfirmation bias. As discussed in Section 2.2 the model which is optimal is the one which hasthe lowest variance of the data around its predictions L(Dj)while maintaining low variance inthe model parameter space L(). In addition the bias-variance trade-off (Geman et al., 1992) statethat to decrease data variance we must increase parameter variance reflecting the trade-off betweenL()andL(Dj)in the MDL equation. Further, we related this to the Energy-Entropy competitionconcept (Zhang et al., 2018), where it was also stated that neural networks are biased towards wideminima. 
Hence, by placing equal prior probability on all areas of the landscape we see that theuse of the uniform prior will result in a posterior distribution over the model parameters whichplaces excessive density on high variance areas of the landscape while at the same time places toolittle probability on very specific, low variance parametrizations of the model. This results in thedevelopment of a sub-optimal model which favours wider minima within the loss landscape and asa result excessively reduces the variance of the data around its predicted values.Thus we conjecture that a correct prior for a model would be the Jeffreys prior shown in Equation 1and Equation 4:P()/pdet() =pdetH() (4)As a result we see in Equation 2 that such a prior would give the equation for h()ash() =12logdet(H()) (5)We note that, with the use of the Jeffreys prior, the prior and entropy term in the posterior formulationcancel out, leaving the likelihood term as the only factor determining the posterior probability, ascan be see in Equation 6.P(jX) =1Zexp" PXi=1(yif(xi;))222i12logdet(H()) +12logdet(H())!#=1Zexp PXi=1(yif(xi;))222i! (6)It is, thus, possible to utilize the loss landscape in the area of a minimum to determine the degree ofcertainty we may have in our model parametrization being the true parametrization and as a resultdetermine the necessary Jeffreys prior probability. This is due to the fact that higher entropy meanswider minima which reflects higher parameter variance and the necessity to be less certain of theparametrization in that area. The opposite is true for an increase in certainty in our parametrizationat a sharp minimum. Furthermore, this would mean we are objectively setting our prior based on themodel behaviour given the observed data and sufficient statistic. Note, we do not say we determineour prior based on the hypothesis as the determinant of the Fisher Information/Hessian is invariantunder reparametrizations. This means that in the area of a minimum, by transforming the hypothesisto be modelled by an alternate set of parameters 0the dimensions and volume of the landscape willadjust such thatpdet(H()) =pdet(H(0))(Fisher, 1922). A necessary distinction regardingthe Jeffreys prior is that, while it places the full parameter determination on the data, it does notnecessarily result in a posterior distribution which has extracted all information from the data. Infor-mation which provides an insufficient decrease in data variance to warrant the increase in variancein the model parameters will not be utilized as the model naturally “distrusts” this information byproviding a relatively lower prior probability to the more entropic parametrization found in the widerbasin. This is where we see the utility of the Jeffreys prior with regard to the MDL property reflectedby Equation 3 as it balances both model complexity L()while fitting the data L(Dj).The primary power of the Jeffreys prior comes from the use of the Fisher Information. Naturally asthe model fits the data and captures information, the amount of information left in the data which5Under review as a conference paper at ICLR 2020remains uncaptured by the model decreases. This is observed as a decrease in the Fisher Informa-tion. We see, however, that the model entropy increases as the Hessian matrix, and by extensionthe Fisher Information, decreases, which is again in agreement with the bias-variance trade-off. 
Theconsequence of this observation is that to capture all the information from a sufficient statistic deter-mined by the training data we must utilize increasingly complex models, capable of modelling finerdetails found in the data. The utility of such fine details to the model performance exhibits dimin-ishing returns to a point where the perturbation of a parameter capturing these details will not resultin any significant deviation in the model behaviour. Simply, as more information is shared betweenparameters, the individual importance of a parameter decreases. This is in contrast to an under-parametrized model where the parameters capture as much of the most important information fromthe sufficient statistic as possible and rely heavily on this information in determining its behaviour.In light of the Fisher Information we see the Maximum Likelihood Principle further reflects that theuse of the uniform prior biases our models towards maximum entropy within the loss landscape byextracting all information from the training data at the expense of higher model complexity.It must be noted that the propensity of neural networks to extract all information from the trainingdata is not an inherently negative quality of the models. Quite the opposite, it is reflective of thepower and capability of the models which are designed to learn the variances within data and utilizethis information in determining their behaviour. As a result, however, the efficacy of these modelsis directly related to the reliability of the data on which they are trained and for all the informationfound in the training data to be present and reflective of the entire population of data being modelled.This is seldom the case as training data is inevitably noisy, either due to noise from sampling andcapturing of the data, or due to confounding aspects of the task domain which on average do notaffect the population data distribution but do provide a source of structured noise when their effectis observed on the training data. Minimizing the Fisher Information metric and fulfilling the Maxi-mum Likelihood Principle in such cases would reflect that the information found in the noise of thetraining data was utilized in determining the model parameter values, which is clearly undesirableand is known as the model over-fitting the training data (Hawkins, 2004).We, thus, see that the notion of the widest possible minima in the loss landscape providing thebest generalization performance is only true in a noiseless environment. The view, however, thatwide areas in the landscape generalize better is true as this width in the landscape would merelyreflect that the model has captured more information from the data than a parametrization foundin a steeper portion of the landscape. Naturally this would provide better test error performanceby the model if it has captured the information found in the training data which is reflective of theinformation within the population data distribution being modelled. We conclude that the width ofthe landscape in which a model finds itself is demonstrably important and that there is a precise widthin the landscape which provides the model with the best possible test error performance. This pointwould be exactly where the model prior is equal to the Jeffreys prior as determined by the FisherInformation of the loss landscape. 
This is due to the aversion of the Jeffreys prior to any informationwhich does not justify the increase in model entropy by a superior decrease in data variance orprediction error, while remaining objective in the sense that the prior is completely determined bythe loss landscape and has as little effect on the parameter posterior distribution as possible. Thus,noise in the training data which only serves to perturb and hinder the learning of the model withoutproviding sufficient benefit to how the training data is fit will not be learned by the model.4 M ETHODSFrom the above discussion it is clear that it is not enough to merely reduce the variance of the dataaround the model prediction as it is possible for the model to reduce this variance to an excessdegree. We, thus, require a metric for the difference between the model and true distributions whichis only minimized when the two distributions are identical. This is not the KL-divergence as thismetric merely reflects the density of a distribution Bwhich lies outside of another distribution A. Itis, thus, possible to minimize the KL-divergence while the distributions are not identical by havingdistributionAsurround or encapsulate distribution B. We will, thus, use the Jeffreys divergence asthe difference metric between the two distributions, as the Jeffreys divergence is uniquely minimizedwhen the two distributions are identical. The Jeffreys divergence is shown below in Equation 7 andis merely the sum of the KL-divergence for the true parameter distribution T()compared to the6Under review as a conference paper at ICLR 2020model parameter distribution P(jX)and the opposite KL-divergence. We show the derivation ofthe Jeffreys Divergence from this summation in Section A.2 of the Appendix.DJ(T()jjP(jX)) =Z(T()P(jX))(lnT()lnP(jX))d (7)Naturally Equation 7 is intractable due to the necessity to integrate over all parametrizations. How-ever, as discussed in Section 3, the use of maximum likelihood estimation results in an excess ofdensity on high variance parametrizations in the posterior parameter distribution in Equation 2. Itwas thus sufficient to evaluate the Jeffreys Divergence at a single point in parameter space and ob-serve the relative densities at that point, providing a distance metric as opposed to a divergencemetric. We will, thus, refer to the Jeffreys Distance for the remainder of this work, with the formulashown in Equation 8.DJ(T()jjP(jX)) = (T()P(jX))(lnT()lnP(jX)) (8)The necessity of a distance metric being positive semi-definite is upheld by this metric as it isclear when (T()P(jX))<0then (lnT()lnP(jX))<0. Likewise when (T()P(jX))>0then (lnT()lnP(jX))>0. This is a benefit of the symmetrical property ofthe Jeffreys Divergence which the KL-divergence does not possess. Thus, substituting the modelposterior formula from Equation 2 as well as the true model distribution in Equation 9 into thelogarithmic terms of Equation 8 we obtain the formulation shown in Equation 10.T() =1Zexp PXi=1(yif(xi;))222i!(9)(lnT()lnP(jX)) = PXi=1(yif(xi;))222i+PXi=1(yif(xi;))222ih() +12logdet(H()) +ZZ(10)Assuming now that =, and thusf(xi;) =f(xi;), as would be the case at the end ofan unbiased training procedure, we see that the two variance terms in Equation 10 will cancel out.Further, we see that the only means of obtaining a 0value for the expression is to use the Jeffreysprior causing h()to cancel with the entropy term12logdet(H()), as discussed above in Section3, with Equation 5 and Equation 6. 
Finally we see that using the Jeffreys prior would result in theposterior model distribution shown in the last line of Equation 6. If f(xi;) =f(xi;)then it isclear thatZ=Zis the necessary corresponding normalizing constant, and, thus, these terms willalso cancel out in Equation 10. A similar argument can be made for the probabilities component ofthe Jeffreys distance (T()P(jX))in Equation 8, whereby we equate the two likelihood terms,then the use of the Jeffreys prior ensures that the posterior distributions do not differ resulting in aJeffreys Distance of 0.From Equation 10 we see that the use of the Jeffreys prior while minimizing the error of the modeldirects the model to a parametrization which results in the model likelihood being equal to that ofthe true underlying distribution likelihood. This allowed us to determine the point at which themodel found the most probable parametrization in the landscape corresponding to the use of theJeffreys prior, as being the parametrization which equated the model likelihood and true distributionlikelihood values. In light of the bias-variance trade-off this would also mean that if the modelreduces its bias to a greater extent that it would have moved to a point of excess parameter varianceand model entropy as well of an excessively low Fisher Information metric. As expressed in Section3 this is indicative of the use of noise by the model in determining its parameter values. As a resultthe model would have overfit the training data. We, thus, reach the intuitive conclusion that once amodel parametrization becomes a more likely distribution to have generated the training data thanthe underlying true distribution itself, that the model will have began to overfit the training data.7Under review as a conference paper at ICLR 2020The first empirical result presented in Section 5 is, thus, a distribution reflecting the proximity ofthe training step at which minimum test error was reached to the training step at which the modellikelihood is equal to that of the true underlying distribution likelihood (we will refer to this as thelikelihoods intersecting) over a number of network training procedures. The aim of this experimentis to empirically confirm that the Jeffrey Prior parametrization provides the minimum test error fora model. For this experiment it is necessary to possess a ground truth on the likelihood of the truedata generating distribution on the training data. Unfortunately such ground truth information is notavailable on real-world data sets. As a result it was necessary for our experiments to use syntheticdata which was generated by a ground truth network, which we shall refer to as the “True” network.A “Training” network will then learn to model this ground truth network on noisy training data.Hence, the procedure for this first experiment is as follows. We create a randomly generated Truenetwork with depth between 5and15layers. The widths of the model layers are randomly sam-pled between 5and100neurons. The layers are sorted in descending order of width, as is commonfor network architectures used in practice and results in the wider, earlier layers extracting fea-tures for the later layers. We then prepend the 100neuron input layer and append the 1neuronoutput layer. All layers except the last are sigmoidal. This is the model used to generate data.We then randomly initialize our training network. 
The number of layers in this network is ran-domly chosen from the range of [TrueNetworkSize + 5;25]to ensure we obtain a sufficientlylarge network to overfit the data. The widths of this network’s layers are sampled from the rangeof[TrueNetworksSmallestLayer; 100]. This is again to ensure the model is over-parametrized.The True networks parameter values as well as the Training networks initial values are sampleduniformly from [1:0;1:0]with a random positional bias added to the True network parameters inthe range of [0:5;0:5]. This bias is to ensure the Training network starts with a significant degreeof error. Finally, we utilize randomly sampled values between [0:0;1:0]as input to the models, witha training batch size of 50data points and a test batch size of 500. This data is input to the Truenetwork and we obtain the corresponding data labels as output. Lastly we add Gaussian noise to theTraining data only (while the Test data remains clean) with a mean of 0and variance of 0:2. TheTraining network is then trained to model the True network using this data and we observe the pointswhere the likelihoods intersect and where the test error is minimized. This process was repeated for935separate training procedures. The distribution of the distances between the likelihoods inter-secting and the minimum test error are shown below in Section 5.Having observed the relationship between test error and the relative likelihoods of the training andtrue networks, we then decompose the parameter landscape into its dimensions of principal curva-ture to reflect the impact of noise in the training data on the landscape by first training on noiselessdata until a certain training error is achieved (error <0:4), at which point the noise is added to thedata. We observe and interpret the resulting impact of the addition of the noise on the curvature ofthe landscape. Secondly we aim to observe the relationship between the dimensions of principal cur-vature and the generalizability of a model parametrization by observing the entropy on a Riemanniansub-manifold of the original statistical manifold (Chen, 2019), defined over the primary dimensionsof principal curvature, relative to the test error and likelihood values of a model parametrization.Due to the necessity to calculate a full Hessian matrix1for these experiments a smaller network wasutilized, than in the first experimental procedure. The procedure is the same as described above forthe first experimental procedure, however, in this case the training model was composed of 4hiddenlayers with widths of 25,20,12and7neurons respectively. As shown empirically in Karakida et al.(2019), however, the behaviour of the Eigen-decomposition of the Fisher Information is independentof the size of the network architecture, and, hence, this smaller network is sufficient for observingthe effect of noise on the dimensions of principle curvature of the model. Hence, the Hessian matrixcalculated has a dimensionality of 831831elements. The training data was again obtained froma true network. 
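Before the data details for this smaller setup, a note on the computation itself: the footnote below states that the Hessian was computed with the Jax library. The following is a minimal sketch (our own toy reconstruction with a hypothetical one-layer model, far smaller than the 831-parameter network above) of how such a principal-curvature decomposition can be obtained.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
x = jax.random.uniform(key, (50, 4))                  # toy inputs
y = jnp.sum(x, axis=1, keepdims=True)                 # toy regression targets
w_init = jax.random.normal(key, (4, 1)) * 0.1         # parameter vector

def loss(w):
    pred = jax.nn.sigmoid(x @ w)                      # one sigmoidal layer
    return jnp.mean((y - pred) ** 2)

H = jax.hessian(loss)(w_init).reshape(4, 4)           # full Hessian of the loss
eigvals, eigvecs = jnp.linalg.eigh(H)                 # principal curvatures
print(eigvals)   # small eigenvalues = flat, high-entropy directions
```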
In this case different permutations of 25-bit binary strings were input to this modelwhich returned the corresponding scalar using one hidden layer of 5neurons and a linear outputlayer.1The Hessian matrix was calculated using the Jax library (Bradbury et al., 2018)8Under review as a conference paper at ICLR 2020Figure 1: Distributions of the number of parameter update steps (left) and the difference in test error(right) between the Jeffreys prior parametrization and the minimum test error from 935individualtraining procedures (Kernel Density Estimate shown in red on the left).5 R ESULTSAs stated in Section 4 the first empirical result aimed to determine if the point in the landscapewhich is most probable under the posterior distribution resulting from the use of the Jeffreys priorpossesses the optimal test error performance. The results of this first experiment are presented inFigure 1, where the left image reflects the distribution of the number of parameter update stepsbetween where the Jeffreys prior parametrization occurs (where we observe the intersection of thelikelihoods) and where minimum test error occurs. We see in this image that the highest density isplaced around 0, with a vast majority of training procedures having the Jeffreys prior parametrizationcoincide exactly with the point of minimum test error. This supports the assertion that the Jeffreysprior results in the parametrization with the best generalization performance. We do, however,observe a small uniform spread of density to the right of 0in this image. We observed that this isdue to the test error oscillating once the Jeffreys prior parametrization is reached. This is merelya result of our inability to fine-tune the learning rate for the individual training procedures of therandomly generated training networks, with significantly different architectures. To reflect the factthat the test error for the Jeffreys prior parametrization in the trainings where oscillations occur isnegligibly different from the minimum test error we present the right image of Figure 1. In this imagewe plot the density of the difference in test error of the Jeffreys prior parametrization compared tothe minimum test error of the 935separate training procedures. To make the error independent ofthe size of the regression values being modelled we divide the error by the mean of the regressiony-values (generally this value is around 2:0for the respective training procedures). We observe thatin all trainings the discrepancy in test error is less than 0:1, with almost all discrepancies being lessthan0:05. These error discrepancies are negligible and, thus, these results empirically confirm ourhypothesis that the Jeffreys prior parametrization corresponds to the minimum test error.The results of injecting noise into the training data only once the model has been sufficiently trainedon clean data can be see in Figure 2. In this figure we present the Eigenvalues of 3of the 5principalcurvatures of the loss landscape. Thus, each value reflects the inverse of the variance of the modelin the dimension of the corresponding Eigenvector and a lower value reflects a higher entropy inthe given dimension. From these results we can see that the injection of noise results in a suddenincrease in the entropy of the lower principal curvature dimensions but has no effect on the firstdimension of principal curvature. 
As the Fisher Information is reflected by the entropy this wouldmean that the low principal curvature dimensions capture more information at the injection of thenoise into the data, while the information captured by the first principal curvature dimension remainssmooth throughout the training procedure. This would reflect a noise invariance of the principalcurvature of the landscape and as a result we can see that the dimension of principal curvature inthe landscape is exclusively responsible for the capturing of the true or primary data information.In light of this noise invariance we observed the entropy of a sub-manifold corresponding only tothe dimensions of high curvature (Eigenvalues >1) during the training of a model. The results ofthis experiment are shown in Figure 3. In agreement with Figure 1 we see that in the area of theJeffreys prior parametrization (intersection of likelihood values), the test error reaches its minimum9Under review as a conference paper at ICLR 2020Figure 2: Impact of the addition of noise on 3of the 5Principal Curvatures of the Loss Landscape.Vertical lines signifies point of noise injection.Figure 3: The Jeffreys prior parametrization is found in the area of parameter space minimizing testerror and maximizing entropy on the principal sub-manifold.value. A number of insights can be gained from the third image in Figure 3. Firstly, that the FisherInformation matrix and the Fisher Information Metric, are non-singular and positive semi-definiteon this sub-manifold. This reflects that the dimensions responsible for capturing true information inthe data are convex, with positive Gaussian curvature and that it is sufficient for the model to merelyminimize this well-behaved region of parameter space. We, further, observe that the Jeffreys priorparametrization maximizes the entropy of this sub-manifold, reflecting that it still captures all trueinformation from the data. We see the green portion of the entropy metric as being the area wherethe entropy is within 0:003of its maximum value. The fact that the entropy begins decreasing laterin the training is reflective of the model forgetting true information while learning the noise once itstarts overfitting. The fact that the entropy stagnates past the Jeffreys prior parametrization is due tothe fact that the lower dimensions of principal curvatures in the sub-manifold were minorly sensitiveto noise and that in this region the model is beginning to learn noise and forget true information atthe same rate. We have, thus, demonstrated that maximizing entropy is beneficial in the absence ofnoise. However, when noise is present in the data, maximum entropy corresponds to overfitting.6 C ONCLUSIONWe see that the notion of the width of the loss landscape being an indicator of a robust parametriza-tion is correct, however, this is conditional upon the model being developed in a noiseless domainor, more significantly along a dimension of parameter space which is independent to the noise ofthe domain. With the aid of the Fisher Information perspective of the geometry of the landscapewe see that the higher entropic points in the landscape directly reflect the absence of further in-formation upon which the parameter values may be determined. Thus, we see the propensity ofmaximum likelihood models towards such high entropic points as a reflection of their propensityto utilize all information in determining the parameter values, including noise. 
Thus, we make the final conclusion that the point of maximum entropy in the loss landscape does not possess the best generalization performance and corresponds to the overfitting of the model to the training data. Instead, the optimal point in the landscape occurs at maximum entropy in the dimension of principal curvature, which corresponds to the most probable parametrization found by a Bayesian posterior distribution resulting from the use of the Jeffreys prior.<|im_end|> <|im_start|>assistant ### Review Title Official Blind Review #1 ### Review Text This paper targets a deep learning theory contribution based on information geometry. This contribution is tightly based on Zhang et al. (2018) and explains the generalization of deep learning from a Bayesian perspective. The main contribution the authors claim is that an optimal degree of curvature exists which gives the best generalization guarantees, in contrast to the commonly perceived "the wider the better". First of all, the writing (including language etc.) is of poor quality, to the extent that the submission is very difficult to read and could be rejected merely on this basis, with unusual expressions, missing punctuation, overly long sentences, and wrongly used words. The reviewer won't list examples here because they are everywhere. What is even worse are the conceptual errors and defective derivations. For example, in eq.(1), the authors equate the Fisher information matrix (which is an expected Hessian) to the Hessian matrix; this is subject to conditions which must be clearly given right before/after the equation. As their results are largely based on the correctness of eq.(2), let's examine the derivations in appendix A.1. In the first equation in A.1, what is the subindex "j"? "Utilizing Laplace Approximation of the integral": such approximations have conditions that must be clearly stated. It is not clear how one can get the last approximation on page 12 from the previous equations. In summary, their eq.(2) is a loose approximation which is subject to a set of conditions (that are not given), and the derivation is of poor quality. As a theoretical contribution, the authors did not manage to converge to some simple and clear statements (theorems or equivalent). Instead, the contribution is largely *explanatory*. It is hard to observe anything new, given the poor writing and organization. The first 4 pages are mainly introductions of previous works. The authors used information geometry and minimum description length to explain the generalization of deep learning. This is a small area, and it is hard to miss closely related works by simple searching; instead, the authors only cited Rissanen (1978). On the other hand, as the authors used the spectral properties of the Fisher information matrix, there are some recent works by Amari which could be cited. ### Review Rating 1: Reject ### Review Confidence <|im_end|> <|im_end|>
MJAqnaC2vO1
ICLR.cc/2021/Conference
2021
Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation
["Hao Li", "Chenxin Tao", "Xizhou Zhu", "Xiaogang Wang", "Gao Huang", "Jifeng Dai"]
Designing proper loss functions is essential in training deep networks. Especially in the field of semantic segmentation, various evaluation metrics have been proposed for diverse scenarios. Despite the success of the widely adopted cross-entropy loss and its variants, the mis-alignment between the loss functions and evaluation metrics degrades the network performance. Meanwhile, manually designing loss functions for each specific metric requires expertise and significant manpower. In this paper, we propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric. We substitute the non-differentiable operations in the metrics with parameterized functions, and conduct parameter search to optimize the shape of loss surfaces. Two constraints are introduced to regularize the search space and make the search efficient. Extensive experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently. The searched losses can generalize well to other datasets and networks. Code shall be released at https://github.com/fundamentalvision/Auto-Seg-Loss.
["Loss Function Search", "Metric Surrogate", "Semantic Segmentation"]
ABSTRACTDesigning proper loss functions is essential in training deep networks. Especiallyin the field of semantic segmentation, various evaluation metrics have been pro-posed for diverse scenarios. Despite the success of the widely adopted cross-entropy loss and its variants, the mis-alignment between the loss functions andevaluation metrics degrades the network performance. Meanwhile, manually de-signing loss functions for each specific metric requires expertise and significantmanpower. In this paper, we propose to automate the design of metric-specific lossfunctions by searching differentiable surrogate losses for each metric. We substi-tute the non-differentiable operations in the metrics with parameterized functions,and conduct parameter search to optimize the shape of loss surfaces. Two con-straints are introduced to regularize the search space and make the search efficient.Extensive experiments on PASCAL VOC and Cityscapes demonstrate that thesearched surrogate losses outperform the manually designed loss functions con-sistently. The searched losses can generalize well to other datasets and networks.Code shall be released at https://github.com/fundamentalvision/Auto-Seg-Loss .1 I NTRODUCTIONLoss functions are of indispensable components in training deep networks, as they drive the featurelearning process for various applications with specific evaluation metrics. However, most metrics,like the commonly used 0-1 classification error, are non-differentiable in their original forms andcannot be directly optimized via gradient-based methods. Empirically, the cross-entropy loss serveswell as an effective surrogate objective function for a variety of tasks concerning categorization.This phenomenon is especially prevailing in image semantic segmentation, where various evaluationmetrics have been designed to address the diverse task focusing on different scenarios. Some metricsmeasure the accuracy on the whole image, while others focus more on the segmentation boundaries.Although cross-entropy and its variants work well for many metrics, the mis-alignment betweennetwork training and evaluation still exist and inevitably leads to performance degradation.Typically, there are two ways for designing metric-specific loss functions in semantic segmenta-tion. The first is to modify the standard cross-entropy loss to meet the target metric (Ronnebergeret al., 2015; Wu et al., 2016). The other is to design other clever surrogate losses for specific eval-uation metrics (Rahman & Wang, 2016; Milletari et al., 2016). Despite the improvements, thesehandcrafted losses need expertise and are non-trivial to extend to other evaluation metrics.In contrast to designing loss functions manually, an alternative approach is to find a framework thatcan design proper loss functions for different evaluation metrics in an automated manner, motivatedby recent progress in AutoML (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018; Li et al., 2019).Although automating the design process for loss functions is attractive, it is non-trivial to apply anEqual contribution.yThis work is done when Hao Li and Chenxin Tao are interns at SenseTime Research.zCorresponding author.1Published as a conference paper at ICLR 2021AutoML framework to loss functions. Typical AutoML algorithms require a proper search space, inwhich some search algorithms are conducted. Previous search spaces are either unsuitable for lossdesign, or too general to be searched efficiently. Recently Li et al. (2019) and Wang et al. 
(2020)proposed search spaces based on existing handcrafted loss functions. And the algorithm searches forthe best combination. However, these search spaces are still limited to the variants of cross-entropyloss, and thus do not address the mis-alignment problem well.In this paper, we propose a general framework for searching surrogate losses for mainstream non-differentiable segmentation metrics. The key idea is that we can build the search space according tothe form of evaluation metrics. In this way, the training criteria and evaluation metrics are unified.Meanwhile, the search space is compact enough for efficient search. Specifically, the metrics arefirst relaxed to the continuous domain by substituting the one-hot prediction and logical operations,which are the non-differentiable parts in most metrics, with their differentiable approximations.Parameterized functions are introduced to approximate the logical operations, ensuring that the losssurfaces are smooth while effective for training. The loss parameterization functions can be ofarbitrary families defined on [0;1]. Parameter search is further conducted on the chosen familyso as to optimize the network performance on the validation set with the given evaluation metric.Two essential constraints are introduced to regularize the parameter search space. We find that thesearched surrogate losses can effectively generalize to different networks and datasets. Extensiveexperiments on Pascal VOC (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) showour approach delivers accuracy superior than the existing losses specifically designed for individualsegmentation metrics with a mild computational overhead.Our contributions can be summarized as follows: 1) Our approach is the first general framework ofsurrogate loss search for mainstream segmentation metrics. 2) We propose an effective parameterregularization and parameter search algorithm, which can find loss surrogates optimizing the targetmetric performance with mild computational overhead. 3) The surrogate losses obtained via the pro-posed searching framework promote our understandings on loss function design and by themselvesare novel contributions, because they are different from existing loss functions specifically designedfor individual metrics, and are transferable across different datasets and networks.2 R ELATED WORKLoss function design is an active topic in deep network training (Ma, 2020). In the area of imagesemantic segmentation, cross-entropy loss is widely used (Ronneberger et al., 2015; Chen et al.,2018). But the cross-entropy loss is designed for optimizing the global accuracy measure (Rahman& Wang, 2016; Patel et al., 2020), which is not aligned with many other metrics. Numerous studiesare conducted to design proper loss functions for the prevalent evaluation metrics. For the mIoUmetric, many works (Ronneberger et al., 2015; Wu et al., 2016) incorporate class frequency to mit-igate the class imbalance problem. For the boundary F1 score, the losses at boundary regions areup-weighted (Caliva et al., 2019; Qin et al., 2019), so as to deliver more accurate boundaries. Theseworks carefully analyze the property of specific evaluation metrics, and design the loss functions ina fully handcrafted way, which needs expertise. By contrast, we propose a unified framework forderiving parameterized surrogate losses for various evaluation metrics. Wherein, the parameters aresearched by reinforcement learning in an automatic way. 
The networks trained with the searchedsurrogate losses deliver accuracy on par or even superior than those with the best handcrafted losses.Direct loss optimization for non-differentiable evaluation metrics has long been studied for struc-tural SVM models (Joachims, 2005; Yue et al., 2007; Ranjbar et al., 2012). However, the gradientsw.r.t. features cannot be derived from these approaches. Therefore, they cannot drive the trainingof deep networks through back-propagation. Hazan et al. (2010) proposes to optimize structuralSVM with gradient descent, where loss-augmented inference is applied to get the gradients of theexpectation of evaluation metrics. Song et al. (2016) further extends this approach to non-linearmodels (e.g., deep neural networks). However, the computational complexity is very high duringeach step in gradient descent. Although Song et al. (2016) and Mohapatra et al. (2018) have de-signed efficient algorithms for the Average Precision (AP) metric, other metrics still need speciallydesigned efficient algorithms. Our method, by contrast, is general for the mainstream segmentationmetrics. Thanks to the good generalizability, our method only needs to perform the search process2Published as a conference paper at ICLR 2021once for a specific metric, and the searched surrogate loss can be directly used henceforth. Applyingthe searched loss for training networks brings very little additional computational cost.Surrogate loss is introduced to derive loss gradients for the non-differentiable evaluation metrics.There are usually two ways for designing surrogate losses. The first is to handcraft an approximateddifferentiable metric function. For the IoU measure, Rahman & Wang (2016) propose to approxi-mate the intersection and union seperately using the softmax probabilities in a differentiable form,and show its effectiveness on binary segmentation tasks. Berman et al. (2018) further deal withmulti-class segmentation problems by extending mIoU from binary inputs to the continuous domainwith the convex Lov `asz extension, and their method outperforms standard cross entropy loss inmulti-class segmentation tasks. For the F1 measure, dice loss is proposed by Milletari et al. (2016)as a direct objective by substituting the binary prediction with the softmax probability. In spite ofthe success, they do not apply for other metrics.The second solution is to train a network to approximate the target metric. Nagendar et al. (2018)train a network to approximate mIoU. Patel et al. (2020) design a neural network to learn embeddingsfor predictions and ground truths for tasks other than segmentation. This line of research focuseson minimizing the approximation error w.r.t. the target metrics. But there is no guarantee that theirapproximations provide good loss signals for training. These approximated losses are just employedin a post-tuning setup, still relying on cross-entropy pre-trained models. Our method significantlydiffers in that we search surrogate losses to directly optimize the evaluation metrics in applications.AutoML is a long-pursued target of machine learning (He et al., 2019). Recently a sub-field ofAutoML, neural architecture search (NAS), has attracted much attention due to its success in au-tomating the process of neural network architecture design (Zoph & Le, 2017; Pham et al., 2018;Liu et al., 2018). As an essential element, loss function has also raised the interest of researchers toautomate its design process. Li et al. (2019) and Wang et al. 
(2020) design search spaces based on ex-isting human-designed loss functions and search for the best combination parameters. There are twoissues: a) the search process outputs whole network models rather than loss functions. For every newnetwork or dataset, the expensive search procedure is conducted again, and b) the search space arefilled with variants of cross-entropy, which cannot solve the mis-alignment between cross-entropyloss and many target metrics. By contrast, our method outputs the searched surrogate loss functionsof close form with the target metrics, which are transferable between networks and datasets.3 R EVISITING EVALUATION METRICS FOR SEMANTIC SEGMENTATIONVarious evaluation metrics are defined for semantic segmentation, to address the diverse task focus-ing on different scenarios. Most of them are of three typical classes: Acc-based, IoU-based, andF1-score-based. This section revisits the evaluation metrics, under a unified notation set.Table 1 summarizes the mainstream evaluation metrics. The notations are as follows: suppose thevalidation set is composed of Nimages, labeled with categories from Cclasses (background in-cluded). Let In;n2f1;:::;Ngbe then-th image, and Ynbe the corresponding ground-truthsegmentation mask. Here Yn=fyn;c;h;wgc;h;w is a one-hot vector, where yn;c;h;w2f0;1gindi-cates whether the pixel at spatial location (h;w)belongs to the c-th category ( c2f1;:::;Cg).In evaluation, the ground-truth segmentation mask Ynis compared to the network prediction^Yn=f^yn;c;h;wgc;h;w , where ^yn;c;h;w2f0;1g.^yn;c;h;w is quantized from the continuous scoresproduced by the network (by argmax operation).Acc-based metrics. The global accuracy measure (gAcc) counts the number of pixels correctlyclassified. It can be written with logical operator AND as Eq. (1). The gAcc metric counts eachpixel equally, so the results of the long-tailed categories have little impact on the metric number.The mean accuracy (mAcc) metric mitigates this by normalizing within each category as in Eq. (2).IoU-based metrics. The evaluation is on set similarity rather than pixel accuracy. The intersection-over-union (IoU) score is evaluated between the prediction and the ground-truth mask of each cate-gory. The mean IoU (mIoU) metric averages the IoU scores of all categories, as in Eq. (3).In the variants, the frequency weighted IoU (FWIoU) metric weighs each category IoU score by thecategory pixel number, as in Eq. (4). The boudary IoU (BIoU) (Kohli et al., 2009) metric only caresabout the segmentation quality around the boundary, so it picks the boundary pixels out in evaluation3Published as a conference paper at ICLR 2021Table 1: Revisiting mainstream metrics for semantic segmentation. The metrics with ymeasure thesegmentation accuracy on the whole image. 
The metrics with ∗ focus on the boundary quality.

Acc-based:
- Global Accuracy†:
$$\mathrm{gAcc} = \frac{\sum_{n,c,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,c,h,w} y_{n,c,h,w}} \qquad (1)$$
- Mean Accuracy†:
$$\mathrm{mAcc} = \frac{1}{C}\sum_{c} \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,h,w} y_{n,c,h,w}} \qquad (2)$$

IoU-based:
- Mean IoU†:
$$\mathrm{mIoU} = \frac{1}{C}\sum_{c} \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{OR}\, y_{n,c,h,w}} \qquad (3)$$
- Frequency Weighted IoU†:
$$\mathrm{FWIoU} = \sum_{c} \frac{\sum_{n,h,w} y_{n,c,h,w}}{\sum_{n,c',h,w} y_{n,c',h,w}} \cdot \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{OR}\, y_{n,c,h,w}} \qquad (4)$$
- Boundary IoU∗:
$$\mathrm{BIoU} = \frac{1}{C}\sum_{c} \frac{\sum_{n}\sum_{h,w \in \mathrm{BD}(y_n)} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n}\sum_{h,w \in \mathrm{BD}(y_n)} \hat{y}_{n,c,h,w} \,\mathrm{OR}\, y_{n,c,h,w}}, \quad \text{where } \mathrm{BD}(y) = y \,\mathrm{XOR}\, \text{Min-Pooling}(y) \qquad (5)$$

F1-score-based:
- Boundary F1 Score∗:
$$\text{BF1-score} = \frac{1}{C}\sum_{c} \frac{2\,\mathrm{prec}_c \cdot \mathrm{recall}_c}{\mathrm{prec}_c + \mathrm{recall}_c}, \quad \text{where}$$
$$\mathrm{prec}_c = \frac{\sum_{n,h,w} \mathrm{BD}(\hat{y}_n)_{c,h,w} \,\mathrm{AND}\, \text{Max-Pooling}\big(\mathrm{BD}(y_n)_{c,h,w}\big)}{\sum_{n,h,w} \mathrm{BD}(\hat{y}_n)_{c,h,w}}, \qquad \mathrm{recall}_c = \frac{\sum_{n,h,w} \text{Max-Pooling}\big(\mathrm{BD}(\hat{y}_n)_{c,h,w}\big) \,\mathrm{AND}\, \mathrm{BD}(y_n)_{c,h,w}}{\sum_{n,h,w} \mathrm{BD}(y_n)_{c,h,w}} \qquad (6)$$

The BIoU metric thus evaluates only within the boundary region and ignores the rest of the pixels. It can be calculated with Eq. (5), in which BD(y_n) denotes the boundary region in map y_n. BD(y_n) is derived by applying the XOR operation on the min-pooled ground-truth mask. The stride of the Min-Pooling(·) is 1.

F1-score-based metrics. F1-score is a criterion that takes both precision and recall into consideration. A well-known metric of this type is the boundary F1-score (BF1-score) (Csurka et al., 2013), which is widely used for evaluating boundary segmentation accuracy. The computation of precision and recall in BF1-score is as in Eq. (6), where BD(ŷ_n) and BD(y_n) are derived from Eq. (5). Max pooling with stride 1, Max-Pooling(·), is applied on the boundary regions to allow error tolerance.

4 AUTO SEG-LOSS FRAMEWORK

In the Auto Seg-Loss framework, the evaluation metrics are transferred into continuous surrogate losses with learnable parameters, which are further optimized. Fig. 1 illustrates our approach.

4.1 EXTENDING METRICS TO SURROGATES

As shown in Section 3, most segmentation metrics are non-differentiable because they take one-hot prediction maps as input and contain binary logical operations. We extend these metrics to continuous loss surrogates by smoothing the non-differentiable operations within.

Extending One-hot Operation. The one-hot prediction map, $\hat{Y}_n = \{\hat{y}_{n,c,h,w}\}_{c,h,w}$, is derived by picking the highest-scoring category at each pixel, which is then turned into one-hot form. Here, we approximate the one-hot predictions with softmax probabilities, as

$$\hat{y}_{n,c,h,w} \approx \tilde{y}_{n,c,h,w} = \mathrm{Softmax}_c(z_{n,c,h,w}), \qquad (7)$$

where $z_{n,c,h,w} \in \mathbb{R}$ is the category score output by the network (without normalization). The approximated one-hot prediction is denoted by $\tilde{y}_{n,c,h,w}$.
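As a concrete illustration (our own sketch, not code from the paper), the snippet below builds the hard one-hot prediction, its softmax relaxation from Eq. (7), and evaluates the mIoU of Eq. (3) on binary masks; the naive soft extension is differentiable, but the logical operations it relies on are treated next.

```python
import numpy as np

C, H, W = 3, 4, 4
z = np.random.randn(C, H, W)                                  # category scores
y = np.eye(C)[np.random.randint(C, size=(H, W))].transpose(2, 0, 1)  # one-hot GT

y_hard = (z == z.max(axis=0, keepdims=True)).astype(float)    # argmax one-hot
y_soft = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)     # Eq. (7) relaxation

def miou(pred, gt):
    inter = (pred * gt).sum(axis=(1, 2))                      # AND on binary masks
    union = (pred + gt - pred * gt).sum(axis=(1, 2))          # OR on binary masks
    return np.mean(inter / np.maximum(union, 1e-8))           # Eq. (3), mean over C

print(miou(y_hard, y))   # the non-differentiable evaluation value
print(miou(y_soft, y))   # naive soft extension: differentiable but unparameterized
```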
Extending Logical Operations. As shown in Table 1, the non-differentiable logical operations, $f_{\mathrm{AND}}(y_1,y_2)$, $f_{\mathrm{OR}}(y_1,y_2)$, and $f_{\mathrm{XOR}}(y_1,y_2)$, are indispensable components in these metrics. Because the XOR operation can be constructed from AND and OR, $f_{\mathrm{XOR}}(y_1,y_2) = f_{\mathrm{OR}}(y_1,y_2) - f_{\mathrm{AND}}(y_1,y_2)$, we focus on extending $f_{\mathrm{AND}}(y_1,y_2)$ and $f_{\mathrm{OR}}(y_1,y_2)$ to the continuous domain. Following the common practice, the logical operators are substituted with arithmetic operators,

$$f_{\mathrm{AND}}(y_1,y_2) = y_1 y_2, \qquad f_{\mathrm{OR}}(y_1,y_2) = y_1 + y_2 - y_1 y_2, \qquad (8)$$

where $y_1, y_2 \in \{0,1\}$.

Figure 1: Overview of the proposed Auto Seg-Loss framework. The surfaces of h_AND and h_OR shown in the "Optimal Parameterization" panel illustrate the searched optimal parameterization for mIoU.

Eq. (8) can be directly extended to take continuous $y_1, y_2 \in [0,1]$ as inputs. By such an extension, together with the approximated one-hot operation, a naïve version of differentiable surrogate losses can be obtained. The strength of such surrogates is that they are directly derived from the metrics, which significantly reduces the gap between training and evaluation. However, there is no guarantee that the loss surfaces formed by naïvely extending Eq. (8) provide accurate loss signals. To adjust the loss surfaces, we parameterize the AND and OR functions as

$$h_{\mathrm{AND}}(y_1,y_2;\theta_{\mathrm{AND}}) = g(y_1;\theta_{\mathrm{AND}})\, g(y_2;\theta_{\mathrm{AND}}),$$
$$h_{\mathrm{OR}}(y_1,y_2;\theta_{\mathrm{OR}}) = g(y_1;\theta_{\mathrm{OR}}) + g(y_2;\theta_{\mathrm{OR}}) - g(y_1;\theta_{\mathrm{OR}})\, g(y_2;\theta_{\mathrm{OR}}), \qquad (9)$$

where $g(y;\theta): [0,1] \to \mathbb{R}$ is a scalar function parameterized by $\theta$.

The parameterized function $g(y;\theta)$ can be from arbitrary function families defined on $[0,1]$, e.g., piecewise linear functions and piecewise Bézier curves. With a chosen function family, the parameters $\theta$ control the shape of the loss surfaces. We seek to search for the optimal parameters so as to maximize the given evaluation metric.

Meanwhile, optimal parameter search is non-trivial. With the introduced parameters, the plasticity of the loss surfaces is strong. The parameterized loss surfaces may well be chaotic, or be far away from the target evaluation metric even at binary inputs. For more effective parameter search, we regularize the loss surfaces by introducing two constraints on $g(y;\theta)$.

Truth-table constraint is introduced to enforce the surrogate loss surfaces taking the same values as the evaluation metric score at binary inputs. This is applied by enforcing

$$g(0;\theta) = 0, \qquad g(1;\theta) = 1. \qquad (10)$$

Thus, the parameterized functions $h(y_1,y_2;\theta)$ preserve the behavior of the corresponding logical operations $f(y_1,y_2)$ on binary inputs $y_1, y_2 \in \{0,1\}$.
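To make Eq. (9) and the truth-table constraint concrete, here is a small sketch (our illustration; the specific $g(y) = y^\gamma$ used below is only a stand-in choice satisfying $g(0)=0$ and $g(1)=1$, not the function family searched in the paper).

```python
import numpy as np

def g(y, gamma=2.0):
    return y ** gamma                     # g(0)=0, g(1)=1, monotone on [0,1]

def h_and(y1, y2, gamma=2.0):
    return g(y1, gamma) * g(y2, gamma)    # Eq. (9), AND surrogate

def h_or(y1, y2, gamma=2.0):
    a, b = g(y1, gamma), g(y2, gamma)
    return a + b - a * b                  # Eq. (9), OR surrogate

# Truth-table constraint: binary inputs reproduce the logical operators exactly.
for y1 in (0.0, 1.0):
    for y2 in (0.0, 1.0):
        assert h_and(y1, y2) == float(y1 and y2)
        assert h_or(y1, y2) == float(y1 or y2)
print(h_and(0.7, 0.9), h_or(0.7, 0.9))    # smooth values for soft inputs
```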
Monotonicity constraint is introduced based on the observation of a monotonicity tendency in the truth tables of AND and OR. It pushes the loss surfaces towards a benign landscape, avoiding dramatic non-smoothness. The monotonicity constraint is enforced on $h_{\mathrm{AND}}(y_1,y_2)$ and $h_{\mathrm{OR}}(y_1,y_2)$ as

$$\partial h_{\mathrm{AND}}/\partial y_i \ge 0, \qquad \partial h_{\mathrm{OR}}/\partial y_i \ge 0, \qquad \forall\, y_i \in [0,1],\; i = 1,2.$$

Applying the chain rule and the truth-table constraint, the monotonicity constraint implies

$$\partial g(y;\theta)/\partial y \ge 0, \qquad \forall\, y \in [0,1]. \qquad (11)$$

Empirically we find it important to enforce these two constraints in parameterization.

Extending Evaluation Metrics. Now we can extend the metrics to surrogate losses by a) replacing the one-hot predictions with softmax probabilities, and b) substituting the logical operations with parameterized functions. Note that if the metric contains several logical operations, their parameters are not shared. The collection of parameters in one metric is denoted as $\Theta$. For a segmentation network $N$ and evaluation dataset $S$, the score of the evaluation metric is denoted as $\xi(N, S)$, and the parameterized surrogate loss is denoted as $\tilde{\xi}_\Theta(N, S)$.

4.2 SURROGATE PARAMETERIZATION

The parameterized function can be from any function family defined on [0, 1], such as piecewise Bézier curves and piecewise linear functions. Here we choose the piecewise Bézier curve for parameterizing $g(y;\theta)$, which is widely used in computer graphics and makes it easy to enforce the constraints via its control points. We also verify the effectiveness of parameterizing $g(y;\theta)$ by piecewise linear functions. See Fig. 2 for visualization and Appendix B for more details.

A piecewise Bézier curve consists of a series of quadratic Bézier curves, where the last control point of one curve segment coincides with the first control point of the next curve segment. If there are $n$ segments in a piecewise Bézier curve, the $k$-th segment is defined as

$$B(k, s) = (1-s)^2 B_{2k} + 2s(1-s) B_{2k+1} + s^2 B_{2k+2}, \qquad 0 \le s \le 1, \qquad (12)$$

where $s$ traverses the $k$-th segment, and $B_{2k+i} = (B_{(2k+i),u}, B_{(2k+i),v})$ ($i = 0,1,2$) denotes the $i$-th control point on the $k$-th segment, in which $u, v$ index the 2-d plane axes. A piecewise Bézier curve with $n$ segments has $2n+1$ control points in total. To parameterize $g(y;\theta)$, we assign

$$y = (1-s)^2 B_{2k,u} + 2s(1-s) B_{(2k+1),u} + s^2 B_{(2k+2),u}, \qquad (13a)$$
$$g(y;\theta) = (1-s)^2 B_{2k,v} + 2s(1-s) B_{(2k+1),v} + s^2 B_{(2k+2),v}, \qquad (13b)$$
$$\text{s.t.}\quad B_{2k,u} \le y \le B_{(2k+2),u}, \qquad (13c)$$

where $\theta$ is the control point set, $B_{2k,u} < B_{(2k+1),u} < B_{(2k+2),u}$, $0 \le k \le n-1$. Given an input $y$, the segment index $k$ and the traversal parameter $s$ are derived from Eq. (13c) and Eq. (13a), respectively. Then $g(y;\theta)$ is assigned as in Eq. (13b). Because $g(y;\theta)$ is defined on $y \in [0,1]$, we arrange the control points on the $u$-axis as $B_{0,u} = 0$, $B_{2n,u} = 1$, so that the $u$-coordinates of the first and last control points are at 0 and 1, respectively.

The strength of the piecewise Bézier curve is that the curve shape is defined explicitly via the control points. Here we enforce the truth-table and monotonicity constraints on the control points via

$$B_{0,v} = 0, \quad B_{2n,v} = 1 \quad \text{(truth-table constraint)},$$
$$B_{2k,v} \le B_{(2k+1),v} \le B_{(2k+2),v}, \quad k = 0,1,\dots,n-1 \quad \text{(monotonicity constraint)}.$$

To fulfill the above restrictions in optimization, the specific form of the parameters is given by

$$\theta = \left\{ \left( \frac{B_{i,u} - B_{(i-1),u}}{B_{2n,u} - B_{(i-1),u}},\; \frac{B_{i,v} - B_{(i-1),v}}{B_{2n,v} - B_{(i-1),v}} \right) \;\middle|\; i = 1, 2, \dots, 2n-1 \right\},$$

with $B_0 = (0,0)$ and $B_{2n} = (1,1)$ fixed. So every $\theta_i = (\theta_{i,u}, \theta_{i,v})$ is in the range $[0,1]^2$, and it is straightforward to compute the actual coordinates of the control points from this parameterized form. Such a parameterization makes each $\theta_i$ independent of the others, and thus simplifies the optimization.
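Below is a sketch of evaluating $g(y;\theta)$ for a piecewise Bézier curve following Eqs. (12)-(13) (our own illustration; the control points are hypothetical but satisfy the constraints: increasing $u$-coordinates with $B_0=(0,0)$, $B_{2n}=(1,1)$, and non-decreasing $v$-coordinates).

```python
import numpy as np

# n = 2 segments -> 2n+1 = 5 control points (u, v)
B = np.array([[0.0, 0.0], [0.2, 0.4], [0.5, 0.5], [0.8, 0.6], [1.0, 1.0]])

def g(y):
    for k in range(2):                                 # locate segment, Eq. (13c)
        b0, b1, b2 = B[2 * k], B[2 * k + 1], B[2 * k + 2]
        if b0[0] <= y <= b2[0]:
            # Solve Eq. (13a) for the traversal parameter s in [0, 1]:
            # (b0u - 2 b1u + b2u) s^2 + 2 (b1u - b0u) s + (b0u - y) = 0
            a2 = b0[0] - 2 * b1[0] + b2[0]
            a1 = 2 * (b1[0] - b0[0])
            a0 = b0[0] - y
            if abs(a2) < 1e-12:
                s = -a0 / a1                           # segment is linear in u
            else:
                s = (-a1 + np.sqrt(a1 * a1 - 4 * a2 * a0)) / (2 * a2)
            # Eq. (13b): map s to the v-coordinate
            return (1 - s) ** 2 * b0[1] + 2 * s * (1 - s) * b1[1] + s ** 2 * b2[1]
    raise ValueError("y must lie in [0, 1]")

print(g(0.0), g(0.5), g(1.0))   # 0.0 ... 1.0: truth-table constraint holds
```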
By default, we use a piecewise Bézier curve with two segments to parameterize $g(y;\theta)$.

Figure 2: Parameterization of $g(y;\theta)$ using a piecewise Bézier curve with four segments. The red points are control points. The purple point is on the curve and shows the relationship among $y$, $g(y;\theta)$ and the traversal parameter $s$: $y = (1-s)^2 B_{2,u} + 2s(1-s) B_{3,u} + s^2 B_{4,u}$ and $g(y;\theta) = (1-s)^2 B_{2,v} + 2s(1-s) B_{3,v} + s^2 B_{4,v}$.

Algorithm 1: Auto Seg-Loss Parameter Search
Input: Initialized network $N_{\omega_0}$, initialized distribution $\mu_1$ and $\sigma^2$, target metric $\xi$, training set $S_\text{train}$ and hold-out training set $S_\text{hold-out}$
Result: Obtained optimal parameters $\Theta^*$
for t = 1 to T do
    for i = 1 to M do
        Sample parameter $\Theta_i^{(t)} \sim \mathcal{N}_{\text{trunc}[0,1]}(\mu_t, \sigma^2 I)$;
        Network training: $\omega^*(\Theta_i^{(t)}) = \arg\max_\omega \tilde{\xi}_{\Theta_i^{(t)}}(N_\omega; S_\text{train})$, with $\omega$ initialized from $\omega_0$;
        Compute the evaluation metric score $\xi(\Theta_i^{(t)}) = \xi(N_{\omega^*(\Theta_i^{(t)})}; S_\text{hold-out})$;
    end
    Update $\mu_{t+1} = \arg\max_\mu \frac{1}{M}\sum_{i=1}^M R(\mu; \mu_t, \Theta_i^{(t)})$;
end
return $\Theta^* = \arg\max_{\mu_t} \sum_{i=1}^M \xi(\Theta_i^{(t)})$, $\forall\, t = 1, \dots, T+1$

4.3 SURROGATE PARAMETER OPTIMIZATION

Algorithm 1 describes our parameter search algorithm. The training set is split into two subsets, $S_\text{train}$ for training and $S_\text{hold-out}$ for evaluation in the search algorithm, respectively. Specifically, suppose we have a segmentation network $N_\omega$ with weights $\omega$; our search target is the parameters $\Theta$ that maximize the evaluation metric $\xi(N_\omega, S_\text{hold-out})$ on the hold-out training set:

$$\max_\Theta \zeta(\Theta) = \xi\big(N_{\omega^*(\Theta)}, S_\text{hold-out}\big), \qquad \text{s.t. } \omega^*(\Theta) = \arg\max_\omega \tilde{\xi}_\Theta(N_\omega, S_\text{train}). \qquad (14)$$

To optimize Eq. (14), the segmentation network is trained with SGD as the inner-level problem. At the outer level, we use reinforcement learning as our search algorithm, following the common practice in AutoML (Zoph & Le, 2017; Pham et al., 2018). Other search algorithms, such as evolutionary algorithms, may also be employed. Specifically, the surrogate parameters are searched via the PPO2 algorithm (Schulman et al., 2017). The process consists of $T$ sampling steps. In the $t$-th step, we aim to explore the search space around that from step $t-1$. Here $M$ sets of parameters $\{\Theta_i^{(t)}\}_{i=1}^M$ are sampled independently from a truncated normal distribution (Burkardt, 2014), as $\Theta \sim \mathcal{N}_{\text{trunc}[0,1]}(\mu_t, \sigma^2 I)$, with each variable in the range $[0,1]$. In it, $\mu_t$ and $\sigma^2 I$ denote the mean and covariance of the parent normal distribution ($\sigma$ is fixed as 0.2 in this paper). $\mu_t$ summarizes the information from the $(t-1)$-th step. $M$ surrogate losses are constructed with the sampled parameters, which drive the training of $M$ segmentation networks separately. To optimize the outer-level problem, we evaluate these models with the target metric and take the evaluation scores as rewards for PPO2. Following the PPO2 algorithm, $\mu_{t+1}$ is computed as $\mu_{t+1} = \arg\max_\mu \frac{1}{M}\sum_{i=1}^M R(\mu; \mu_t, \Theta_i)$, where the reward $R(\mu; \mu_t, \Theta_i)$ is

$$R(\mu; \mu_t, \Theta_i) = \min\left( \frac{p(\Theta_i; \mu, \sigma^2 I)}{p(\Theta_i; \mu_t, \sigma^2 I)}\, \Delta\xi(\Theta_i),\;\; \mathrm{CLIP}\!\left(\frac{p(\Theta_i; \mu, \sigma^2 I)}{p(\Theta_i; \mu_t, \sigma^2 I)},\, 1-\epsilon,\, 1+\epsilon\right) \Delta\xi(\Theta_i) \right),$$

where $\min(\cdot,\cdot)$ picks the smaller of its inputs, $\mathrm{CLIP}(x, 1-\epsilon, 1+\epsilon)$ clips $x$ to be within $1-\epsilon$ and $1+\epsilon$, and $p(\Theta_i; \mu, \sigma^2 I)$ is the PDF of the truncated normal distribution. Note that the mean reward of the $M$ samples is subtracted when computing $\Delta\xi(\Theta_i)$ for better convergence. After $T$ steps, the mean $\mu_t$ with the highest average evaluation score is output as the final parameters $\Theta^*$.

Empirically we find the searched losses have good transferability, i.e., they can be applied to different datasets and networks. Benefiting from this, we use a light proxy task for parameter search. In it, we utilize a smaller image size, a shorter learning schedule and a lightweight network. Thus, the whole search process is quite efficient (8 hours on PASCAL VOC with 8 NVIDIA Tesla V100 GPUs). More details are in Appendix A. In addition, the search process needs to be conducted only once for a specific metric, and the resulting surrogate loss can be directly used for training henceforth.
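A condensed sketch of this outer loop follows (our paraphrase of Algorithm 1, with a stubbed train-and-evaluate step standing in for actual segmentation training; all constants and the toy reward are illustrative).

```python
import numpy as np

SIGMA, EPS, M, T, DIM = 0.2, 0.2, 8, 10, 6
rng = np.random.default_rng(0)

def sample_trunc_normal(mu):
    # rejection sampling from N(mu, SIGMA^2 I) truncated to [0, 1]^DIM
    while True:
        theta = rng.normal(mu, SIGMA)
        if np.all((theta >= 0.0) & (theta <= 1.0)):
            return theta

def train_and_eval(theta):
    # stub: stands in for "train N_w with the surrogate loss on S_train,
    # then score the target metric on S_hold-out"
    return -np.sum((theta - 0.7) ** 2)

def log_density(theta, mu):
    # un-truncated Gaussian log-density; a full implementation would also
    # account for the truncation normalizer of N_trunc[0,1]
    return -np.sum((theta - mu) ** 2) / (2.0 * SIGMA ** 2)

mu, history = np.full(DIM, 0.5), []
for t in range(T):
    thetas = [sample_trunc_normal(mu) for _ in range(M)]
    scores = np.array([train_and_eval(th) for th in thetas])
    history.append((mu.copy(), scores.mean()))
    adv = scores - scores.mean()        # subtract the mean reward of the M samples
    mu_old = mu.copy()

    def surrogate(mu_new):              # PPO2 clipped-ratio objective
        total = 0.0
        for th, a in zip(thetas, adv):
            ratio = np.exp(log_density(th, mu_new) - log_density(th, mu_old))
            total += min(ratio * a, float(np.clip(ratio, 1 - EPS, 1 + EPS)) * a)
        return total / M

    for _ in range(50):                 # crude numeric ascent on the objective
        grad = np.array([(surrogate(mu + e) - surrogate(mu - e)) / 2e-4
                         for e in np.eye(DIM) * 1e-4])
        mu = np.clip(mu + 0.01 * grad, 0.0, 1.0)

best_mu = max(history, key=lambda pair: pair[1])[0]  # mu_t with best mean score
print(best_mu)
```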
5 EXPERIMENTS

We evaluate on the PASCAL VOC 2012 (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) datasets. We use DeepLabv3+ (Chen et al., 2018) with ResNet-50/101 (He et al., 2016) as the network model. During the surrogate parameter search, we randomly sample 1500 training images in PASCAL VOC and 500 training images in Cityscapes to form the hold-out set $S_\text{hold-out}$, respectively. The remaining training images form the training set $S_\text{train}$ in the search. $\Theta_0$ is set to make $g(y;\theta) = y$. The backbone network is ResNet-50. The images are down-sampled to a resolution of 128×128. SGD lasts only 1000 iterations with a mini-batch size of 32. After the search procedure, we re-train the segmentation networks with ResNet-101 using the searched losses on the full training set and evaluate them on the actual validation set. The re-training settings are the same as DeepLabv3+ (Chen et al., 2018), except that the loss function is substituted by the obtained surrogate loss. The search time is counted on 8 NVIDIA Tesla V100 GPUs. More details are in Appendix A.

5.1 SEARCHING FOR DIFFERENT METRICS

In Table 2, we compare our searched surrogate losses against the widely used cross-entropy loss and its variants, as well as some other metric-specific surrogate losses. We also sought to compare with the AutoML-based method in Li et al. (2019), which was originally designed for other tasks, but we could not obtain reasonable results due to convergence issues. The results show that our searched losses are on par with or better than the previous losses on their target metrics. It is interesting to note that the obtained surrogates for boundary metrics (such as BIoU and BF1) only focus on the boundary areas; see Appendix C for further discussion. We also tried training segmentation networks driven by both the searched mIoU and the searched BIoU/BF1 surrogate losses. Such combined losses refine the boundaries while keeping reasonable global performance.
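To illustrate what such a combination might look like, here is a self-contained sketch (our own construction: the paper does not specify the combination weights here, and both loss terms below are simplified stand-ins for the actual searched surrogates — the soft mIoU uses the naive Eq. (8) extension, and the soft F1 term is computed over the whole image rather than the min-/max-pooled boundary maps).

```python
import numpy as np

def soft_miou_loss(p, y, eps=1e-8):
    inter = (p * y).sum(axis=(1, 2))
    union = (p + y - p * y).sum(axis=(1, 2))
    return 1.0 - np.mean(inter / (union + eps))

def soft_f1_loss(p, y, eps=1e-8):
    prec = (p * y).sum() / (p.sum() + eps)
    rec = (p * y).sum() / (y.sum() + eps)
    return 1.0 - 2 * prec * rec / (prec + rec + eps)

def combined_loss(p, y, w=1.0):
    # hypothetical weighted sum of the two surrogate terms
    return soft_miou_loss(p, y) + w * soft_f1_loss(p, y)

p = np.random.rand(3, 8, 8); p /= p.sum(axis=0, keepdims=True)
y = np.eye(3)[np.random.randint(3, size=(8, 8))].transpose(2, 0, 1)
print(combined_loss(p, y))
```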
Table 2: Performance of different losses on PASCAL VOC and Cityscapes segmentation. The results on each loss function's target metrics are underlined. The scores whose difference from the highest is less than 0.3 are marked in bold. (Columns per dataset: mIoU, FWIoU, BIoU, BF1, mAcc, gAcc.)

| Loss Function | PASCAL VOC | Cityscapes |
| Cross Entropy | 78.69 91.31 70.61 65.30 87.31 95.17 | 79.97 93.33 62.07 62.24 87.01 96.44 |
| WCE (Ronneberger et al., 2015) | 69.60 85.64 61.80 37.59 92.61 91.11 | 73.01 90.51 53.07 51.19 89.22 94.56 |
| DPCE (Caliva et al., 2019) | 79.82 91.76 71.87 66.54 87.76 95.45 | 80.27 93.38 62.57 65.99 86.99 96.46 |
| SSIM (Qin et al., 2019) | 79.26 91.68 71.54 66.35 87.87 95.38 | 80.65 93.22 63.04 72.20 86.88 96.39 |
| DiceLoss (Milletari et al., 2016) | 77.78 91.34 69.85 64.38 87.47 95.11 | 79.30 93.25 60.93 59.94 86.38 96.39 |
| Lovász (Berman et al., 2018) | 79.72 91.78 72.47 66.65 88.64 95.42 | 77.67 92.51 56.71 53.48 82.05 96.03 |
| Searched mIoU | 80.97 92.09 73.44 68.86 88.23 95.68 | 80.67 93.30 63.05 67.97 87.20 96.44 |
| Searched FWIoU | 80.00 91.93 75.14 65.67 89.23 95.44 | 79.42 93.33 61.71 59.68 87.96 96.37 |
| Searched BIoU | 48.97 69.89 79.27 38.99 81.28 62.64 | 45.89 39.80 63.89 38.29 62.80 58.15 |
| Searched BF1 | 1.93 0.96 7.39 74.83 6.51 2.66 | 6.78 3.19 18.37 77.40 12.09 8.19 |
| Searched mAcc | 69.80 85.86 72.85 35.62 92.66 91.28 | 74.10 90.79 54.62 53.45 89.22 94.75 |
| Searched gAcc | 79.73 91.76 74.09 64.41 88.95 95.47 | 79.41 93.30 61.65 62.04 87.08 96.51 |
| Searched mIoU + BIoU | 81.19 92.19 76.89 69.56 88.36 95.75 | 80.43 93.34 63.88 65.87 87.03 96.45 |
| Searched mIoU + BF1 | 78.72 90.80 71.81 73.57 86.70 94.88 | 78.30 93.00 61.62 71.73 87.13 96.23 |

5.2 GENERALIZATION OF THE LOSS

Generalization among datasets. Table 3 evaluates the generalization ability of our searched loss surrogates across datasets. Due to limited computational resources, we train networks only with the searched mIoU, BF1 and mAcc surrogate losses. The results show that our searched surrogate losses generalize well between these two datasets, despite their quite different scenes and categories.

Table 3: Generalization of our searched surrogate losses between PASCAL VOC and Cityscapes. (Columns per direction: mIoU, FWIoU, BIoU, BF1, mAcc, gAcc.)

| Loss Function | Cityscapes → VOC | VOC → Cityscapes |
| Cross Entropy | 78.69 91.31 70.61 65.30 87.31 95.17 | 79.97 93.33 62.07 62.24 87.01 96.44 |
| Searched mIoU | 80.05 91.72 73.97 67.61 88.01 95.45 | 80.67 93.31 62.96 66.48 87.36 96.44 |
| Searched BF1 | 1.84 0.93 7.42 75.85 6.48 1.47 | 6.67 3.20 19.00 77.99 12.12 4.09 |
| Searched mAcc | 70.90 86.29 73.43 37.18 93.19 91.43 | 73.50 90.68 54.34 54.04 88.66 94.68 |

Generalization among segmentation networks. The surrogate losses are searched with ResNet-50 + DeepLabv3+ on PASCAL VOC. The searched losses then drive the training of ResNet-101 + DeepLabv3+, PSPNet (Zhao et al., 2017) and HRNet (Sun et al., 2019) on PASCAL VOC. Table 4 shows the results. They demonstrate that our searched loss functions can be applied to various semantic segmentation networks.

5.3 ABLATION

Parameterization and constraints. Table 5 ablates the parameterization and the search-space constraints. In it, a surrogate without parameters refers to Eq. (8), with the domain extended from the discrete points {0, 1} to the continuous interval [0, 1]. This naive surrogate delivers much lower accuracy, indicating the importance of parameterization. Without the truth-table constraint, the training process diverges at the very beginning, where the loss gradients become "NaN". And the performance drops if the monotonicity constraint is not enforced.
In short, the performance drops, or the algorithm even fails, without the constraints.

Table 4: Generalization of our searched surrogate losses among different network architectures on PASCAL VOC. The losses are searched with ResNet-50 + DeepLabv3+ on PASCAL VOC. (Columns per network: mIoU, BF1, mAcc.)

| Loss Function | R50-DeepLabv3+ | R101-DeepLabv3+ | R101-PSPNet | HRNetV2p-W48 |
| Cross Entropy | 76.22 61.75 85.43 | 78.69 65.30 87.31 | 77.91 64.70 85.71 | 76.35 61.19 85.12 |
| Searched mIoU | 78.35 66.93 85.53 | 80.97 68.86 88.23 | 78.93 65.65 87.42 | 77.26 63.52 86.80 |
| Searched BF1 | 1.35 70.81 6.05 | 1.93 74.83 6.51 | 1.62 71.84 6.33 | 1.34 68.41 5.99 |
| Searched mAcc | 69.82 36.92 91.61 | 69.80 35.62 92.66 | 71.66 39.44 92.06 | 68.22 35.90 91.46 |

Proxy tasks for parameter search. Table 6 ablates this. The bottom row is our default setting, with a lightweight backbone, down-sampled image size and a shorter learning schedule. The default setting delivers accuracy on par with heavier settings. This is consistent with the generalization ability of our surrogate losses; thus, we can improve the search efficiency via light proxy tasks.

Parameter search algorithm. Fig. 3 compares the employed PPO2 (Schulman et al., 2017) algorithm with random search. The much better performance of PPO2 suggests that surrogate loss search is non-trivial, and that reinforcement learning helps to improve the search efficiency.

Table 5: Ablation on search-space constraints (✓ = used, ✗ = not used).
| Parameter | Truth-table | Monotonicity | VOC mIoU |
| ✗ | ✗ | ✗ | 46.99 |
| ✓ | ✗ | ✗ | Fail |
| ✓ | ✓ | ✗ | 77.76 |
| ✓ | ✓ | ✓ | 80.64 |

Table 6: Ablation on search proxy tasks.
| Backbone | Image Size | Iterations | Time (hours) | VOC mIoU |
| R50 | 256×256 | 1000 | 33.0 | 81.15 |
| R50 | 128×128 | 2000 | 17.1 | 80.56 |
| R101 | 128×128 | 1000 | 13.3 | 80.75 |
| R50 | 128×128 | 1000 | 8.5 | 80.97 |

Figure 3: Ablation on loss parameter search, for (a) search for mIoU, (b) search for BF1, and (c) search for mAcc. Each curve presents the highest average evaluation score up to the t-th sampling step in one search process. The search process is repeated four times.

6 CONCLUSION

The introduced Auto Seg-Loss is a powerful framework for searching parameterized surrogate losses for mainstream segmentation evaluation metrics. The non-differentiable operators are substituted by their parameterized continuous counterparts, and the parameters are optimized to improve the final evaluation metrics under essential constraints. It would be interesting to extend the framework to more tasks, such as object detection, pose estimation and machine translation.

ACKNOWLEDGMENTS

The work is supported by the National Key R&D Program of China (2020AAA0105200), Beijing Academy of Artificial Intelligence and the Institute for Guo Qiang of Tsinghua University.
Mjs8kwRjcof
Careful study of semantic segmentation proxy loss functions
7: Good paper, accept
This paper looks at loss functions for semantic segmentation. Typical metrics are not easily differentiable w.r.t. the outputs of the DNNs that generate the labels. Instead, a proxy/surrogate loss function is learned jointly with the network in a two-level optimization. Pros: i) Good accuracy results. Code will allow others to verify and build on these. ii) High-quality ablation studies. Avoiding any parameterization (one of the more surprising components of the method) is a good ablation to have. Also, the comparison to random search in Fig 3 demonstrates that the outer-level optimization is working as expected. Q1: What is the naive surrogate used in Table 5? Equation (8)? Cons: iii) Seems like an ad-hoc approach that doesn't incorporate classical techniques for optimizing "hard" loss functions. Non-differentiable loss functions, or losses over discrete variables, are less common when dealing with CNNs. But they can be included in deep learning with techniques like that in: Chen et al., "Learning Deep Structured Models". Notable citations that take this "classical" approach are: Ranjbar et al., "Optimizing Non-Decomposable Loss Functions in Structured Prediction". This kind of approach is an omission from related work that might be worth correcting, especially because it is the "direct" approach, in that it optimizes the original loss functions using known techniques, without additional levels of learning or approximations as in the submission. Overall, though, the empirical results are strong enough that it is likely the submission is doing something useful well. So the approach in the submission seems well-supported by that fact, even if previous literature makes it a non-obvious way to do things. iv) Not much illustration or exploration of the learned loss surfaces. Experiments partially leave open the question: how closely is the surrogate loss matching the target metrics? Basically, is there more direct evidence for the intuition that h_AND is approximately equivalent to f_AND? The networks trained on the surrogate losses do well on the original metrics. But it's not impossible that some trivial solution or unexpected loss function, one that is not clearly similar to e.g. mIoU, can train a network to learn a segmentation model that produces segmentations with good mIoU. Minor comments: Q2: The reason for bolding in Table 2 is unclear; what is a "(co)-highest result"?
3: The reviewer is fairly confident that the evaluation is correct
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Auto Seg-Loss: Searching Metric Surrogates for Semantic Segmentation ### Paper Abstract Designing proper loss functions is essential in training deep networks. Especially in the field of semantic segmentation, various evaluation metrics have been proposed for diverse scenarios. Despite the success of the widely adopted cross-entropy loss and its variants, the mis-alignment between the loss functions and evaluation metrics degrades the network performance. Meanwhile, manually designing loss functions for each specific metric requires expertise and significant manpower. In this paper, we propose to automate the design of metric-specific loss functions by searching differentiable surrogate losses for each metric. We substitute the non-differentiable operations in the metrics with parameterized functions, and conduct parameter search to optimize the shape of loss surfaces. Two constraints are introduced to regularize the search space and make the search efficient. Extensive experiments on PASCAL VOC and Cityscapes demonstrate that the searched surrogate losses outperform the manually designed loss functions consistently. The searched losses can generalize well to other datasets and networks. Code shall be released at https://github.com/fundamentalvision/Auto-Seg-Loss. ### Paper Keywords ["Loss Function Search", "Metric Surrogate", "Semantic Segmentation"] ### Paper Content ABSTRACTDesigning proper loss functions is essential in training deep networks. Especiallyin the field of semantic segmentation, various evaluation metrics have been pro-posed for diverse scenarios. Despite the success of the widely adopted cross-entropy loss and its variants, the mis-alignment between the loss functions andevaluation metrics degrades the network performance. Meanwhile, manually de-signing loss functions for each specific metric requires expertise and significantmanpower. In this paper, we propose to automate the design of metric-specific lossfunctions by searching differentiable surrogate losses for each metric. We substi-tute the non-differentiable operations in the metrics with parameterized functions,and conduct parameter search to optimize the shape of loss surfaces. Two con-straints are introduced to regularize the search space and make the search efficient.Extensive experiments on PASCAL VOC and Cityscapes demonstrate that thesearched surrogate losses outperform the manually designed loss functions con-sistently. The searched losses can generalize well to other datasets and networks.Code shall be released at https://github.com/fundamentalvision/Auto-Seg-Loss .1 I NTRODUCTIONLoss functions are of indispensable components in training deep networks, as they drive the featurelearning process for various applications with specific evaluation metrics. However, most metrics,like the commonly used 0-1 classification error, are non-differentiable in their original forms andcannot be directly optimized via gradient-based methods. Empirically, the cross-entropy loss serveswell as an effective surrogate objective function for a variety of tasks concerning categorization.This phenomenon is especially prevailing in image semantic segmentation, where various evaluationmetrics have been designed to address the diverse task focusing on different scenarios. 
Some metricsmeasure the accuracy on the whole image, while others focus more on the segmentation boundaries.Although cross-entropy and its variants work well for many metrics, the mis-alignment betweennetwork training and evaluation still exist and inevitably leads to performance degradation.Typically, there are two ways for designing metric-specific loss functions in semantic segmenta-tion. The first is to modify the standard cross-entropy loss to meet the target metric (Ronnebergeret al., 2015; Wu et al., 2016). The other is to design other clever surrogate losses for specific eval-uation metrics (Rahman & Wang, 2016; Milletari et al., 2016). Despite the improvements, thesehandcrafted losses need expertise and are non-trivial to extend to other evaluation metrics.In contrast to designing loss functions manually, an alternative approach is to find a framework thatcan design proper loss functions for different evaluation metrics in an automated manner, motivatedby recent progress in AutoML (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018; Li et al., 2019).Although automating the design process for loss functions is attractive, it is non-trivial to apply anEqual contribution.yThis work is done when Hao Li and Chenxin Tao are interns at SenseTime Research.zCorresponding author.1Published as a conference paper at ICLR 2021AutoML framework to loss functions. Typical AutoML algorithms require a proper search space, inwhich some search algorithms are conducted. Previous search spaces are either unsuitable for lossdesign, or too general to be searched efficiently. Recently Li et al. (2019) and Wang et al. (2020)proposed search spaces based on existing handcrafted loss functions. And the algorithm searches forthe best combination. However, these search spaces are still limited to the variants of cross-entropyloss, and thus do not address the mis-alignment problem well.In this paper, we propose a general framework for searching surrogate losses for mainstream non-differentiable segmentation metrics. The key idea is that we can build the search space according tothe form of evaluation metrics. In this way, the training criteria and evaluation metrics are unified.Meanwhile, the search space is compact enough for efficient search. Specifically, the metrics arefirst relaxed to the continuous domain by substituting the one-hot prediction and logical operations,which are the non-differentiable parts in most metrics, with their differentiable approximations.Parameterized functions are introduced to approximate the logical operations, ensuring that the losssurfaces are smooth while effective for training. The loss parameterization functions can be ofarbitrary families defined on [0;1]. Parameter search is further conducted on the chosen familyso as to optimize the network performance on the validation set with the given evaluation metric.Two essential constraints are introduced to regularize the parameter search space. We find that thesearched surrogate losses can effectively generalize to different networks and datasets. Extensiveexperiments on Pascal VOC (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) showour approach delivers accuracy superior than the existing losses specifically designed for individualsegmentation metrics with a mild computational overhead.Our contributions can be summarized as follows: 1) Our approach is the first general framework ofsurrogate loss search for mainstream segmentation metrics. 
2) We propose an effective parameter regularization and parameter search algorithm, which can find loss surrogates optimizing the target metric performance with mild computational overhead. 3) The surrogate losses obtained via the proposed search framework promote our understanding of loss function design and are novel contributions by themselves, because they are different from existing loss functions specifically designed for individual metrics, and are transferable across different datasets and networks. 2 RELATED WORK Loss function design is an active topic in deep network training (Ma, 2020). In the area of image semantic segmentation, the cross-entropy loss is widely used (Ronneberger et al., 2015; Chen et al., 2018). But the cross-entropy loss is designed for optimizing the global accuracy measure (Rahman & Wang, 2016; Patel et al., 2020), which is not aligned with many other metrics. Numerous studies have been conducted to design proper loss functions for the prevalent evaluation metrics. For the mIoU metric, many works (Ronneberger et al., 2015; Wu et al., 2016) incorporate class frequency to mitigate the class imbalance problem. For the boundary F1 score, the losses at boundary regions are up-weighted (Caliva et al., 2019; Qin et al., 2019), so as to deliver more accurate boundaries. These works carefully analyze the properties of specific evaluation metrics, and design the loss functions in a fully handcrafted way, which needs expertise. By contrast, we propose a unified framework for deriving parameterized surrogate losses for various evaluation metrics, wherein the parameters are searched by reinforcement learning in an automatic way. The networks trained with the searched surrogate losses deliver accuracy on par with or even superior to those trained with the best handcrafted losses. Direct loss optimization for non-differentiable evaluation metrics has long been studied for structural SVM models (Joachims, 2005; Yue et al., 2007; Ranjbar et al., 2012). However, the gradients w.r.t. features cannot be derived from these approaches. Therefore, they cannot drive the training of deep networks through back-propagation. Hazan et al. (2010) propose to optimize structural SVMs with gradient descent, where loss-augmented inference is applied to get the gradients of the expectation of evaluation metrics. Song et al. (2016) further extend this approach to non-linear models (e.g., deep neural networks). However, the computational complexity is very high during each step of gradient descent. Although Song et al. (2016) and Mohapatra et al. (2018) have designed efficient algorithms for the Average Precision (AP) metric, other metrics still need specially designed efficient algorithms. Our method, by contrast, is general for the mainstream segmentation metrics. Thanks to its good generalizability, our method only needs to perform the search process once for a specific metric, and the searched surrogate loss can be directly used henceforth. Applying the searched loss for training networks brings very little additional computational cost. Surrogate losses are introduced to derive loss gradients for the non-differentiable evaluation metrics. There are usually two ways of designing surrogate losses. The first is to handcraft an approximated differentiable metric function. For the IoU measure, Rahman & Wang (2016) propose to approximate the intersection and union separately using the softmax probabilities in a differentiable form, and show its effectiveness on binary segmentation tasks.
Berman et al. (2018) further deal with multi-class segmentation problems by extending mIoU from binary inputs to the continuous domain with the convex Lovász extension, and their method outperforms the standard cross-entropy loss in multi-class segmentation tasks. For the F1 measure, the dice loss is proposed by Milletari et al. (2016) as a direct objective by substituting the binary prediction with the softmax probability. In spite of their success, these losses do not apply to other metrics. The second solution is to train a network to approximate the target metric. Nagendar et al. (2018) train a network to approximate mIoU. Patel et al. (2020) design a neural network to learn embeddings for predictions and ground truths for tasks other than segmentation. This line of research focuses on minimizing the approximation error w.r.t. the target metrics, but there is no guarantee that their approximations provide good loss signals for training. These approximated losses are only employed in a post-tuning setup, still relying on cross-entropy pre-trained models. Our method differs significantly in that we search surrogate losses to directly optimize the evaluation metrics in applications. AutoML is a long-pursued target of machine learning (He et al., 2019). Recently, a sub-field of AutoML, neural architecture search (NAS), has attracted much attention due to its success in automating the process of neural network architecture design (Zoph & Le, 2017; Pham et al., 2018; Liu et al., 2018). As an essential element of training, the loss function has also raised the interest of researchers seeking to automate its design process. Li et al. (2019) and Wang et al. (2020) design search spaces based on existing human-designed loss functions and search for the best combination parameters. There are two issues: a) the search process outputs whole network models rather than loss functions, so for every new network or dataset the expensive search procedure has to be conducted again; and b) the search spaces are filled with variants of cross-entropy, which cannot resolve the mis-alignment between the cross-entropy loss and many target metrics. By contrast, our method outputs searched surrogate loss functions whose form closely follows that of the target metrics, and which are transferable between networks and datasets. 3 REVISITING EVALUATION METRICS FOR SEMANTIC SEGMENTATION Various evaluation metrics are defined for semantic segmentation, to address the diverse tasks focusing on different scenarios. Most of them fall into three typical classes: Acc-based, IoU-based, and F1-score-based. This section revisits the evaluation metrics under a unified notation set. Table 1 summarizes the mainstream evaluation metrics. The notations are as follows: suppose the validation set is composed of $N$ images, labeled with categories from $C$ classes (background included). Let $I_n$, $n \in \{1, \dots, N\}$, be the $n$-th image, and $Y_n$ be the corresponding ground-truth segmentation mask. Here $Y_n = \{y_{n,c,h,w}\}_{c,h,w}$ is a one-hot vector, where $y_{n,c,h,w} \in \{0, 1\}$ indicates whether the pixel at spatial location $(h, w)$ belongs to the $c$-th category ($c \in \{1, \dots, C\}$). In evaluation, the ground-truth segmentation mask $Y_n$ is compared to the network prediction $\hat{Y}_n = \{\hat{y}_{n,c,h,w}\}_{c,h,w}$, where $\hat{y}_{n,c,h,w} \in \{0, 1\}$. $\hat{y}_{n,c,h,w}$ is quantized from the continuous scores produced by the network (by the argmax operation). Acc-based metrics. The global accuracy measure (gAcc) counts the number of pixels correctly classified. It can be written with the logical operator AND as Eq. (1).
The gAcc metric counts each pixel equally, so the results on long-tailed categories have little impact on the metric value. The mean accuracy (mAcc) metric mitigates this by normalizing within each category, as in Eq. (2). IoU-based metrics. The evaluation is based on set similarity rather than pixel accuracy. The intersection-over-union (IoU) score is evaluated between the prediction and the ground-truth mask of each category. The mean IoU (mIoU) metric averages the IoU scores of all categories, as in Eq. (3). Among the variants, the frequency weighted IoU (FWIoU) metric weighs each category IoU score by the category pixel number, as in Eq. (4). The boundary IoU (BIoU) (Kohli et al., 2009) metric only cares about the segmentation quality around the boundary, so it picks out the boundary pixels in evaluation and ignores the rest. It can be calculated with Eq. (5), in which $\mathrm{BD}(y_n)$ denotes the boundary region in map $y_n$. $\mathrm{BD}(y_n)$ is derived by applying the XOR operation on the min-pooled ground-truth mask. The stride of $\mathrm{MinPooling}(\cdot)$ is 1.

Table 1: Revisiting mainstream metrics for semantic segmentation. The metrics marked † measure the segmentation accuracy on the whole image. The metrics marked ‡ focus on the boundary quality.

Acc-based:
- Global Accuracy†: $\mathrm{gAcc} = \frac{\sum_{n,c,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,c,h,w} y_{n,c,h,w}}$ (1)
- Mean Accuracy†: $\mathrm{mAcc} = \frac{1}{C} \sum_c \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,h,w} y_{n,c,h,w}}$ (2)

IoU-based:
- Mean IoU†: $\mathrm{mIoU} = \frac{1}{C} \sum_c \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{OR}\, y_{n,c,h,w}}$ (3)
- Frequency Weighted IoU†: $\mathrm{FWIoU} = \sum_c \frac{\sum_{n,h,w} y_{n,c,h,w}}{\sum_{n,c',h,w} y_{n,c',h,w}} \cdot \frac{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_{n,h,w} \hat{y}_{n,c,h,w} \,\mathrm{OR}\, y_{n,c,h,w}}$ (4)
- Boundary IoU‡: $\mathrm{BIoU} = \frac{1}{C} \sum_c \frac{\sum_n \sum_{h,w \in \mathrm{BD}(y_n)} \hat{y}_{n,c,h,w} \,\mathrm{AND}\, y_{n,c,h,w}}{\sum_n \sum_{h,w \in \mathrm{BD}(y_n)} \hat{y}_{n,c,h,w} \,\mathrm{OR}\, y_{n,c,h,w}}$, where $\mathrm{BD}(y) = y \,\mathrm{XOR}\, \mathrm{MinPooling}(y)$ (5)

F1-score-based:
- Boundary F1 Score‡: $\mathrm{BF1} = \frac{1}{C} \sum_c \frac{2 \cdot \mathrm{prec}_c \cdot \mathrm{recall}_c}{\mathrm{prec}_c + \mathrm{recall}_c}$, where $\mathrm{prec}_c = \frac{\sum_{n,h,w} \mathrm{BD}(\hat{y}_n)_{c,h,w} \,\mathrm{AND}\, \mathrm{MaxPooling}(\mathrm{BD}(y_n)_{c,h,w})}{\sum_{n,h,w} \mathrm{BD}(\hat{y}_n)_{c,h,w}}$ and $\mathrm{recall}_c = \frac{\sum_{n,h,w} \mathrm{MaxPooling}(\mathrm{BD}(\hat{y}_n)_{c,h,w}) \,\mathrm{AND}\, \mathrm{BD}(y_n)_{c,h,w}}{\sum_{n,h,w} \mathrm{BD}(y_n)_{c,h,w}}$ (6)

F1-score-based metrics. The F1-score is a criterion that takes both precision and recall into consideration. A well-known metric of this type is the boundary F1-score (BF1-score) (Csurka et al., 2013), which is widely used for evaluating boundary segmentation accuracy. The computation of precision and recall in the BF1-score is as in Eq. (6), where $\mathrm{BD}(\hat{y}_n)$ and $\mathrm{BD}(y_n)$ are derived from Eq. (5). Max pooling with stride 1, $\mathrm{MaxPooling}(\cdot)$, is applied on the boundary regions to allow error tolerance. 4 AUTO SEG-LOSS FRAMEWORK In the Auto Seg-Loss framework, the evaluation metrics are transformed into continuous surrogate losses with learnable parameters, which are further optimized. Fig. 1 illustrates our approach. 4.1 EXTENDING METRICS TO SURROGATES As shown in Section 3, most segmentation metrics are non-differentiable because they take one-hot prediction maps as input, and contain binary logical operations. We extend these metrics to continuous loss surrogates by smoothing the non-differentiable operations within. Extending One-hot Operation. The one-hot prediction map $\hat{Y}_n = \{\hat{y}_{n,c,h,w}\}_{c,h,w}$ is derived by picking the highest-scoring category at each pixel, which is further turned into one-hot form. Here, we approximate the one-hot predictions with softmax probabilities, as $\hat{y}_{n,c,h,w} \approx \tilde{y}_{n,c,h,w} = \mathrm{Softmax}_c(z_{n,c,h,w})$, (7) where $z_{n,c,h,w} \in \mathbb{R}$ is the category score output by the network (without normalization). The approximated one-hot prediction is denoted by $\tilde{y}_{n,c,h,w}$.
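To make the notation concrete, the following is a minimal PyTorch-style sketch (our own illustration, not the authors' released code) of the hard, non-differentiable mIoU of Eq. (3) and the softmax relaxation of Eq. (7); the tensor shapes and the `eps` smoothing constant are our assumptions.

```python
import torch
import torch.nn.functional as F

def hard_miou(y_hat, y, eps=1e-6):
    # y_hat, y: one-hot masks of shape (N, C, H, W) with entries in {0, 1}.
    # For binary inputs, AND is a product and OR is a + b - a*b (cf. Eq. 8).
    inter = (y_hat * y).sum(dim=(0, 2, 3))               # sum over n, h, w
    union = (y_hat + y - y_hat * y).sum(dim=(0, 2, 3))
    return ((inter + eps) / (union + eps)).mean()        # average over C classes

def soft_one_hot(logits):
    # Differentiable stand-in for the argmax one-hot prediction (Eq. 7):
    # replaces y_hat with softmax probabilities y_tilde over categories.
    return F.softmax(logits, dim=1)
```

Feeding `soft_one_hot(logits)` in place of `y_hat` already yields the naïve differentiable surrogate discussed next.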
Extending Logical Operations. As shown in Table 1, the non-differentiable logical operations $f_{\mathrm{AND}}(y_1, y_2)$, $f_{\mathrm{OR}}(y_1, y_2)$, and $f_{\mathrm{XOR}}(y_1, y_2)$ are indispensable components of these metrics. Because the XOR operation can be constructed from AND and OR, $f_{\mathrm{XOR}}(y_1, y_2) = f_{\mathrm{OR}}(y_1, y_2) - f_{\mathrm{AND}}(y_1, y_2)$, we focus on extending $f_{\mathrm{AND}}(y_1, y_2)$ and $f_{\mathrm{OR}}(y_1, y_2)$ to the continuous domain. Following common practice, the logical operators are substituted with arithmetic operators: $f_{\mathrm{AND}}(y_1, y_2) = y_1 y_2$, $f_{\mathrm{OR}}(y_1, y_2) = y_1 + y_2 - y_1 y_2$, (8) where $y_1, y_2 \in \{0, 1\}$. Eq. (8) can be directly extended to take continuous $y_1, y_2 \in [0, 1]$ as inputs.

Figure 1: Overview of the proposed Auto Seg-Loss framework. The surfaces of $h_{\mathrm{AND}}$ and $h_{\mathrm{OR}}$ shown in the "Optimal Parameterization" panel illustrate the searched optimal parameterization for mIoU.

By such an extension, together with the approximated one-hot operation, a naïve version of differentiable surrogate losses can be obtained. The strength of such surrogates is that they are directly derived from the metrics, which significantly reduces the gap between training and evaluation. However, there is no guarantee that the loss surfaces formed by naïvely extending Eq. (8) provide accurate loss signals. To adjust the loss surfaces, we parameterize the AND and OR functions as $h_{\mathrm{AND}}(y_1, y_2; \theta_{\mathrm{AND}}) = g(y_1; \theta_{\mathrm{AND}}) \, g(y_2; \theta_{\mathrm{AND}})$ and $h_{\mathrm{OR}}(y_1, y_2; \theta_{\mathrm{OR}}) = g(y_1; \theta_{\mathrm{OR}}) + g(y_2; \theta_{\mathrm{OR}}) - g(y_1; \theta_{\mathrm{OR}}) \, g(y_2; \theta_{\mathrm{OR}})$, (9) where $g(y; \theta): [0, 1] \rightarrow \mathbb{R}$ is a scalar function parameterized by $\theta$. The parameterized function $g(y; \theta)$ can be from arbitrary function families defined on $[0, 1]$, e.g., piecewise linear functions and piecewise Bézier curves. With a chosen function family, the parameters $\theta$ control the shape of the loss surfaces. We seek to search for the optimal parameters so as to maximize the given evaluation metric. Meanwhile, optimal parameter search is non-trivial. With the introduced parameters, the plasticity of the loss surfaces is strong. The parameterized loss surfaces may well be chaotic, or be far away from the target evaluation metric even at binary inputs. For more effective parameter search, we regularize the loss surfaces by introducing two constraints on $g(y; \theta)$. The truth-table constraint is introduced to enforce the surrogate loss surfaces to take the same values as the evaluation metric score at binary inputs. This is applied by enforcing $g(0; \theta) = 0$, $g(1; \theta) = 1$. (10) Thus, the parameterized functions $h(y_1, y_2; \theta)$ preserve the behavior of the corresponding logical operations $f(y_1, y_2)$ on binary inputs $y_1, y_2 \in \{0, 1\}$. The monotonicity constraint is introduced based on the observation of a monotonicity tendency in the truth tables of AND and OR. It pushes the loss surfaces towards a benign landscape, avoiding dramatic non-smoothness. The monotonicity constraint is enforced on $h_{\mathrm{AND}}(y_1, y_2)$ and $h_{\mathrm{OR}}(y_1, y_2)$ as $\partial h_{\mathrm{AND}} / \partial y_i \geq 0$ and $\partial h_{\mathrm{OR}} / \partial y_i \geq 0$, $\forall y_i \in [0, 1]$, $i = 1, 2$. Applying the chain rule and the truth-table constraint, the monotonicity constraint implies $\partial g(y; \theta) / \partial y \geq 0$, $\forall y \in [0, 1]$. (11) Empirically, we find it important to enforce these two constraints in the parameterization.
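As a concrete illustration of Eq. (8)-(11), here is a small sketch (ours, not the paper's implementation) of the parameterized operators; the power family $g(y; \theta) = y^\theta$ with $\theta > 0$ is merely one simple choice that satisfies the truth-table and monotonicity constraints by construction, not the family searched in the paper.

```python
import torch

def g(y, theta):
    # y: tensor in [0, 1], e.g. softmax probabilities. Illustrative monotone
    # family: g(0) = 0, g(1) = 1, and dg/dy >= 0 on [0, 1] for theta > 0,
    # so Eq. (10) and Eq. (11) hold by construction.
    return y ** theta

def h_and(y1, y2, theta_and):
    # Parameterized surrogate for logical AND (Eq. 9).
    return g(y1, theta_and) * g(y2, theta_and)

def h_or(y1, y2, theta_or):
    # Parameterized surrogate for logical OR (Eq. 9).
    a, b = g(y1, theta_or), g(y2, theta_or)
    return a + b - a * b

def h_xor(y1, y2, theta_and, theta_or):
    # XOR is composed from the other two: f_XOR = f_OR - f_AND.
    return h_or(y1, y2, theta_or) - h_and(y1, y2, theta_and)
```

Setting `theta = 1.0` recovers the naïve extension of Eq. (8); other values reshape the loss surface while keeping its corners pinned to the truth table.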
Extending Evaluation Metrics. Now we can extend the metrics to surrogate losses by a) replacing the one-hot predictions with softmax probabilities, and b) substituting the logical operations with parameterized functions. Note that if a metric contains several logical operations, their parameters are not shared. The collection of parameters in one metric is denoted as $\Theta$. For a segmentation network $N$ and evaluation dataset $S$, the score of the evaluation metric is denoted as $\xi(N; S)$, and the parameterized surrogate loss is denoted as $\tilde{\xi}_\Theta(N; S)$. 4.2 SURROGATE PARAMETERIZATION The parameterized function can be from any function family defined on [0, 1], such as piecewise Bézier curves and piecewise linear functions. Here we choose the piecewise Bézier curve for parameterizing $g(y; \theta)$, which is widely used in computer graphics and makes it easy to enforce the constraints via its control points. We also verify the effectiveness of parameterizing $g(y; \theta)$ by piecewise linear functions. See Fig. 2 for a visualization and Appendix B for more details. A piecewise Bézier curve consists of a series of quadratic Bézier curves, where the last control point of one curve segment coincides with the first control point of the next segment. If there are $n$ segments in a piecewise Bézier curve, the $k$-th segment is defined as $B(k, s) = (1-s)^2 B_{2k} + 2s(1-s) B_{2k+1} + s^2 B_{2k+2}$, $0 \leq s \leq 1$, (12) where $s$ traverses the $k$-th segment, and $B_{2k+i} = (B_{(2k+i),u}, B_{(2k+i),v})$ ($i = 0, 1, 2$) denotes the $i$-th control point on the $k$-th segment, in which $u, v$ index the 2-d plane axes. A piecewise Bézier curve with $n$ segments has $2n+1$ control points in total. To parameterize $g(y; \theta)$, we assign $y = (1-s)^2 B_{2k,u} + 2s(1-s) B_{(2k+1),u} + s^2 B_{(2k+2),u}$, (13a) $g(y; \theta) = (1-s)^2 B_{2k,v} + 2s(1-s) B_{(2k+1),v} + s^2 B_{(2k+2),v}$, (13b) s.t. $B_{2k,u} \leq y \leq B_{(2k+2),u}$, (13c) where $\theta$ is the control point set, and $B_{2k,u} < B_{(2k+1),u} < B_{(2k+2),u}$ for $0 \leq k \leq n-1$. Given an input $y$, the segment index $k$ and the transversal parameter $s$ are derived from Eq. (13c) and Eq. (13a), respectively. Then $g(y; \theta)$ is assigned as in Eq. (13b). Because $g(y; \theta)$ is defined on $y \in [0, 1]$, we arrange the control points along the $u$-axis as $B_{0,u} = 0$, $B_{2n,u} = 1$, i.e., the $u$-coordinates of the first and the last control points are at 0 and 1, respectively. The strength of the piecewise Bézier curve is that the curve shape is defined explicitly via the control points. Here we enforce the truth-table and the monotonicity constraints on the control points via $B_{0,v} = 0$, $B_{2n,v} = 1$ (truth-table constraint), and $B_{2k,v} \leq B_{(2k+1),v} \leq B_{(2k+2),v}$, $k = 0, 1, \dots, n-1$ (monotonicity constraint). To fulfill the above restrictions in optimization, the specific form of the parameters is given by $\theta = \left\{ \left( \frac{B_{i,u} - B_{(i-1),u}}{B_{2n,u} - B_{(i-1),u}}, \frac{B_{i,v} - B_{(i-1),v}}{B_{2n,v} - B_{(i-1),v}} \right) \mid i = 1, 2, \dots, 2n-1 \right\}$, with $B_0 = (0, 0)$ and $B_{2n} = (1, 1)$ fixed. Every $\theta_i = (\theta_{i,u}, \theta_{i,v})$ thus lies in the range $[0, 1]^2$, and it is straightforward to compute the actual coordinates of the control points from this parameterized form. Such a parameterization makes each $\theta_i$ independent of the others, and thus simplifies the optimization.
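The following sketch (our re-implementation under the stated constraints, with illustrative control points) evaluates $g(y; \theta)$ for a piecewise quadratic Bézier curve, i.e., Eq. (12)-(13): it locates the segment containing $y$ via Eq. (13c), inverts Eq. (13a) for the transversal parameter $s$, and returns Eq. (13b).

```python
import math

def bezier_g(y, ctrl):
    """Evaluate g(y; theta) for a piecewise quadratic Bezier curve.

    ctrl: list of 2n+1 control points (u, v) with ctrl[0] = (0, 0) and
    ctrl[-1] = (1, 1); u-coordinates strictly increasing (Eq. 13c) and
    v-coordinates non-decreasing (truth-table + monotonicity constraints).
    """
    n = (len(ctrl) - 1) // 2
    for k in range(n):
        b0, b1, b2 = ctrl[2 * k], ctrl[2 * k + 1], ctrl[2 * k + 2]
        if b0[0] <= y <= b2[0]:
            # Solve (1-s)^2 b0u + 2s(1-s) b1u + s^2 b2u = y for s in [0, 1].
            a = b0[0] - 2 * b1[0] + b2[0]
            b = 2 * (b1[0] - b0[0])
            c = b0[0] - y
            if abs(a) < 1e-12:          # segment is linear in u
                s = -c / b
            else:                       # root of the quadratic lying in [0, 1]
                s = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
            return ((1 - s) ** 2) * b0[1] + 2 * s * (1 - s) * b1[1] + (s ** 2) * b2[1]
    raise ValueError("y must lie in [0, 1]")

# Identity initialization g(y) = y with two segments (5 control points).
identity = [(0.0, 0.0), (0.25, 0.25), (0.5, 0.5), (0.75, 0.75), (1.0, 1.0)]
assert abs(bezier_g(0.3, identity) - 0.3) < 1e-6
```

In an actual search, the control points would be recovered from the normalized parameters $\theta_i \in [0, 1]^2$ described above before calling this evaluation routine.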
By default, we use a piecewise Bézier curve with two segments to parameterize $g(y; \theta)$.

Figure 2: Parameterization of $g(y; \theta)$ using a piecewise Bézier curve with four segments. The red points are control points. The purple point is on the curve, illustrating the relationship among $y$, $g(y; \theta)$, and the transversal parameter $s$.

Algorithm 1: Auto Seg-Loss Parameter Search
Input: initialized network $N_{\omega_0}$, initialized distribution parameters $\mu_1$ and $\sigma^2$, target metric $\xi$, training set $S_\text{train}$ and hold-out training set $S_\text{hold-out}$
Result: obtained optimal parameters $\Theta^*$
for $t \leftarrow 1$ to $T$ do
  for $i \leftarrow 1$ to $M$ do
    Sample parameters $\Theta_i^{(t)} \sim \mathcal{N}_{\text{trunc}[0,1]}(\mu_t, \sigma^2 I)$;
    Network training: $\omega(\Theta_i^{(t)}) = \arg\max_\omega \tilde{\xi}_{\Theta_i^{(t)}}(N_\omega; S_\text{train})$, with $\omega$ initialized from $\omega_0$;
    Compute the evaluation metric score $\xi(\Theta_i^{(t)}) = \xi(N_{\omega(\Theta_i^{(t)})}; S_\text{hold-out})$;
  end
  Update $\mu_{t+1} = \arg\max_\mu \frac{1}{M} \sum_{i=1}^M R(\mu; \mu_t, \Theta_i^{(t)})$;
end
Return: $\Theta^* = \arg\max_{\mu_t} \sum_{i=1}^M \xi(\Theta_i^{(t)})$, over $t = 1, \dots, T+1$

4.3 SURROGATE PARAMETER OPTIMIZATION Algorithm 1 describes our parameter search algorithm. The training set is split into two subsets, $S_\text{train}$ for training and $S_\text{hold-out}$ for evaluation in the search algorithm. Specifically, suppose we have a segmentation network $N_\omega$ with weights $\omega$; our search target is the parameters $\Theta$ that maximize the evaluation metric on the hold-out training set: $\max_\Theta \zeta(\Theta) = \xi(N_{\omega(\Theta)}; S_\text{hold-out})$, s.t. $\omega(\Theta) = \arg\max_\omega \tilde{\xi}_\Theta(N_\omega; S_\text{train})$. (14) To optimize Eq. (14), the segmentation network is trained with SGD as the inner-level problem. At the outer level, we use reinforcement learning as our search algorithm, following common practice in AutoML (Zoph & Le, 2017; Pham et al., 2018). Other search algorithms, such as evolutionary algorithms, may also be employed. Specifically, the surrogate parameters are searched via the PPO2 algorithm (Schulman et al., 2017). The process consists of $T$ sampling steps. In the $t$-th step, we aim to explore the search space around that of step $t-1$. Here, $M$ sets of parameters $\{\Theta_i^{(t)}\}_{i=1}^M$ are sampled independently from a truncated normal distribution (Burkardt, 2014), as $\Theta \sim \mathcal{N}_{\text{trunc}[0,1]}(\mu_t, \sigma^2 I)$, with each variable in the range $[0, 1]$. Here $\mu_t$ and $\sigma^2 I$ denote the mean and covariance of the parent normal distribution ($\sigma$ is fixed as 0.2 in this paper), and $\mu_t$ summarizes the information from the $(t-1)$-th step. $M$ surrogate losses are constructed with the sampled parameters, which drive the training of $M$ segmentation networks separately. To optimize the outer-level problem, we evaluate these models with the target metric and take the evaluation scores as rewards for PPO2. Following the PPO2 algorithm, $\mu_{t+1}$ is computed as $\mu_{t+1} = \arg\max_\mu \frac{1}{M} \sum_{i=1}^M R(\mu; \mu_t, \Theta_i)$, where the reward is $R(\mu; \mu_t, \Theta_i) = \min\left( \frac{p(\Theta_i; \mu, \sigma^2 I)}{p(\Theta_i; \mu_t, \sigma^2 I)} \, \hat{\xi}(\Theta_i),\; \mathrm{CLIP}\left( \frac{p(\Theta_i; \mu, \sigma^2 I)}{p(\Theta_i; \mu_t, \sigma^2 I)};\, 1 - \epsilon,\, 1 + \epsilon \right) \hat{\xi}(\Theta_i) \right)$, where $\min(\cdot, \cdot)$ picks the smaller of its inputs, $\mathrm{CLIP}(x; 1-\epsilon, 1+\epsilon)$ clips $x$ to be within $1-\epsilon$ and $1+\epsilon$, and $p(\Theta_i; \mu, \sigma^2 I)$ is the PDF of the truncated normal distribution. Note that the mean reward of the $M$ samples is subtracted when computing $\hat{\xi}(\Theta_i)$, for better convergence. After $T$ steps, the mean $\mu_t$ with the highest average evaluation score is output as the final parameters $\Theta^*$. Empirically, we find the searched losses have good transferability, i.e., they can be applied to different datasets and networks. Benefiting from this, we use a light proxy task for parameter search: a smaller image size, a shorter learning schedule, and a lightweight network. Thus, the whole search process is quite efficient (8 hours on PASCAL VOC with 8 NVIDIA Tesla V100 GPUs). More details are in Appendix A. In addition, the search process only needs to be conducted once for a specific metric, and the resulting surrogate loss can be directly used for training henceforth.
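To summarize the structure of Algorithm 1, here is a deliberately simplified sketch (ours): it keeps the truncated-normal sampling and the proxy-train/hold-out-evaluate loop, but replaces the PPO2 update of $\mu$ with a greedy move to the best sample. `train_proxy` and `evaluate` are placeholder callbacks for the inner-level SGD training and the metric evaluation on $S_\text{hold-out}$.

```python
import numpy as np

def sample_trunc_normal(mu, sigma=0.2, rng=np.random):
    # Rejection-sample each parameter from N(mu, sigma^2) truncated to [0, 1].
    theta = rng.normal(mu, sigma)
    while np.any((theta < 0) | (theta > 1)):
        bad = (theta < 0) | (theta > 1)
        theta[bad] = rng.normal(mu[bad], sigma)
    return theta

def search(mu0, train_proxy, evaluate, steps=30, m=8):
    # mu0: initial mean vector of the parameter distribution (e.g. identity g).
    mu, best_theta, best_score = mu0, None, -np.inf
    for _ in range(steps):
        thetas = [sample_trunc_normal(mu) for _ in range(m)]
        scores = [evaluate(train_proxy(t)) for t in thetas]  # rewards
        i = int(np.argmax(scores))
        if scores[i] > best_score:
            best_theta, best_score = thetas[i], scores[i]
        mu = thetas[i]  # the actual method updates mu with the PPO2 objective
    return best_theta
```

The clipped importance-ratio objective of PPO2 would replace the greedy `mu = thetas[i]` line in a faithful implementation.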
5 EXPERIMENTS We evaluate on the PASCAL VOC 2012 (Everingham et al., 2015) and Cityscapes (Cordts et al., 2016) datasets. We use DeepLabv3+ (Chen et al., 2018) with ResNet-50/101 (He et al., 2016) as the network model. During the surrogate parameter search, we randomly sample 1500 training images in PASCAL VOC and 500 training images in Cityscapes to form the respective hold-out sets $S_\text{hold-out}$. The remaining training images form the training set $S_\text{train}$ used in the search. $\mu_0$ is set to make $g(y; \theta) = y$. The backbone network is ResNet-50. The images are down-sampled to 128×128 resolution. SGD lasts only 1000 iterations with a mini-batch size of 32. After the search procedure, we re-train the segmentation networks with ResNet-101 using the searched losses on the full training set, and evaluate them on the actual validation set. The re-training settings are the same as in DeepLabv3+ (Chen et al., 2018), except that the loss function is substituted by the obtained surrogate loss. The search time is measured on 8 NVIDIA Tesla V100 GPUs. More details are in Appendix A. 5.1 SEARCHING FOR DIFFERENT METRICS In Table 2, we compare our searched surrogate losses against the widely-used cross-entropy loss and its variants, and against other metric-specific surrogate losses. We also sought to compare with the AutoML-based method of Li et al. (2019), which was originally designed for other tasks, but we could not get reasonable results due to convergence issues. The results show that our searched losses are on par with or better than the previous losses on their target metrics. It is interesting to note that the obtained surrogates for boundary metrics (such as BIoU and BF1) only focus on the boundary areas; see Appendix C for further discussion. We also tried training segmentation networks driven by combined searched mIoU and BIoU/BF1 surrogate losses. Such combined losses refine the boundaries while keeping reasonable global performance. Table 2: Performance of different losses on PASCAL VOC and Cityscapes segmentation. The results of each loss function on its target metrics are underlined.
The scores whose difference from the highest is less than 0.3 are marked in bold. (The first six columns are for PASCAL VOC, the last six for Cityscapes.)

| Loss Function | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cross Entropy | 78.69 | 91.31 | 70.61 | 65.30 | 87.31 | 95.17 | 79.97 | 93.33 | 62.07 | 62.24 | 87.01 | 96.44 |
| WCE (Ronneberger et al., 2015) | 69.60 | 85.64 | 61.80 | 37.59 | 92.61 | 91.11 | 73.01 | 90.51 | 53.07 | 51.19 | 89.22 | 94.56 |
| DPCE (Caliva et al., 2019) | 79.82 | 91.76 | 71.87 | 66.54 | 87.76 | 95.45 | 80.27 | 93.38 | 62.57 | 65.99 | 86.99 | 96.46 |
| SSIM (Qin et al., 2019) | 79.26 | 91.68 | 71.54 | 66.35 | 87.87 | 95.38 | 80.65 | 93.22 | 63.04 | 72.20 | 86.88 | 96.39 |
| DiceLoss (Milletari et al., 2016) | 77.78 | 91.34 | 69.85 | 64.38 | 87.47 | 95.11 | 79.30 | 93.25 | 60.93 | 59.94 | 86.38 | 96.39 |
| Lovász (Berman et al., 2018) | 79.72 | 91.78 | 72.47 | 66.65 | 88.64 | 95.42 | 77.67 | 92.51 | 56.71 | 53.48 | 82.05 | 96.03 |
| Searched mIoU | 80.97 | 92.09 | 73.44 | 68.86 | 88.23 | 95.68 | 80.67 | 93.30 | 63.05 | 67.97 | 87.20 | 96.44 |
| Searched FWIoU | 80.00 | 91.93 | 75.14 | 65.67 | 89.23 | 95.44 | 79.42 | 93.33 | 61.71 | 59.68 | 87.96 | 96.37 |
| Searched BIoU | 48.97 | 69.89 | 79.27 | 38.99 | 81.28 | 62.64 | 45.89 | 39.80 | 63.89 | 38.29 | 62.80 | 58.15 |
| Searched BF1 | 1.93 | 0.96 | 7.39 | 74.83 | 6.51 | 2.66 | 6.78 | 3.19 | 18.37 | 77.40 | 12.09 | 8.19 |
| Searched mAcc | 69.80 | 85.86 | 72.85 | 35.62 | 92.66 | 91.28 | 74.10 | 90.79 | 54.62 | 53.45 | 89.22 | 94.75 |
| Searched gAcc | 79.73 | 91.76 | 74.09 | 64.41 | 88.95 | 95.47 | 79.41 | 93.30 | 61.65 | 62.04 | 87.08 | 96.51 |
| Searched mIoU + BIoU | 81.19 | 92.19 | 76.89 | 69.56 | 88.36 | 95.75 | 80.43 | 93.34 | 63.88 | 65.87 | 87.03 | 96.45 |
| Searched mIoU + BF1 | 78.72 | 90.80 | 71.81 | 73.57 | 86.70 | 94.88 | 78.30 | 93.00 | 61.62 | 71.73 | 87.13 | 96.23 |

5.2 GENERALIZATION OF THE LOSS Generalization among datasets. Table 3 evaluates the generalization ability of our searched loss surrogates across datasets. Due to limited computational resources, we train networks only with the searched mIoU, BF1 and mAcc surrogate losses. The results show that our searched surrogate losses generalize well between these two datasets, despite their quite different scenes and categories.

Table 3: Generalization of our searched surrogate losses between PASCAL VOC and Cityscapes. (The first six columns are for Cityscapes→VOC, the last six for VOC→Cityscapes.)

| Loss Function | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc | mIoU | FWIoU | BIoU | BF1 | mAcc | gAcc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cross Entropy | 78.69 | 91.31 | 70.61 | 65.30 | 87.31 | 95.17 | 79.97 | 93.33 | 62.07 | 62.24 | 87.01 | 96.44 |
| Searched mIoU | 80.05 | 91.72 | 73.97 | 67.61 | 88.01 | 95.45 | 80.67 | 93.31 | 62.96 | 66.48 | 87.36 | 96.44 |
| Searched BF1 | 1.84 | 0.93 | 7.42 | 75.85 | 6.48 | 1.47 | 6.67 | 3.20 | 19.00 | 77.99 | 12.12 | 4.09 |
| Searched mAcc | 70.90 | 86.29 | 73.43 | 37.18 | 93.19 | 91.43 | 73.50 | 90.68 | 54.34 | 54.04 | 88.66 | 94.68 |

Generalization among segmentation networks. The surrogate losses are searched with ResNet-50 + DeepLabv3+ on PASCAL VOC. The searched losses then drive the training of ResNet-101 + DeepLabv3+, PSPNet (Zhao et al., 2017) and HRNet (Sun et al., 2019) on PASCAL VOC. Table 4 shows the results, demonstrating that our searched loss functions can be applied to various semantic segmentation networks. 5.3 ABLATION Parameterization and constraints. Table 5 ablates the parameterization and the search space constraints. In it, a surrogate without parameters refers to Eq. (8), with the domain extended from the discrete points {0, 1} to the continuous interval [0, 1]. This naive surrogate delivers much lower accuracy, indicating the importance of parameterization. Without the truth-table constraint, the training process diverges at the very beginning, with the loss gradients becoming NaN. The performance also drops if the monotonicity constraint is not enforced.
In short, the performance drops, or the algorithm fails outright, without the constraints.

Table 4: Generalization of our searched surrogate losses among different network architectures on PASCAL VOC. The losses are searched with ResNet-50 + DeepLabv3+ on PASCAL VOC. (Each group of three columns reports mIoU / BF1 / mAcc for, in order, R50-DeepLabv3+, R101-DeepLabv3+, R101-PSPNet, and HRNetV2p-W48.)

| Loss Function | mIoU | BF1 | mAcc | mIoU | BF1 | mAcc | mIoU | BF1 | mAcc | mIoU | BF1 | mAcc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Cross Entropy | 76.22 | 61.75 | 85.43 | 78.69 | 65.30 | 87.31 | 77.91 | 64.70 | 85.71 | 76.35 | 61.19 | 85.12 |
| Searched mIoU | 78.35 | 66.93 | 85.53 | 80.97 | 68.86 | 88.23 | 78.93 | 65.65 | 87.42 | 77.26 | 63.52 | 86.80 |
| Searched BF1 | 1.35 | 70.81 | 6.05 | 1.93 | 74.83 | 6.51 | 1.62 | 71.84 | 6.33 | 1.34 | 68.41 | 5.99 |
| Searched mAcc | 69.82 | 36.92 | 91.61 | 69.80 | 35.62 | 92.66 | 71.66 | 39.44 | 92.06 | 68.22 | 35.90 | 91.46 |

Proxy tasks for parameter search. Table 6 ablates the proxy task design. The bottom row is our default setting, with a lightweight backbone, down-sampled image size, and a shorter learning schedule. The default setting delivers accuracy on par with heavier settings, which is consistent with the generalization ability of our surrogate losses. We can thus improve search efficiency via light proxy tasks. Parameter search algorithm. Fig. 3 compares the employed PPO2 (Schulman et al., 2017) algorithm with random search. The much better performance of PPO2 suggests that surrogate loss search is non-trivial and that reinforcement learning helps to improve search efficiency.

Table 5: Ablation on search space constraints.

| Parameterization | Truth-table | Monotonicity | VOC mIoU |
|---|---|---|---|
| ✗ | ✗ | ✗ | 46.99 |
| ✓ | ✗ | ✗ | Fail |
| ✓ | ✓ | ✗ | 77.76 |
| ✓ | ✓ | ✓ | 80.64 |

Table 6: Ablation on search proxy tasks.

| Backbone | Image Size | Iterations | Time (hours) | VOC mIoU |
|---|---|---|---|---|
| R50 | 256×256 | 1000 | 33.0 | 81.15 |
| R50 | 128×128 | 2000 | 17.1 | 80.56 |
| R101 | 128×128 | 1000 | 13.3 | 80.75 |
| R50 | 128×128 | 1000 | 8.5 | 80.97 |

Figure 3: Ablation on loss parameter search, for (a) mIoU, (b) BF1, and (c) mAcc. Each curve presents the highest average evaluation score up to the t-th sampling step in one search process. The search process is repeated four times.

6 CONCLUSION The introduced Auto Seg-Loss is a powerful framework for searching parameterized surrogate losses for mainstream segmentation evaluation metrics. The non-differentiable operators are substituted by their parameterized continuous counterparts, and the parameters are optimized to improve the final evaluation metrics under essential constraints. It would be interesting to extend the framework to more tasks, like object detection, pose estimation, and machine translation. ACKNOWLEDGMENTS The work is supported by the National Key R&D Program of China (2020AAA0105200), Beijing Academy of Artificial Intelligence, and the Institute for Guo Qiang of Tsinghua University.<|im_end|> <|im_start|>assistant ### Review Title Careful study of semantic segmentation proxy loss functions ### Review Text This paper looks at loss functions for semantic segmentation. Typical metrics are not easily differentiable w.r.t. the outputs of the DNNs that generate the labels. Instead, a proxy/surrogate loss function is learned jointly with the network in a two-level optimization. Pros: i) Good accuracy results. Code will allow others to verify and build on these. ii) High-quality ablation studies.
Avoiding any parameterization (one of the more surprising components of the method) is a good ablation to have. Also, the comparison to random search in Fig 3 demonstrates that the outer-level optimization is working as expected. Q1: What is the naive surrogate used in Table 5? Equation (8)? Cons: iii) Seems like an ad-hoc approach that doesn't incorporate classical techniques for optimizing "hard" loss functions. Non-differentiable loss functions, or losses over discrete variables, are less common when dealing with CNNs. But they can be included in deep learning with techniques like that in: Chen et al., "Learning Deep Structured Models". Notable citations that take this "classical" approach are: Ranjbar et al., "Optimizing Non-Decomposable Loss Functions in Structured Prediction". This kind of approach is an omission from related work that might be worth correcting, especially because it is the "direct" approach, in that it optimizes the original loss functions using known techniques, without additional levels of learning or approximations as in the submission. Overall, though, the empirical results are strong enough that it is likely that the submission is doing something useful well. So the approach in the submission seems well-supported by that fact, even if previous literature makes it a non-obvious way to do things. iv) Not much illustration or exploration of the learned loss surfaces. The experiments partially leave open the question: how closely is the surrogate loss matching the target metrics? Basically, is there more direct evidence for the intuition that h_AND is approximately equivalent to f_AND? The networks trained on the surrogate losses do well on the original metrics. But it's not impossible that some trivial solution, or an unexpected loss function that is not clearly similar to e.g. mIoU, can train a network that produces segmentations with good mIoU. Minor comments: Q2: The reason for bolding in Table 2 is unclear; what is a "(co)-highest result?" ### Review Rating 7: Good paper, accept ### Review Confidence 3: The reviewer is fairly confident that the evaluation is correct<|im_end|> <|im_end|>
8Ln-Bq0mZcy
ICLR.cc/2021/Conference
2021
On the Critical Role of Conventions in Adaptive Human-AI Collaboration
["Andy Shih", "Arjun Sawhney", "Jovana Kondic", "Stefano Ermon", "Dorsa Sadigh"]
Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. raising hand signals pass the ball), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi.
["Multi-agent games", "emergent behavior", "transfer learning", "human-AI collaboration"]
ABSTRACT Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Humans can also quickly adapt to similar tasks with the same partners by carrying over conventions that they have developed (e.g. raising hand signals pass the ball), without learning to coordinate from scratch. To collaborate seamlessly with humans, AI agents should adapt quickly to new partners and new tasks as well. However, current approaches have not attempted to distinguish between the complexities intrinsic to a task and the conventions used by a partner, and more generally there has been little focus on leveraging conventions for adapting to new settings. In this work, we propose a learning framework that teases apart rule-dependent representation from convention-dependent representation in a principled way. We show that, under some assumptions, our rule-dependent representation is a sufficient statistic of the distribution over best-response strategies across partners. Using this separation of representations, our agents are able to adapt quickly to new partners, and to coordinate with old partners on new tasks in a zero-shot manner. We experimentally validate our approach on three collaborative tasks varying in complexity: a contextual multi-armed bandit, a block placing task, and the card game Hanabi. 1 INTRODUCTION Humans collaborate well together in complex tasks by adapting to each other through repeated interactions. What emerges from these repeated interactions is shared knowledge about the interaction history. We intuitively refer to this shared knowledge as conventions. Convention formation helps explain why teammates collaborate better than groups of strangers, and why friends develop lingo incomprehensible to outsiders. The notion of conventions has been studied in language (Clark & Wilkes-Gibbs, 1986; Clark, 1996; Hawkins et al., 2017; Khani et al., 2018) and also alluded to in more general multiagent collaborative tasks (Boutilier, 1996; Stone et al., 2010; Foerster et al., 2019; Carroll et al., 2019; Lerer & Peysakhovich, 2019; Hu et al., 2020). For example, Foerster et al. (2019) trained agents to play the card game Hanabi, and noted the emergent convention that "hinting for red or yellow indicates that the newest card of the other player is playable". One established characterization of a convention that is commonly used (Boutilier, 1996; Hawkins et al., 2017; Lerer & Peysakhovich, 2019) is an arbitrary solution to a recurring coordination problem (Lewis, 1969). A convention is thus one of many possible solutions that a group of partners happens to converge to. This is in contrast to problem solutions that are enforced by the rule constraints, and would have arisen no matter how the partners collaborated and what behavior they converged to. Success in a collaborative task typically involves learning both types of knowledge, which we will refer to as convention-dependent and rule-dependent behavior. The distinction between these two types of behavior has been studied extensively in the linguistic literature (Franke et al.; Brochhagen, 2020). In this work, we focus on the less-explored setting of implicit communication, and provide a concrete approach to learn representations for these two different types of behavior. In the context of multi-agent or human-AI collaboration, we would like our AI agents to adapt quickly to new partners.
AI agents should be able to flexibly adjust their partner-specific convention-dependent behavior while reusing the same rule-dependent behavior to simplify the learning problem – just like how humans quickly adapt when playing basketball with a new friend, without the need to re-learn the rules of the game. We would also like our AI agents to coordinate well on similar tasks when paired with the same partners – just like how humans can coordinate better when playing a new sport but with an old friend.

Figure 1: Partners form conventions in a collaborative task through repeated interactions. An AI agent can adapt quickly to conventions with new partners by reusing a shared rule representation $g_t$ and learning a new partner-specific convention representation $g_p$. Certain collaborative tasks, such as friendly Rock-Paper-Scissors, are more convention-dependent than others, such as 4-player chess.

Although many existing works similarly recognize conventions as an important factor of successful collaboration, they do not focus on separating conventions from the rules of the task. This means that adapting to a new partner can be as difficult as learning a new task altogether. For example, existing techniques that emphasize modeling the partner's policy, such as theory of mind (Simon, 1995; Baker et al., 2017; Brooks & Szafir, 2019) or multi-agent reinforcement learning (Foerster et al., 2018), attempt to model everything about the agent's belief of the partner's state and policies. Such belief modeling approaches very quickly become computationally intractable, as opposed to solely focusing on the relevant conventions developed with a partner. To address the challenges above, we propose a framework that explicitly separates convention-dependent representations and rule-dependent representations through repeated interactions with multiple partners. After many rounds of solving a task (e.g. playing basketball) with different partners, an AI agent can learn to distinguish between conventions formed with a specific partner (e.g. pointing down signals bounce pass) and intrinsic complexities of the task (e.g. dribbling). This enables us to leverage the representations separately for fast adaptation to new interactive settings. In the rest of this paper, we formalize the problem setting, and describe the underlying model of partners and tasks. Next, we present our framework for learning separations between rule and convention representations. We show that, under some conditions, our rule representation learns a sufficient statistic of the distribution over best-response strategies. We then run a study on human-human interactions to test if our hypothesis – that partners can carry over the same conventions across tasks – indeed holds for human subjects. Finally, we show the merits of our method on 3 collaborative tasks: contextual bandit, block placing, and a small scale version of Hanabi (Bard et al., 2020). 2 RELATED WORK Convention formation has been studied in the form of iterated reference games (Hawkins et al., 2017; 2019) and language emergence (Mordatch & Abbeel, 2018; Lazaridou et al., 2017). In these works, partners learn to reference objects more efficiently by forming conventions through repeated interactions.
But in these tasks, there is little intrinsic task difficulty beyond breaking symmetries. In multi-agent reinforcement learning, techniques such as self-play, cross-training, or opponent learning (Foerster et al., 2018) have been used to observe emergent behavior in complex, physical settings (Nikolaidis & Shah, 2013; Liu et al., 2019; Baker et al., 2020). In addition, convention formation has been shown in tasks like Hanabi (Foerster et al., 2019; Hu & Foerster, 2019), Overcooked (Carroll et al., 2019; Wang et al., 2020), and negotiation tasks (Cao et al., 2018), where agents learn both how to solve a task and how to coordinate with a partner. But these works qualitatively analyze emergent conventions through post-hoc inspection of the learned policies, and do not learn representations that separate conventions from rule-dependent behavior. This makes it difficult to leverage learned conventions when adapting to new partners. More recently, an approach was proposed to design agents that avoid relying on conventions altogether; instead, the agents aim for unambiguous but potentially sub-optimal strategies (Hu et al., 2020).

Figure 2: Collaborative contextual bandit task. On the left we see a task with 2 contexts represented by rows and 4 arms represented by cells. At each context, the prize if both partners pull the same green arm is 1, and 0 otherwise. In the middle, we see a possible convention that two partners may converge to: pulling $a_1$ in context A, and pulling $a_2$ in context B. On the right, when the task changes in context A but stays the same in context B, we expect the same two partners to continue to pull $a_2$ in context B.

Other approaches for studying conventions include game theory (Lerer & Peysakhovich, 2019) and theory of mind (Simon, 1995; Rabinowitz et al., 2018). Pragmatic reasoning (Goodman & Frank, 2016) is another framework that aims to estimate beliefs over the partner's states and policies, but all these frameworks that rely on modelling beliefs quickly become intractable. Instead of approximating these belief modelling frameworks (Monroe & Potts, 2015), we focus on learning representations for rules and conventions. Our work shares similarities with the paradigms of meta-learning (Schmidhuber, 1987; Finn et al., 2017) and multi-task learning (Caruana, 1997), if we view collaboration with different partners as new but related tasks. However, we are also interested in how conventions persist across tasks with similar symmetries. More closely related to our work is perhaps modular policy networks (Devin et al., 2017), which learn policy modules and compose them together to transfer across robot arms with different degrees of freedom and across different tasks. However, their approach relies on input cleanly split between background observations of the task and the robot's internal observations. We do not assume such a split in the input; our approach tries to learn the separation between rules and partners with one input channel. 3 PRELIMINARIES We begin by formally defining our problem setting as a two-player Markov Decision Process (MDP). Two-Player MDP. We consider a two-agent MDP with identical payoffs, which is a tuple $\{S, \{A_e, A_p\}, P, R\}$. $S$ is a set of states, $A_e$ is a set of actions for agent $e$, and $A_p$ is a set of actions for agent $p$.
In general, $e$ represents the ego agent (that we control), and $p$ represents agents who partner with $e$. $P: S \times A_e \times A_p \times S \rightarrow [0, 1]$ is the probability of reaching a state given the current state and the actions of all agents, and $R: S \times S \rightarrow \mathbb{R}$ is the real-valued reward function. Since we are interested in repeated interactions, we consider finite-horizon MDPs. A policy is a stochastic mapping from a state to an action. For our setting, our policy $\pi = (\pi_e, \pi_p)$ has two parts: $\pi_e: S \rightarrow A_e$ and $\pi_p: S \rightarrow A_p$, mapping the state into actions for agents $e$ and $p$ respectively. We also consider collaborative tasks that are partially observable in our experiments. In addition, some tasks are turn-based, which can be handled by including the turn information as part of the state and ignoring the other player's actions. Here, we focus on tasks with discrete state and action spaces. Running Example: Collaborative Contextual Bandit. We describe a collaborative task to ground our discussion around adapting to new partners, and adapting to new tasks with old partners. Consider a contextual bandit setting with 2 contexts, 4 actions, and a prize for each action. The two players each independently pick an arm to pull, and they score prize(a) points if they both picked the same arm $a$; otherwise, they score 0 points. An example is depicted in Figure 2, where green boxes represent arms with prize 1, the rest have prize 0, and red and blue circles show the players' actions. In Task 1 of Figure 2, context A only has one good arm $a_1$ while context B has two good arms $a_2$ and $a_4$. After repeated rounds of playing, two agents can eventually converge to coordinating on a convention that chooses one of the two good arms in context B (e.g. selecting the leftmost good arm $a_2$). When the task shifts slightly as shown in Task 2 but context B remains the same across the two tasks, we can reasonably expect the partners to adhere to the same convention they developed for context B of Task 1 when playing context B of Task 2. There are two axes of generalization at play. First, for a fixed task we would like to learn the underlying structure of the task (e.g. green arm locations) to quickly adapt to a new partner playing the same task. Second, for a fixed partner we would like to keep track of developed conventions to quickly coordinate with this partner on a new task. In the next sections, we define the notion of a new task and a new partner, and propose a framework to effectively generalize across both dimensions. 4 MODEL OF PARTNERS AND TASKS

Figure 3: Generating process of a partner's actions for a two-player MDP. For each task we first sample a reward function $R_t$ for the MDP. This determines the Q-function at each state. For each partner $i$ at each state $s$ of task $t$, we sample a function $\sigma_i$ that maps the Q-function at state $s$ to an action $a_{its}$.

We consider a family of tasks that share the same state space, action space, and transition dynamics (all of which we refer to as the domain), but can differ in the reward function of the underlying MDP. We also consider a family of partners that our ego agent would like to coordinate with. The underlying model governing the actions taken by a partner at a state of a given task can be seen in Figure 3. For a new task, a new reward function is sampled from $\mathcal{R}$, while the domain of the MDP is fixed. For a new partner, we sample a new function $\sigma$ from $\Sigma$, which maps the Q-function at a state of the task to an action. We refer to $\sigma$ as the convention of a partner. In other words, the action $a_{its}$ of a partner $i$ at state $s$ of task $t$ depends on the partner's conventions and (indirectly through the Q-function at state $s$) the reward of the task. $\Sigma$ is the distribution over conventions, and can correspond to, for example, the distribution over emergent conventions from AI agents trained via self-play, or can be derived from human partners. More formally, for a fixed domain let $Q^R_s(a_p) = \max_{a_e} Q^R(s, a_e, a_p)$, where $Q^R$ is the optimal Q-function for the two-player MDP with reward $R$ (Boutilier, 1996; Oliehoek et al., 2008). In other words, $Q^R_s$ is the Q-function from the partner's perspective at state $s$ in the task with reward $R$, assuming a best response by the ego agent. Then, the convention of a partner, $\sigma: S \times \mathbb{R}^{|A_p|} \rightarrow A_p$, determines the partner's action at state $s$, given by $\sigma_i(s, Q^R_s)$. For example, we might expect the distribution of actions across different partners at a state $s$ to follow Boltzmann rationality (Ziebart et al., 2008): $\mathbb{E}_{\sigma}\left[\mathbb{1}[\sigma(s, Q^R_s) = a_p]\right] \propto \exp(Q^R_s(a_p))$. Lastly, we assume that behavior at different states is uncorrelated: a choice of action at one state tells us nothing about actions at other states. Given this formulation, we can hope to learn the underlying structure of a task by playing with a number of partners on the same task. For example, if many sampled partners all take the same action $a_p$ at a state $s$, it is likely that $Q^R_s(a_p)$ is much higher than that of the second-best action, i.e., the optimal action depends on the rules of the task as opposed to the arbitrariness of partner strategies. Therefore, a new partner will likely take action $a_p$ as well. Additionally, when coordinating with the same partner across different tasks, we can expect developed conventions to persist across states with similar Q-functions. In particular, if $Q^{R_1}_s = Q^{R_2}_s$ for two different tasks with reward functions $R_1$ and $R_2$, then $\sigma_i(s, Q^{R_1}_s) = \sigma_i(s, Q^{R_2}_s)$, so a partner will take the same action at state $s$ across the two tasks (e.g. context B in Figure 2 across Tasks 1 and 2). We note that the roles of partners and tasks in our model are asymmetrical, assuming rational partners. A task can completely determine the actions of all partners at states with one good action (e.g. context A in Figure 2), whereas a partner cannot blindly pick the same action across all tasks. This asymmetry is reflected in our learning framework and in our setup of adapting to new partners and new tasks. 5 LEARNING RULES AND CONVENTIONS With the goal of adapting to new partners and tasks in mind, we propose a framework aimed at separating the rule representations associated with a task from the convention representations associated with a partner.

Figure 4: Policy network with separate task/partner modules to separate rule/convention representations. The task module $g_t$ maps the state observations to an action distribution $g_t(a|s)$ and to latent variables $z$. Each partner module maps $z$ to another action distribution $g_{p_i}(a|z)$. The policy of the ego agent when playing with partner $i$ is set to be proportional to the product of $g_t(a|s)$ and $g_{p_i}(a|z)$.

At training time, we assume access to a task and a set of partners sampled from the same partner distribution $\Sigma$. When adapting to a new partner sampled from $\Sigma$, we would like to learn a new convention representation but keep the learned rule representation.
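As a toy instantiation of this generative model (our illustration; the specific tie-breaking family below is an assumption, chosen so that each sampled convention is an arbitrary but consistent selection among optimal actions):

```python
import numpy as np

def sample_convention(rng, n_actions=4):
    # A convention sigma maps the per-state Q-values over partner actions to
    # an action. Here: break ties among (near-)optimal actions with a fixed
    # random preference order -- one partner-specific, arbitrary solution.
    pref = list(rng.permutation(n_actions))
    def sigma(q_values, tol=1e-6):
        best = q_values.max()
        optimal = [a for a in range(len(q_values)) if q_values[a] >= best - tol]
        return min(optimal, key=pref.index)
    return sigma

rng = np.random.default_rng(0)
q_s = np.array([0.0, 1.0, 0.0, 1.0])        # two equally good arms, as in context B
partners = [sample_convention(rng) for _ in range(5)]
print([sigma(q_s) for sigma in partners])   # choices differ across partners
```

Averaged over many sampled partners, the action distribution at a state with a single optimal action concentrates on it, while at symmetric states it spreads over the optimal set – exactly the structure the ego agent can exploit.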
When adapting to a new task with the original partners, we would like the opposite: to learn a new rule representation but keep the learned convention representation. In our setup, the ego agent knows the identity of the tasks and partners, and the goal is to maximize the total reward with a new task and partner (the combination of which may be new to the ego agent). Naturally, this points us towards a modular architecture that enables integrating new combinations of partner and task representations. We propose the architecture shown in Figure 4, where we learn to play with each of the training partners via reinforcement learning. Each rectangular module represents a multi-layer perceptron, and each box with grey bars represents a distribution over actions in the action space of the MDP. When training with multiple partners on the same task, the policy network uses one single shared task module $g_t$, and one partner module $g_{p_i}$ for each partner $i$. The task module takes the state observations $s$ as input, and outputs latent variables $z$ and an action distribution $g_t(a|s)$. Then, each partner module takes $z$ as input, and outputs an action distribution $g_{p_i}(a|z)$. Here, $g_{p_i}(a|z)$ does not represent partner $i$'s action distribution, but rather represents our ego agent's action distribution in response to partner $i$. The policy $\pi_i$ of the policy network for partner $i$ is set to be proportional to the product of the action distributions: $\pi_i(a|s) = \frac{1}{Z_i} g_t(a|s) \, g_{p_i}(a|z)$. As it stands, many possible combinations of task and partner module outputs could lead to optimal action distributions $\pi_i(a|s)$, $\forall i$. To encourage the task/partner modules to learn the rule/convention representations respectively, we include a regularization term that pushes $g_t(a|s)$ to be the marginal best-response strategy over partners at state $s$. In practice, this amounts to minimizing the Wasserstein distance between the task module's output and the average of the best-response strategies $\pi_i$: $D(s) = \sum_{a \in A} \left| g_t(a|s) - \frac{1}{n} \sum_i \pi_i(a|s) \right|$. By pushing the task module $g_t$ to learn the marginal best-response strategy, we can cleanly capture the rule-dependent representations of the task, to be used when adapting to a new partner. In fact, under some assumptions, we can show that $g_t$ is learning the optimal representation, as it is a sufficient statistic of the distribution over best-response strategies across possible partners. In particular, for a fixed task let $f(s, a_e, a_p)$ represent the probability of the ego agent taking action $a_e$ using its best-response strategy to partner action $a_p$ at state $s$, so $\sum_{a_e} f(s, a_e, a_p) = 1$. We say the ego agent is deterministic if it can only take on deterministic strategies: $\forall s, a_p \; \exists a_e: f(s, a_e, a_p) = 1$. Lemma 1. For a deterministic ego agent, the marginal best-response strategy across partners is a sufficient statistic of the distribution of best-response strategies across partners.
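The following is a compact sketch of this architecture and regularizer (our re-implementation; the layer sizes, latent dimension, and the stop-gradient on the marginal target are illustrative assumptions):

```python
import torch
import torch.nn as nn

class ModularPolicy(nn.Module):
    """Task module g_t plus one partner module g_{p_i} per training partner."""

    def __init__(self, state_dim, n_actions, n_partners, z_dim=16, hidden=64):
        super().__init__()
        self.task = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.task_logits = nn.Linear(hidden, n_actions)    # -> g_t(a|s)
        self.task_latent = nn.Linear(hidden, z_dim)        # -> z
        self.partners = nn.ModuleList(
            nn.Linear(z_dim, n_actions) for _ in range(n_partners)
        )                                                  # -> g_{p_i}(a|z)

    def forward(self, s, i):
        h = self.task(s)
        log_gt = torch.log_softmax(self.task_logits(h), dim=-1)
        log_gp = torch.log_softmax(self.partners[i](self.task_latent(h)), dim=-1)
        # pi_i(a|s) is proportional to g_t(a|s) * g_{p_i}(a|z).
        return torch.softmax(log_gt + log_gp, dim=-1), log_gt.exp()

def marginal_regularizer(gt, pis):
    # D(s) = sum_a | g_t(a|s) - (1/n) sum_i pi_i(a|s) |, averaged over states.
    # We stop gradients through the marginal target (our design choice).
    marginal = torch.stack(pis, dim=0).mean(dim=0)
    return (gt - marginal.detach()).abs().sum(dim=-1).mean()
```

During training with partner $i$, a policy-gradient loss on $\pi_i$ would be combined with $\lambda \cdot$ `marginal_regularizer`, matching the update in Algorithm 1.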
Next, we lay out the learning procedure in more detail in Algorithm 1. We use the task module $g_t$ and the partner module $g_{p_i}$ when playing with partner $i$, and we push the task module to learn the marginal best-response action by adding the loss term $D$ on top of a standard policy gradient (PG) loss term.

Algorithm 1: Learning Separate Representations for Partners and Tasks
Input: an MDP $M$, $n$ partners $\sigma_1, \dots, \sigma_n$
Output: policy network modules for adapting to new partners from $\Sigma$ and to new tasks
1. Initialize the task module $g_t$ and $n$ partner modules $g_{p_1}, \dots, g_{p_n}$
2. $G \leftarrow \{g_t, g_{p_1}, \dots, g_{p_n}\}$; $T \leftarrow$ number of iterations
3. for $j \leftarrow 1$ to $T$ do
4.   for $i \leftarrow 1$ to $n$ do
5.     Collect rollouts $\tau = (s, a, r)$ using $\{g_t, g_{p_i}\}$ with partner $\sigma_i$ on $M$
6.     Update $G$ with loss $L_{PG}(g_t, g_{p_i}, \tau) + \lambda \, \mathbb{E}_s[D(s)]$
Return: $G$

Adapting to a new partner. When adapting to a new partner, we reuse the task module $g_t$ along with a reinitialized partner module, and train both modules. At states with only one optimal action $a^\star$ regardless of the partner's convention, the marginal best-response strategy will concentrate on $a^\star$. As such, the task module $g_t$ will immediately push the ego agent to choose $a^\star$, improving adaptation time. In contrast, at states with many possible optimal actions depending on the partner's conventions, the marginal best-response strategy will spread the action distribution among the possible optimal actions, allowing the ego agent to explore amongst the optimal options with the new partner. Coordinating on a new task. When faced with a new task, we would like to recognize the parts of the task where conventions with old partners on an old task carry over. In particular, at states for which the joint Q-function has not changed, our partner models will take the same actions as before, and our ego agent can employ the same response as before. With this in mind, suppose we have trained a task module $g_t$ and a set of $2n$ partner modules $g_{p_1}, \dots, g_{p_{2n}}$ with a set of partners on an old task. We can learn to coordinate on a new task with partners 1 to $n$ by fine-tuning the task module $g_t$, paired with partner modules $g_{p_1}, \dots, g_{p_n}$ with frozen parameters. At states where the joint Q-function has not changed, the same latent variable outputs $z$ will work well for all partners, so the task module will output the same $z$ and the actions of the ego agent will not change. Then, we can coordinate with partners $n+1$ to $2n$ in a zero-shot manner by pairing $g_t$ with partner modules $g_{p_{n+1}}, \dots, g_{p_{2n}}$.

Figure 5: User study on the collaborative contextual bandit task. Participants were asked to coordinate on the Train task and, immediately afterwards, with the same partner on the Test task (shown on the right). Some contexts have two equally optimal actions (green boxes), requiring symmetry breaking. On the left we plot the score, averaged across participants, of the first try at each context. Our participants were able to coordinate well on Test-C on the first try, even though the context requires symmetry breaking.

5.1 UNDERSTANDING CONVENTIONS When adapting to a new task with the same partner, we rely on the hypothesis that our partners will carry over the same conventions developed on an earlier task, at states where the Q-function has not changed. Here, we would like to test this hypothesis and study if and when conventions persist across tasks, through a study of human-human interactions in the collaborative contextual bandit task. User Study: Conventions in Human-Human Interactions. We conducted a study with 23 pairs of human partners (46 subjects), using the collaborative contextual bandit task from Section 3. The study was conducted via an online interface, with partners in separate rooms.
The actions of partners were revealed to each other at the end of each try, and the partners had no other form of communication.
- Independent Variables: We vary the prize of the arms at different contexts, i.e., which arms are good to coordinate on. Pairs of partners were given 5 tries to coordinate on 3 different contexts (as shown in Fig. 5). We then ask the same pair of partners to coordinate on the Test task with 3 new contexts.
- Dependent Measures: We measure the score of each pair's first try at each context. The maximum score in each run for each context is 1, corresponding to the two partners picking the same good arm.
- Hypothesis: Our hypothesis is that partners can coordinate well on the test task by carrying over conventions developed on the train task.
- Results: On the left of Figure 5 we plot the average score of the first try (zero-shot) of our participants at solving each context in each task. In the Train task (grey bars), the partner pairs are coordinating for the first time. As expected, they generally had no trouble coordinating on Train-A (since there is only 1 good option), but scored lower on Train-B and Train-C (since there are 2 good options each). Immediately following the 5 trials on the training task, the partner pairs coordinated on each context of the Test task (orange bars in Fig. 5), to see if they could leverage developed conventions on a new task. On Test-A, they did well since there is only one good option. However, they coordinated much better on Test-C compared to Test-B, even though both contexts have two good options. The main difference between the two contexts is that Test-C shares the same set of optimal actions (a2 and a4) as Train-C, whereas Test-B does not share the same set of optimal actions as Train-B.

Overall, this study suggests that human partners can successfully carry over conventions developed through repeated interactions with their partners, and further coordinate better when carrying over conventions across tasks at states where the set of optimal actions is similar (e.g. the higher gap for the same context, such as context C, across the train and test tasks).

6 EXPERIMENTS

We experiment with our approach on three coordination tasks: collaborative contextual bandit, collaborative block placing, and Hanabi. We show results on adaptation to new partners for a fixed task, and zero-shot coordination with the same partners on a new task. We compare with 1) a baseline (BaselineAgg) that learns a shared policy network to play with all training partners, using the average of the gradients with each partner, 2) a baseline (BaselineModular) that similarly uses a modular policy approach with separate modules for each partner, but does not explicitly represent the marginal best-response strategy, and 3) First-Order MAML (Nichol et al., 2018). For our method, we vary λ from 0.0 to 0.5: a higher value pushes the task module output to more closely match the marginal action distribution. We also test the use of a low dimensional z (the interface between the task and partner module). In general, we find that our method with regularization performs the best, and that it is unclear if using a low dimensional z is helpful. The partners in our experiments were either generated via self-play or were hand-designed to use varying conventions.

[Figure 6: Contextual bandit game: adapting to a single new partner. Panels: (a) Arms 2, (b) Arms 3, (c) Arms 4, (d) Arms User Study. In Figures 6a-c, we train and test on hand-designed partners.
“Arms m” refers to a task having m contexts with symmetries (exact task details in the Appendix). In Figure 6d, we train and test on partner policies derived from the data collected in our user study, and we use the Train task shown in Figure 5.]

[Figure 7: Wasserstein distance between the task module output and the true marginals, assuming uniform preference over optimal actions. Lower is better. We do not compare with BaselineModular since it does not have output that can be interpreted as marginals. The left three sets of bars are for the contextual bandit task, and the right set of bars is for the blocks task. We omit Hanabi since it is not easy to determine which actions are optimal. It is interesting that even without regularization (λ = 0.0), the distance to the ground truth is still lower than the baselines, suggesting the model is learning some level of task-specific representation just due to the architecture.]

Contextual Bandit. We study the collaborative contextual bandit task described in Section 3, using 4 contexts and 8 arms. We study variations of the task by altering the number of contexts with more than one good option (i.e. symmetries). We write “Arms m” to refer to a task having m contexts with symmetries – coordination should be easier for tasks having fewer contexts with symmetries.

In Figures 6a-c we show results on adaptation to new hand-designed partners (Figure 11 for self-play partners), varying the number of contexts with symmetries. We also plot the green oracle curve, by using our learning framework along with oracle best-response marginals (assuming uniform distribution over optimal actions across partners). To see if our method learns the correct marginals, we measure the Wasserstein distance between the learned marginals and the oracle marginals, in Figure 7. Using λ = 0.5 produces task module outputs that closely match the oracle marginals. To test zero-shot coordination with the same partners on new tasks, we tweak the task by altering contexts with only one good action, keeping the contexts with symmetries the same (similar to the change in Figure 2). We use hand-designed partners to make sure that partners use the same conventions at unchanged contexts. In Figure 10 we see that our method outperforms BaselineModular.

We include an additional plot in Figure 6d, where we instead use the Train task from the user study as shown in Figure 5. We use the data collected in our user study to create the partner policies, in lieu of the hand-designed partners. We observe similar trends – that the modular learning framework with marginal regularization performs the best.

Block Placing. Next, we study a collaborative block placing task which involves a 2x2 goal-grid, with a single red block and a single blue block on the grid. The goal is to reconstruct the goal-grid from a blank working-grid, with partner 1 (P1) only controlling a red block and partner 2 (P2) only controlling a blue block. The partners take turns moving their blocks on the working-grid, and can see each other's actions on the working-grid. However, only P1 can see the goal-grid configuration; P2 only receives information from the working-grid. Each game lasts 6 turns, with P1 taking the first turn. Specifically, on P1's turn, it can move the red block to one of the four cells, remove it from the grid, or pass the turn (similarly for P2 / blue block). Blocks cannot be placed on top of each other.
The partners share rewards, and earn 10 points for each correctly placed block, for a maximum of 20 points per game. P1 needs to both place its own red block correctly, and communicate the position of the blue block to P2. As such, success in this task requires a mixture of rule-dependent and convention-dependent skills.

[Figure 8: Block placing task: each row displays a new round of the task. On the left, we see the goal-grid and how it appears to each player. Since P2 cannot see the goal-grid, we show a fully grey grid. On the right side we see the working grid evolve over the course of 6 turns. P1 edits the red block on turns 1, 3, 5 and P2 edits the blue block on turns 2, 4, 6.]

In Figures 9a and 9b we plot the results on adaptation to new hand-designed and self-play partners, where our method outperforms the baselines for non-zero values of λ. To verify that our task module is learning a good task representation, we compute the distance between the task module output and the oracle marginal (Figure 7). We see that our approach with λ = 0.5 leads to the smallest gap from the oracle marginals. Finally, similar to the contextual bandit task, we test zero-shot coordination with the same partners on new tasks in Figure 10. We tweak the task by changing the colors of target blocks, and use hand-designed partners to ensure the developed conventions persist. Again, our method performs better on the new task/partner combination than BaselineModular. We provide more details of the experimental setup in the Appendix.

Hanabi. Finally, we experiment on a light version of the collaborative card game Hanabi, which has been studied in many related works in MARL. We focus on the two-player version of Hanabi, with 1 color and 5 ranks (for a maximum score of 5). Since it is not straightforward to create a variety of hand-designed partners, or to tweak the task while maintaining the same symmetries, we use self-play partners only and omit experiments on coordinating on a new task with the same partners. The results on adaptation to new self-play partners are shown in Figure 9c.

[Figure 9: Adapting to a single new partner for the block placing task and Hanabi. Panels: (a) Blocks: hand-designed partners. (b) Blocks: self-play partners. (c) Hanabi: self-play partners.]

[Figure 10: Zero-shot performance on new task/partner combinations. Higher is better. Orange refers to our method with λ = 0.5 and grey refers to BaselineModular. "Arms m" means the task has m contexts with symmetries. We create the new task by altering contexts where there is only one good action (as a result, we omit Arms 4 since it has no contexts with only one good action). We do not compare with BaselineAgg since it is non-modular and cannot be composed for new task/partner combinations.]

7 CONCLUSION

We study the problem of adapting to new settings in multi-agent interactions. To develop AI agents that quickly adapt to new partners and tasks, we introduced a framework that learns rule-dependent and convention-dependent representations. Our AI agents can adapt to conventions from new partners without re-learning the full complexities of a task, and can carry over existing conventions with the same partners on new tasks. We run a study of human-human interactions that suggests human conventions persist across tasks with similar symmetries.
Our experiments show improvements in adapting to new settings on a contextual bandit task, a block placing task, and Hanabi.

ACKNOWLEDGMENTS

This work was supported by NSF (#1941722, #2006388), and the DARPA HICON-LEARN project.
GbUX4VGLOgn
Review
7: Good paper, accept
This paper makes the observation that when performing cooperative tasks with partners, there are two components to learn: how to perform the task, and how to coordinate with the partner according to conventions. Therefore, it proposes to separate these two components via a modular architecture, which learns a task-specific action distribution that attempts to marginalize out all possible partner conventions, and a series of partner-specific action distributions. The agent's own policy is the product of these two distributions. When coordinating with a new partner, the partner-specific component is learned from scratch using a pre-trained task-specific component, and vice versa.

The paper goes after an ambitious and useful problem (rapid adaptation to coordinating with novel partners on new tasks), and proposes a novel technique for doing so. A weakness is that the paper does not use reasonable baselines, and effectively compares only to ablations of its own model. Why not compare to a meta-learning technique? Or compare to some of the existing SOTA methods for Hanabi?

In general the paper is well written, but it could be made significantly clearer by providing further details on how the partner action distributions g^p_i(a|s) are obtained. Given the explanation in the beginning of Section 4, I was initially under the impression that these represented the partner's policy distribution produced by its Q-values, or perhaps the partner's actual action frequencies obtained from observing trajectories. However, it seems that the model is learned entirely end-to-end, and so these distributions actually represent how the agent's own policy should be modified according to which partner it is playing with. Is this correct? If so, this explanation should be added to the paper to make it more clear how the technique can apply beyond simple domains like the contextual bandit, in which agents must choose the *same* actions as the partner.

The fact that the partner module must be re-initialized and learned from scratch for each new partner is a weakness of the method. Why not learn some type of partner embedding that would enable generalization to new partners at test time that use similar conventions to training partners?

The experiments section of the paper felt rushed and lacking in explanation compared to the first 6 pages. The clarity/impact could be enhanced by explaining the experiments in more detail. In particular, the block placing task is not explained (do agents place blocks separately? do they have to place a block together at the same time?). Also, the need for "hand-designed" partners is not explained, nor is what they are hand-designed to do.

Since the paper collects a human user study on conventions, why not test how well the trained models are able to coordinate with humans? This would significantly enhance the impact of the paper.

Other suggestions:
- Figure 2 caption does not include the explanation that agents must choose the same action to get a reward.
- A legend should be added to Figures 7, 8, and 9.
- Figure 7 is interesting in that even without the Wasserstein distance penalty (when lambda=0), the Wasserstein distance to the ground truth marginal best response is still low, suggesting the model is learning some level of task-specific representation just due to the architecture. This could be explained further in the text.

Edit: I have updated my score based on the new experiments added during the rebuttal process.
4: The reviewer is confident but not absolutely certain that the evaluation is correct
H1_EDpogx
ICLR.cc/2017/conference
2017
Near-Data Processing for Machine Learning
["Hyeokjun Choe", "Seil Lee", "Hyunha Nam", "Seongsik Park", "Seijoon Kim", "Eui-Young Chung", "Sungroh Yoon"]
In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of the CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient descent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML.
["processing", "ndp", "isp", "sgd", "machine learning", "memory", "data", "potential", "ssd"]
ABSTRACT

In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of the CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient descent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML.

1 INTRODUCTION

Recent successes in deep learning can be accredited to the availability of big data that has made the training of large deep neural networks possible. In the conventional memory hierarchy, the training data stored at the low level (e.g., hard disks) need to be moved upward all the way to the CPU registers. As larger and larger data are being used for training large-scale models such as deep networks (LeCun et al., 2015), the overhead incurred by the data movement in the hierarchy becomes more salient, critically affecting the overall computational efficiency and power consumption.

*To whom correspondence should be addressed.

The idea of near-data processing (NDP) (Balasubramonian et al., 2014) is to equip the memory or storage with intelligence (i.e., processors) and let it process the data stored therein firsthand. A successful NDP implementation would reduce the data transfers and power consumption, not to mention offloading the computational burden of CPUs. The types of NDP realizations include processing in memory (PIM) (Gokhale et al., 1995) and in-storage processing (ISP) (Acharya et al., 1998; Kim et al., 2016c; Lee et al., 2016; Choi & Kee, 2015). Despite the potential of NDP, it has not been considered significantly for commercial systems. For PIM, there has been a wide performance gap between the separate processes to manufacture logic and memory chips.
For ISP, commercial hard disk drives (HDDs), the mainstream storage devices for a long time, normally have limited processing capabilities due to tight selling prices.

Recently, we have seen a resurrection of NDP with renewed interest, which has been triggered by two major factors, one on the application side and the other on the technology side. First, computing- and data-intensive deep learning is rapidly becoming the method of choice for various machine learning tasks. To train deep neural networks, a large volume of data is typically needed to ensure performance. Although GPUs and multicore CPUs often provide an effective means for the massive computation required by deep learning, it remains inevitable to store big training data in the storage and then transfer them to the CPU/GPU level for computation. Second, NAND flash-based solid-state drives (SSDs) are becoming popular, gradually replacing HDDs in various computing sectors. To interface SSDs with the host seamlessly while replacing HDDs, SSDs require various software running inside, e.g., for address translation and garbage collection (Kim et al., 2002; Gupta et al., 2009). To suit such needs, SSDs are often equipped with multicore processors, which provide far more processing capabilities than those in HDDs. Usually, there exists plenty of idle time in the processors in SSDs that can be exploited for purposes other than SSD housekeeping (Kim et al., 2010; 2016b).

Motivated by these changes and opportunities, we propose a new SSD platform that allows us to simulate in-storage processing (ISP) of machine learning workloads and evaluate the potential of NDP for machine learning in ISP. Our platform named ISP-ML is a full-fledged system-level simulator of a realistic multi-channel SSD that can execute various machine learning algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on describing our implementation of a specific algorithm in this paper: the stochastic gradient descent (SGD) algorithm, which is the de facto standard for training differentiable learning machines including deep neural networks. Specifically, we implement three types of parallel SGD: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and elastic averaging SGD (EASGD) (Zhang et al., 2015). We compare the performance of these implementations of parallel SGD using a 10 times amplified version of MNIST (LeCun et al., 1998). Furthermore, to evaluate the effectiveness of ISP-based optimization by SGD, we compare the performance of ISP-based and the conventional in-host processing (IHP)-based optimization.

To the best of the authors' knowledge, this work is one of the first attempts to apply NDP to a multi-channel SSD for accelerating SGD-based optimization for training differentiable learning machines. Our specific contributions can be stated as follows:

- We created a full-fledged ISP-supporting SSD platform called ISP-ML, which required multi-year team efforts. ISP-ML is versatile and can simulate not only storage-related functionalities of a multi-channel SSD but also NDP-related functionalities in a realistic manner. ISP-ML can execute various machine learning algorithms using the data stored in the SSD while supporting the simulation of multi-channel NAND flash SSDs to exploit data-level parallelism.
- We thoroughly tested the effectiveness of our platform by implementing and comparing multiple versions of parallel SGD, which is widely used for training various machine learning algorithms including deep learning. We also devised a methodology that can carefully and fairly compare the performance of IHP-based and ISP-based optimization.

- We identified intriguing future research opportunities in terms of exploiting the parallelism provided by the multiple NAND channels inside SSDs. As in high-performance computing, there exist multiple "nodes" (i.e., NAND channel controllers) for sharing workloads, but the communication cost is negligible (due to negligible-latency on-chip communication), unlike in conventional parallel computing. Using our platform, we envision new designs of parallel optimization and training algorithms that can exploit this characteristic, producing enhanced results.

2 BACKGROUND AND RELATED WORK

2.1 MACHINE LEARNING AS AN OPTIMIZATION PROBLEM

Various types of machine learning algorithms exist (Murphy, 2012; Goodfellow et al., 2016), and their core concept can often be explained using the following equations:

F(D; θ) = L(D; θ) + r(θ)   (1)
θ_{t+1} = θ_t + Δθ(D)      (2)
Δθ(D) = -η ∇_θ F(D; θ_t)   (3)

where D and θ denote the input data and model parameters, respectively, and the loss function L(D; θ) reflects the difference between the optimal and current hypotheses. A regularizer to handle overfitting is denoted by r(θ), and the objective function F(D; θ) is the sum of the loss and regularizer terms. The main purpose of supervised machine learning can then be formulated as finding the optimal θ that minimizes F(D; θ). Gradient descent is a first-order iterative optimization algorithm that finds the minimum of F(D; θ) by updating θ on every iteration t in the direction of the negative gradient of F(D; θ), where η is the learning rate. SGD computes the gradient of the parameters and updates them using a single training sample per iteration. Minibatch (stochastic) gradient descent uses multiple (but far fewer than all) samples per iteration. As will be explained shortly, we employ minibatch SGD in our framework, setting the size of a minibatch to the number of training samples in a NAND flash page, which is named 'page-minibatch' (see Figure 2).
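To make Eqs. (1)-(3) and the page-minibatch convention concrete, the following is a minimal NumPy sketch of this update for logistic regression (the model used in Section 4). It is illustrative only: the constants and function names are our own placeholders, not parameters of ISP-ML.

import numpy as np

PAGE_SIZE_SAMPLES = 10   # training samples per NAND flash page ("page-minibatch")
ETA = 0.1                # learning rate (η in Eq. (3))
LAMBDA = 1e-4            # L2 weight for the regularizer r(θ) in Eq. (1)

def gradient(theta, X, y):
    """∇F(D; θ) for logistic regression with cross-entropy loss plus L2."""
    p = 1.0 / (1.0 + np.exp(-X @ theta))   # predicted probabilities
    return X.T @ (p - y) / len(y) + LAMBDA * theta

def page_minibatch_sgd(theta, pages):
    """One epoch of minibatch SGD; each iteration consumes one NAND page."""
    for X_page, y_page in pages:           # X_page holds PAGE_SIZE_SAMPLES rows
        theta = theta - ETA * gradient(theta, X_page, y_page)   # Eqs. (2)-(3)
    return theta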
2.2 PARALLEL AND DISTRIBUTED SGD

Zinkevich et al. (2010) proposed an algorithm that implements parallel SGD in a distributed computing setup. This algorithm often suffers from excessive latency caused by the need for synchronization of all slave nodes. To overcome this weakness, Recht et al. (2011) proposed the lock-free Hogwild! algorithm that can update parameters asynchronously. Hogwild! is normally implemented in a single machine with a multicore processor. Dean et al. (2012) proposed Downpour SGD for distributed computing systems by extending the Hogwild! algorithm. While they successfully implemented asynchronous SGD in a distributed computing system, it often fails to overcome communication bottlenecks and shows inefficient bandwidth usage, caused by substantial data movements between computing nodes. The recently proposed EASGD (Zhang et al., 2015) attempts to minimize communication overhead by reducing the frequency of parameter updates. Many EASGD-based approaches have reported its effectiveness in distributed environments.

2.3 FUNDAMENTALS OF SOLID-STATE DRIVES (SSDS)

SSDs have emerged as a type of next-generation storage device using NAND flash memory (Kim et al., 2010). As shown in the right image in Figure 1(a), a typical SSD consists of an SSD controller, a DRAM buffer, and a NAND flash array. The SSD controller is typically composed of an embedded processor, a cache controller, and channel controllers. The DRAM component, controlled by the cache controller, plays the role of a cache buffer when the NAND flash array is read or written. The NAND flash array contains multiple NAND chips that can be accessed simultaneously thanks to multi-channel configurations and per-channel controllers. Every channel controller is managed by the software called the flash translation layer (FTL), which executes wear-leveling and garbage collection to improve the performance and durability of the NAND flash array.

2.4 PREVIOUS WORK ON NEAR-DATA PROCESSING

Most of the previous work on ISP focused on popular but inherently simple algorithms, such as scan, join, and query operations (Kim et al., 2016c). Lee et al. (2016) proposed to run the merge operation (frequently used by the external sort operation in Hadoop) inside an SSD to reduce IO transfers and read/write operations, also extending the lifetime of the NAND flash inside the SSD. Choi & Kee (2015) implemented algorithms for linear regression, k-means, and string match in the flash memory controller (FMC) via reconfigurable stream processors. In addition, they implemented a MapReduce application inside the embedded processor and FMC of the SSD by using partitioning and pipelining methods that could improve performance and reduce power consumption. BlueDBM (Jun et al., 2015) is an ISP system architecture for distributed computing systems with a flash memory-based embedded field programmable gate array (FPGA). The authors implemented nearest-neighbor search, graph traversal, and string search algorithms. No prior work ever implemented and evaluated SSD-based optimization of machine learning algorithms using SGD.

[Figure 1: (a) Block diagram of a typical computing system equipped with an SSD and a magnified view of a usual SSD depicting its internal components and their connections. (b) Schematic of the proposed ISP-ML framework, which is implemented in SystemC using Synopsys Platform Architect (http://www.synopsys.com).]

3 PROPOSED METHODOLOGY

Figure 1(a) shows the block diagram of a typical computing system, which is assumed to have an SSD as its storage device. Also shown in the figure is a magnified view of the SSD block diagram that shows the major components of an SSD and their interconnections. Starting from the baseline SSD depicted above, we can implement ISP functionalities by modifying the components marked with black boxes (i.e., ISP HW and ISP SW in the figure).
Figure 1(b) shows the detailed schematic of our proposed ISP-ML platform that corresponds to the SSD block (with ISP components) shown in Figure 1(a).

In this section, we provide more details of our ISP-ML framework. In addition, we propose a performance comparison methodology that can compare the performance of ISP and the conventional IHP in a fair manner. As a specific example of the ML algorithms that can be implemented in ISP-ML, we utilize parallel SGD.

3.1 ISP-ML: ISP PLATFORM FOR MACHINE LEARNING ON SSDS

Our ISP-ML is a system-level simulator implemented in SystemC on the Synopsys Platform Architect environment (http://www.synopsys.com). ISP-ML can simulate the hardware and software ISP components marked in Figure 1(b) simultaneously. This integrative functionality is crucial for design space exploration in SSD development. Moreover, ISP-ML allows us to execute various machine learning algorithms described in high-level languages (C or C++) directly on ISP-ML with only minor modifications.

At the conception of this research, we could not find any publicly available SSD simulator that could be modified for implementing ISP functionalities. This motivated us to implement a new simulator. There exist multiple ways of realizing the idea of ISP in an SSD. The first option would be to use the embedded core inside the SSD controller (Figure 1(a)). This option does not require designing new hardware logic and is also flexible, since the ISP capability is implemented by software. However, this option is not ideal for exploiting hardware acceleration and parallelization. The second option would be to design dedicated hardware logic (such as the boxes with black marks in Figure 1(a) and the entirety of Figure 1(b)) and integrate it into the SSD controller. Although significantly more effort is needed for this option compared with the first, we chose the second option due to the long-term advantages provided by hardware acceleration and power reduction.

Specifically, we implemented two types of ISP hardware components, in addition to the software components. First, we let each channel controller not only manage read/write operations to/from its NAND flash channel (as in usual SSDs) but also perform primitive operations on the data stored in its NAND channel. The type of primitive operation performed depends on the machine learning algorithm used (the next subsection explains more details of such operations for SGD). Additionally, each channel controller in ISP-ML (slave) communicates with the cache controller (master) in a master-slave architecture. Second, we designed the cache controller so that it can collect the outcomes from each of the channel controllers, in addition to its inherent functionality as a cache (DRAM) manager inside the SSD controller. This master-slave architecture can be interpreted as a tiny-scale version of the master-slave architecture commonly used in distributed systems. Just as with the channel controllers, the exact functionality of the cache controller can be optimized depending on the specific algorithm used. Both the channel controllers and the cache controller have internal memory, but the memory size in the latter is far greater than that in the former.
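As a software mental model of this master-slave arrangement, the sketch below mimics the division of labor between the cache controller and the channel controllers; the class and method names are our own illustrative inventions, not the actual SystemC module interfaces of ISP-ML.

class ChannelController:
    """Slave: reads pages from its own NAND channel and runs a primitive op."""
    def __init__(self, pages):
        self.pages = pages                        # data resident in this channel

    def process_page(self, page_index, params, primitive_op):
        page = self.pages[page_index]             # NAND page read into the buffer
        return primitive_op(params, page)         # e.g., a gradient computation


class CacheController:
    """Master: owns the shared parameters and aggregates the slaves' results."""
    def __init__(self, params, channels):
        self.params = params
        self.channels = channels

    def step(self, page_index, primitive_op, aggregate):
        # In hardware the slaves work concurrently; a loop stands in here.
        results = [ch.process_page(page_index, self.params, primitive_op)
                   for ch in self.channels]
        self.params = aggregate(self.params, results)
        return self.params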
Specific parameters and considerations used in our implementation can be found in Section 4.1. There are a few points worth mentioning. Unlike existing conventional SSD simulators, the baseline SSD implemented in ISP-ML can store data in the NAND flash memory inside. In order to support reasonable simulation speed, we modeled ISP-ML at the cycle-accurate transaction level while minimizing any negative impact on accuracy. We omit other minor details of the hardware logic implementations, as they are beyond the scope of this paper.

3.2 PARALLEL SGD IMPLEMENTATION ON ISP-ML

Using our ISP-ML platform, we implemented the three types of parallel SGD algorithms outlined in Figure 2: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and EASGD (Zhang et al., 2015). For brevity, we focus on describing the implementation details of these algorithms in ISP-ML and omit the purely algorithmic details of each algorithm; we refer the interested reader to the corresponding references. Note that the size of a minibatch for the minibatch SGD in our framework is set to the number of training samples in a NAND flash page (referred to as 'page-minibatch' in Figure 2).

[Figure 2: Pseudo-code of the three SGD algorithms implemented in ISP-ML: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and EASGD (Zhang et al., 2015). The shaded line indicates the computation occurring in the cache controller (master); the other lines are executed in the channel controllers (slaves). Note that the term 'page-minibatch' refers to the minibatch SGD used in our framework, where the size of a minibatch is set to the number of training samples in a NAND flash page.]

For implementing synchronous SGD, we let each of the n channel controllers synchronously compute the gradient. First, each channel controller reads page-sized data from the NAND flash memory and stores the data in the channel controller's buffer. Second, the channel controller pulls the cache controller's parameters (θ_cache) and stores them in the buffer. Using the data and parameters stored in the buffer, each channel controller calculates the gradient in parallel. After transferring the gradient to the cache controller, the channel controllers wait for a signal from the cache controller. The cache controller aggregates and updates the parameters and then signals the channel controllers to pull and replicate the parameters.

We implemented Downpour SGD in a similar way to synchronous SGD; the major difference is that each channel controller immediately begins the next iteration after transferring the gradient to the cache controller. The cache controller updates the parameters with the gradients from the channel controllers sequentially.

For EASGD, we let each of the channel controllers have its own SGD parameters, unlike synchronous SGD and Downpour SGD. Each channel controller pulls the parameters from the cache controller after computing the gradient and updating its own parameters. Each channel controller calculates the difference between its own parameters and the cache controller's parameters and then pushes the difference to the cache controller.
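The sketch below restates the pseudo-code of Figure 2 in Python form, one step of each variant. A sequential loop stands in for the n channel controllers that run concurrently in hardware, the grad function is assumed to return the gradient on one page, and Downpour is shown with communication period τ = 1 for brevity.

def synchronous_sgd_step(theta_cache, channel_pages, grad, eta):
    # Every channel pulls θ_cache, computes on its page, pushes, then waits
    # at the barrier; the master averages the accumulated updates.
    deltas = []
    for X, y in channel_pages:                    # one page per channel controller
        theta_i = theta_cache.copy()              # pull θ_cache
        theta_i -= eta * grad(theta_i, X, y)      # page-minibatch update
        deltas.append(theta_i - theta_cache)      # push local progress
    return theta_cache + sum(deltas) / len(channel_pages)

def downpour_sgd_step(theta_cache, channel_pages, grad, eta):
    # Asynchronous: the master applies each channel's update as it arrives,
    # with no barrier between channels (arrival order modeled sequentially).
    for X, y in channel_pages:
        theta_i = theta_cache.copy()
        theta_cache = theta_cache - eta * grad(theta_i, X, y)
    return theta_cache

def easgd_step(theta_cache, thetas, channel_pages, grad, eta, alpha):
    # Each channel keeps private parameters θ_i and exchanges only an elastic
    # difference α(θ_i - θ_cache) with the master.
    for i, (X, y) in enumerate(channel_pages):
        thetas[i] -= eta * grad(thetas[i], X, y)
        diff = alpha * (thetas[i] - theta_cache)
        thetas[i] -= diff                         # pull the worker toward the center
        theta_cache = theta_cache + diff          # move the center toward the worker
    return theta_cache, thetas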
Of note is that, besides its widespread use, SGD has some appealing characteristics that facilitate hardware implementation. We can implement parallel SGD on top of the master-slave architecture realized by the cache controller and the channel controllers. We can also take advantage of effective techniques developed in the distributed and parallel computation domain. Importantly, each SGD iteration is so simple that it can be implemented without incurring excessive hardware overhead.

3.3 METHODOLOGY FOR IHP-ISP PERFORMANCE COMPARISON

To evaluate the effectiveness of ISP, it is crucial to accurately and fairly compare the performances of ISP and the conventional IHP. However, performing this type of comparison is not trivial (see Section 4.3 for additional discussion). Furthermore, accurate modeling of commercial SSDs equipped with ISP-ML is impossible due to the lack of information about commercial SSDs (e.g., there is no public information on the FTL and internal architecture of any commercial SSD). Therefore, we propose a practical methodology for accurate comparison of IHP and ISP performances, as depicted in Figure 3. Note that this comparison methodology is applicable not only to the parallel SGD implementations explained above but also to other ML algorithms that can be executed in ISP-ML.

In the proposed comparison methodology, we focus on the data IO latency of the storage (denoted as T_IO), since it is the most critical factor among those that affect the execution time of IHP. The total processing time of IHP (IHP_time or T_total) can then be divided into the data IO time and the non-data IO time (T_nonIO) as follows:

IHP_time = T_total = T_nonIO + T_IO.   (4)

To calculate the expected IHP simulation time adjusted to ISP-ML, the data IO time of IHP is replaced by the data IO time of the baseline SSD in ISP-ML (T_IOsim). By using Eq. (4), the expected IHP simulation time can then be represented by

Expected IHP simulation time = T_nonIO + T_IOsim = T_total - T_IO + T_IOsim.   (5)

[Figure 3: (a) Overview of our methodology to compare the performance of in-host processing (IHP) and in-storage processing (ISP). (b) Details of our IHP-ISP comparison flow.]

The overall flow of the proposed comparison methodology is depicted in Figure 3(b). First, the total processing time (T_total) and the data IO time of storage (T_IO) are measured in IHP, extracting the IO trace of the storage during an application execution. The simulation IO time (T_IOsim) is then measured using the IO trace (extracted from IHP) on the baseline SSD of ISP-ML. Finally, the expected IHP simulation time is calculated by plugging the total processing time (T_total), the data IO time of storage (T_IO), and the simulation IO time (T_IOsim) into Eq. (5).
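As a sanity check on Eqs. (4)-(5), the helper below performs the substitution; the timing numbers in the example are made up for illustration.

def expected_ihp_simulation_time(t_total, t_io, t_io_sim):
    """Eq. (5): swap the measured storage IO time of IHP for the simulated
    IO time of the baseline SSD in ISP-ML."""
    t_non_io = t_total - t_io        # Eq. (4): T_nonIO = T_total - T_IO
    return t_non_io + t_io_sim

# Example: a 120 s total IHP run with 45 s of real storage IO, and 38 s of
# simulated IO for the same trace on the baseline SSD -> 113 s expected.
print(expected_ihp_simulation_time(120.0, 45.0, 38.0))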
With the proposed method and ISP-ML, which is applicable to a variety of IHP environments regardless of the type of storage used, it is possible to quickly and easily compare the performances of various ISP implementations and IHP in a simulation environment.

4 EXPERIMENTAL RESULTS

4.1 SETUP

All the experiments presented in this section were run on a machine equipped with an 8-core Intel(R) Core i7-3770K CPU (3.50GHz) with DDR3 32GB RAM, a Samsung SSD 840 Pro, and Ubuntu 14.04 LTS (kernel version: 3.19.0-26-generic). We used an ARM 926EJ-S (400MHz) as the embedded processor inside ISP-ML and DFTL (Gupta et al., 2009) as the FTL of ISP-ML. The simulation model we used was derived from a commercial product (Micron NAND MT29F8G08ABACA) and had the following specifications: page size = 8KB, t_prog = 300 μs, t_read = 75 μs, and t_block_erase = 5 ms. (These are conservative settings compared with those of the original commercial product; using the specifications of a commercial product will thus improve the performance of ISP-ML.) Each channel controller had 24KB of memory [8KB (page size) for data and 16KB for ISP] and a floating point unit (FPU) with a performance of 0.5 instructions/cycle (with pipelining). The cache controller had memory of (n+1) × 8KB (page size), where n is the number of channels (n = 4, 8, 16). Depending on the algorithm running in ISP-ML, we can adjust these parameters.
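As a rough, assumption-laden illustration of why the channel count matters (see Section 4.4), the sketch below turns the t_read figure above into an ideal page-read throughput; it deliberately ignores controller compute time and bus contention, so it is an upper bound rather than a prediction of ISP-ML results.

T_READ_US = 75.0        # page read latency from the specs above (t_read = 75 μs)

def ideal_pages_per_second(n_channels):
    # With one outstanding page read per channel, reads overlap across
    # channels, so ideal throughput scales linearly with the channel count.
    return n_channels * 1_000_000 / T_READ_US

for n in (4, 8, 16):
    print(f"{n} channels: {ideal_pages_per_second(n):,.0f} pages/s")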
Note that the main purpose of our experiments in this paper was to verify the functionality of our ISP-ML framework and to evaluate the effectiveness of ISP over the conventional IHP using SGD, even though our framework is certainly not limited to SGD. To this end, we selected logistic regression, a fundamental ML algorithm that can directly show the advantage of ISP-based optimization over IHP-based optimization without unnecessary complications. We thus implemented the logistic regression algorithm as a single-layer perceptron (with cross-entropy loss) in SystemC and uploaded it to ISP-ML. As stated in Section 5.3, our future work includes the implementation and testing of more complicated models (such as deep neural networks) by reflecting the improvement opportunities revealed by the experiments presented in this paper.

As test data, we utilized samples from the MNIST database (LeCun et al., 1998). To amplify the number of training samples (for showing the scalability of our approach), we used elastic distortion (Simard et al., 2003), producing 10 times more data than the original MNIST (approximately 600,000 training and 10,000 test samples were used in total). To focus on the performance evaluation of running ISP operations, we preloaded our NAND flash simulation model with the simulation data (the same condition was used for the alternatives for fairness). Based on the size of a training sample in this dataset and the size of a NAND page (8KB), we set the size of each minibatch to 10.

[Figure 4: Test accuracy of three ISP-based SGD algorithms versus wall-clock time with a varying number of NAND flash channels: (a) 4 channels, (b) 8 channels, and (c) 16 channels.]

[Figure 5: Test accuracy of ISP-based EASGD in the 4, 8, and 16 channel configurations and IHP-based minibatch SGD using diverse memory sizes.]

4.2 PERFORMANCE COMPARISON: ISP-BASED OPTIMIZATION

As previously explained, to identify which SGD algorithm would be best suited for use in ISP, we implemented and analyzed three types of SGD algorithms: synchronous SGD, Downpour SGD, and EASGD. For EASGD, we set the moving rate (α) and the communication period (τ) to 0.001 and 1, respectively. For a fair comparison, we chose different learning rates for different algorithms that gave the best performance for each algorithm. Figure 4 shows the test accuracy of the three algorithms with varying numbers of channels (4, 8, and 16) with respect to wall-clock time.

As shown in Figure 4, EASGD gave the best convergence speed in all of the cases tested. EASGD outperformed synchronous and Downpour SGD by factors of 5.24 and 1.96 on average, respectively. Synchronous SGD showed a slower convergence speed than Downpour SGD because it could not start learning on the next minibatch until all the channel controllers had reported their results to the cache controller; moreover, one delayed worker could halt the entire process. This result suggests that EASGD is adequate for all the channel configurations tested, in that ISP can benefit from ultra-fast on-chip level communication and employ application-specific hardware that can eliminate interruptions from other processors.

4.3 PERFORMANCE COMPARISON: IHP VERSUS ISP

In large-scale machine learning, the computing systems used may suffer from memory shortage, which incurs significant data swapping overhead. In this regard, ISP can provide an effective solution that can potentially reduce the data transfer penalty by processing core operations at the storage level. In this context, we carried out additional experiments to compare the performance of IHP-based and ISP-based EASGD. We tested the effectiveness of ISP in a memory shortage situation with 5 different configurations of IHP memory: 2GB, 4GB, 8GB, 16GB, and 32GB. We assumed that the host had already loaded all of the data into the main memory for IHP.
This assumption is realistic because state-of-the-art machine learning techniques often employ a prefetch strategy to hide the initial data transfer latency.

As depicted in Figure 5, ISP-based EASGD with 16 channels gave the best performance in our experiments. The convergence speed of the IHP-based optimization slowed down in accordance with the reduced memory size. The results with 16GB and 32GB of memory were similar because 16GB of memory was enough to load and allocate most of the resources required by the process. As a result, ISP was more efficient when memory was insufficient, as would often be the case with large-scale datasets in practice.

4.4 CHANNEL PARALLELISM

To closely examine the effect of exploiting data-level parallelism on performance, we compared the accuracy of the three SGD algorithms while varying the number of channels (4, 8, and 16), as shown in Figure 6. All three algorithms achieved convergence speed-ups by using more channels; synchronous SGD achieved a 1.48× speed-up when the number of channels increased from 8 to 16. From Figure 6(d), we can also note that the convergence speed-up tends to be proportional to the number of channels. These results suggest that the communication overhead in ISP is negligible, and that ISP does not suffer from the communication bottleneck that commonly occurs in distributed computing systems.

[Figure 6: Test accuracy of different ISP-based SGD algorithms for a varied number of channels: (a) synchronous SGD, (b) Downpour SGD, and (c) EASGD. (d) Training speed-up for the three SGD algorithms for various numbers of channels.]

4.5 EFFECTS OF COMMUNICATION PERIOD IN ASYNCHRONOUS SGD

Finally, we investigated how changes in the communication period (i.e., how often data exchange occurs during distributed optimization) affect SGD performance in the ISP environment. Figure 7 shows the test accuracy of the Downpour SGD and EASGD algorithms versus wall-clock time when we varied their communication periods. As described in Zhang et al. (2015), Downpour SGD normally achieved high performance for a low communication period [τ = 1, 4] and became unstable for a high communication period [τ = 16, 64] in ISP. Interestingly, in contrast to the conventional distributed computing setting, the performance of EASGD decreased as the communication period increased in the ISP setting. This is because the on-chip communication overhead in ISP is significantly lower than that in a distributed computing system.

[Figure 7: Test accuracy of ISP-based Downpour SGD and EASGD algorithms versus wall-clock time for different communication periods.]
As a result, there would be no need to extend the communication period to reduce communication overhead in the ISP environment.

5 DISCUSSION

5.1 PARALLELISM IN ISP

Given the advances in the underlying hardware and semiconductor technology, ISP can provide various advantages for the data processing involved in machine learning. For example, our ISP-ML could minimize (practically eliminate) the communication overheads between parallel nodes by leveraging ultra-fast on-chip communication inside an SSD. Minimizing communication overheads can improve various key aspects of data-processing systems, such as energy efficiency, data management, security, and reliability. By exploiting this advantage of fast on-chip communication in ISP, we envision that we will be able to devise new kinds of parallel algorithms for optimization and machine learning running on ISP-based SSDs.

Our experimental results also revealed that a high degree of parallelism can be achieved by increasing the number of channels inside an SSD. Some of the currently available commercial SSDs have as many as 16 channels. Given that commercial ISP-supporting SSDs would (at least initially) be targeted at high-end SSD markets with many NAND flash channels, our approach is expected to add a valuable functionality to such SSDs. Unless carefully optimized, a conventional distributed system will see diminishing returns as the number of nodes increases, due to the increased communication overhead and other factors. Exploiting a hierarchy of parallelism (i.e., parallel computing nodes, each of which has ISP-based SSDs with parallelism inside) may provide an effective acceleration scheme, although a fair amount of additional research is needed before we can realize this idea.

5.2 ISP-IHP COMPARISON METHODOLOGY

To fairly compare the performances of ISP and IHP, it would be ideal to implement ISP-ML in a real semiconductor chip, or to simulate IHP in the ISP-ML framework. Either option, however, is possible but not practical (at least in academia), because of the high cost of manufacturing a chip, and the prohibitively long simulation time for simulating IHP in the Synopsys Platform Architect environment (we would have to implement many components of a modern computer system in order to simulate IHP). Another option would be to implement both ISP and IHP using FPGAs, but that would take another round of significant development effort.

To overcome these challenges (while still ensuring a fair comparison between ISP and IHP), we have proposed the comparison methodology described in Section 3.3. In terms of measuring absolute running time, our methodology may not be ideal. However, in terms of highlighting relative performance between alternatives, our method should provide a satisfactory solution.

Our comparison methodology extracts the IO trace from the storage while executing an application in the host, which is used for measuring the simulation IO time on the baseline SSD in ISP-ML. In this procedure, we assume that the non-IO time of IHP is consistent regardless of the kind of storage the host has. The validity of this assumption is warranted by the fact that the amount of non-IO time changed by the storage is usually negligible compared with the total execution time or IO time.

5.3 OPPORTUNITIES FOR FUTURE RESEARCH

In this paper we focused on the implementation and testing of ISP-based SGD as a proof of concept. The simplicity and popularity of (parallel) SGD underlie our choice.
By design, it is possible to run other algorithms in our ISP-ML framework immediately; recall that our framework includes a general-purpose ARM processor that can run executables compiled from C/C++ code. However, it would be meaningless just to have an ISP-based implementation if its performance were unsatisfactory. To unleash the full power of ISP, we need additional ISP-specific optimization efforts, as is typically the case with hardware design.

With this in mind, we have started implementing deep neural networks (with realistic numbers of layers and hyperparameters) using our ISP-ML framework. In particular, we are carefully devising a way of balancing the memory usage among the DRAM buffer, the cache controller, and the channel controllers inside ISP-ML. It would be reasonable to see an SSD with a DRAM cache of a few gigabytes, whereas it is unrealistic to design a channel controller with that much memory. Given that a large amount of memory is needed only to store the parameters of such deep models, and that IHP and ISP have different advantages and disadvantages, it would be intriguing to investigate how IHP and ISP can cooperate to enhance the overall performance. For instance, we can let ISP-based SSDs perform low-level data-dependent tasks while assigning high-level tasks to the host, expanding the current roles of the cache controller and the channel controllers inside ISP-ML to the whole system level.

Our future work also includes the following. First, we will be able to implement adaptive optimization algorithms such as Adagrad (Duchi et al., 2011) and Adadelta (Zeiler, 2012). Second, precomputing meta-data during data writes (instead of data reads) could provide another direction of research that can bring even more speedup. Third, we will be able to implement data shuffling functionality in order to maximize the effect of data-level parallelism. Currently, ISP-ML arbitrarily splits the input data across its multi-channel NAND flash array. Fourth, we may investigate the effect of NAND flash design on performance, such as the NAND flash page size. Typically, the size of a NAND flash page significantly affects the performance of SSDs, given that the page size (e.g., 8KB) is the basic unit of NAND operation (read and write). In cases where the size of a single example exceeds the page size, frequent data fragmentation is inevitable, eventually affecting the overall performance. The effectiveness of using multiple page sizes was already reported for conventional SSDs (Kim et al., 2016a), and we may borrow this idea to further optimize ISP-ML.

ACKNOWLEDGMENTS

The authors would like to thank Byunghan Lee at the Data Science Laboratory, Seoul National University, for proofreading the manuscript. This work was supported in part by BK21 Plus (Electrical and Computer Engineering, Seoul National University) in 2016, in part by a grant from SK Hynix, and in part by a grant from Samsung Electronics.
B1dxnA-El
Near-Data Processing for Machine Learning
5: Marginally below acceptance threshold
While the idea of moving the processing for machine learning into silicon contained within the (SSD) data storage devices is intriguing and offers the potential for low-power efficient computation, it is a rather specialized topic, so I don't feel it will be of especially wide interest to the ICLR audience. The paper describes simulation results, rather than actual hardware implementation, and describes implementations of existing algorithms. The comparisons of algorithms' train/test performance does not seem relevant (since there is no novelty in the algorithms) and the use of a single layer perceptron on MNIST calls into question the practicality of the system, since this is a tiny neural network by today's standards. I did not understand from the paper how it was thought that this could scale to contemporary scaled networks, in terms of numbers of parameters for both storage and bandwidth. I am not an expert in this area, so have not evaluated in depth.
2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper
<|im_start|>system You are a helpful assistant that reviews papers and provides feedback on their quality.<|im_end|> <|im_start|>user ### Paper Title Near-Data Processing for Machine Learning ### Paper Abstract In computer architecture, near-data processing (NDP) refers to augmenting the memory or the storage with processing power so that it can process the data stored therein. By offloading the computational burden of CPU and saving the need for transferring raw data in its entirety, NDP exhibits a great potential for acceleration and power reduction. Despite this potential, specific research activities on NDP have witnessed only limited success until recently, often owing to performance mismatches between logic and memory process technologies that put a limit on the processing capability of memory. Recently, there have been two major changes in the game, igniting the resurgence of NDP with renewed interest. The first is the success of machine learning (ML), which often demands a great deal of computation for training, requiring frequent transfers of big data. The second is the advent of NAND flash-based solid-state drives (SSDs) containing multicore processors that can accommodate extra computation for data processing. Sparked by these application needs and technological support, we evaluate the potential of NDP for ML using a new SSD platform that allows us to simulate in-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) is a full-fledged simulator of a realistic multi-channel SSD that can execute various ML algorithms using the data stored in the SSD. For thorough performance analysis and in-depth comparison with alternatives, we focus on a specific algorithm: stochastic gradient decent (SGD), which is the de facto standard for training differentiable learning machines including deep neural networks. We implement and compare three variants of SGD (synchronous, Downpour, and elastic averaging) using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. In addition, we compare the performance of ISP and that of conventional in-host processing, revealing the advantages of ISP. Based on the advantages and limitations identified through our experiments, we further discuss directions for future research on ISP for accelerating ML. ### Paper Keywords ["processing", "ndp", "isp", "sgd", "machine learning", "memory", "data", "potential", "ssd"] ### Paper Content ABSTRACTIn computer architecture, near-data processing (NDP) refers to augmenting thememory or the storage with processing power so that it can process the data storedtherein. By offloading the computational burden of CPU and saving the need fortransferring raw data in its entirety, NDP exhibits a great potential for accelera-tion and power reduction. Despite this potential, specific research activities onNDP have witnessed only limited success until recently, often owing to perfor-mance mismatches between logic and memory process technologies that put alimit on the processing capability of memory. Recently, there have been two ma-jor changes in the game, igniting the resurgence of NDP with renewed interest.The first is the success of machine learning (ML), which often demands a greatdeal of computation for training, requiring frequent transfers of big data. Thesecond is the advent of NAND flash-based solid-state drives (SSDs) containingmulticore processors that can accommodate extra computation for data process-ing. 
Sparked by these application needs and technological support, we evaluatethe potential of NDP for ML using a new SSD platform that allows us to simulatein-storage processing (ISP) of ML workloads. Our platform (named ISP-ML) isa full-fledged simulator of a realistic multi-channel SSD that can execute variousML algorithms using the data stored in the SSD. For thorough performance anal-ysis and in-depth comparison with alternatives, we focus on a specific algorithm:stochastic gradient decent (SGD), which is the de facto standard for training dif-ferentiable learning machines including deep neural networks. We implement andcompare three variants of SGD (synchronous, Downpour, and elastic averaging)using ISP-ML, exploiting the multiple NAND channels for parallelizing SGD. Inaddition, we compare the performance of ISP and that of conventional in-hostprocessing, revealing the advantages of ISP. Based on the advantages and limita-tions identified through our experiments, we further discuss directions for futureresearch on ISP for accelerating ML.1 I NTRODUCTIONRecent successes in deep learning can be accredited to the availability of big data that has madethe training of large deep neural networks possible. In the conventional memory hierarchy, thetraining data stored at the low level (e.g., hard disks) need to be moved upward all the way to theCPU registers. As larger and larger data are being used for training large-scale models such as deepnetworks (LeCun et al., 2015), the overhead incurred by the data movement in the hierarchy becomesmore salient, critically affecting the overall computational efficiency and power consumption.To whom correspondence should be addressed.1Under review as a conference paper at ICLR 2017The idea of near-data processing (NDP) (Balasubramonian et al., 2014) is to equip the memoryor storage with intelligence (i.e., processors) and let it process the data stored therein firsthand.A successful NDP implementation would reduce the data transfers and power consumption, notto mention offloading the computational burden of CPUs. The types of NDP realizations includeprocessing in memory (PIM) (Gokhale et al., 1995) and in-storage processing (ISP) (Acharya et al.,1998; Kim et al., 2016c; Lee et al., 2016; Choi & Kee, 2015). Despite the potential of NDP, it has notbeen considered significantly for commercial systems. For PIM, there has been a wide performancegap between the separate processes to manufacture logic and memory chips. For ISP, commercialhard disk drives (HDDs), the mainstream storage devices for a long time, normally have limitedprocessing capabilities due to tight selling prices.Recently, we have seen a resurrection of NDP with renewed interest, which has been triggered bytwo major factors, one in the application side and the other in the technology side: First, computing-and data-intensive deep learning is rapidly becoming the method of choice for various machinelearning tasks. To train deep neural networks, a large volume of data is typically needed to ensureperformance. Although GPUs and multicore CPUs often provide an effective means for massivecomputation required by deep learning, it remains inevitable to store big training data in the storageand then transfer them to the CPU/GPU level for computation. 
Second, NAND flash-based solid-state drives (SSDs) are becoming popular, gradually replacing HDDs in various computing sectors.To interface SSDs with the host seamlessly replacing HDDs, SSDs require various software runninginside, e.g., for address translation and garbage collection (Kim et al., 2002; Gupta et al., 2009). Tosuit such needs, SSDs are often equipped with multicore processors, which provide far more pro-cessing capabilities than those in HDDs. Usually, there exists a plenty of idle time in the processorsin SSDs that can be exploited for other purposes than SSD housekeeping (Kim et al., 2010; 2016b).Motivated by these changes and opportunities, we propose a new SSD platform that allows us tosimulate in-storage processing (ISP) of machine learning workloads and evaluate the potential ofNDP for machine learning in ISP. Our platform named ISP-ML is a full-fledged system-level simu-lator of a realistic multi-channel SSD that can execute various machine learning algorithms using thedata stored in the SSD. For thorough performance analysis and in-depth comparison with alterna-tives, we focus on describing our implementation of a specific algorithm in this paper: the stochasticgradient decent (SGD) algorithm, which is the de facto standard for training differentiable learningmachines including deep neural networks. Specifically, we implement three types of parallel SGD:synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and elastic aver-aging SGD (EASGD) (Zhang et al., 2015). We compare the performance of these implementationsof parallel SGD using a 10 times amplified version of MNIST (LeCun et al., 1998). Furthermore,to evaluate the effectiveness of ISP-based optimization by SGD, we compare the performance ofISP-based and the conventional in-host processing (IHP)-based optimization.To the best of the authors’ knowledge, this work is one of the first attempts to apply NDP to a multi-channel SSD for accelerating SGD-based optimization for training differentiable learning machines.Our specific contributions can be stated as follows:We created a full-fledged ISP-supporting SSD platform called ISP-ML, which required multi-year team efforts. ISP-ML is versatile and can simulate not only storage-related functionalitiesof a multi-channel SSD but also NDP-related functionalities in realistic manner. ISP-ML canexecute various machine learning algorithms using the data stored in the SSD while supportingthe simulation of multi-channel NAND flash SSDs to exploit data-level parallelism.We thoroughly tested the effectiveness of our platform by implementing and comparing multipleversions of parallel SGD, which is widely used for training various machine learning algorithmsincluding deep learning. We also devised a methodology that can carefully and fairly compare theperformance of IHP-based and ISP-based optimization.We identified intriguing future research opportunities in terms of exploiting the parallelism pro-vided by the multiple NAND channels inside SSDs. As in high-performance computing, thereexist multiple “nodes” (i.e., NAND channel controllers) for sharing workloads, but the commu-nication cost is negligible (due to negligible-latency on-chip communication) unlike the conven-tional parallel computing. 
Using our platform, we envision new designs of parallel optimizationand training algorithms that can exploit this characteristic, producing enhanced results.2Under review as a conference paper at ICLR 20172 B ACKGROUND AND RELATED WORK2.1 M ACHINE LEARNING AS AN OPTIMIZATION PROBLEMVarious types of machine learning algorithms exist (Murphy, 2012; Goodfellow et al., 2016), andtheir core concept can often be explained using the following equations:F(D;) =L(D;) +r() (1)t+1=t+ (D) (2)(D) =rF(D;) (3)whereDanddenote the input data and model parameters, respectively, and a loss function L(D;)reflects the difference between the optimal and current hypotheses. A regularizer to handle over-fitting is denoted by r(), and the objective function F(D;)is the sum of the loss and regularizerterms. The main purpose of supervised machine learning can then be formulated as finding optimalthat minimizes F(D;). Gradient descent is a first-order iterative optimization algorithm to findthe minimum value of F(D;)by updating on every iteration tto the direction of negative gra-dient ofF(D;), whereis the learning rate. SGD computes the gradient of the parameters andupdates them using a single training sample per iteration. Minibatch (stochastic) gradient decentuses multiple (but far less than the whole) samples per iteration. As will be explained shortly, weemploy minibatch SGD in our framework, setting the size of a minibatch to the number of trainingsamples in a NAND flash page, which is named ‘page-minibatch’ (see Figure 2).2.2 P ARALLEL AND DISTRIBUTED SGDZinkevich et al. (2010) proposed an algorithm that implements parallel SGD in a distributed com-puting setup. This algorithm often suffers from excessive latency caused by the need for synchro-nization of all slave nodes. To overcome this weakness, Recht et al. (2011) proposed the lock-freeHogwild! algorithm that can update parameters asynchronously. Hogwild! is normally implementedin a single machine with a multicore processor. Dean et al. (2012) proposed the Downpour SGD fora distributed computing systems by extending the Hodwild! algorithm. While they successfully im-plemented asynchronous SGD in a distributed computing system, it often fails to overcome commu-nication bottlenecks and shows inefficient bandwidth usage, caused by substantial data movementsbetween computing nodes. Recently proposed EASGD (Zhang et al., 2015) attempted to minimizecommunication overhead by reducing the frequency of parameter updates. Many EASGD-basedapproaches reported its effectiveness in distributed environments.2.3 F UNDAMENTALS OF SOLID -STATE DRIVES (SSD S)SSDs have emerged as a type of next-generation storage device using NAND flash memory (Kimet al., 2010). As shown in the right image in Figure 1(a), a typical SSD consists of an SSD con-troller, a DRAM buffer, and a NAND flash array. The SSD controller is typically composed ofan embedded processor, a cache controller, and channel controllers. The DRAM component, con-trolled by the cache controller, plays the role of a cache buffer when the NAND flash array is reador written. The NAND flash array contains multiple NAND chips that can be accessed simultane-ously thanks to multi-channel configurations and per-channel controllers. 
Every channel controlleris managed by the software called flash translation layer (FTL), which executes wear-leveling andgarbage collection to improve the performance and durability of the NAND flash array.2.4 P REVIOUS WORK ON NEAR-DATA PROCESSINGMost of the previous work on ISP focused on popular but inherently simple algorithms, such as scan,join, and query operations (Kim et al., 2016c). Lee et al. (2016) proposed to run the merge operation(frequently used by external sort operation in Hadoop) inside an SSD to reduce IO transfers andread/write operations, also extending the lifetime of the NAND flash inside the SSD. Choi & Kee(2015) implemented algorithms for linear regression, k-means, and string match in the flash memorycontroller (FMC) via reconfigurable stream processors. In addition, they implemented a MapRe-duce application inside the embedded processor and FMC of the SSD by using partitioning andpipelining methods that could improve performance and reduce power consumption. BlueDBM (Jun3Under review as a conference paper at ICLR 2017Host I/FCacheControllerNAND FlashChannelControllerclk/rstEmbeddedProcessorSRAM (b)CPUMain MemoryOSSSD controllerChannelControllerHost I/FEmbeddedProcessor(CPU)CacheControllerDRAMSRAMChannelControllerNAND FlashNAND FlashNAND FlashNAND FlashSSDISP HW ISP HWISP HWISP SWSSD controllerChannelControlle rHost I/FARM ProcessorDRAMCacheControlle rSRAMChannelControlle rNAND FlashNAND FlashNAND FlashNAND FlashISP-SSDISP HW ISP HWISP HWISP SWSSD(a) User Applicat ionFigure 1: (a) Block diagram of a typical computing system equipped with an SSD and a magnifiedview of a usual SSD depicting its internal components and their connections. (b) Schematic of theproposed ISP-ML framework, which is implemented in SystemC using Synopsys Platform Architect(http://www.synopsys.com).et al., 2015) is an ISP system architecture for distributed computing systems with a flash memory-based embedded field programmable gate array (FPGA). The authors implemented nearest-neighborsearch, graph traversal, and string search algorithms. No prior work ever implemented and evaluatedSSD-based optimization of machine learning algorithms using SGD.3 P ROPOSED METHODOLOGYFigure 1(a) shows the block diagram of a typical computing system, which is assumed to have anSSD as its storage device. Also shown in the figure is a magnified view of the SSD block diagramthat shows the major components of an SSD and their interconnections. Starting from the baselineSSD depicted above, we can implement ISP functionalities by modifying the components markedwith black boxes (i.e., ISP HW and ISP SW in the figure). Figure 1(b) shows the detailed schematicof our proposed ISP-ML platform that corresponds to the SSD block (with ISP components) shownin Figure 1(a).In this section, we provide more details of our ISP-ML framework. In addition, we propose a perfor-mance comparison methodology that can compare the performance of ISP and the conventional IHPin a fair manner. As a specific example of the ML algorithms that can be implemented in ISP-ML,we utilize parallel SGD.4Under review as a conference paper at ICLR 20173.1 ISP-ML: ISP P LATFORM FOR MACHINE LEARNING ON SSD SOur ISP-ML is a system-level simulator implemented in SystemC on the Synopsys Platform Ar-chitect environment (http://www.synopsys.com). ISP-ML can simulate hardware and software ISPcomponents marked in Figure 1(b) simultaneously. This integrative functionality is crucial for de-sign space exploration in SSD developments. 
Moreover, ISP-ML allows us to execute various ma-chine learning algorithms described in high-level languages (C or C++) directly on ISP-ML onlywith minor modifications.At the conception of this research, we could not find any publicly available SSD simulator that couldbe modified for implementing ISP functionalities. This motivated us to implement a new simulator.There exist multiple ways of realizing the idea of ISP in an SSD. The first option would be to use theembedded core inside the SSD controller (Figure 1(a)). This option does not require designing a newhardware logic and is also flexible, since the ISP capability is implemented by software. However,this option is not ideal for exploiting hardware acceleration and parallelization. The second optionwould be to design dedicated hardware logics (such as those boxes with black marks in Figure 1(a)and the entire Figure 1(b)) and integrate them into the SSD controller. Although significantly moreefforts are needed for this option compared the first, we chose this second option due to its long-termadvantages provided by hardware acceleration and power reduction.Specifically, we implemented two types of ISP hardware components, in addition to the softwarecomponents. First, we let each channel controller not only manage read/write operations to/fromits NAND flash channel (as in the usual SSDs) but also perform primitive operations on the datastored in its NAND channel. The type of primitive operation performed depends on the machinelearning algorithm used (the next subsection explains more details of such operations for SGD).Additionally, each channel controller in ISP-ML (slave) communicates with the cache controller(master) in a master-slave architecture. Second, we designed the cache controller so that it cancollect the outcomes from each of the channel controllers, in addition to its inherent functionality as acache (DRAM) manager inside the SSD controller. This master-slave architecture can be interpretedas a tiny-scale version of the master-slave architecture commonly used in distributed systems. Justas the channel controllers, the exact functionality of the cache controller can be optimized dependingon the specific algorithm used. Both the channel controllers and the cache controller have internalmemory, but the memory size in the latter is far greater than that in the former.Specific parameters and considerations used in our implementation can be found in Section 4.1.There are a few points worth mentioning. Unlike existing conventional SSD simulators, the base-line SSD implemented in ISP-ML can store data in the NAND flash memory inside. In order tosupport reasonable simulation speed, we modeled ISP-ML at cycle-accurate transaction level whileminimizing negative impact on accuracy. We omit to describe other minor details of hardware logicimplementations, as are beyond the scope of the conference.3.2 P ARALLEL SGD I MPLEMENTATION ON ISP-MLUsing our ISP-ML platform, we implemented the three types of parallel SGD algorithms outlinedin Figure 2: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), andEASGD (Zhang et al., 2015). For brevity, we focus on describing the implementation details ofthese algorithms in ISP-ML and omit the purely algorithmic details of each algorithm; we refer theinterested to the corresponding references. 
Note that the size of a minibatch for the minibatch SGD in our framework is set to the number of training samples in a NAND flash page (referred to as 'page-minibatch' in Figure 2).

For implementing synchronous SGD, we let each of the n channel controllers compute the gradient synchronously. First, each channel controller reads page-sized data from the NAND flash memory and stores the data in its buffer. Second, the channel controller pulls the cache controller's parameters (θcache) and stores them in the buffer. Using the data and parameters stored in the buffer, each channel controller calculates the gradient in parallel. After transferring the gradient to the cache controller, the channel controllers wait for a signal from the cache controller. The cache controller aggregates the gradients, updates the parameters, and then signals the channel controllers to pull and replicate the updated parameters.

Figure 2: Pseudo-code of the three SGD algorithms implemented in ISP-ML: synchronous SGD (Zinkevich et al., 2010), Downpour SGD (Dean et al., 2012), and EASGD (Zhang et al., 2015). The shaded line indicates the computation occurring in the cache controller (master); the other lines are executed in the channel controllers (slaves). Note that the term 'page-minibatch' refers to the minibatch SGD used in our framework, where the size of a minibatch is set to the number of training samples in a NAND flash page.

We implemented Downpour SGD in a similar way to synchronous SGD; the major difference is that each channel controller immediately begins the next iteration after transferring the gradient to the cache controller. The cache controller updates the parameters with the gradients from the channel controllers sequentially.

For EASGD, unlike in synchronous SGD and Downpour SGD, we let each channel controller keep its own SGD parameters. After computing the gradient and updating its own parameters, each channel controller pulls the parameters from the cache controller, calculates the difference between its own parameters and the cache controller's parameters, and then pushes this difference to the cache controller.

Of note is that, besides its widespread use, SGD has appealing characteristics that facilitate hardware implementation. We can implement parallel SGD on top of the master-slave architecture realized by the cache controller and the channel controllers, and we can take advantage of effective techniques developed in the distributed and parallel computation domain.
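To make the three variants concrete, here is a hedged sketch of their inner loops on top of the skeleton above, following the pseudo-code in Figure 2. The exact accumulation and signaling details of the SystemC implementation are not fully recoverable from the figure, so treat this as an approximation; eta (learning rate) and alpha (the EASGD moving rate) mirror the paper's hyperparameters, and the Downpour sketch assumes a communication period of tau = 1.

```python
import numpy as np  # assumes ChannelController / CacheController from the sketch above

def synchronous_sgd_step(channels, cache, eta):
    # All slaves compute gradients on the same pulled parameters, then
    # the master averages and updates (Zinkevich et al., 2010).
    grads = []
    for ch in channels:
        X, y = ch.read_page()
        ch.theta = cache.theta.copy()            # pull theta_cache
        grads.append(ch.gradient(X, y))          # push gradient, then wait
    cache.theta -= eta * np.mean(grads, axis=0)  # master update + signal

def downpour_sgd_step(channels, cache, eta):
    # Slaves push gradients without waiting for each other; the master
    # applies them sequentially (Dean et al., 2012), here with tau = 1.
    for ch in channels:
        X, y = ch.read_page()
        ch.theta = cache.theta.copy()
        cache.theta -= eta * ch.gradient(X, y)

def easgd_step(channels, cache, eta, alpha=0.001):
    # Each slave keeps its own parameters; slave and master are pulled
    # toward each other through their difference (Zhang et al., 2015).
    for ch in channels:
        X, y = ch.read_page()
        ch.theta -= eta * ch.gradient(X, y)      # local update
        diff = ch.theta - cache.theta
        ch.theta -= alpha * diff                 # pull slave toward master
        cache.theta += alpha * diff              # push difference to master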
Importantly, each SGD iteration is simple enough to be implemented without incurring excessive hardware overhead.

3.3 METHODOLOGY FOR IHP-ISP PERFORMANCE COMPARISON

To evaluate the effectiveness of ISP, it is crucial to compare the performance of ISP and the conventional IHP accurately and fairly. However, performing this type of comparison is not trivial (see Section 4.3 for additional discussion). Furthermore, accurately modeling commercial SSDs equipped with ISP-ML is impossible due to the lack of information about commercial SSDs (e.g., there is no public information on the FTL and internal architecture of any commercial SSD). Therefore, we propose a practical methodology for accurate comparison of IHP and ISP performance, as depicted in Figure 3. Note that this comparison methodology is applicable not only to the parallel SGD implementations explained above but also to other ML algorithms that can be executed in ISP-ML.

In the proposed comparison methodology, we focus on the data IO latency of the storage (denoted as TIO), since it is the most critical factor among those that affect the execution time of IHP. The total processing time of IHP (IHPtime or Ttotal) can then be divided into the data IO time and the non-data IO time (TnonIO) as follows:

IHPtime = Ttotal = TnonIO + TIO.    (4)

To calculate the expected IHP simulation time adjusted to ISP-ML, the data IO time of IHP is replaced by the data IO time of the baseline SSD in ISP-ML (TIOsim). By using Eq. (4), the expected IHP simulation time can then be represented by

Expected IHP simulation time = TnonIO + TIOsim = Ttotal − TIO + TIOsim.    (5)

Figure 3: (a) Overview of our methodology to compare the performance of in-host processing (IHP) and in-storage processing (ISP). (b) Details of our IHP-ISP comparison flow.

The overall flow of the proposed comparison methodology is depicted in Figure 3(b). First, the total processing time (Ttotal) and the data IO time of storage (TIO) are measured in IHP, extracting the IO trace of the storage during an application execution. The simulation IO time (TIOsim) is then measured by replaying the IO trace (extracted from IHP) on the baseline SSD of ISP-ML. Finally, the expected IHP simulation time is calculated by plugging the total processing time (Ttotal), the data IO time of storage (TIO), and the simulation IO time (TIOsim) into Eq. (5). With the proposed method and ISP-ML, which is applicable to a variety of IHP environments regardless of the type of storage used, it is possible to quickly and easily compare the performance of various ISP implementations and IHP in a simulation environment.

4 EXPERIMENTAL RESULTS

4.1 SETUP

All the experiments presented in this section were run on a machine equipped with an 8-core Intel(R) Core i7-3770K CPU (3.50GHz) with DDR3 32GB RAM, a Samsung SSD 840 Pro, and Ubuntu 14.04 LTS (kernel version: 3.19.0-26-generic). We used an ARM 926EJ-S (400MHz) as the embedded processor inside ISP-ML and DFTL (Gupta et al., 2009) as the FTL of ISP-ML.
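Before listing the remaining simulation parameters, the adjusted-time computation of Section 3.3 (Eqs. 4-5) can be made concrete with a short sketch. The function name and the example numbers below are ours, for illustration only.

```python
def expected_ihp_simulation_time(t_total: float, t_io: float, t_io_sim: float) -> float:
    """Eq. (5): replace the measured storage IO time (TIO) in the real
    IHP run (Ttotal) with the IO time replayed on the baseline SSD of
    ISP-ML (TIOsim)."""
    t_non_io = t_total - t_io   # Eq. (4): Ttotal = TnonIO + TIO
    return t_non_io + t_io_sim

# Illustrative numbers only (seconds):
print(expected_ihp_simulation_time(t_total=12.0, t_io=5.0, t_io_sim=3.2))  # -> 10.2
```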
The simulation model we used was derived from a commercial product (Micron NAND MT29F8G08ABACA) and had the following specifications: page size = 8KB, tprog = 300µs, tread = 75µs, and tblock erase = 5ms.¹ Each channel controller had 24KB of memory [8KB (page size) for data and 16KB for ISP] and a floating-point unit (FPU) with a throughput of 0.5 instructions/cycle (with pipelining). The cache controller had (n + 1) × 8KB (page size) of memory, where n is the number of channels (n = 4, 8, 16). Depending on the algorithm running in ISP-ML, these parameters can be adjusted.

Note that the main purpose of our experiments in this paper was to verify the functionality of our ISP-ML framework and to evaluate the effectiveness of ISP over the conventional IHP using SGD, even though our framework is certainly not limited to SGD. To this end, we selected logistic regression, a fundamental ML algorithm that can directly show the advantage of ISP-based optimization over IHP-based optimization without unnecessary complications. We thus implemented the logistic regression algorithm as a single-layer perceptron (with cross-entropy loss) in SystemC and uploaded it to ISP-ML. As stated in Section 5.3, our future work includes the implementation and testing of more complicated models (such as deep neural networks) that reflect the improvement opportunities revealed by the experiments presented in this paper.

As test data, we utilized samples from the MNIST database (LeCun et al., 1998). To amplify the number of training samples (to show the scalability of our approach), we used elastic distortion (Simard et al., 2003), producing 10 times more data than the original MNIST (approximately 600,000 training and 10,000 test samples were used in total). To focus on the performance evaluation of running ISP operations, we preloaded our NAND flash simulation model with the simulation data (the same condition was used for the alternatives, for fairness). Based on the size of a training sample in this dataset and the size of a NAND page (8KB), we set the size of each minibatch to 10.

¹These are conservative settings compared with those of the original commercial product; using the specifications of a commercial product would thus improve the performance of ISP-ML.

Figure 4: Test accuracy of three ISP-based SGD algorithms versus wall-clock time with a varying number of NAND flash channels: (a) 4 channels, (b) 8 channels, and (c) 16 channels.

Figure 5: Test accuracy of ISP-based EASGD in the 4, 8, and 16 channel configurations and IHP-based minibatch SGD using diverse memory sizes.

4.2 PERFORMANCE COMPARISON: ISP-BASED OPTIMIZATION

As previously explained, to identify which SGD algorithm is best suited for use in ISP, we implemented and analyzed three types of SGD algorithms: synchronous SGD, Downpour SGD, and EASGD. For EASGD, we set the moving rate (α) and the communication period (τ) to 0.001 and 1, respectively.
For a fair comparison, we chose, for each algorithm, the learning rate that gave its best performance. Figure 4 shows the test accuracy of the three algorithms with varying numbers of channels (4, 8, and 16) with respect to wall-clock time.

As shown in Figure 4, EASGD gave the best convergence speed in all of the cases tested. EASGD outperformed synchronous and Downpour SGD by factors of 5.24 and 1.96 on average, respectively. Synchronous SGD showed a slower convergence speed than Downpour SGD because it could not start learning on the next minibatch until all the channel controllers had reported their results to the cache controller; moreover, one delayed worker could halt the entire process. This result suggests that EASGD is adequate for all the channel configurations tested, in that ISP can benefit from ultra-fast on-chip communication and employ application-specific hardware that can eliminate interruptions from other processors.

4.3 PERFORMANCE COMPARISON: IHP VERSUS ISP

In large-scale machine learning, the computing systems used may suffer from memory shortage, which incurs significant data swapping overhead. In this regard, ISP can provide an effective solution that can potentially reduce the data transfer penalty by processing core operations at the storage level. In this context, we carried out additional experiments to compare the performance of IHP-based and ISP-based EASGD. We tested the effectiveness of ISP in a memory shortage situation with 5 different configurations of IHP memory: 2GB, 4GB, 8GB, 16GB, and 32GB. We assumed that the host had already loaded all of the data into main memory for IHP. This assumption is realistic because state-of-the-art machine learning techniques often employ a prefetch strategy to hide the initial data transfer latency.

Figure 6: Test accuracy of different ISP-based SGD algorithms for a varied number of channels: (a) synchronous SGD, (b) Downpour SGD, and (c) EASGD. (d) Training speed-up of the three SGD algorithms for various numbers of channels.

As depicted in Figure 5, ISP-based EASGD with 16 channels gave the best performance in our experiments. The convergence speed of the IHP-based optimization slowed down in accordance with the reduced memory size. The results with 16GB and 32GB of memory were similar because 16GB was enough to load and allocate most of the resources required by the process. As a result, ISP was more efficient when memory was insufficient, as would often be the case with large-scale datasets in practice.

4.4 CHANNEL PARALLELISM

To closely examine the effect of exploiting data-level parallelism on performance, we compared the accuracy of the three SGD algorithms while varying the number of channels (4, 8, and 16), as shown in Figure 6. All three algorithms converged faster with more channels; for example, synchronous SGD achieved a 1.48× speed-up when the number of channels increased from 8 to 16. From Figure 6(d), we can also note that the convergence speed-up tends to be proportional to the number of channels.
These results suggest that the communication overhead in ISP is negligible and that ISP does not suffer from the communication bottleneck that commonly occurs in distributed computing systems.

4.5 EFFECTS OF COMMUNICATION PERIOD IN ASYNCHRONOUS SGD

Finally, we investigated how changes in the communication period (i.e., how often data exchange occurs during distributed optimization) affect SGD performance in the ISP environment. Figure 7 shows the test accuracy of the Downpour SGD and EASGD algorithms versus wall-clock time as we varied their communication periods. As described in Zhang et al. (2015), Downpour SGD normally achieved high performance for a low communication period [τ = 1, 4] and became unstable for a high communication period [τ = 16, 64] in ISP. Interestingly, in contrast to the conventional distributed computing setting, the performance of EASGD decreased as the communication period increased in the ISP setting. This is because the on-chip communication overhead in ISP is significantly lower than that in a distributed computing system. As a result, there is no need to extend the communication period to reduce communication overhead in the ISP environment.

Figure 7: Test accuracy of ISP-based Downpour SGD and EASGD algorithms versus wall-clock time for different communication periods.

5 DISCUSSION

5.1 PARALLELISM IN ISP

Given the advances in underlying hardware and semiconductor technology, ISP can provide various advantages for the data processing involved in machine learning. For example, our ISP-ML could minimize (practically eliminate) the communication overhead between parallel nodes by leveraging ultra-fast on-chip communication inside an SSD. Minimizing communication overhead can improve various key aspects of data-processing systems, such as energy efficiency, data management, security, and reliability. By exploiting this advantage of fast on-chip communication in ISP, we envision devising new kinds of parallel algorithms for optimization and machine learning running on ISP-based SSDs.

Our experimental results also revealed that a high degree of parallelism can be achieved by increasing the number of channels inside an SSD. Some of the currently available commercial SSDs have as many as 16 channels. Given that commercial ISP-supporting SSDs would (at least initially) be targeted at high-end SSD markets with many NAND flash channels, our approach is expected to add a valuable functionality to such SSDs. Unless carefully optimized, a conventional distributed system will see diminishing returns as the number of nodes increases, due to increased communication overhead and other factors. Exploiting a hierarchy of parallelism (i.e., parallel computing nodes, each of which has ISP-based SSDs with parallelism inside) may provide an effective acceleration scheme, although a fair amount of additional research is needed before we can realize this idea.

5.2 ISP-IHP COMPARISON METHODOLOGY

To fairly compare the performance of ISP and IHP, it would be ideal to implement ISP-ML in a real semiconductor chip, or to simulate IHP in the ISP-ML framework.
Either option, however, is possible but not practical (at least in academia), because of the high cost of manufacturing a chip and the prohibitively long simulation time for simulating IHP in the Synopsys Platform Architect environment (we would have to implement many components of a modern computer system in order to simulate IHP). Another option would be to implement both ISP and IHP using FPGAs, but that would require another round of significant development effort.

To overcome these challenges (while still assuring a fair comparison between ISP and IHP), we have proposed the comparison methodology described in Section 3.3. In terms of measuring absolute running time, our methodology may not be ideal; however, in terms of highlighting the relative performance of alternatives, it should provide a satisfactory solution.

Our comparison methodology extracts an IO trace from the storage while executing an application in the host, which is used for measuring the simulation IO time on the baseline SSD in ISP-ML. In this procedure, we assume that the non-IO time of IHP is consistent regardless of the kind of storage the host has. The validity of this assumption is warranted by the fact that the amount of non-IO time changed by the storage is usually negligible compared with the total execution time or IO time.

5.3 OPPORTUNITIES FOR FUTURE RESEARCH

In this paper we focused on the implementation and testing of ISP-based SGD as a proof of concept. The simplicity and popularity of (parallel) SGD underlie our choice. By design, it is possible to run other algorithms in our ISP-ML framework immediately; recall that our framework includes a general-purpose ARM processor that can run executables compiled from C/C++ code. However, it would be meaningless to have an ISP-based implementation if its performance were unsatisfactory. To unleash the full power of ISP, we need additional ISP-specific optimization efforts, as is typically the case with hardware design.

With this in mind, we have started implementing deep neural networks (with realistic numbers of layers and hyperparameters) using our ISP-ML framework. In particular, we are carefully devising a way of balancing the memory usage in the DRAM buffer, the cache controller, and the channel controllers inside ISP-ML. It would be reasonable to see an SSD with a DRAM cache of a few gigabytes, whereas it is unrealistic to design a channel controller with that much memory. Given that a large amount of memory is needed only to store the parameters of such deep models, and that IHP and ISP have different advantages and disadvantages, it would be intriguing to investigate how IHP and ISP can cooperate to enhance overall performance. For instance, we can let ISP-based SSDs perform low-level data-dependent tasks while assigning high-level tasks to the host, expanding the current roles of the cache controller and the channel controllers inside ISP-ML to the whole system level.

Our future work also includes the following. First, we can implement adaptive optimization algorithms such as Adagrad (Duchi et al., 2011) and Adadelta (Zeiler, 2012). Second, precomputing meta-data during data writes (instead of data reads) could provide another direction of research that can bring even more speedup. Third, we can implement data shuffle functionality in order to maximize the effect of data-level parallelism. Currently, ISP-ML arbitrarily splits the input data across its multi-channel NAND flash array.
Fourth, we may investigate the effect of NAND flash design on performance, such as the NAND flash page size. Typically, the size of a NAND flash page significantly affects the performance of SSDs, given that the page size (e.g., 8KB) is the basic unit of NAND operations (read and write). In cases where the size of a single example exceeds the page size, frequent data fragmentation is inevitable, eventually affecting overall performance. The effectiveness of using multiple page sizes was already reported for conventional SSDs (Kim et al., 2016a), and we may borrow this idea to further optimize ISP-ML.

ACKNOWLEDGMENTS

The authors would like to thank Byunghan Lee at Data Science Laboratory, Seoul National University for proofreading the manuscript. This work was supported in part by BK21 Plus (Electrical and Computer Engineering, Seoul National University) in 2016, in part by a grant from SK Hynix, and in part by a grant from Samsung Electronics.<|im_end|> <|im_start|>assistant ### Review Title Near-Data Processing for Machine Learning ### Review Text While the idea of moving the processing for machine learning into silicon contained within the (SSD) data storage devices is intriguing and offers the potential for low-power efficient computation, it is a rather specialized topic, so I don't feel it will be of especially wide interest to the ICLR audience. The paper describes simulation results, rather than an actual hardware implementation, and describes implementations of existing algorithms. The comparisons of the algorithms' train/test performance do not seem relevant (since there is no novelty in the algorithms), and the use of a single-layer perceptron on MNIST calls into question the practicality of the system, since this is a tiny neural network by today's standards. I did not understand from the paper how this could scale to contemporary networks, in terms of the number of parameters, for both storage and bandwidth. I am not an expert in this area, so have not evaluated in depth. ### Review Rating 5: Marginally below acceptance threshold ### Review Confidence 2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper<|im_end|> <|im_end|>
HklUCCVKDB
ICLR.cc/2020/Conference
2020
Uncertainty-guided Continual Learning with Bayesian Neural Networks
["Sayna Ebrahimi", "Mohamed Elhoseiny", "Trevor Darrell", "Marcus Rohrbach"]
Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' \textit{importance}. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks. Uncertainty is a natural way to identify \textit{what to remember} and \textit{what to change} as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per tasks. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e. it does not presume knowledge of which task a sample belongs to.
["continual learning", "catastrophic forgetting"]
ABSTRACT

Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' importance. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in the network. Uncertainty is a natural way to identify what to remember and what to change as we continually learn, and thus to mitigate catastrophic forgetting. We also show a variant of our model which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e., it does not presume knowledge of which task a sample belongs to.

1 INTRODUCTION

Humans can easily accumulate and maintain knowledge gained from previously observed tasks, and continuously learn to solve new problems or tasks. Artificial learning systems typically forget prior tasks when they cannot access all training data at once but are presented with task data in sequence. Overcoming these challenges is the focus of continual learning, sometimes also referred to as lifelong learning or sequential learning. Catastrophic forgetting (McCloskey & Cohen, 1989; McClelland et al., 1995) refers to the significant drop in the performance of a learner when switching from a trained task to a new one. This phenomenon occurs because parameters trained on the initial task change in favor of learning new objectives.

Given a network of limited capacity, one way to address this problem is to identify the importance of each parameter and penalize further changes to those parameters that were deemed important for the previous tasks (Kirkpatrick et al., 2017; Aljundi et al., 2018; Zenke et al., 2017). An alternative is to freeze the most important parameters and allow future tasks to adapt only the remaining parameters (Mallya & Lazebnik, 2018). Such models rely on explicit parametrization of importance; here, we instead propose an implicit, uncertainty-guided representation of importance.

Bayesian approaches to neural networks (MacKay, 1992b) can potentially avoid some of the pitfalls of explicit parameterization of importance in regular neural networks. Bayesian techniques naturally account for uncertainty in parameter estimates. These networks represent each parameter with a distribution defined by a mean and variance over possible values drawn from a shared latent probability distribution (Blundell et al., 2015). Variational inference can approximate posterior distributions using Monte Carlo sampling for gradient estimation. These networks act like ensemble methods in that they reduce the prediction variance but only use twice the number of parameters present in a regular neural network. We propose to use the predicted mean and variance of the latent distributions to characterize the importance of each parameter.
We perform continual learning with Bayesian neural networks by controlling the learning rate of each parameter as a function of its uncertainty. Figure 1 illustrates how posterior distributions evolve for certain and uncertain weight distributions while learning two consecutive tasks. Intuitively, the more uncertain a parameter is, the more learnable it can be, and therefore larger gradient steps can be taken for it to learn the current task. As a hard version of this regularization technique, we also show that pruning, i.e., preventing the most important model parameters from any change and learning new tasks with the remaining parameters, can be integrated into UCB. We refer to this method as UCB-P.

(∗Corresponding author: sayna@berkeley.edu. †Work done while at Facebook AI Research.)

Figure 1: Illustration of the evolution of weight distributions – uncertain weights adapt more quickly – when learning two tasks using UCB. (a) weight parameters initialized by distributions with mean and variance values randomly sampled from N(0, 0.1). (b) posterior distributions after learning task one; while θ1 and θ2 exhibit lower uncertainties after learning the first task, θ3, θ4, and θ5 have larger uncertainties, making them available to learn more tasks. (c) a second task is learned using higher learning rates for the previously uncertain parameters (θ3, θ4, and θ5) while learning rates for θ1 and θ2 are reduced. The size of the arrows indicates the magnitude of the change of the distribution mean upon gradient update.

Contributions: We propose to perform continual learning with Bayesian neural networks and develop a new method which exploits the inherent measure of uncertainty therein to adapt the learning rate of individual parameters (Sec. 4). Second, we introduce a hard-threshold variant of our method that decides which parameters to freeze (Sec. 4.2). Third, in Sec. 5, we extensively validate our approach experimentally, comparing it to prior art both on single datasets split into different tasks, as well as for the more difficult scenario of learning a sequence of different datasets. Fourth, in contrast to most prior work, our approach does not rely on knowledge about task boundaries at inference time, which humans do not need and which might not always be available. We show in Sec. 6 that our approach naturally supports this scenario and does not require task information at test time, sometimes also referred to as a "single head" scenario for all tasks.
We refer to the evaluation metric of a "single head" model without task information at test time as "generalized accuracy". Our code is available at https://github.com/SaynaEbrahimi/UCB.

2 RELATED WORK

Conceptually, approaches to continual learning can be divided into the following categories: dynamic architectural methods, memory-based methods, and regularization methods.

Dynamic architectural methods: In this setting, the architecture grows while keeping past knowledge fixed and storing new knowledge in different forms such as additional layers, nodes, or modules. In this approach, the objective function remains fixed, whereas the model capacity grows, often exponentially, with the number of tasks. Progressive networks (Rusu et al., 2016; Schwarz et al., 2018) was one of the earliest works in this direction and was successfully applied to reinforcement learning problems; the base architecture was duplicated and lateral connections were added in response to new tasks. Dynamically Expandable Network (DEN) (Yoon et al., 2018) also expands its network by selecting drifting units and retraining them on new tasks. In contrast to our method, these approaches require the architecture to grow with each new task.

Memory-based methods: In this regime, previous information is partially stored to be used later as a form of rehearsal (Robins, 1995). Gradient episodic memory (GEM) (Lopez-Paz et al., 2017) uses this idea to store the data at the end of each episode to be used later to prevent gradient updates from deviating from their previous values. GEM also allows for positive backward knowledge transfer, i.e., an improvement on previously learned tasks, and it was the first method capable of learning using a single training example. Recent approaches in this category have mitigated forgetting by using external data combined with distillation loss and/or confidence-based sampling strategies to select the most representative samples (Castro et al., 2018; Wu et al., 2019; Lee et al., 2019).

Regularization methods: In these approaches, significant changes to the representation learned for previous tasks are prevented. This can be done by regularizing the objective function or enforced directly on the weight parameters. Typically, this importance measure is engineered to represent the importance of each parameter. Inspired by Bayesian learning, in the elastic weight consolidation (EWC) method (Kirkpatrick et al., 2017), important parameters are those with the highest values in terms of the Fisher information matrix. In Synaptic Intelligence (SI) (Zenke et al., 2017) this parameter importance notion is engineered to correlate with the loss function: parameters that contribute more to the loss are more important. Similar to SI, Memory-aware Synapses (MAS) (Aljundi et al., 2018) proposed an online way of computing importance, adaptive to the test set, using the change in the model outputs w.r.t. the inputs. While all the above algorithms are task-dependent, in parallel development to this work, Aljundi et al. (2019) have recently investigated task-free continual learning by building upon MAS and using a protocol to update the weights instead of waiting until the tasks are finished. PackNet (Mallya & Lazebnik, 2018) used iterative pruning to fully restrict gradient updates on important weights via binary masks. This method requires knowing which task is being tested in order to use the appropriate mask. PackNet also ranks weight importance by magnitude, which is not guaranteed to be a proper indicator of importance.
HAT (Serra et al., 2018) identifies important neurons by learning an attention vector over the task embedding to control gradient propagation. It maintains the information learned on previous tasks using an almost-binary mask per previous task.

Bayesian approaches: Bayesian approaches to learning neural networks have been studied for a few decades (MacKay, 1992b;a). Several approaches have been proposed for Bayesian neural networks, based on, e.g., the Laplace approximation (MacKay, 1992a), Hamiltonian Monte Carlo (Neal, 2012), variational inference (Hinton & Van Camp, 1993; Graves, 2011), and probabilistic backpropagation (Hernández-Lobato & Adams, 2015). Variational continual learning (VCL) (Nguyen et al., 2018) uses Bayesian inference to perform continual learning, where the new posterior distribution is simply obtained by multiplying the previous posterior by the likelihood of the dataset belonging to the new task. The authors also showed that by using a coreset, a small representative set of data from previous tasks, VCL can experience less forgetting. In contrast, we rely on Bayesian neural networks to use their predictive uncertainty to perform continual learning. Moreover, we do not use episodic memory or any other way to access or store previous data in our approach.

Natural gradient descent methods: A fast natural gradient descent method for variational inference was introduced in (Khan & Nielsen, 2018), in which the Fisher information matrix is approximated using the generalized Gauss-Newton method. In contrast, in our work, we use classic gradient descent. Although second-order optimization algorithms are proven to be more accurate than first-order methods, they add considerable computational cost. Tseran et al. (2018) and Chen et al. (2019) both investigate the effect of natural gradient descent methods as an alternative to the classic gradient descent used in the VCL and EWC methods. GNG (Chen et al., 2019) uses Gaussian natural gradients in the Adam optimizer (Kingma & Ba, 2014) in the framework of VCL because, as opposed to conventional gradient methods which operate in Euclidean space, natural gradients account for how parameter changes affect the distributions in Riemannian space. Similar to VCL, they obtained their best performance by adding a coreset of previous examples. Tseran et al. (2018) introduce two modifications to VCL called Natural-VCL (N-VCL) and VCL-Vadam. N-VCL (Tseran et al., 2018) uses a Gauss-Newton approximation introduced by (Schraudolph, 2002; Graves, 2011) to estimate the VCL objective function and uses the natural gradient method proposed in (Khan et al., 2018) to exploit the Riemannian geometry of the variational posterior, scaling the gradient with an adaptive learning rate equal to σ² obtained by approximating the Fisher information matrix in an online fashion. VCL-Vadam (Tseran et al., 2018) is a simpler version of N-VCL that trades off accuracy for simplicity: it uses Vadam (Khan et al., 2018) to update the gradients by perturbing the weights with Gaussian noise using a reparameterization trick, scaling by σ instead of its square.
N-VCL and VCL-Vadam both use variational inference to adapt the learning rate within the Adam optimizer at every time step, whereas in our method below, gradient descent is used with a constant learning rate during each task, and the learning rate is scaled with uncertainty only after finishing a task. We show extensive comparisons with state-of-the-art results on short and relatively long sequences of vision datasets with Bayesian convolutional neural networks, whereas VCL-Vadam relies only on multi-layer perceptron networks. We also like to highlight that this is the first work which evaluates and shows the working of convolutional Bayesian neural networks, rather than only fully connected MLP models, for continual learning.

3 BACKGROUND: VARIATIONAL BAYES-BY-BACKPROP

In this section, we review the Bayes-by-Backprop (BBB) framework introduced by Blundell et al. (2015) to learn a probability distribution over network parameters. Blundell et al. (2015) showed a backpropagation-compatible algorithm which acts as a regularizer and yields comparable performance to dropout on the MNIST dataset. In Bayesian models, latent variables are drawn from a prior density p(w) and are related to the observations through the likelihood p(x|w). During inference, the posterior distribution p(w|x) is computed conditioned on the given input data. However, in practice, this probability distribution is intractable and is often estimated through approximate inference. Markov chain Monte Carlo (MCMC) sampling (Hastings, 1970) has been widely used and explored for this purpose; see (Robert & Casella, 2013) for the different methods in this category. However, MCMC algorithms, despite providing guarantees of finding asymptotically exact samples from the target distribution, are not suitable for large datasets and/or large models, as they are bounded by speed and scalability issues. Alternatively, variational inference provides a faster solution to the same problem, in which the posterior is approximated using optimization rather than being sampled from a chain (Hinton & Van Camp, 1993). Variational inference methods take advantage of fast optimization techniques such as stochastic or distributed methods, which allow them to explore data models quickly. See (Blei et al., 2017) for a complete review of the theory and (Shridhar et al., 2018) for more discussion on how to use Bayes-by-Backprop (BBB) in convolutional neural networks.

3.1 BAYES BY BACKPROP (BBB)

Let x ∈ R^n be a set of observed variables and w be a set of latent variables. A neural network, as a probabilistic model P(y|x, w), given a set of training examples D = (x, y), can output y, which belongs to a set of classes, by using the set of weight parameters w. Variational inference aims to calculate this conditional probability distribution over the latent variables by finding the closest proxy to the exact posterior via an optimization problem.

We first assume a family of probability densities over the latent variables w parametrized by θ, i.e., q(w|θ). We then find the closest member of this family to the true conditional probability of interest P(w|D) by minimizing the Kullback-Leibler (KL) divergence between q and P, which is equivalent to minimizing the variational free energy or maximizing the expected lower bound:

θ* = arg min_θ KL[q(w|θ) ‖ P(w|D)]    (1)

The objective function can be written as:

L_BBB(θ, D) = KL[q(w|θ) ‖ P(w)] − E_{q(w|θ)}[log P(D|w)]    (2)

Eq.
(2) can be approximated using N Monte Carlo samples w_i from the variational posterior (Blundell et al., 2015):

L_BBB(θ, D) ≈ Σ_{i=1}^{N} [ log q(w_i|θ) − log P(w_i) − log P(D|w_i) ]    (3)

We assume q(w|θ) to have a Gaussian pdf with diagonal covariance, parametrized by θ = (μ, ρ). A sample weight of the variational posterior can be obtained by sampling from a unit Gaussian and reparametrizing: w = μ + σ ∘ ε, where ε is the noise drawn from a unit Gaussian and ∘ is pointwise multiplication. The standard deviation is parametrized as σ = log(1 + exp(ρ)) and is thus always positive. For the prior, as suggested by Blundell et al. (2015), a scale mixture of two Gaussian pdfs is chosen, both zero-centered but with different variances σ₁² and σ₂². The uncertainty obtained for every parameter has been successfully used in model compression (Han et al., 2015) and uncertainty-based exploration in reinforcement learning (Blundell et al., 2015). In this work we propose to use this framework to learn sequential tasks without forgetting, using per-weight uncertainties.

4 UNCERTAINTY-GUIDED CONTINUAL LEARNING IN BAYESIAN NEURAL NETWORKS

In this section, we introduce our Uncertainty-guided Continual learning approach with Bayesian neural networks (UCB), which exploits the estimated uncertainty of the parameters' posterior distribution to regulate the change in "important" parameters, either in a soft way (Section 4.1) or by setting a hard threshold (Section 4.2).

4.1 UCB WITH LEARNING RATE REGULARIZATION

A common strategy to perform continual learning is to reduce forgetting by regularizing further changes in the model representation based on parameters' importance. In UCB the regularization is performed with the learning rate, such that the learning rate of each parameter, and hence its gradient update, becomes a function of its importance. As shown in the following equations, we scale the learning rates of μ and ρ for each parameter distribution inversely proportional to its importance Ω, to reduce changes in important parameters while allowing less important parameters to change more in favor of learning new tasks:

α_μ ← α_μ / Ω_μ    (4)
α_ρ ← α_ρ / Ω_ρ    (5)

The core idea of this work is to base the definition of importance on the well-defined uncertainty in the parameter distributions of Bayesian neural networks, i.e., setting the importance inversely proportional to the standard deviation σ, which represents the parameter uncertainty in the Bayesian neural network:

Ω ∝ 1/σ    (6)

We explore different options to set Ω in our ablation study presented in Section A.2 of the appendix, Table 1. We empirically found that using Ω_μ = 1/σ and not adapting the learning rate for ρ (i.e., Ω_ρ = 1) yields the highest accuracy and the least forgetting.

The key benefit of UCB with the learning rate as the regularizer is that it requires neither additional memory, as opposed to pruning techniques, nor tracking the change in parameters with respect to the previously learned tasks, as needed in common weight regularization methods. More importantly, this method does not need to be aware of task switching, as it only needs to adjust the learning rates of the means in the posterior distribution based on their current uncertainty. The complete algorithm for UCB is shown in Algorithm 1, with the parameter update function given in Algorithm 2.

4.2 UCB USING WEIGHT PRUNING (UCB-P)

In this section, we introduce a variant of our method, UCB-P, which is related to recent efforts in weight pruning in the context of reducing inference computation and network compression (Liu et al., 2017; Molchanov et al., 2016).
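Before detailing UCB-P, the soft regularization of Section 4.1 (Eqs. 4-6) is compact enough to sketch in a few lines of PyTorch-style Python. This is an illustrative approximation, not the authors' released implementation; the tensor names (rho, mu) and the per-parameter learning-rate interface are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def ucb_learning_rates(lr0: float, rho: torch.Tensor):
    """After finishing a task, scale per-parameter learning rates by
    importance: Omega_mu = 1/sigma (Eq. 6) and Omega_rho = 1 (the
    ablation's best setting), so alpha <- alpha / Omega (Eqs. 4-5)."""
    sigma = F.softplus(rho)                # sigma = log(1 + exp(rho)) > 0
    omega_mu = 1.0 / sigma                 # certain weights -> large Omega
    alpha_mu = lr0 / omega_mu              # i.e. lr0 * sigma: uncertain
                                           # weights get larger steps
    alpha_rho = torch.full_like(rho, lr0)  # Omega_rho = 1: unchanged
    return alpha_mu, alpha_rho
```

In a training loop, alpha_mu would be applied elementwise, e.g. mu -= alpha_mu * mu.grad, in place of a single scalar learning rate.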
More specifically, weight pruning has recently been used in continual learning (Mallya & Lazebnik, 2018), where the goal is to continue learning multiple tasks using a single network's capacity. (Mallya & Lazebnik, 2018) accomplished this by freeing up parameters deemed unimportant to the current task according to their magnitude. Forgetting is prevented in pruning by saving a task-specific binary mask of important vs. unimportant parameters. Here, we adapt pruning to Bayesian neural networks. Specifically, we propose a different criterion for measuring importance: the statistically grounded uncertainty defined in Bayesian neural networks.

Unlike regular deep neural networks, in a BBB model weight parameters are represented by probability distributions parametrized by their mean and standard deviation. Similar to (Blundell et al., 2015), in order to take into account both mean and standard deviation, we use the signal-to-noise ratio (SNR) for each parameter, defined as

Ω = SNR = |μ|/σ    (7)

Algorithm 1 Uncertainty-guided Continual Learning with Bayesian Neural Networks (UCB)
1: Require: Training data for all tasks D = (x, y), μ (mean of posterior), ρ, σ₁ and σ₂ (std for the scaled mixture Gaussian pdf of the prior), π (weighting factor for the prior), N (number of samples in a minibatch), M (number of minibatches per epoch), initial learning rate (α₀)
2: α_μ = α_ρ = α₀
3: for every task do
4:   repeat
5:     ε ∼ N(0, I)
6:     σ = log(1 + exp(ρ))   ▷ ensures σ is always positive
7:     w = μ + σ ∘ ε   ▷ w = {w₁, ..., w_i, ..., w_N} posterior samples of weights
8:     l₁ = Σ_{i=1}^{N} log N(w_i | μ, σ²)   ▷ l₁ := log-posterior
9:     l₂ = Σ_{i=1}^{N} log [ π N(w_i | 0, σ₁²) + (1 − π) N(w_i | 0, σ₂²) ]   ▷ l₂ := log-prior
10:    l₃ = Σ_{i=1}^{N} log p(D | w_i)   ▷ l₃ := log-likelihood of data
11:    L_BBB = (1/M)(l₁ − l₂ − l₃)
12:    μ ← μ − α_μ ∇_μ L_BBB
13:    ρ ← ρ − α_ρ ∇_ρ L_BBB
14:  until loss plateaus
15:  α_μ, α_ρ ← LearningRateUpdate(α_μ, α_ρ, μ, σ)   ▷ see Algorithm 2 for UCB and Algorithm 3 for UCB-P
16: end for

Algorithm 2 LearningRateUpdate in UCB
1: function LearningRateUpdate(α_μ, α_ρ, σ)
2:   for each parameter do
3:     Ω_μ ← 1/σ
4:     Ω_ρ ← 1
5:     α_μ ← α_μ / Ω_μ
6:     α_ρ ← α_ρ / Ω_ρ
7:   end for
8: end function

Algorithm 3 LearningRateUpdate in UCB-P
1: function LearningRateUpdate(α_μ, α_ρ, μ, σ)
2:   for each parameter j in each layer l do
3:     Ω_j ← |μ_j| / σ_j   ▷ signal-to-noise ratio
4:     if Ω[j] ∈ top p% of Ω's in l then
5:       α_μ = α_ρ = 0
6:     end if
7:   end for
8: end function

SNR is a measure commonly used in signal processing to distinguish "useful" information from unwanted noise contained in a signal. In the context of neural models, the SNR can be thought of as an indicator of parameter importance; the higher the SNR, the more effective or important the parameter is to the model predictions for a given task.

UCB-P, as shown in Algorithms 1 and 3, is performed as follows: for every layer, convolutional or fully-connected, the parameters are ordered by their SNR value, and those with the lowest importance are pruned (set to zero). The pruned parameters are marked using a binary mask so that they can be used later for learning new tasks, whereas the important parameters remain fixed throughout training on future tasks. Once a task is learned, an associated binary mask is saved, which will be used during inference to recover the key parameters, and hence the exact performance, for the desired task.

The memory overhead per parameter for encoding the mask, as well as saving it on disk, is as follows.
Assuming we have n tasks to learn using a single network, the total number of bits required to encode the accumulated mask for a parameter is at most log₂(n) bits, assuming the parameter was deemed important from task 1 onward and kept being encoded in the mask.

5 RESULTS

5.1 EXPERIMENTAL SETUP

Datasets: We evaluate our approach in two common scenarios for continual learning: 1) class-incremental learning of a single or two randomly alternating datasets, where each task covers only a subset of the classes in a dataset, and 2) continual learning of multiple datasets, where each task is a dataset. We use Split MNIST with 5 tasks (5-Split MNIST), similar to (Nguyen et al., 2018; Chen et al., 2019; Tseran et al., 2018), and permuted MNIST (Srivastava et al., 2013) for class-incremental learning, with experimental settings similar to those used in (Serra et al., 2018; Tseran et al., 2018). Furthermore, to gain a better understanding of our method, we evaluate our approach on continually learning a sequence of 8 datasets with different distributions, using the identical sequence as in (Serra et al., 2018), which includes FaceScrub (Ng & Winkler, 2014), MNIST, CIFAR100, NotMNIST (Bulatov, 2011), SVHN (Netzer et al., 2011), CIFAR10, TrafficSigns (Stallkamp et al., 2011), and FashionMNIST (Xiao et al., 2017). Details of each are summarized in Table 4 in the appendix. No data augmentation of any kind has been used in our analysis.

Baselines: Within the Bayesian framework, we compare to three models which do not incorporate the importance of parameters, namely fine-tuning, feature extraction, and joint training. In fine-tuning (BBB-FT), training continues upon arrival of new tasks without any forgetting-avoidance strategy. Feature extraction, denoted as (BBB-FE), refers to freezing all layers in the network after training the first task and training only the last layer for the remaining tasks. In joint training (BBB-JT) we learn all the tasks jointly in a multitask learning fashion, which serves as the upper bound for average accuracy on all tasks, as it does not adhere to the continual learning scenario. We also perform the counterparts of FT, FE, and JT using ordinary neural networks and denote them as ORD-FT, ORD-FE, and ORD-JT. From prior work, we compare with state-of-the-art approaches including Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), Incremental Moment Matching (IMM) (Lee et al., 2017), Learning Without Forgetting (LWF) (Li & Hoiem, 2016), Less-Forgetting Learning (LFL) (Jung et al., 2016), PathNet (Fernando et al., 2017), Progressive neural networks (PNNs) (Rusu et al., 2016), and Hard Attention Mask (HAT) (Serra et al., 2018), using implementations provided by (Serra et al., 2018). On Permuted MNIST, results for SI (Zenke et al., 2017) are reported from (Serra et al., 2018). On Split and Permuted MNIST, results for VCL (Nguyen et al., 2018) are obtained using their original code, whereas for VCL-GNG (Chen et al., 2019) and VCL-Vadam (Tseran et al., 2018) results are reported from the original works without re-implementation. Because our method falls into the regularization-based regime, we only compare against baselines which do not benefit from episodic or coreset memory.

Hyperparameter tuning: Unlike commonly used tuning techniques, which use a validation set composed of all classes in the dataset, we only rely on the first two tasks and their validation sets, similar to the setup in (Chaudhry et al., 2019).
In all our experiments we use a 0.15 validation split on the first two tasks. After tuning, training starts from the beginning of the sequence. Our scheme is different from (Chaudhry et al., 2019), where the models are trained on the first (e.g., three) tasks for validation, training is then restarted for the remaining ones, and the reported performance covers only the remaining tasks.

Training details: It is important to note that in all our experiments, no pre-trained model is used. We used stochastic gradient descent with a batch size of 64 and a learning rate of 0.01, decaying it by a factor of 0.3 once the loss plateaued. Dataset splits and batch shuffles are identical in all UCB experiments and all baselines.

Pruning procedure and mask size: Once a task is learned, we compute the performance drop, for a set of arbitrary pruning percentages, from the maximum training accuracy achieved when no pruning is applied. The pruning portion is then chosen using a threshold beyond which the performance drop is not accepted. The mask size is chosen without knowledge of how many tasks are to be learned in the future. Upon learning each task, we used a uniform range of pruning ratios (50-100%) and picked the ratio that resulted in at most 1%, 2%, and 3% forgetting for the MNIST, CIFAR, and 8-task experiments, respectively. We did not tune this parameter because, in our hyperparameter tuning, we only assume validation sets for the first two tasks.

Parameter regularization and importance measurement: Table 1 ablates different ways to compute the importance Ω of a parameter in Eqs. 4 and 5. As shown in Table 1, the configuration that yields the highest accuracy and the least forgetting (maximum BWT) occurs when the learning rate regularization is performed only on μ of the posteriors, using Ω_μ = 1/σ as the importance and Ω_ρ = 1.

Performance measurement: Let n be the total number of tasks. Once all are learned, we evaluate our model on all n tasks. ACC is the average test classification accuracy across all tasks. To measure forgetting we report backward transfer, BWT, which indicates how much learning new tasks has influenced the performance on previous tasks. While BWT < 0 directly reports catastrophic forgetting, BWT > 0 indicates that learning new tasks has helped the preceding tasks. Formally, BWT and ACC are defined as:

BWT = (1/n) Σ_{i=1}^{n} (R_{i,n} − R_{i,i}),    ACC = (1/n) Σ_{i=1}^{n} R_{i,n}    (8)

where R_{i,n} is the test classification accuracy on task i after sequentially finishing learning the n-th task. Note that in UCB-P, R_{i,i} refers to the test accuracy on task i before pruning and R_{i,n} to that after pruning, which is equivalent to the end-of-sequence performance. In Section 6, we show that our UCB model can be used when task labels are not available at inference time, by training it with a "single head" architecture whose output size is the sum of the numbers of classes of all tasks. We refer to the ACC measured in this scenario as "Generalized Accuracy".
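As a small illustration of Eq. (8), here is a sketch of how ACC and BWT can be computed from the matrix R of per-task test accuracies; the function and variable names are ours, and the numbers in the example are made up.

```python
import numpy as np

def acc_bwt(R: np.ndarray):
    """R[i, j] = test accuracy on task i after training through task j
    (0-indexed here; Eq. (8) uses 1-indexing)."""
    n = R.shape[0]
    acc = R[:, n - 1].mean()                 # average final accuracy
    bwt = (R[:, n - 1] - np.diag(R)).mean()  # negative => forgetting
    return acc, bwt

# Illustrative 3-task example (accuracies in [0, 1]):
R = np.array([[0.99, 0.98, 0.97],
              [0.00, 0.98, 0.96],
              [0.00, 0.00, 0.99]])
print(acc_bwt(R))  # -> (0.9733..., -0.0133...)
```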
Table 1: Variants of learning rate regularization and importance measurement on 2-Split MNIST.
Method | LR reg. on μ | LR reg. on ρ | Importance Ω | BWT (%) | ACC (%)
UCB | x | - | 1/σ | 0.00 | 99.2
UCB | - | x | 1/σ | -0.04 | 98.7
UCB | x | x | 1/σ | -0.02 | 98.0
UCB | x | - | |μ|/σ | -0.03 | 98.4
UCB | - | x | |μ|/σ | -0.52 | 98.7
UCB | x | x | |μ|/σ | -0.32 | 98.8
UCB-P | x | x | |μ|/σ | -0.01 | 99.0
UCB-P | x | x | 1/σ | -0.01 | 98.9

Table 2: Continually learning on different datasets. BWT and ACC in %. (*) denotes methods that do not adhere to the continual learning setup: BBB-JT and ORD-JT serve as the upper bound for ACC for BBB/ORD networks, respectively. (‡) denotes results reported by (Serra et al., 2018). (†) denotes results reported in the original work. BWT was not reported in (‡) and (†). All other results are (re)produced by us and are averaged over 3 runs, with standard deviations given in Section A.3 of the appendix.

(a) 5-Split MNIST, 5 tasks.
Method | BWT | ACC
VCL-Vadam† | - | 99.17
VCL-GNG† | - | 96.50
VCL | -0.56 | 98.20
IMM | -11.20 | 88.54
EWC | -4.20 | 95.78
HAT | 0.00 | 99.59
ORD-FT | -9.18 | 90.60
ORD-FE | 0.00 | 98.54
BBB-FT | -6.45 | 93.42
BBB-FE | 0.00 | 98.76
UCB-P (Ours) | -0.72 | 99.32
UCB (Ours) | 0.00 | 99.63
ORD-JT* | 0.00 | 99.78
BBB-JT* | 0.00 | 99.87

(b) Permuted MNIST, 10 permutations.
Method | #Params | BWT | ACC
SI‡ | 0.1M | - | 86.0
EWC‡ | 0.1M | - | 88.2
HAT‡ | 0.1M | - | 91.6
VCL-Vadam† | 0.1M | - | 86.34
VCL-GNG† | 0.1M | - | 90.50
VCL | 0.1M | -7.90 | 88.80
UCB (Ours) | 0.1M | -0.38 | 91.44
LWF | 1.9M | -31.17 | 65.65
IMM | 1.9M | -7.14 | 90.51
HAT | 1.9M | 0.03 | 97.34
BBB-FT | 1.9M | -0.58 | 90.01
BBB-FE | 1.9M | 0.02 | 93.54
UCB-P (Ours) | 1.9M | -0.95 | 97.24
UCB (Ours) | 1.9M | 0.03 | 97.42
BBB-JT* | 1.9M | 0.00 | 98.12

(c) Alternating CIFAR10/100.
Method | BWT | ACC
PathNet | 0.00 | 28.94
LWF | -37.9 | 42.93
LFL | -24.22 | 47.67
IMM | -12.23 | 69.37
PNN | 0.00 | 70.73
EWC | -1.53 | 72.46
HAT | -0.04 | 78.32
BBB-FE | -0.04 | 51.04
BBB-FT | -7.43 | 68.89
UCB-P (Ours) | -1.89 | 77.32
UCB (Ours) | -0.72 | 79.44
BBB-JT* | 1.52 | 83.93

(d) Sequence of 8 tasks.
Method | BWT | ACC
LFL | -10.0 | 8.61
PathNet | 0.00 | 20.22
LWF | -54.3 | 28.22
IMM | -38.5 | 43.93
EWC | -18.04 | 50.68
PNN | 0.00 | 76.78
HAT | -0.14 | 81.59
BBB-FT | -23.1 | 43.09
BBB-FE | -0.01 | 58.07
UCB-P (Ours) | -2.54 | 80.38
UCB (Ours) | -0.84 | 84.04
BBB-JT* | -1.2 | 84.1

5.2 5-SPLIT MNIST

We first present our results for class-incremental learning of MNIST (5-Split MNIST), in which we learn the digits 0-9 in five tasks with 2 classes at a time, in the 5 pairs 0/1, 2/3, 4/5, 6/7, and 8/9. Table 2a shows the results for reference baselines in Bayesian and non-Bayesian neural networks, including fine-tuning (BBB-FT, ORD-FT), feature extraction (BBB-FE, ORD-FE), and joint training (BBB-JT, ORD-JT), averaged over 3 runs; standard deviations are given in Table 9 in the appendix. Although MNIST is an "easy" dataset, we observe throughout all experiments that Bayesian fine-tuning and joint training perform significantly better than their counterparts, ORD-FT and ORD-JT. For Bayesian methods, we compare against VCL and its variations, named VCL with Variational Adam (VCL-Vadam) and VCL with Adam and Gaussian natural gradients (VCL-GNG). For non-Bayesian methods, we compare against HAT, IMM, and EWC (EWC can be regarded as Bayesian-inspired). VCL-Vadam (ACC = 99.17%) appears to outperform VCL (ACC = 98.20%) and VCL-GNG (ACC = 96.50%) in average accuracy. However, a full comparison is not possible because forgetting was not reported for Vadam and GNG. Nevertheless, UCB (ACC = 99.63%) is able to surpass all the baselines, including VCL-Vadam, in average accuracy, while in zero forgetting it is on par with HAT (ACC = 99.59%).
We also report results on incrementally learning MNIST in two tasks (2-Split MNIST) in Table 8 in the appendix, where we compare against PackNet, HAT, and LWF; PackNet, HAT, UCB-P, and UCB all have zero forgetting, while UCB has marginally higher accuracy than all others.

5.3 PERMUTED MNIST

Permuted MNIST is a popular variant of the MNIST dataset for evaluating continual learning approaches, in which each task is a random permutation of the original MNIST pixels. Following the literature, we learn a sequence of 10 random permutations and report average accuracy at the end. Table 2b shows ACC and BWT of UCB and UCB-P in comparison to state-of-the-art models using a small and a large network with 0.1M and 1.9M parameters, respectively (architecture details are given in Section A.2 of the appendix). The accuracy achieved by UCB (ACC=91.44±0.04%) using the small network outperforms the ACC reported by Serra et al. (2018) for SI (ACC=86.0%) and EWC (ACC=88.2%), while HAT attains a slightly better performance (ACC=91.6%). Comparing the average accuracies reported for VCL-Vadam (ACC=86.34%) and VCL-GNG (ACC=90.50%), as well as our obtained results for VCL (ACC=88.80%), shows that UCB, with BWT=(0.03%±0.00%), is able to outperform the other Bayesian approaches in accuracy while forgetting significantly less than VCL with BWT=-7.9%. While we do not experiment with memory in this work, unsurprisingly, adding memory to most approaches would improve their performance significantly, as it allows looking into past tasks; e.g., Chen et al. (2019) report ACC=94.37% for VCL-GNG when adding a memory of size 200.

Next, we compare the results for the larger network (1.9M). While HAT and UCB have zero forgetting, UCB, reaching ACC=97.42±0.01%, performs better than all baselines including HAT, which obtains ACC=97.34±0.05% using 1.9M parameters. We also observe again that BBB-FT, despite not being specifically penalized to prevent forgetting, exhibits reasonable negative BWT values, performing better than the IMM and LWF baselines. It is close to joint training, BBB-JT, with ACC=98.1%, which can be seen as an upper bound.
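The permutation protocol can be sketched as follows; this is a minimal illustration (ours), assuming images are flattened to 784 pixels and each task's permutation is drawn once and reused for all of its images:

```python
import numpy as np

num_tasks, num_pixels = 10, 28 * 28
rng = np.random.RandomState(0)
perms = [rng.permutation(num_pixels) for _ in range(num_tasks)]  # fixed per task

def permute_batch(x, task_id):
    """x: array of shape (batch, 784) of flattened MNIST images."""
    return x[:, perms[task_id]]
```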
5.4 ALTERNATING CIFAR10 AND CIFAR100

In this experiment, we randomly alternate between class-incremental learning of CIFAR10 and CIFAR100. Both datasets are divided into 5 tasks, with 2 and 20 classes per task, respectively. Table 2c presents ACC and BWT obtained with UCB-P, UCB, and three BBB reference methods compared against various continual learning baselines. Among the baselines presented in Table 2c, PNN and PathNet are the only zero-forgetting-guaranteed approaches. It is interesting to note that in this setup, some baselines (PathNet, LWF, and LFL) do not perform better than the naive accuracy achieved by feature extraction. PathNet suffers from bad pre-assignment of the network's capacity per task, which causes poor performance on the initial task from which it never recovers. IMM performs almost on par with fine-tuning in ACC, yet forgets more. PNN, EWC, and HAT are the only baselines that perform better than BBB-FE and BBB-FT. EWC and HAT are both allowed to forget by construction; however, HAT shows zero-forgetting behavior. While EWC is outperformed by both of our UCB variants, HAT exhibits 1% better ACC than UCB-P. Despite having slightly higher forgetting, the overall accuracy of UCB is higher, reaching 79.4%. BBB-JT in this experiment achieves a positive BWT, which shows that learning the entire sequence improves the performance on earlier tasks.

5.5 MULTIPLE DATASETS LEARNING

Finally, we present our results for continual learning of 8 tasks using UCB-P and UCB in Table 2d. Similar to the previous experiments, we look at both ACC and BWT obtained for UCB-P, UCB, the BBB references (FT, FE, JT), as well as various baselines. Considering the ACC achieved by BBB-FE or BBB-FT (58.1%) as a lower bound, we observe again that some baselines are not able to do better than BBB-FT, including LFL, PathNet, LWF, IMM, and EWC, while PNN and HAT remain the only strong baselines for our UCB-P and UCB approaches. UCB-P again outperforms PNN by 3.6% in ACC. HAT exhibits only -0.1% BWT, but our UCB achieves 2.4% higher ACC.

6 SINGLE HEAD AND GENERALIZED ACCURACY OF UCB

UCB can be used even if the task information is not given at test time. For this purpose, at training time, instead of using a separate fully connected classification head for each task, we use a single head with the total number of outputs for all tasks. For example, in the 8-dataset experiment we use only one head with 293 output classes, rather than 8 separate heads, during both training and inference.

Table 3: Single-head vs. multi-head architecture and generalized vs. standard accuracy. Generalized accuracy means that task information is not available at test time. SM, PM, CF, and 8T denote 5-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively.

    | Generalized ACC (Single Head) | ACC (Single Head) | ACC (Multi Head)
Exp | UCB  | BBB-FT | UCB  | BBB-FT | UCB  | BBB-FT
SM  | 98.7 | 98.1   | 98.9 | 98.7   | 99.2 | 98.4
PM  | 92.5 | 86.1   | 95.1 | 88.3   | 97.7 | 90.0
CF  | 71.2 | 65.2   | 74.3 | 67.8   | 79.4 | 68.9
8T  | 76.8 | 47.6   | 79.9 | 53.2   | 84.0 | 43.1

Table 3 presents our results for UCB and BBB-FT trained with a single head against the multi-head architecture, in columns 4-7. Interestingly, we see only a small performance degradation for UCB when going from a multi-head to a single head. The ACC reduction is 0.3%, 2.6%, 5.1%, and 4.1% for 5-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively.

We also evaluated UCB and BBB-FT with a more challenging metric in which the prediction space covers the classes across all the tasks; hence, confusion between similar class labels across tasks can be measured. Performance under this condition is reported as Generalized ACC in Table 3, columns 2-3. We observe a small performance reduction in going from ACC to Generalized ACC, suggesting insignificant confusion caused by the presence of more classes at test time. The performance degradation from ACC to Generalized ACC is 0.2%, 2.6%, 3.1%, and 3.1% for 5-Split MNIST, Permuted MNIST, Alternating CIFAR10/100, and the sequence of 8 tasks, respectively. This shows that UCB can perform competitively in more realistic conditions, such as the unavailability of task information at test time.
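A minimal sketch of the single-head bookkeeping described above; the offset scheme and the per-dataset class counts (which sum to 293) are our assumptions for illustration:

```python
import numpy as np

# Hypothetical per-dataset class counts for the 8-task sequence; they sum to 293.
classes_per_task = [100, 10, 100, 10, 10, 10, 43, 10]
total_classes = sum(classes_per_task)                      # 293 outputs in the single head
offsets = np.concatenate(([0], np.cumsum(classes_per_task)[:-1]))

def to_global_label(task_id, local_label):
    """Map a task-local label to an index in the shared 293-way output layer."""
    return int(offsets[task_id]) + local_label
```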
We believe the main insight of our approach is that, instead of computing additional measures of importance, which are often task-, input-, or output-dependent, we directly use the predicted weight uncertainty to find important parameters. We can freeze them using a binary mask, as in UCB-P, or regularize changes conditioned on the current uncertainty, as in UCB.

7 CONCLUSION

In this work, we propose a continual learning formulation with Bayesian neural networks, called UCB, that uses uncertainty predictions to perform continual learning: important parameters can be either fully preserved through a saved binary mask (UCB-P) or allowed to change conditioned on their uncertainty for learning new tasks (UCB). We demonstrated how the per-weight probabilistic uncertainty distributions help in continually learning short and long sequences of benchmark datasets, compared against baselines and prior work. We show that UCB performs better than or on par with state-of-the-art models such as HAT (Serra et al., 2018) across all the experiments. Choosing between the two UCB variants depends on the application scenario: while UCB-P enforces no forgetting after the initial pruning stage by saving a small binary mask per task, UCB does not require additional memory and allows for more learning flexibility in the network by permitting small amounts of forgetting to occur. Finally, UCB can also be deployed in a single-head scenario where task information is not available at test time, i.e., where the subset of classes belonging to the task is not known during inference, leading to a competitive model that can be used on a continuous stream of data in which tasks cannot be distinguished.
S1ewYgHaKr
Official Blind Review #2
6: Weak Accept
**** Post Rebuttal **** I have read the authors' response and the other reviewers' comments. In light of the comments by other reviewers, I am increasing the score. The paper reports decent empirical results in some challenging settings, which might be useful to the continual learning community. **** End ****

The paper presents a simple yet effective way to avoid catastrophic forgetting in a continual learning setting. The proposed approach is referred to as UCB, "Uncertainty Guided Bayesian Neural Networks". The main idea of the approach is to weight the learning rate of each parameter in the neural network by the standard deviation of its posterior distribution. This leads to regularizing parameters that are "important" to tasks seen earlier and thus avoids forgetting. Results indicate an improvement over other baselines. However, I do not see any analysis of the method that explains this improvement. I do not recommend acceptance.

Cons:
- My main concern with the paper is that it fails to justify the superiority of the method over other baselines. The numbers reported in the paper do seem good, but I don't see an explanation of why this is the case. What are the drawbacks of EWC, VCL or HAT that the proposed method solves? Why does using uncertainty to define importance work better than using online VI as in VCL, or Fisher information as in EWC? There is no discussion in the paper about that. Without such a discussion, it seems that the model was run a number of times and the best score was reported out of all those runs (especially because the improvement is only marginal).
- I am not sure why weighting the learning rate would be a good idea. Having high uncertainty may increase the learning rate arbitrarily. Is there a constraint on the standard deviation? Does having a very high weight on the learning rate not cause instability during optimization? I think the method would be very sensitive to the initialization of the standard deviation.

Overall, I think the idea of using uncertainties for continual learning is interesting. But as it stands, I am not fully convinced that this method should do better than existing approaches.